Ransomware Horror Story


Posted on 12/1/2017




A True Ransomware Horror Story (and the Crucial Lessons Learned)

Original article by Rebecca Dobrzynski, October 2016
Prologue

Not so many years ago, there were rumors of frightening things beyond the (fire)wall that could inflict terror and destruction upon any organization, even holding our electronic property for ransom. There were whispers that these attackers were the worst anyone had yet encountered, that we were defenseless, and soon we learned the name of the feared monster: CryptoLocker.

At the time when even a local police department gave up trying to recover its files and just paid the ransom, I was overseeing operations for a 70-person organization that had six locations. “Operations” was shorthand for “wears many hats of disparate styles” — my duties ranged from writing to supervising administrative staff to, yes, maintaining the organization’s technology infrastructure.

This was not cutting-edge stuff. Most of the demand for technology support was around supplying equipment to new hires, troubleshooting printer errors, and scrubbing PUPs (potentially unwanted programs) and other junk from computers a handful of times a month using free software tools. Easily something that any reasonably tech-savvy millennial could handle with persistence and the help of IT forums on the internet. Most of the time.

When I took the reins from my predecessor, she assured me I needed no IT expertise, despite her dubious relationship with our primary IT consultant and a casual mention that she had never used the “other guys we have a contract with.”

"It’s mostly just making sure not everything falls apart at the same time,” she assured me.


Part I: The Attack

It came to pass one day that I hired a new admin support person for my team during a flurry of other projects. On her second day on the job, a few hours into the morning, I started getting phone calls and people in my office. They couldn’t open some of the files they used daily that resided on the in-house server. It took no time at all to notice that there was something very wrong, and that entire folders and directories were corrupted.

The only person with any substantive IT experience on staff was a half-time project manager for a new records system implementation, who luckily was in the building and noticed the problems on his own around the same time I was initially investigating.

The two of us took our two in-house servers offline and tried to communicate to all staff that we were investigating a problem. Unfortunately, EVERYTHING lived on those two servers: client and project information, finance databases and billing information, our VoIP and email servers… all running on Windows Server 2003. We had backups in place, set up by an independent IT consultant who was great at doing things cheaply but not great at standardizing our environment, being reliably responsive, or being proactive… so, as it turned out, the backups had not been completing properly.
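In hindsight, even a crude automated check would have told us the backups were going stale long before we needed them. Here is a minimal sketch of such a freshness check in Python; the backup path, age threshold, and alerting method are hypothetical stand-ins, not details from our actual environment.

    import os
    import sys
    import time

    BACKUP_DIR = r"\\backupserver\nightly"  # hypothetical UNC path, not our real one
    MAX_AGE_HOURS = 26                       # alert if the newest backup is older than this

    def newest_backup_age_hours(path):
        """Return the age, in hours, of the most recently modified file in path."""
        files = [os.path.join(path, name) for name in os.listdir(path)]
        files = [f for f in files if os.path.isfile(f)]
        if not files:
            return float("inf")  # no backups at all counts as infinitely stale
        newest = max(os.path.getmtime(f) for f in files)
        return (time.time() - newest) / 3600.0

    if __name__ == "__main__":
        age = newest_backup_age_hours(BACKUP_DIR)
        if age > MAX_AGE_HOURS:
            # In real life this should email or page a human, not just print.
            print("ALERT: newest backup is %.1f hours old" % age, file=sys.stderr)
            sys.exit(1)
        print("OK: newest backup is %.1f hours old" % age)

Run daily from a scheduled task, a check like this turns a silent failure into a loud one.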

Rapidly realizing that this was beyond the realm of my problem-solving abilities, I called the other IT company with whom we had a contractual, but not a real-life, relationship.

The rest of the day was a blur. We eventually discerned the extent of the damage: many dozens of folders on shared drives, with a lot of important data that didn’t live anywhere else. We also discovered that the secondary server had reached the end of its life when it failed to start back up after we took it offline and tried to reboot it. And, finally, I found the culprit: my new admin staff person. Another team member reported that the new hire had had an “Oh… uh oh…” moment around mid-morning, responding “oh… nothing…” when an office-mate asked what was wrong. That same office-mate saw the unattended computer a little while later, frozen with a pop-up on the screen and an email window behind it… and the same computer blue-screened when a reboot was attempted.

Before the new employee left for the day, I asked about it, and she said she had opened an email claiming to contain a shipping invoice (from a company we would never have used) and that she “couldn’t remember” whether she had opened the attached .zip file. My heart sank. Her role literally required “comfort with technology,” as in, “maybe Google an error message occasionally, and help the people you support by not doing anything unwise, like opening suspicious email attachments.”


Part II: The Aftermath

I gave the Big Boss an update around 6pm and was given no real guidance other than not to fire the employee who caused the problem and to keep figuring out what we should do. I didn’t leave until 11pm that night, and for the next five weeks, I lived and breathed server, network, and backup research, data recreation, and disaster communications. I earned more comp hours in just over a month than I did in the previous eight months combined.

All of my to-do lists and project timelines were scrambled and replaced: buying and installing a new server, upgrading databases and other software that were incompatible with the newer operating system, rebuilding the network from scratch (with better group policies and permissions), and proposing a more secure infrastructure and backup environment without digging the organization into too deep a financial hole. That battle was only partially won; even though we were not as secure and recoverable as we might have desired, we ended up in a better place than before the disaster.


Lessons Learned

  1. Sometimes the scary things that go bump in the night are totally real.
  2. Always try to fix broken things right away, even if they were broken by someone else before you inherited them.
  3. The pain of forking over some time and cash to set things up right is nothing compared to scrambling to recover from a disaster.
  4. Use multi-layered security and backups! (See the sketch of one such layer just below.)
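
To make lesson #4 a little more concrete, here is a minimal sketch in Python of one possible layer: screening .zip email attachments, like the fake “shipping invoice” that started all of this, for executable content. The extension list and file name are illustrative assumptions, not a description of any particular mail filter, and a real defense would pair this with mail-gateway filtering, endpoint protection, and tested backups.

    import zipfile

    # File types that have no business inside a "shipping invoice" archive.
    RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".jar"}

    def zip_contains_executables(path):
        """Return True if any entry in the .zip archive has a risky extension."""
        with zipfile.ZipFile(path) as archive:
            for name in archive.namelist():
                lowered = name.lower()
                if any(lowered.endswith(ext) for ext in RISKY_EXTENSIONS):
                    return True
        return False

    if __name__ == "__main__":
        # "shipping_invoice.zip" is a hypothetical attachment saved by a mail filter.
        if zip_contains_executables("shipping_invoice.zip"):
            print("Quarantine: archive contains executable content")
        else:
            print("Archive passed this (single) check; other layers still apply")

No single check like this is sufficient on its own; the point of layering is that an attachment screen, user training, endpoint protection, and working backups each catch what the others miss.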


Original Article: https://blog.barkly.com/ransomware-horror-story-lessons-learned



For more information, contact:

Pete Groman, President
Namorgy Network Solutions - GeekByTheWeek[TM]
pete@namorgy.com
972-454-0029
#NNSIT #ISpeakGeekDOTBIZ #GeekByTheWeek[TM]

Our Sponsor

Namorgy Network Solutions is dedicated to providing cost-effective IT Managed Services to small and midsize businesses that want to improve their productivity. With our comprehensive approach to Managed Services, we are your single source for all things IT, fully committed to customer service excellence. Our fast and friendly team of experts is always thinking ahead to deliver the best service possible.


