You delivered. Just in time for Halloween, our third compendium of network nightmares submitted by you from around the web is here.
Shall we begin?
Here are a few tales from our very own team at Auvik to start things off:
Ben Freebody, Customer Success Operations Manager
It was circa 2003 and I was working as an infrastructure analyst at Star Internet, managing the MessageLabs infrastructure. Customers would send their email to us, we would scan it, and if it was deemed “worthy” we would relay it on. We’d had a number of virus ‘outbreaks’ in recent memory, and we’d also dealt with a number of DoS attacks. However, what we witnessed that day was akin to watching a snowball slowly pick up momentum down Everest and destroy everything in its path…
I am paraphrasing and I don’t fully understand what happened, but it went something like this… “John” decided he wasn’t happy at his current company, put together his CV, and fired off a hopeful copy to some unsuspecting HR distro. There must have been something seriously wrong with the receiving organization’s email DL, because it relayed the message back to itself, a circular process with no end in sight. Unfortunately, and not unexpectedly, people on the DL started responding, asking others to stop sending them the same message, which only resulted in more copies of John’s resume hitting the server and being relayed again.
At some point, and I have no idea how (maybe it really was a virus), John’s resume hit another DL with the same issue and the problem was magnified again. All told, something like 2 million copies of John’s resume had hit our servers and flooded several thousand mailboxes by mid-morning.
Eventually, the plug was pulled across the receiving company’s mail servers, and everything went quiet. I’m not sure if John ever got the job though.
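Loops like this are why mail servers count relay hops before forwarding: every MTA a message passes through prepends a Received header, so a message that has bounced through too many relays is almost certainly stuck in a loop (RFC 5321 suggests rejecting mail well before 100 hops). Here’s a minimal sketch of that check in Python; the sample message and the hop threshold are illustrative, not anything from the MessageLabs setup.

```python
# Sketch of hop-count loop detection: reject mail that has already
# passed through too many relays. Threshold and sample are illustrative.
from email import message_from_string

MAX_HOPS = 25  # typical MTA defaults sit well below RFC 5321's ceiling

def is_looping(raw_message: str, max_hops: int = MAX_HOPS) -> bool:
    """Return True if the message carries more Received headers than allowed."""
    msg = message_from_string(raw_message)
    hops = len(msg.get_all("Received") or [])
    return hops > max_hops

# A message that has bounced between two relays 30 times:
looped = "".join(
    f"Received: from relay{i % 2}.example.com\n" for i in range(30)
) + "Subject: John's resume\n\nPlease find my CV attached.\n"

print(is_looping(looped))  # True: quarantine it instead of relaying again
```

With a check like this in the relay path, John’s resume would have been dropped after a couple dozen round trips instead of two million.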
Trent Cutler, Product Manager @ Metageek
Our new friends at Metageek have a great throwback horror video to share. Trent shares his tips on how to survive the zombie apocalypse and still have good WiFi.
[As if emo hair from 2011 isn’t a horror story unto itself…]
Nick Hilderman, Manager, IT Security
Always have a backup plan… this was pre-cloud and happened a while back, but I still wanted to share. At one of my old companies, a larger organization that operated globally, we managed the network. One of our NOC folks pushed a change in the off hours that disabled the management ports on our assets and killed the network. With the network down, we had no way to remotely access the routers or firewalls. The company was dead in the water, and our data center was in a different country.
Talk about an “oh shit” moment. We ended up finding someone to go on-site and roll back the changes (thankfully) so the config could be updated properly. We added an out-of-band option shortly after that with a KVM and separate network line. Change management became a much larger discussion after this!
Always have a secure backup plan and review all changes before they’re pushed. It was a hard lesson: a single change took a multi-billion-dollar enterprise offline for several hours.
From our Readers
Our Auvik users always share some bone-chilling stories and anecdotes. From tripped-up cables to unknown devices, we’ve all been there.
Oh, and if you’re a tech who watched the Rogers outage take out half of Canada’s internet services for days, get in touch!
Sean Barry, White Mountain IT Services
The worst experience was during an initial deployment of our tools to a newly onboarded client. Within the first hours of rolling out our tools, including Auvik, their server had to be rebooted to deploy our backup software. It never came back online.
We were eventually able to resolve the issue and get them a new server. Ironically, we had identified that server as an issue during the onboarding phase! [And hey, while Auvik wasn’t able to assist in this example, it has assisted greatly in the many scenarios of misconfigured networks and network loops!]
Client called saying the network “just stopped” while he was at lunch. A quick glance into Auvik showed the port with the highest utilization at almost 100%! Turns out, the same client had thrown in an extra switch to grab a data port to copy some files off an old NAS. That switch put the entire server room behind 100 Mbps. He started the copy before he left for lunch. In this case, Auvik had the answer.
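It’s easy to quantify why that rogue switch brought things to a crawl: a bulk copy over a 100 Mbps link takes roughly ten times as long as over gigabit, saturating the bottleneck the whole time. A quick back-of-the-envelope calculation (the 500 GB NAS size here is purely illustrative, not from the story):

```python
# Rough transfer-time math: how long a bulk copy holds a link at 100%.
def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Hours to move size_gb over a link of link_mbps (ignoring protocol overhead)."""
    size_megabits = size_gb * 8 * 1000  # GB -> megabits (decimal units)
    return size_megabits / link_mbps / 3600

nas_gb = 500  # hypothetical NAS size
print(f"At 100 Mbps: {transfer_hours(nas_gb, 100):.1f} h")   # ~11.1 h
print(f"At 1 Gbps:   {transfer_hours(nas_gb, 1000):.1f} h")  # ~1.1 h
```

In other words, a copy that would have finished over a lunch break on gigabit instead pinned the server room’s uplink at 100% for most of the workday.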
When you ask users to restart their modem for internet issues, they sometimes end up pulling the plug on the entire UPS instead, causing all the servers to shut down non-gracefully!
From Reddit & Twitter
Our readers pointed us at this Reddit roundup of IT mistakes, and there were some doozies. Here are some of the highest-voted disasters they shared, as well as a few comments sent directly to our Twitter account.
From @ezra611: “Coworker was putting in a new Firewall at a client. Schedules it to happen during their lunch to minimize downtime. He sets them up in parallel, gets the confirmation good to go, and then goes to physically pull the LAN plug on the old and move it to the new.
The moment he plugs the firewall into the LAN, there’s a loud “BOOM”, and all the power goes out. The owner and some other employees are all “What did you do?!?”
A car hit a telephone pole in front of the office at precisely the same time he moved the cable.”
From @administrativebox: “New to [education] IT administration, I pushed a patch out and goofed the Intune/Updates switches for a mandatory reboot, at 9 am on the day of exams… felt so bad 🙁 My change management request was approved, so not entirely my fault, but still…
So far as I know, only one student lost their progress, and their instructor was kind enough to allow them a reattempt later that day. READ-ONLY FRIDAYS/EXAM DAYS, FOLKS!”
From @avgjoegeek: “New SysAdmin’s first day on the job. I explained to him that we at “RinkyDink” company really believe in the whole “if you can redneck your network together and it works? Don’t mess with it” philosophy. For some reason, all the routers were daisy chained onto power strips, which were zip tied to the rack instead of I dunno? Spending the $$ to get proper power to them?
The SysAdmin blows me off with his holier-than-thou attitude even though I was the guy who’d kept things afloat until they hired him. I just take in a deep breath and walk away to see how this plays out.
I watch him from my desk as he wanders into the data center. It’s the centerpiece of our office so everyone can see what is happening. Sure as shit – I see him over by the rack where all the routers are. I’d already given tech support a heads-up before the guy even touches anything. He’s scratching his head looking at the Gordian knot of power plugs trying to trace down the one he wants.
I guess he was bored or tired of chasing down the correct one because he suddenly just reaches out and grabs a plug and yanks on it. This is when I see the whole rack go dark. I shake my head and tell Tech Support to start making calls. My boss comes out of the office and I just point to the DataCenter where the new SysAdmin is still standing next to the rack with a plug in-hand trying to figure out what he did wrong.
I decided it was a great time for lunch. So with my boss charging toward the DataCenter, the Tech Support office lines rang off the hook. Red Robin sounds like a great place to eat today. I calmly get in my car and drive away from that hot mess of a day at work.”
From @dudester99: “We have one area in our office that me and a colleague have determined is haunted. At least, the wireless peripherals don’t always work…
Wireless keyboards and mice will stop working after a few hours or days, even with brand-new batteries. TV remotes will only work for one TV one day and another the next. One TV actually just stopped turning on. Brought it to our office and it worked fine.”
From @ITMANAGERIT: “I was doing my usual cable runs in the office. All the walls have two pipes in each office next to each other. One pipe goes to the LAN side and one side goes to the old telephone line side which is being repurposed for LAN. I notice in this run there are only cables on the LAN side but none on the phone side. Since there was no cable, I did not know which was the correct pipe from the patch panel, so I thought it was a good idea to go from the office side.
I was snaking my way in asking someone else to go to the comm room and see when the snake came out or if they heard anything. Turns out that was the worst idea I’ve ever had on my job, the pipe ran into the high-voltage breaker box and it only got stopped a couple of inches from reaching the contact points. Needless to say, I no longer do this kind of guesswork and now I have a fiber/plastic snake. I have never thought about the possibility of this happening since all pipes leading to a breaker box are always used for electricity runs already… but apparently there is always a chance where it isn’t and you shouldn’t take that chance.”
Have your own terrifying tale to share? We’re putting together our next edition now, so add your network nightmares to the comments below, or email us at social[at]auvik.com!