I had Homeland Security reach out to a customer of mine warning them that they had Gootloader in their network. That was a fun day. Luckily no commands were sent to the infected machine and it didn't spread across the network.
A city truck hit a power line near us on Wednesday, causing a power blip. That was enough to kill off a couple of old computers, so that consumed the first half of my Thursday. I'm hoping I can leverage this to finally get approval to refresh our old inventory, which is most of our inventory.
I also have to work with a 3rd-party dev company that's less than worthless. For a group of people whose jobs require solid analytical thinking, it's amazing that they're able to function in life without a handler for day-to-day tasks, like getting dressed. They're incapable of completing a task without explicit instructions, and they seem to think my IT services extend to them.
The joys of being the sole IT person at a small/medium-sized business. I need a bigger head for all of the hats I have to wear.
Trying to cut over to a new router/core switch stack and running into issues with some of our customers. Their workflows depend on WAN circuits that are being deprecated in the cutover, and they just don't want to adjust because they're afraid of the downtime. Not that it would even be that much downtime, since the NAT, trunks, and gateways are already configured; the LAGs just need the hardware, and the routing table needs to be scripted in (roughly the sketch below).
People just don't understand that occasionally we have to go dark for 30-60 minutes for critical upgrades that can't be done with HA, because of the outdated hardware we're trying to move off of.
Their unwillingness to play ball is what keeps us so out of date in the first place.
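For what it's worth, the "scripted in" part really is trivial. A minimal IOS-style sketch, with completely hypothetical subnets and next hop, just to show what the script amounts to (the real table is more lines of the same):

```
! Hypothetical subnets/next hop; one static route per customer subnet,
! all pointed at the new core stack.
ip route 10.20.10.0 255.255.255.0 10.20.0.2
ip route 10.20.20.0 255.255.255.0 10.20.0.2
ip route 10.20.30.0 255.255.255.0 10.20.0.2
```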
Last month we had a failover cluster that we had to upgrade from 2012 R2 to 2016. Ever since the upgrade, storage performance (the cluster disks sit on a SAN, connected via iSCSI) had tanked, and backups were basically causing the whole house of cards to tumble.
Eventually found that the MPIO policy on the cluster disk had been reset by the in-place upgrade (bad practice to upgrade in place, I know, but we're a small shop and I don't exactly have the time to rebuild a cluster from scratch).
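For anyone who hits the same thing, this is roughly what I'd run from an elevated PowerShell prompt to check and reset it. Round Robin is just the example here; use whatever policy your SAN vendor actually recommends.

```
# List every MPIO-claimed disk and its current load-balance policy
mpclaim.exe -s -d

# Check / set the MSDSM global default policy (RR = Round Robin)
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Per-disk, for disks that were already claimed (disk number from -s -d; 2 = RR)
mpclaim.exe -l -d 0 2
```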
But that week after the upgrade was a total shitshow
We were down from 17 admins to only 5 because of vacations, sickness, and training. Had some issues with logins after renewing Citrix certificates, mail sync to mobile devices, and missing calendars in MS Teams. And that was just today.
It's been a hellish stretch, with customers writing more tickets than ever over the last 2-3 weeks. The few people who were there worked well together though, so we managed to get a lot of work done. But at that rate, people will burn out within a few months.
Optimizing an app for heavy-duty Postgres writes. Finally got all the VMs set up and configured to my liking, so now I can just mess with everything to see how it affects performance.
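The first knobs I'll be messing with are the WAL/checkpoint settings. A rough starting point, example values only and nothing tuned to the actual VMs yet:

```
-- Example write-heavy starting point; adjust per RAM/disk, then reload
ALTER SYSTEM SET max_wal_size = '8GB';                -- fewer, larger checkpoints
ALTER SYSTEM SET checkpoint_completion_target = 0.9;  -- spread checkpoint I/O out
ALTER SYSTEM SET wal_compression = on;                -- shrink full-page writes in WAL
-- Only if losing the last few ms of commits on a crash is acceptable:
-- ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();
```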