You probably have to do that today, but this was 20 years ago.
The downtime was ridiculous, and there was only one minor hacking incident, even though the servers weren't behind a firewall and sat on public IPs.
Services would sometimes fail to come up, and there was always a chance the server itself would crash, etc. For the email server, this was very dangerous to my job. The last thing I wanted to do was spend the weekend in the office rebuilding it from backups, and backups are funny.
I've seen a number of instances where backups didn't work. If backups always worked, ransomware wouldn't be a thing.
Hacking felt more like an imaginary threat, like those people who say the earth is about to crash into the sun or something. I would just say, oh yeah, definitely. Meanwhile I'm adding that guy to the idiot category.
The best thing to do would be to give the death penalty for these things, hunt them down like dogs.
Most denial of service comes from these stupid MFA logins, overcomplicated permission schemes, helpdesk calls for password expiration/reset, unexpected downtime, and other bullshit that costs more than the ransomware.
Even as a developer, this costs me a lot of time.
"The password to circumvent MFA isn't working, someone should look into that, shit, that someone is also me, now I have to call support and deal with some rep who sends me support articles I've already read. Meanwhile, people are asking me why my work isn't done..." lol
Yeah, I get it, back then it wasn't possible to scan the entire internet for vulnerable boxes. Now it's fairly easy, so patching your exposed boxes against the big, easy exploits as fast as possible is critical. Otherwise the threat actors will patch them for you, so they don't show up on vuln scans, and buy themselves time to move laterally.
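To see how trivial finding exposed services has become, here's a minimal sketch of a TCP connect scan (helper names are made up for illustration; real internet-wide tools like masscan or ZMap use raw packets and scan the whole IPv4 space in minutes, not a loop like this):

```python
import socket

def check_port(host, port, timeout=0.5):
    """Try a plain TCP connect; True if something answers on that port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

def scan(host, ports, timeout=0.5):
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if check_port(host, p, timeout)]
```

Point this at common service ports (22, 80, 443, 3389) across an address range and you've got a crude exposure map, which is exactly why unpatched boxes with public IPs get found so quickly now.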