Too kind. Well, obviously we don't run Windows servers. Although it's not been trouble-free trekking all the time: we've had our fair share of DDoS attacks and so forth, which are difficult to mitigate without going to a large network provider who can soak up the traffic. So I do feel a bit sorry for the poor folk on the ground trying to unpick the mess. Luckily, TrekBBS has @EricF
Actually, I initially assumed there were two issues. I saw an alert about the CrowdStrike update screwup, but the news channels were reporting a "Microsoft outage", so I assumed one of the Azure DCs was having major issues. Seems it was actually just the one problem, badly reported.
Amusing that CrowdStrike had recently published this: https://go.crowdstrike.com/rs/281-OBQ-266/images/report-2024-state-of-app-security-report.pdf

As ever, the rule is: if you can't afford the outage, then plan for it. I have a lot more sympathy for the smaller companies impacted who can't afford to do things "properly". If your systems are critical, then maybe they shouldn't auto-update without secondary systems available that don't auto-update but also aren't open to the same threat profile? That said, we rely on third parties for some blocklisting of bad IP ranges. I do at least filter the lists against our own whitelists before deploying, so they're checked to some extent before being loaded (and a duff list can't be loaded at all, so there is that).
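A minimal sketch of that kind of check, for anyone curious. The function names and list format here are my own assumptions for illustration, not what we actually run: parse the whole third-party list strictly (so a duff list fails outright rather than half-loading), then drop any range that overlaps our own whitelist before deployment.

```python
import ipaddress

def load_ranges(lines):
    """Parse CIDR ranges strictly: any malformed entry raises ValueError,
    so a duff list is rejected outright rather than partially loaded."""
    return [ipaddress.ip_network(line.strip())
            for line in lines if line.strip()]

def filter_blocklist(blocklist_lines, whitelist_lines):
    """Drop any third-party block range that overlaps our whitelist."""
    blocklist = load_ranges(blocklist_lines)
    whitelist = load_ranges(whitelist_lines)
    return [net for net in blocklist
            if not any(net.overlaps(w) for w in whitelist)]

# Hypothetical example: a vendor range that collides with our own
# infrastructure gets filtered out before the list is deployed.
blocked = filter_blocklist(
    ["203.0.113.0/24", "198.51.100.0/24"],  # third-party blocklist
    ["198.51.100.0/25"],                    # our whitelist
)
```

The strict parse is doing double duty: it's both the "can't load a duff list" safety net and the point where you'd bolt on any extra sanity checks (maximum prefix size, no 0.0.0.0/0, etc.).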