What I can't possibly get is.. how? How can so humongous a worldwide, 24/7 network tolerate the sum of all these complexities and their individual quirks, indefinitely?

How could one 'model' such a dynamic overall System so as to anticipate, and plan workarounds for, some next thing never seen before? From malicious intent through so-so coding, honest errors, insanely-great new browser features (with an undetected Gotcha), and so on. It's a perpetual moving target.

(I mean, I'd eschew a ride on an Airbus, simply for having concluded that it is fatally flawed in One (to me paramount) design idiocy: the bloody control sticks do NOT move in tandem, so that each pilot Knows what the other is doing. As in Air France Flight 447.)

The web isn't an airplane; in terms of sheer number of connections, it is an unprecedented behemoth, and I couldn't remotely guess at the n failure modes it may or may not already capably anticipate. Can anyone? (Especially the clever, malevolent attacks by the clued-in.)

[HTF could you even shut it All down for, say, a day of rollout testing, preventive checks, etc., what with all this cloud-storage tech that would just Stop?]
Guess we'll find out (surely there are Web-II prototypes already beyond alpha testing. Shirley.)
But ever deploying such a thing would dwarf the problem of switching driving habits overnight from the left lane to the right, as when Sweden did it in 1967.