cthia wrote:In any complicated system, growing ever more complicated threatens to make it more vulnerable, because each iteration of complication causes you to lose a bit of control. So you engineer in a cutout. Which makes it vulnerable. Again, it is the ever-present human element. If you don't design with this element in mind, then you'll fall prey to it, because everywhere you are, there it is.
E.g., what happens if the premise of this thread bears fruit and, voilà, we have a completely secure system? There are implications.
Scenario: suddenly our government is locked out of its main computers. Some idiot forgot the password. And this time, of course, he decided to heed your warning against writing it down.
The [reset] and [power] buttons act somewhat like cutouts now.
My niece rang me and added...
An unforeseen implication of defeating Gödel and the halting problem, and using that to design a completely secure system, is the powerful A.I. programs that instantly become possible. Suddenly a coder can write programs that test their own validity, and that entails being able to design programs which can break into any system while the A.I. proves their validity.
What happens when a completely secure system that is "proven correct" and written by an A.I.* goes up against an A.I. system written to be "foolproof in compromising other systems" and likewise proven correct by the A.I.?
In other words you have an impenetrable system going up against... itself.
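For context on why "defeating the halting problem" is such a strong premise: Turing's diagonal argument shows that no general-purpose checker can decide whether an arbitrary program halts, so a program that can verify the validity of any program (including itself) is exactly the thing classical computability rules out. A minimal Python sketch of the argument, where the perfect checker `halts` is hypothetical:

```python
# Sketch of Turing's diagonalization argument: assume a perfect
# checker halts(f, arg) exists, returning True iff f(arg) halts.
# (Such a total, always-correct checker is hypothetical; the
# argument below shows no concrete implementation can be right.)

def make_paradox(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def paradox(f):
        if halts(f, f):
            while True:          # predicted to halt -> loop forever
                pass
        return "halted"          # predicted to loop -> halt immediately
    return paradox

# Any concrete candidate checker is refuted when paradox is fed itself.
# Demo with a toy checker that always answers "loops forever" (False):
never_halts = lambda f, arg: False
paradox = make_paradox(never_halts)
print(paradox(paradox))  # prints "halted" -- the checker predicted wrong
```

Whatever answer a real `halts` gives about `paradox(paradox)`, the program does the opposite, so the checker is wrong either way; that is the wall the thread's premise imagines breaking through.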
* Essentially, a computer with this type of programming, programmed by Richard Pryor's character, was used to analyze the otherwise impenetrable Superman, find his weaknesses, and kill him. The human element is ultimately what saved Superman, IIRC: Richard Pryor's character couldn't stomach the thought of being the one who killed Superman.