cthia wrote:
Somtaaw wrote:
That'd give a fair illusion of what most other sci-fis would refer to as a "stupid AI": useful in a very restricted/limited way, and unable (or uninterested) to learn other things that don't relate to what it was made to do.
The stumbling block there, and a seemingly insurmountable obstacle, is Gödel's incompleteness theorems and the Halting Problem. All AI, in my book, are going to be stupid in that context. Clearing that hurdle may lead to the kind of AI that would outpace our niche for it. A truly thinking AI would engineer its own Manifest Destiny.
There was a thread on A.I. that RFC visited:
viewtopic.php?f=1&t=5679&hilit=artificial+intelligence
Hmm, some good stuff there. Maybe I'm a bit biased by my love of the HALO universe, the Dahak series, and the Weber books playing around with the Live Free or Die series.
Off the HALO wiki anyways:
Smart AIs, or A.I.s that are not confined to their one purpose, have a normal operational life span of about seven years. Because the "Smart" A.I. is subject to an established memory core which cannot be replaced, the more the A.I. collects data, the less "thinking" space it has to work with. An A.I. will literally "think" itself to death. Dumb A.I.s do not have this problem as they do not learn anything that is outside of their set limits of a dynamic memory processing matrix. They are quite useful in their particular field of expertise, but very limited. Smart A.I.s can function and learn as long as they are active.
Now that's just the HALO universe, which puts a hard limit on how long a so-called smart AI lives, plus a limitation on memory storage (which, it could be argued, is terrible in the Honorverse).
edit: after poking through the thread a bit, you actually covered more or less how I define AIs, cthia; you just label them as weak and strong, where I label them as dumb and smart. Dumb being very, very specific (a chess-master AI I'd call dumb, but you'd call weak... we're still talking about the same thing, though).
The pearl (http://infodump.thefifthimperium.com/en ... gton/142/1) really boils down to suggesting that the computers in the Honorverse have AI influence, rather than sophisticated programming that gives the illusion of AI. Even the dumbest AI in most other sci-fis is vastly brighter than the computers in the Honorverse.
If you threw in a smart AI such as Cortana (who can, does, and until she finally goes rampant and 'dies' will continue to take independent action on her own initiative), she wouldn't sit, report to an organic, and wait for instructions before taking advantage of X weakness, as in the Thunder of God versus CA Fearless example from the pearl:
The problem Thunder of God had in Honor of the Queen was that her AI was far inferior to Fearless' to begin with and that her tactical officers didn't really understand when and how to get out of the way. When they finally made the decision to hand over to computer control, effectively completely, their offensive fire was vastly more effective than it had been while half-trained human operators were getting in the way. Where they got into trouble was that they were unable -- because of their inexperience -- to recognize that their idiot Havenite artificial so-called intelligence was repeating its defensive tactics in a predictable cycle, which Fearless' systems recognized and brought to Rafe Cardones' attention, requesting human operator input to decide what to do about it. At which point Rafe blew the crap out of the vastly more powerful ship.
A 'smart' AI would have just done it. A 'dumb' AI, if it was set up in such a way that, for example, being at action stations allowed it to enter the decision queue whenever it picked up an exploitable weakness (like the predictable cycle above), could issue the orders itself, freeing up a bit more of the tactical officer's attention for other considerations.
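Just to make the idea concrete: the kind of pattern Fearless' systems flagged (a defensive routine repeating in a predictable cycle) is easy to sketch in code. This is purely illustrative; the function name, the maneuver labels, and the whole approach are my own invention, not anything from the books or the pearl.

```python
def detect_cycle(actions, min_repeats=3):
    """Return the shortest pattern that the tail of the action log
    repeats at least `min_repeats` times in a row, or None."""
    n = len(actions)
    # Try candidate cycle lengths from shortest to longest.
    for period in range(1, n // min_repeats + 1):
        pattern = actions[n - period:]
        window = actions[n - period * min_repeats:]
        if window == pattern * min_repeats:
            return pattern
    return None

# Hypothetical observed enemy defensive maneuvers, newest last:
log = ["roll_port", "decoy", "jam",
       "roll_port", "decoy", "jam",
       "roll_port", "decoy", "jam"]

pattern = detect_cycle(log)
if pattern is not None:
    # A Fearless-style system stops here and requests operator input;
    # a 'smart' AI would act on the prediction on its own initiative.
    print("Predictable cycle detected:", pattern)
```

The dumb/smart split in my sense is entirely in what happens after the detection: same sensor data, same pattern-matcher, but one path ends at "notify the tactical officer" and the other is allowed to fire.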
And reading through the thread a bit, it looks like there is indeed a general ban on AI, period, or at least on the sort of thing we (or maybe just I) define as AI, while the generic programming gets called "AI". After all, it's not like talking to the tactical computers, for example, would lead anywhere except maybe an 'error, input not understood' message.