cthia wrote:
...
The problem is to write programs that can be proven correct. Impossible to do in a system as complex as the one being attempted. That's why the strategy is to attack it in pieces -- to prove parts of the system foolproof. Akin to finding the area under a curve: break it into pieces you can handle, then map them back together.
There are lots of problems that were "too complex or too big to solve" that have yielded to time and patient work. "Too big to solve" is simply an admission of insufficient knowledge of history, and of too little humility in the face of what's known and, more importantly, what isn't.
JohnRoth wrote: The objective isn't to try to prove the correctness of software which was constructed by Klingon Software Development and which looks like the illegitimate offspring of Cthulhu and the Flying Spaghetti Monster. It's to create software that works and is bulletproof. That may require throwing what we've got out and starting over.
cthia wrote: I agree, as do most others in the field. (There was a heated debate on that at the symposium.) Yet, start over with what, so that the inherent problems of the beast won't still be there?
Well, a language with provably correct strong typing might be a start: the C language is at the root of a lot of hacks, because it's just too hard to do by hand all the checking that its near-total lack of type safety mandates.
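To make that concrete, here's a minimal C sketch -- my own illustration, nothing from the project under discussion. Every line of it compiles, the first one is the classic buffer overflow behind untold exploits, and a memory-safe language with strong typing either rejects these outright at compile time or traps them at run time:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[8];
        strcpy(buf, "0123456789abcdef"); /* overflow: writes past buf, undefined behavior */

        unsigned u = -1;                 /* silent conversion: u becomes 4294967295 */

        double d = 3.14;
        int *p = (int *)&d;              /* type pun: reinterpret a double's bits as an int */
        printf("%u %d\n", u, *p);
        return 0;
    }

None of that is exotic; it's day-one C, and catching it all by hand is exactly the manual checking that doesn't scale.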
JohnRoth wrote: By the way: I regard LISP as one of the major reasons why the original approaches to artificial intelligence failed. (It's not the only one, or even the biggest.) Revisiting some of that with the advantage of another 45 years of development in linguistics is somewhere on my bucket list. It will not be in LISP.
cthia wrote: Suit yourself.
Huh? LISP is not why the original approaches to AI failed. LISP and the lambda are what originally made them possible. The failure was the lack of hardware at the time that could run such a powerful language and digest such an ambitious project. This was 1958! Lisp machines were a start. But the hardware and tech of the day lagged horribly behind a language that hasn't changed since its conception, because it is so fundamentally and intrinsically sound that it doesn't have to. You don't change lambda. Lambda changes you.
Hardware and tech are ready for LISP now, and the power of LISP is still there. Unfortunately, so is the immobilizing fear of parentheses.
Natural language understanding is the hard problem of Artificial Intelligence. Lisp does nothing to address this, other than its proponents claiming it can do anything, like the fabled sonic screwdriver.
Lisp is too powerful. Seriously. Lisp was created to take advantage of IBM 704-family hardware (the 704 first, then the 709 and 7090), whose 36-bit word carried two 15-bit address fields plus a few additional bits; the "address" and "decrement" halves of that word are where CAR and CDR got their names. Lists were its only data structure because that was all the machine could do cheaply.
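If you want to see how directly the hardware shaped the language, sketch a cons cell in C. This is a toy illustration of the idea, not Lisp's actual internals:

    #include <stdio.h>
    #include <stdlib.h>

    /* One cons cell: two pointer-sized fields per node, echoing the
       address and decrement halves of a 704-family machine word. */
    typedef struct cons {
        void        *car;
        struct cons *cdr;
    } cons;

    static cons *make_cons(void *car, cons *cdr) {
        cons *c = malloc(sizeof *c);
        c->car = car;
        c->cdr = cdr;
        return c;
    }

    int main(void) {
        int one = 1, two = 2, three = 3;
        /* Build the list (1 2 3) back to front, the way Lisp does. */
        cons *list = make_cons(&one, make_cons(&two, make_cons(&three, NULL)));
        for (cons *p = list; p; p = p->cdr)
            printf("%d ", *(int *)p->car);
        putchar('\n');
        return 0;
    }

Two fields per node, and everything -- code included -- gets shoehorned into chains of them. That's elegance born of poverty.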
There were a number of reasons the original AI enterprise failed; Lisp was not among them. One was, as you note, the fact that machines weren't powerful enough. Another was Noam Chomsky.
I know that last statement is going to be rather contentious. Now that Noam has walled himself off in a self-referential echo chamber, where he and his acolytes simply won't admit that anyone else has opinions on language worth acknowledging, let alone discussing, it's possible to begin to assess the damage he's done to the field of linguistics.
That's a strong statement, but it's not stronger than what a lot of properly qualified linguists are thinking, and quite a few are saying. During Chomsky's reign as the High Priest of linguistics, he focused on syntax to the essential exclusion of the rest of the field: semantics (the study of meaning), pragmatics (the reason why "yes" is not a proper answer to "Could you pass me the salt?"), sociolinguistics (the study of language as it's used in real speech communities), and phonology (the study of sound systems). There are other subtopics as well.
Chomsky's early work included the idea that language was too complex to have evolved gradually; it had to have appeared in a single burst of something or other, complete with an innate "language acquisition device," because language was supposedly too complex for children to learn from the speech they actually hear. "The poverty of the stimulus" is the standard phrase. Chomsky was a creationist! Who knew?
Needless to say, that first formulation failed. He's now on his fourth complete revision and it's showing all the same signs of failure as the other three. Just for starters, he's never given up on the idea of recursion being fundamental, while all the evidence is that human beings simply don't do the kind of recursion he favors.
Meanwhile, he sucked all the oxygen out of the field for other researchers and other approaches.
Making progress on natural language understanding requires studying meaning, that is, semantics, not syntax. I've run across two things in that area that seem promising. One is Frame Semantics, due to Charles Fillmore; see FrameNet for one approach, and the toy sketch below for the flavor of it. That's pretty standard. The other is Natural Semantic Metalanguage, due to Anna Wierzbicka, which seems to have solved one of the long-standing hard problems in linguistics by identifying a semantic core that underlies all human languages.
That's what I'll be tackling at some point, God willing and the creek don't rise.
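Since this crowd is mostly programmers: the core idea of Frame Semantics fits in a struct. What follows is my own toy, loosely inspired by FrameNet's Commerce_buy frame, not FrameNet's actual schema:

    #include <stdio.h>

    /* A frame is a named situation type with labeled roles ("frame
       elements"); understanding a sentence means finding the frame its
       verb evokes and filling the roles from the other words. */
    typedef struct {
        const char *frame;
        const char *buyer;
        const char *seller;
        const char *goods;
    } frame_instance;

    int main(void) {
        /* "Chuck bought a car from Jane." evokes Commerce_buy. */
        frame_instance f = { "Commerce_buy", "Chuck", "Jane", "a car" };
        printf("%s: buyer=%s, seller=%s, goods=%s\n",
               f.frame, f.buyer, f.seller, f.goods);
        return 0;
    }

The hard part, of course, is everything the struct leaves out: picking the right frame and filling its roles from messy real text.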
cthia wrote: You will not be able to hide from Lisp's "list" data structure. Even in the description of the goals, lists are used. Unavoidable, unless there's a miracle language waiting in the wings.
I don't need to hide. Lisp is headed for the Dumpster of History now that MIT has finally bowed to the inevitable and quit requiring that all its undergraduates learn it. The field has moved on.
cthia wrote: Say what you may. Gödel and the halting problem are like a waiting croc with its mouth open -- if a truly secure, and not merely somewhat secure, system is the goal. You can bank on it, or leave your currency under a mattress.
Come back in a hundred years, and we'll discuss it then. That's the appropriate time frame for statements like that.
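One footnote, though, so nobody thinks I'm dodging: the halting theorem only says that no single program can decide halting for every program-input pair. It says nothing against proving particular programs correct, which is what the verification effort is actually about. The whole diagonal argument fits in a few lines of C; halts() here is a hypothetical oracle, and the point of the sketch is that no correct one can ever be written:

    #include <stdio.h>

    /* Hypothetical halting oracle: nonzero means "program p halts on
       input i."  This stub guesses "loops forever" so the demo can run;
       the theorem says no correct version can exist at all. */
    static int halts(const char *p, const char *i) {
        (void)p; (void)i;
        return 0;
    }

    /* The diagonal program: do the opposite of whatever the oracle
       predicts about running this function on its own "source". */
    static void contrary(const char *self) {
        if (halts(self, self))
            for (;;) ;  /* oracle said "halts", so loop forever */
        /* oracle said "loops forever", so return immediately */
    }

    int main(void) {
        contrary("contrary");
        puts("halts() said contrary() loops forever; it just halted.");
        puts("Flip the stub's answer and it loops instead. Either way, the oracle lies.");
        return 0;
    }

Gödel forbids a universal oracle, not a correct program with a proof attached.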