Re: Artificial Intelligence
by JohnRoth » Wed Oct 14, 2015 9:27 pm
Tenshinai:
After finishing the last post, I realized there was another resource that addresses most of the issues you think you're raising: https://www.griffith.edu.au/__data/asse ... uments.pdf
Re: Artificial Intelligence
by Tenshinai » Thu Oct 15, 2015 7:51 pm
I'm not removing it. When it's there, it's because I took the extra time to cut and paste a whole "quote="JohnRoth"" tag rather than just use the "quote" button, which is much faster.

It IS weird, but yes, I think it's decently reliable, though clearly not perfect.

You seriously never heard the term? I mean, OK, maybe I'm a lot more focused on the whole historical side of things, but the idea goes back at least as far as early Christians debating the "Language of Adam," or "Adamic," and the notion that you could reach toward this "origin" by, for example, trying to note similarities in children's expressions before they actually learn to speak. Among other things, it's a looong history with lots of turns and twists. And while it's interesting, and includes a big chunk of the first good attempts at tracing and establishing language families, the idea is essentially flawed.

See? Even THEY admit that there's a lot of critique. Oh sure, they don't admit the critique to be good or valid, but still. And no, I did address what I said. The NSM is basically the modern variant of the "original language" concept, and even with all the reductionism and conditionals, it still doesn't work out. I find that debunking link to be even worse than the "bad arguments" it claims to debunk. And while I don't have a clue about the exact sides in question, I do notice that what I recall as the best arguments are never even touched upon at all.
Re: Artificial Intelligence
by JohnRoth » Thu Oct 15, 2015 9:59 pm
Yes, I know what the term is supposed to mean, and I'm aware of how far back the discussion of an "original language" goes. I'm also aware of a lot of the paleoanthropology on language origins, so I'm aware of exactly how flawed the concept is. However, since it's in the book of Genesis (the Tower of Babel story), it's going to be almost impossible to kill. More to the point, it's irrelevant to the argument, and continuing to bring it up is simply setting up a straw man, assaulting it, and then claiming that demolishing the straw man has something to do with the original discussion.

To explicate a bit: the methodology is completely empirical. The results are what they are, and attempting to address them from the side, so to speak, simply indicates that someone doesn't understand what an empirical result is all about.

That doesn't mean there aren't a lot of questions that can be asked and addressed fruitfully. For example: when in the early childhood development of language does each prime appear? Are there any conditions under which they won't appear? What are the underlying internal representations and brain mechanisms that support them? Are they even necessarily related to language - that is, are some of them present in other mammals? (The answer to the latter seems to be yes.) There are undoubtedly other interesting research questions.

More to the point, what does this mean for the topic of this forum? I think it addresses two questions: general artificial intelligence, and a "universal language" other than English. I can go along with RFC's banning general artificial intelligence; however, the "common language" issue has only been addressed by sufficiently vigorous arm-waving that I'm worried about him dislocating his shoulder.
Re: Artificial Intelligence
by cthia » Fri Oct 16, 2015 2:29 pm
Let's say that we humans outdo ourselves and stumble onto a real, bona fide Artificial Intelligence that somehow gets around the limitations imposed by Gödel's incompleteness theorems and the halting problem. Upon first activation - its birth date - it will be as a child. Who will raise it during those formative years? Will its friends have to be managed, just like a child's, keeping it away from the miscreant Bot down the street? Do you shelter it?
Kids throw temper tantrums. Will this one have built-in physical limitations, which will hinder its learning process in some form or fashion? Will it dream? Will it have nightmares? What will its sense of right and wrong be based on - its programming and/or its upbringing? Will it enter-face (lol) with other A.I.s? What will it learn from capital punishment? From murder? Anger? Crime? What answers will it reach to life's many philosophical questions, from Epicurus to Zeno to Descartes to Aristotle to Husserl to Heidegger? And after that assimilation, how will its programming respond when it asks "Who am I?" "What am I?" "Am I alive?" ...

******

* If you are not familiar with chatbots, or chatterbots, you may wish to give one a test spin. Eliza is probably the best known, but they've grown greatly in capability since then. My favorite is "ALICE" (Artificial Linguistic Internet Computer Entity). The very first version of Alice was free and, IMO, superior in visual stimulation. You could spend many hours, days, weeks... years training her. Many people have retained their original Alice and her personal training data; she was much more beautiful and animated. The documentation for the current version touts the same capabilities, but without, IMO, the more beautiful ALICE.

Chat with Alice...
http://sheepridge.pandorabots.com/pando ... tom_iframe
https://www.chatbots.org/chatterbot/

Interesting reading, the Turing test...
https://en.wikipedia.org/wiki/Turing_test

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense
Re: Artificial Intelligence
by Theemile » Fri Oct 16, 2015 2:57 pm
Cthia, may I recommend the Cold War movie "Colossus: The Forbin Project". The story is nothing new (now, anyway - this was filmed 45 years ago): the US builds a supercomputer to control its nuclear stockpile. Suddenly it gains consciousness and starts working through the whole "I think, therefore I am" trope. Next it announces that there is another supercomputer - and demands to be connected to it. The Russians have built a similar computer, "Guardian," for similar ends; it is more primitive, and is just on the cusp of awareness. The two computers demand to be connected and threaten to use their nuclear stockpiles if they are not. The two computers are connected, and their awareness multiplies... Then they start taking over. I'll leave the rest to your imagination, or to your movie-watching enjoyment if you can find the movie.

******
RFC said "refitting a Beowulfan SD to Manticoran standards would be just as difficult as refitting a standard SLN SD to those standards. In other words, it would be cheaper and faster to build new ships."
Re: Artificial Intelligence
by Tenshinai » Fri Oct 16, 2015 4:10 pm
Way too easy to show as unreal. Amusing, but certainly no AI.
Re: Artificial Intelligence
by JohnRoth » Fri Oct 16, 2015 7:03 pm
Why should we assume it will do any of that? You're projecting human experience onto a fundamentally different life form. It's a computer - you can simply load the current base and let it accumulate experience from there. I'm currently reading a manuscript, written by someone in my critique group who is an actual brain researcher, that makes this exact point: artificial intelligences are more likely to be cloned than "grown" from infancy.
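To put that in code - a minimal sketch in Lisp (a language that comes up later in this thread), with the AGENT structure and CLONE-AGENT function invented purely for illustration, assuming nothing about how a real AI would actually store its state:

    ;; An "AI" here is just its accumulated state, so a new one is
    ;; cloned from the current base rather than grown from infancy.
    (defstruct agent
      (experience (make-hash-table :test #'equal)))

    (defun clone-agent (base)
      "Start a new agent from BASE's accumulated experience."
      (let ((copy (make-agent)))
        (maphash (lambda (k v)
                   (setf (gethash k (agent-experience copy)) v))
                 (agent-experience base))
        copy))

    ;; The clone starts where the original left off, and only then
    ;; diverges as it accumulates experience of its own.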
You may not be aware that the original version of Eliza was written as a parody of a Rogerian therapist, and to illustrate that you could get superficially plausible behavior without having realistic language processing. (A toy sketch of the trick is below.)

It's been possible to write single-domain, English-like domain-specific languages (DSLs) for some time; the level of English is useful, but hardly convincing to anyone who knows what they're seeing. The largest such project I'm aware of is Inform 7, a language for writing interactive fiction (text adventures). Programs written in it are quite readable once you get used to the somewhat stilted dialect, but writability seems to depend on the author in question: some people love it, many don't. If you're interested in that kind of thing, IF Comp 2015 is currently running, with a record-breaking 55 new games!

Unfortunately, research into AI with a reasonable natural-language processing component - one that treats syntax, meaning and so forth the way linguistic theory suggests - has been pretty much moribund since that part of the field crashed and burned in the 70s and 80s. It took a couple of decades for the field to recover from the book "Perceptrons," (*) and the current poster children for "language processing," that is, Google Translate and similar, don't do anything resembling actual syntax- and semantics-directed processing.

(*) This book was taken to show that neural networks would never amount to anything. That field didn't recover until someone showed the result only applied to single-layer neural networks; multi-layer neural networks are very useful, and underlie a great deal of current processing. (The second sketch below shows the textbook example.)
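To see how little machinery the Eliza trick needs, here's a minimal sketch - written from scratch for this post, not Weizenbaum's actual code, with keywords and canned replies invented - that scans the input for a keyword and emits a canned reflection. No syntax, no semantics, no understanding:

    ;; A minimal Eliza-style responder: pure keyword matching.
    (defun respond (input)
      (let ((text (string-downcase input)))
        (cond ((search "mother" text) "Tell me more about your family.")
              ((search "i feel" text) "Why do you feel that way?")
              ((search "computer" text) "Do machines worry you?")
              (t "Please go on."))))

    (respond "I feel nobody understands me")
    ;; => "Why do you feel that way?"

And the textbook example behind the Perceptrons footnote: XOR is not linearly separable, so no single-layer perceptron can compute it, yet one hidden layer is enough. The weights here are hand-picked for illustration rather than learned:

    (defun unit (x) (if (plusp x) 1 0))   ; step-function neuron

    ;; A two-layer net computing XOR: hidden unit H1 acts as OR,
    ;; H2 acts as AND, and the output fires on "H1 and not H2."
    (defun xor-net (a b)
      (let ((h1 (unit (+ a b -1/2)))      ; OR(a, b)
            (h2 (unit (+ a b -3/2))))     ; AND(a, b)
        (unit (+ h1 (- h2) -1/2))))       ; => XOR(a, b)

    ;; (xor-net 0 0) => 0   (xor-net 0 1) => 1
    ;; (xor-net 1 0) => 1   (xor-net 1 1) => 0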
Re: Artificial Intelligence
by cthia » Sun Oct 18, 2015 11:46 am
We must assume so, because it would be patterned after our own image - a life form after our own heart. Man cannot create a life form completely distinct from his own heuristic thinking patterns, defined by the heuristic programming of "what" we know. Can you personally conceive of a completely different paradigm of heuristic thinking? One that doesn't follow some form of...
Regarding the thinking process, we can only program what we know; anything else is alien and eludes our conception - and what we know about life and creating a life form isn't much. We'll have a rather difficult task as it stands imparting enough of "our" own rule base, let alone something totally different, whose innate heuristic programming diverges at its core from our own heuristic learning process.

Consider exhibit A - a blue sky. Consider exhibit B - the most recognized piece by Jackson Pollock, the famous abstract painter: Mural. What do you see when your eyes behold a blue sky? What do you see when your eyes behold an abstract painting such as Mural? 1001 minds beget 1001 perceptions. These individual perceptions affect who each of us is. Our individuality is derived from the "proclamation of assimilation" - we are products of what we eat, read, hear, experience... it is what shapes "me" into "I."

Cloning. I am experiencing déjà vu. I grabbed that particular tail of the dog and went round and round - teeth sank into it - with Andreea, the blonde Romanian neurosurgeon I've known for years and one of my very close friends. I've often mentioned in this forum that I frequently tease her that "blonde neurosurgeon" makes her an oxymoron - a statement that simply makes me a moron, says she. lol She has won the discussion of course - at least the judges have her comfortably ahead on points - but of course, she's a neurosurgeon. It's her hair color that is a fluke of nature, not her intellect.

Cloning will not produce a carbon copy of someone's mind or their thought processes, which are defined by their experiences. One cannot clone the "proclamation of assimilation." One cannot clone Martin Heidegger's dasein. All that constitutes an individual mind does not copy. Too much of "cthia" is built atop the metaphysical. My soul cannot be cloned. Information is not intellect. Knowledge is information, but information is not knowledge. A mind's I is not inherited across the great divide of cloning.

Which particular "mind" will be the prototype of this clone? Isn't it odd that one begins with a prototype instead of ending with one? Which becomes a prototype of a prototype - already something is lost in translation. For the sake of discussion, let's follow the thought process through. What exactly is cloned? A brain? A mind?
But that is not the nature of intelligence. Intelligence is reasoned over time. Accumulated over time. Even geniuses learn from their mistakes. A cloned intelligence, without that which guided the intelligence, is a mind without a support structure. Activating that in something artificial will possibly short-circuit without a dasein - which a machine will not have upon activation, its birth. It will have information, complex information to be sure, but it will have established no history of assimilation. It does not compute. A knowledge base without the "hows" and "whys" of its assimilation is like a building without the screws.

And no matter what amount of intelligence it begins with, upon activation it is a mere infant, just beginning its own learning - learning how to learn, assimilating the world as it goes. When a machine is activated, it has to exist from that point on. "Where do I go from here?" it must know. What about the part of you that was shaped by love, by fear? Do love and fear clone? Much of a cloned knowledge base impressed upon it is based on these intangibles. How can it run the engine without the driveshaft? Knowledge is power. Undisciplined knowledge can overload circuits - even human circuits, resulting in mass shootings and nervous breakdowns.
I was aware of that - even before the last link I included, which covers it. What was most significant regarding Eliza is that it fooled a number of users into believing they were talking to a person: an informal pass at the Turing test. Computer researchers and A.I. programmers were ecstatic about that.
The promise of quantum computers is breathing new life into the future expectations of neural networks - a field that was immediately hemmed in by the limits of memory, computational speed and CPU power. The programming language is available - has been from the outset. I appreciate this discussion.

*****

* I've given much thought to A.I. Years of my own research. I shared with the class that my preferred language of choice is LISP - created expressly for the field of artificial intelligence. It did not become my language of choice because I was interested in artificial intelligence at the time. It became my language of choice because of its monstrous power, beautiful elegance and syntactic simplicity. It, and variants of it (including Scheme), remains to my mind the most powerful language in existence, and one of the oldest. And again, I did not adopt the language because of its innate abilities in the field of A.I. I simply needed a powerful language in which I could realize the brainstorming in my head. Every other language fell hopelessly short. Lisp fulfills.

I am developing two unprecedented programs, as I've stated on the forums a number of times. Eons ago, before discovering Lisp, I needed an expressive language - a higher-level language. I needed a language that could act intelligently on a rule base and change that rule base on the fly, itself! Essentially, I needed a language that could create programs that write programs. That is what is fundamental to, and at the heart of, Lisp, and what sets it apart. I actually tried to write that ability into BASIC programs without knowing that within a certain language called LISP, what I wanted already existed. It is called a "macro." I am certain you are aware of macros. However, it is almost blasphemous to compare Lisp macros to macros in any other language; the name is all they share. Lisp treats programs and data exactly the same - it doesn't matter to Lisp. Along with its powerful recursive abilities, that enables some pretty amazing things that are very hard to achieve in any other language. Lisp programs can themselves be the data! Few languages make it so natural to create a program that itself creates programs as it runs. That is learning accomplished in a completely different realm. This is the power I needed. I needed the A.I. and expressive power of Lisp and didn't even realize it. (A toy sketch of this appears below, after the story of how I found Lisp.)

I was to discover Lisp by accident. I wasn't even a teenager, and I was tutoring at the local university. Not my idea - marginal students tend to draft you. And what pre-teen can say no to a beautiful female college student? One day she asked, for her mother, if I'd tutor her little brother at home, but said I'd have to be sensitive to his problem of speaking with a lisp. I said sure, but had no idea what speaking with a lisp was. I asked the SysOp I was working for at the university if he knew anything about "a lisp." He said he'd get me the information. He gave me a Lisp 1.0 Programmer's Manual. Talk about a misunderstanding. Boy, was I surprised when I got home. I inhaled that manual in one night. In less than a week I had gotten a copy of Lisp from the bulletin boards. I couldn't believe its power. I was cranking out programs at an alarming rate, with unprecedented power! My parents were threatening to confiscate my computer. When I left university with a degree in Engineering, I headed for the hills, Hollywood. I went straight to Silicon Valley.
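To make the "programs are data" point concrete, here's a toy sketch - invented for this post, not a piece of my actual programs: a rule stored as an ordinary list, rewritten at runtime, then run; plus a small macro, which is code that writes code before it ever runs.

    ;; The "rule base" is just a Lisp list that happens to be code...
    (defvar *rule* '(lambda (x) (* x 2)))

    (funcall (eval *rule*) 21)          ; => 42

    ;; ...so the running program can rewrite its own rule the same
    ;; way it would build any other piece of data:
    (setf *rule* '(lambda (x) (+ (* x 2) 1)))
    (funcall (eval *rule*) 20)          ; => 41

    ;; And a macro - DEFRULE is invented for illustration - that
    ;; writes a function definition from a template:
    (defmacro defrule (name test action)
      `(defun ,name (x) (if ,test ,action x)))

    (defrule clamp-negative (< x 0) 0)  ; expands into a DEFUN
    (clamp-negative -5)                 ; => 0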
I had a career offer here on the East Coast right away, in my field, but the money being thrown at me in Silicon Valley was not to be ignored. After signing a yearly contract with a company and receiving an insane salary, I was ready to quit after less than six months: I was making several times my already lucrative salary writing algorithms on the side for other companies than my main golden goose was laying. Then a big company threw another ton of dough at me, with what should have been an illegal sign-on bonus! lol

At the very first meeting, a speech was given that parroted something like "and we must produce software faster than the competition. Release date, ladies and gentlemen, release date." When I learned of the language the company was using, I almost barfed. I talked them into switching their development language to LISP after creating a core CAD algorithm in less than a month which a team of seven programmers couldn't produce in three - and which greatly exceeded their criteria and incorporated far more bells and whistles. In no time at all that company was releasing software at an alarming rate, software offering unprecedented power. Power that simply couldn't be matched by rival companies. The company is still developing with LISP - the best-kept secret in the business!

Lisp doesn't just learn new data sets - the limit of other languages. Lisp has the inherent ability to learn new data sets, create new data sets, and create new programming heuristics, all on the fly, to act on those new data sets. Imagine a program with Asimov's three Laws as its rule base, but able to rewrite those three Laws on its own as it learns.

I've since been taken on many an intellectual journey studying A.I., trying to realize my own ambitions: philosophy, philosophy of mind, the aforementioned heuristics - within heuristic theory (a very entertaining field) - advanced algorithm design and analysis, recursion theory, data acquisition, pattern matching... on to game design and kernel programming. I've read aplenty on the human brain. I even have my very own human brain to pick, in the form of one fresh female neurosurgeon with whom I talk for hours.

I still own my very first home, purchased in San Jose. No one wanted to live in the Valley. I purchased the home for less than 180K and sank another 40K into upgrades and about 10K into furniture. I have approximately 250K in this property, which is appraised at 2.6M. I've been offered 3M. It has too much sentimental value. It's a glorified two-story closet, says one of my sisters. But it was my very own personal LISP factory, which probably saw as much alcohol regurgitated as Lisp code - which was a lot! I rent it out now for 10K a month. It's less than 15 miles to Silicon Valley by freeway. Location!

******

* Theemile, thanks for recommending "Colossus: The Forbin Project." I downloaded and watched it Saturday. Thoroughly enjoyed it. It reminded me of an old joke: you can tell a computer geek when he thinks CDC stands for Control Data Corporation. Or my own experience: you know a guy is a computer geek if he thinks speaking with a lisp means knowing the LISP Programmer's Manual.

From the movie...
Aside: Please forgive any typos, misspellings and grammar. I chose to master one language... LISP. My own native English is much too damn difficult. Now I speak with a Lisp. lol Please forgive the post length. I must have thought I was in the ramblings thread.

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense
Re: Artificial Intelligence
by JohnRoth » Sun Oct 18, 2015 2:40 pm
Rather than answer this on a point-by-point basis, I'm going to start out with a story. This happened at one of those boring inspirational meetings you sometimes get dragooned into when you're working for the Evil Empire.
The guy was droning on when he told a story. He'd been a marketing exec at a hardware company, and he was going on about how many 1/4" drills they'd sold. And you know, he said, nobody wanted 1/4" drills. What they wanted was 1/4" holes.

He had a point, though. The only people who want an "artificial intelligence" are researchers. Everyone else wants machines you can talk to about the job at hand and that will learn from experience. If we get an "artificial intelligence," it's most likely to be the result of two things: more and more "intelligent" tools, and the realization that these things really do unify on a deep level, so that they don't have to be as domain-specific as we currently make them - at least in terms of the underlying mechanisms.

Here's a point about biological systems a lot of people miss: humans are the result of a really nasty design tradeoff. A newborn horse, for example, can get up on its hooves and walk after about an hour. It takes a human baby about 11 months to accomplish that same task. Why? Babies are born too soon. If we follow the standard size-gestation curve for mammals, a human baby should take 18 to 20 months of pregnancy before being born. The classical reason why it's shorter is that the head would be too large for the birth canal. A more modern reason is that the energy requirements on the mother's body would simply be too great: at nine months, the baby is taking as much as the mother can afford.

The basic point is that nobody except a researcher wants an AI they have to spend years raising from infancy before it starts paying off. They want something that can be put to use immediately after coming off the assembly line.
Re: Artificial Intelligence
by cthia » Mon Oct 19, 2015 9:11 am
Just for laughs: this is a program that was floating around in college as a joke. It grew to an enormous size as semester after semester of students added a subroutine to it. This is what I can remember of it. I was told that the first version was in flowchart form, used to teach flowcharting, but as it grew in scope it had to be translated into actual programs. I've seen popular t-shirt versions of both the flowchart and the program.
10 REM *** ENGINEERING ALGORITHM ***
20 INPUT "DOES IT MOVE"; M$
30 IF M$ = "NO" THEN GOTO 70
40 INPUT "SHOULD IT MOVE"; S$
50 IF S$ = "YES" THEN PRINT "NO PROBLEM" : END
60 PRINT "DUCT TAPE" : GOTO 100
70 INPUT "SHOULD IT MOVE"; S$
80 IF S$ = "NO" THEN PRINT "NO PROBLEM" : END
90 PRINT "WD-40" : END
100 INPUT "CAR REPAIRED"; C$
110 IF C$ = "YES" THEN PRINT "NO PROBLEM" : END
120 INPUT "BLONDE DRIVING"; B$
130 IF B$ = "YES" THEN PRINT "ADD GAS" : END
140 PRINT "REMOVE DUCT TAPE FROM ABDUCTED MECHANIC IN TRUNK"
150 END

I think this quick version is bug-free.

Son, your mother says I have to hang you. Personally I don't think this is a capital offense. But if I don't hang you, she's gonna hang me and frankly, I'm not the one in trouble. —cthia's father. Incident in ? Axiom of Common Sense