
Hector J. Levesque

Author of Knowledge Representation and Reasoning

5+ Works 131 Members 4 Reviews

Works by Hector J. Levesque

Associated Works

Johan van Benthem on Logic and Information Dynamics (2014) — Contributor — 3 copies

Tagged

Common Knowledge

Nationality
Canada

Members

Reviews

A good introduction to AI. It examines topics beyond AI, like neuroscience and philosophy, which shows you how broad and difficult the questions that AI is attempting to answer are (if there even are answers at all). Worth picking up for the well-written endnotes alone.
 
georgeybataille | 2 other reviews | Jun 1, 2021 |
Clear and concise without being dry. Doesn't go into much detail, but doesn't try to dazzle with jargon or grand pronouncements either, and what is explained is explained clearly. Not greatly satisfying, though, and I'm still not sure exactly what the author's purpose was. His main arguments seem most relevant to specialists, but the book stays very much on the surface, and in terms of depth and difficulty it's clearly targeted at non-specialists (and not even particularly engaged non-specialists). I also found the (brief) discussion of AI risk in the final chapter rather disappointing. The author doesn't really engage with the main arguments of the doomsayers, nor even demonstrate that he understands them, so his reassurances are not very comforting, and the platitudinous closing sentence rings rather hollow. Still, this was a quick and easy read, mostly enjoyable and quite interesting in parts. Not much of it was really new to me, though, so I'd recommend it more to people who haven't read much about AI, even at the pop-science magazine/nerdy blogosphere level.
 
matt_ar | 2 other reviews | Dec 6, 2019 |
“It is not true that we have only one life to live; if we can read, we can live as many more lives and as many kinds of lives as we wish.”

S. I. Hayakawa, quoted by Hector J. Levesque in "Common Sense, the Turing Test, and the Quest for Real AI"

The problem here is the frequent ambiguity of the English language, caused by its excessively simplistic grammar, made so by the collision between Germanic and Romance that produced the English of today, essentially a creole construction. It would not arise in a language that is less mixed and more precise, e.g., German (my favourite language for rigorous thoughts and statements). Yet it should be easy enough to fix, by making the parser look up idiomatic expressions and test them against the context of the conversation. The devices of gender and declension, present in German, allow for quite precise associations.

How would the parser work in German? Imagine I want to use the following sentence: "I want to get a case for my camera; it should be strong." (Die Kamera, die Kameratasche; the same definite article for both nouns.) If I change the sentence to "I want to get a case for my camera; it should be protected," then the "it" in English refers to the camera instead of the case; can gender and declension help us in German? Well, for starters, I could associate "case" and "strong" by using the accusative for "case" in German, and I'd use the dative for "camera". So here, again, the declension gets me out of trouble.

What about "Ich möchte eine Tasche für meine Kamera kaufen; sie soll stark sein"? The pronoun "sie" is the subject of the second clause, so how can declension help us at all? The solution would be to say "Ich möchte eine starke Tasche für meine Kamera kaufen." In this case, indeed, the declension doesn't help, because both nouns, "Tasche" and "Kamera", are in the same case, the accusative. But you could drop "für" and use the dative for "Kamera" instead, "meiner Kamera", although this would not sound natural in present-day German. But this is the meaning of the dative: the "für" is implicit in the case. It would work with a human object, e.g., "meine Frau". (On the other hand, if we were to stick with "Tasche", my wife might want something more glamorous, like a Hermès Birkin handbag.) Another solution would be to avoid the pronoun altogether: "the case must be strong". But that wasn't the point of Levesque's examples, which was that conversational English is hard for a computer to understand, and it is. People don't want to have to think about whether the computer will find their sentences ambiguous; we want computers to understand us as well as another human would.
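
As a toy illustration of the gender point (not from the book; the lexicon and the agreement-only rule are my simplifying assumptions), a parser can filter pronoun antecedents by gender agreement. The Tasche/Kamera pair shows exactly why agreement alone is not enough, since both nouns are feminine:

```python
# Toy antecedent filter using German grammatical gender.
# The three-noun lexicon and the agreement-only rule are illustrative
# assumptions; a real parser would also use case, number, and context.

GENDER = {"Tasche": "f", "Kamera": "f", "Koffer": "m"}
PRONOUN_GENDER = {"sie": "f", "er": "m", "es": "n"}

def candidate_antecedents(nouns: list[str], pronoun: str) -> list[str]:
    """All preceding nouns whose gender agrees with the pronoun."""
    return [n for n in nouns if GENDER[n] == PRONOUN_GENDER[pronoun]]

# "Ich möchte eine Tasche für meine Kamera kaufen; sie soll stark sein."
print(candidate_antecedents(["Tasche", "Kamera"], "sie"))  # ['Tasche', 'Kamera'] -- still ambiguous
# With a masculine noun ("der Koffer", a suitcase) the ambiguity vanishes:
print(candidate_antecedents(["Koffer", "Kamera"], "sie"))  # ['Kamera'] -- unique
```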

One issue I have with the working definition of intelligence implied by using a Winograd schema to determine the level of artificial intelligence a system has attained is that we have a particular answer to the question posed that we consider correct. Take, for instance, the second example question posed above: "João comforted Manuel because he was so upset. Who was upset?" Naturally, the implied answer to this question is Manuel. We assume that if João is doing the comforting, then clearly he is not the person who is upset, because otherwise Manuel would be doing the comforting. I based my take on this question on the most probable of the possible situations: it would be unlikely that João would comfort Manuel if João himself were upset, unless he derived some comfort from comforting someone else who, furthermore, didn't need it. That interpretation is not wrong, simply not statistically probable, and we (as humans) would probably say it was wrong. However, I could see an artificially intelligent system arriving at both conclusions. So, according to the Winograd schema, to be intelligent you need working knowledge of common social and physical situations, as well as a database of the outcomes that resulted. Then the system needs to be able to determine which possibility is most likely to occur based on the information it has. But if you use this as a test, we could find that an AI system can consistently predict the answer to the question and still not be intelligent; what's more, it could misunderstand the context of the situation the question is simulating. In addition, you may also find that if you gave this test to a group of humans, some of them would get some of the questions "wrong" as well. So then what does that say about those humans?
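
The picture sketched here (common-sense knowledge of situations plus a most-likely-outcome judgment) fits in a few lines of code. This is a toy sketch only: the role labels and plausibility scores are invented for illustration, where a real system would estimate them from a large corpus or a language model:

```python
# Toy sketch of statistical pronoun resolution for the schema above.
# The plausibility scores are hand-invented assumptions standing in
# for statistics a real system would learn from data.

PLAUSIBILITY = {
    ("comforter", "upset"): 0.1,   # the one comforting is rarely the upset one
    ("comforted", "upset"): 0.9,   # the one being comforted usually is
}

def resolve_pronoun(roles: dict[str, str], state: str) -> str:
    """Return the candidate whose role makes the state most plausible."""
    return max(roles, key=lambda name: PLAUSIBILITY[(roles[name], state)])

# "João comforted Manuel because he was so upset."  Who was upset?
roles = {"João": "comforter", "Manuel": "comforted"}
print(resolve_pronoun(roles, "upset"))  # -> Manuel, the statistically likely reading
```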

What if instead we gave an AI system a set of seemingly random objects and no instructions on what to do with them? If those "seemingly" random objects assembled into something recognizable, and the AI system could, without instruction, create that recognizable object, then I think one could state that the system was intelligent. For example, suppose you gave a robot with AI a set of Legos that assembled into the shape of a box. If the AI system could, without instruction, assemble that box, that would show some signs of intelligence. Alternatively, when a child is given a set of Legos, with or without instructions, they will often make things other than what is shown in the instructions. Thus an even more intelligent system could not only create the aforementioned box but might also exercise creativity and create something recognizable with the parts provided that was never intended to be created. In that case, I think the AI system would need to recognize the objects it creates and possibly describe what it made and why it made it.

Actually, all this is about depth. If you go deep enough, the computer will get to the end of its rule base, rather fast. One does not need to invoke Gödel and incompleteness! 10**9 rules can be traversed in just 9 questions, or fewer, and humans know exactly what questions to ask. It's human magic. 10**12 rules, an impossibly large rule base to maintain, can be traversed in 12 questions or fewer, narrowing by a factor of 10 at a time. If you invoke Gödel, then a human can go to infinite depth, leaving the computer far, far behind. Human intelligence is truly infinite, and computer intelligence is finite. That is an idea from Aristotle, and incompleteness is the formal statement of it.
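
A back-of-the-envelope check of the arithmetic above (the branching factor of 10 per question is the reviewer's own assumption): with questions that each distinguish 10 possible answers, the number of questions needed grows only logarithmically in the size of the rule base.

```python
# Minimal sketch: k questions with `branching` possible answers each
# can single out one rule among branching**k candidates.

def questions_needed(num_rules: int, branching: int = 10) -> int:
    """Smallest k with branching**k >= num_rules (exact integer math)."""
    questions, reachable = 0, 1
    while reachable < num_rules:
        reachable *= branching
        questions += 1
    return questions

print(questions_needed(10**9))   # 9  -- a billion rules, nine questions
print(questions_needed(10**12))  # 12 -- a trillion rules, twelve questions
```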

One of the problems with Jeopardy is that I got bored with it years ago. It seemed like a contest that could be played by a computer (I thought, years ago), so not very interesting. What is interesting is this: a human might learn rules, perhaps 10**9 rules for navigating the world, but gets very bored with invoking the rules over and over, robotically. Once a human has truly learned a rule, he or she never wants to actually apply it. Boring. Humans want to transience (spell-checker error that I will leave in to make the point) the rules, break out of the box, and do something different and creative. Always. So... let the computer perform those 10**9 rules so humans can go out much farther. Is the ability to perform a task a measure of intelligence? Let's suppose that an AI device can be trained to perform a task, like facial recognition, better than a human.

That performance is likely. A car can roll faster than a human can walk; most machines can outperform a human at what they are designed for. That is why the machines exist. But we don't call a fast car intelligent, even if it can drive itself around a race course faster than an untrained human, which it can. Intelligence is the ability to drive the car, write an advertisement, hire a new DJ for your nightclub, then make a sculpture out of ice, all within minutes. Show me a computer that has one chance in a million. And the human has not even warmed up. Next he or she will invent a new blender recipe, create a programming language, and find the connection between Aristotle and Kurt Gödel, all within a few hours. Where is the computer on all this? It's way back, by a factor so great we cannot even estimate it. Is the factor 10**(-6)? 10**(-9)? 10**(-12)? 10**(-15)? More important, is that factor getting smaller and smaller, the computer falling farther behind, as the human intellect is turned loose, in part by computers?

From (I think) Terry Winograd. Can you parse this? “Fruit flies like a banana.” :-) Obviously, there are multiple interpretations of this sentence. What is the intended inference? Aside from programming in the options, can an AI determine what they are? I think the Winograd schemas may be a viable alternative to the Turing test if they can verify that the system under test is able to use all the emerging rules: if it can move from the basic representation to gradually more abstract (and more flexible) ones, if it is able to find the regularities present in them, and if it is able to extract from these the rules that can be used to generate forecasts and to plan actions and behaviors.
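
To see the ambiguity mechanically, here is a small sketch using NLTK's chart parser. The toy grammar is my own illustrative assumption (not from the book), and under it the sentence yields exactly two parse trees: the insects enjoy a banana, or fruit flies the way a banana does.

```python
import nltk  # pip install nltk; no corpus downloads needed for this

# Toy grammar (an illustrative assumption) under which
# "fruit flies like a banana" has two distinct parse trees.
grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> N | N N | Det N
    VP  -> V NP | V PP
    PP  -> P NP
    Det -> 'a'
    N   -> 'fruit' | 'flies' | 'banana'
    V   -> 'flies' | 'like'
    P   -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("fruit flies like a banana".split()):
    tree.pretty_print()  # one tree per reading: noun-noun subject vs. verb 'flies'
```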

Then what do we think? Can we believe in the definition of AI proposed by Levesque: “AI is the study of how to make computers behave the way they do in movies”?

Levesque's book is more concerned with the science aspects of AI than with its technology. That means we won't find any bright ideas on how to REALLY implement the Winograd schemas.

NB: I recommend also reading Turing's original paper before reading Levesque's book. It'll give you an idea of how the Winograd schemas can improve the way we use the Turing test.

PS: GOFAI = Good Old-Fashioned AI; AML = Adaptive Machine Learning.
1 vote
antao | 2 other reviews | Aug 13, 2018 |

You May Also Like

Associated Authors

Statistics

Works
5
Also by
1
Members
131
Popularity
#154,467
Rating
3.8
Reviews
4
ISBNs
18

Charts & Graphs