Strong AI
In the philosophy of artificial intelligence, strong AI is the supposition that some forms of artificial intelligence can truly reason and solve problems; strong AI holds that it is possible for machines to become sapient, or self-aware, though they may or may not exhibit human-like thought processes. The term strong AI was originally coined by John Searle, who writes:
- "...according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind"[1]
The term "artificial intelligence" would equate to the same concept as what we call "strong AI" based on the literal meanings of "artificial" and "intelligence". However, initial research into artificial intelligence was focused on narrow fields such as pattern recognition and automated scheduling, in hopes that they would eventually allow for an understanding of true intelligence. The term "artificial intelligence" thus came to encompass these narrower fields ("weak AI") as well as the idea of strong AI. Some suggest that the term Synthetic intelligence would be a more accurate name for what is expected of strong AI.[1]
Weak artificial intelligence
In contrast to strong AI, weak AI refers to the use of software to study or accomplish specific problem-solving or reasoning tasks that do not encompass (or in some cases are completely outside of) the full range of human cognitive abilities. An example of weak AI software is a chess program such as Deep Blue. Unlike strong AI, a weak AI does not achieve self-awareness or demonstrate a wide range of human-level cognitive abilities; at its best it is merely an intelligent, narrowly focused problem-solver.
Some argue that weak AI programs cannot be called "intelligent" because they cannot really think. In response to claims that weak AI software such as Deep Blue is not really thinking, Drew McDermott wrote:
- "Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." [2]
He argued that Deep Blue does possess intelligence, but simply lacks breadth of intelligence.
Others note that Deep Blue merely performs a powerful heuristic search of a game tree, and state that claims of it "thinking" about chess are similar to claims of single cells "thinking" about protein synthesis; both are unaware of anything at all, and both merely follow a program encoded within them. Many among these critics are proponents of weak AI, claiming that machines can never be truly intelligent, while strong AI proponents counter that true self-awareness and thought as we know it may require a specific kind of "program" designed to observe and take into account the processes of one's own brain. Many evolutionary psychologists suggest that humans developed just such a program, especially strongly, for the purpose of social interaction or perhaps even deception, and that human skill at these behaviors is superior to that of other animals.[citation needed]
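To make concrete what "merely a heuristic search" means, the sketch below shows the bare skeleton of minimax game-tree search, the family of techniques behind chess engines such as Deep Blue. It is only an illustration: a real engine adds alpha-beta pruning, opening books, and special-purpose hardware, and the evaluate, moves, and apply_move callables here are assumed placeholders supplied by the caller.

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Return the heuristic value of `state`, searching `depth` plies ahead."""
    legal = moves(state)                 # all moves legal in this position
    if depth == 0 or not legal:          # search frontier or terminal position
        return evaluate(state)           # fall back to a static heuristic score
    children = (
        minimax(apply_move(state, m), depth - 1, not maximizing,
                evaluate, moves, apply_move)
        for m in legal
    )
    # The side to move picks the child best for it; the opponent the worst.
    return max(children) if maximizing else min(children)
```

Everything "thought-like" such a program does reduces to this mechanical recursion plus a hand-tuned evaluation function, which is precisely the point these critics press.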
General artificial intelligence
General artificial intelligence research aims to create AI that can replicate human intelligence completely, often called an artificial general intelligence (AGI) to distinguish it from less ambitious AI projects. As yet, researchers have devoted little attention to AGI, many claiming intelligence is too complex to be completely replicated.[citation needed] Some small groups of computer scientists are doing AGI research, however. Organizations pursuing AGI include Adaptive AI, the Artificial General Intelligence Research Institute (AGIRI), CCortex, CodeSimian, Novamente LLC and the Singularity Institute for Artificial Intelligence. One recent addition is Numenta, a project based on the theories of Jeff Hawkins, the creator of the Palm Pilot. While Numenta takes a computational approach to general intelligence, Hawkins is also the founder of the Redwood Neuroscience Institute, which explores conscious thought from a biological perspective.
By most measures, actual progress towards strong AI has been limited. No system can pass a full Turing test for unlimited amounts of time, although some AI systems can fool some people for a while (see the Loebner Prize winners). Few active AI researchers are prepared to publicly predict whether, or when, such systems will be developed, perhaps because of the failure of bold, unfulfilled predictions in past years of AI research. There is also the problem of the AI effect, whereby any ability, once achieved by a machine, tends to be dismissed as not a sign of true intelligence.
Strong artificial intelligence
Beyond general AI, we speak of strong AI when a machine approaches or surpasses human intelligence: when it can perform typically human tasks, apply a wide range of background knowledge, and exhibit some degree of self-consciousness.
Since human-bound measures of intelligence, such as IQ, cannot easily be applied to machine intelligence, one proposal for a more easily quantifiable measure of artificial intelligence is:
Intelligence is the possession of a model of reality and the ability to use this model to conceive and plan actions and to predict their outcomes. The higher the complexity and precision of the model, the plans, and the predictions, and the less time needed, the higher is the intelligence.[3]
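One illustrative way to compress this definition into a formula (a sketch; the symbols and the functional form are assumptions, not part of the cited source) is:

```latex
% Illustrative formalization of the quoted definition; the symbols and
% the functional form are assumptions, not taken from the source.
\[
  I \;\propto\; \frac{C_{\mathrm{model}} \cdot P_{\mathrm{plans}} \cdot P_{\mathrm{predictions}}}{t}
\]
% C_model: complexity and precision of the agent's model of reality;
% P_plans, P_predictions: precision of the plans and predictions derived
% from it; t: time needed to produce them.
```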
Philosophy of strong AI and consciousness
John Searle and most others involved in this debate address whether a machine that works solely through the transformation of encoded data could be a mind, not the wider issue of monism versus dualism (i.e., whether a machine of any type, including biological machines, could contain a mind).
Searle states in his Chinese room argument that information processors carry encoded data which describe other things. The encoded data itself is meaningless without a cross reference to the things it describes. This leads Searle to assert that there is no meaning or understanding in an information processor itself. As a result Searle claims that even a machine that passed the Turing test would not necessarily be conscious in the human sense.
Some philosophers hold that if weak AI is possible then strong AI must also be possible. Daniel C. Dennett argues in Consciousness Explained that if there is no magic spark or soul, then Man is just a machine, and he asks why the Man-machine should have a privileged position over all other possible machines when it comes to intelligence or 'mind'. In the same work, he proposes his Multiple Drafts Model of consciousness. Simon Blackburn, in his introduction to philosophy, Think, points out that someone might appear intelligent while there is no way of telling whether that intelligence is real (i.e., a 'mind'). However, if the discussion is limited to strong AI rather than artificial consciousness, it may be possible to identify features of human minds that do not occur in information-processing computers.
Many strong AI proponents believe the mind is subject to the Church-Turing thesis. This belief is seen by some as counter-intuitive and even problematic, because an information processor can be constructed out of balls and wood. Although such a device would be very slow and failure-prone, it could do anything that a modern computer can do. If the mind is Turing-computable, it follows that, at least in principle, a device made of rolling balls and wooden channels could contain a conscious mind.
Roger Penrose attacked the applicability of the Church-Turing thesis directly by drawing attention to the halting problem, arguing that certain types of computation cannot be performed by information systems yet are alleged to be performed by human minds. However, many problems of this kind have resisted centuries of human effort, and the veracity of this claim is debated.
Ultimately the truth of strong AI depends upon whether information-processing machines can include all the properties of minds, such as consciousness. Weak AI, however, is independent of the strong AI problem, and there can be no doubt that many features of modern computers, such as multiplication or database searching, would have been considered 'intelligent' only a century ago.
Methods of production
Computer simulating human brain model
This is seen by many as the quickest means of achieving strong AI, as it does not require a complete understanding of how intelligence works. Basically, a very powerful computer would simulate a human brain, often in the form of a network of neurons. For example, given a map of all (or most) of the neurons in a functional human brain, and a good understanding of how a single neuron works, it would be possible for a computer program to simulate the working brain over time. Given some method of communication, this simulated brain might then be shown to be fully intelligent. The exact form of the simulation varies: instead of neurons, a simulation might use groups of neurons, or alternatively, individual molecules might be simulated. It is also unclear which portions of the human brain would need to be modelled: humans can still function while missing portions of their brains, and some areas of the brain are associated with activities (such as breathing) that might not be necessary for thought.
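As a toy illustration of the idea, not any particular project's method, the sketch below steps a small network of leaky integrate-and-fire neurons through simulated time. The network size, constants, and random connectivity are assumptions chosen for brevity; real simulation efforts use far more detailed neuron models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100                  # neurons in the toy network (a real brain has ~10^11)
dt, tau = 1e-3, 20e-3    # 1 ms time step, 20 ms membrane time constant
v_thresh, v_reset = 1.0, 0.0
drive = 1.5              # constant external input (arbitrary units)

# Sparse random synapses: weights[i, j] is neuron j's effect on neuron i.
weights = rng.normal(0.0, 0.1, (n, n)) * (rng.random((n, n)) < 0.1)

v = np.zeros(n)                      # membrane potentials
for step in range(1000):             # simulate one second of model time
    spikes = v >= v_thresh           # neurons crossing threshold fire...
    v[spikes] = v_reset              # ...and are reset
    v += (dt / tau) * (drive - v)    # leaky integration toward the drive
    v += weights @ spikes            # instantaneous synaptic kicks
print(f"{int(spikes.sum())} neurons fired on the final step")
```

Scaling this naive scheme up to a whole brain is exactly what drives the hardware, software, and understanding requirements listed below.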
This approach would require three things:
- Hardware. An extremely powerful computer would be required for such a model. Futurist Ray Kurzweil estimates 1 million MIPS. If Kurzweil's predictions are correct, this will be available for $1,000 by 2020. At least one special-purpose petaflop computer has already been built (the Riken MDGRAPE-3), and there are nine current computing projects (such as BlueGene/P) to build more general-purpose petaflop computers, all of which should be completed by 2008, if not sooner.[4]
- Software. Software to simulate the function of a brain would be required. This assumes that the human mind arises from the central nervous system and is governed by physical laws. Constructing the simulation would require a great deal of knowledge about the physical and functional operation of the human brain, and might require detailed information about a particular brain's structure. Information would be required both on the function of different types of neurons and on how they are connected. Note that the particular form of the software dictates the hardware necessary to run it. For example, an extremely detailed simulation including molecules or small groups of molecules would require enormously more processing power than one that models neurons with a simple equation, and a more accurate neuron model would be expected to be much more computationally expensive than a simple one. The more neurons in the simulation, the more processing power it would require.
- Understanding. Finally, the approach requires sufficient understanding of the central nervous system to model it mathematically. This could be achieved either by understanding how the central nervous system works or by mapping and copying it. Neuroimaging technologies are improving rapidly, and Kurzweil predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.
Once such a model is built, it would be easy to alter and thus open to trial-and-error experimentation. This is likely to lead to huge advances in understanding, allowing the model's intelligence to be improved or its motivations altered.
The Blue Brain project aims to use one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to simulate a single neocortical column consisting of approximately 60,000 neurons and 5 km of interconnecting synapses. The eventual goal of the project is to use supercomputers to simulate an entire brain.
The direct approach
In opposition to human-brain simulation, the direct approach attempts to achieve AI directly without imitating nature. By comparison, early attempts to construct flying machines modelled them after birds, but modern aircraft do not look like birds.
The main question in the direct approach is: "What is AI?". The most famous definition of AI was the operational one proposed by Alan Turing in his "Turing test" proposal. There have been very few attempts to create such a definition since (some of them are in the AI Project). John McCarthy stated in his work What is AI? that we still do not have a solid definition of intelligence.
Neuro-Silicon Interfaces
Neuro-silicon interfaces have also been proposed.[5][6]
Prospective applications
Seed AI/technological singularity
A strong AI capable of recursive self-improvement would increase in intelligence indefinitely and exponentially, starting at the human level. The vastly superhuman intelligence thus produced would be capable of developing technology at a far faster rate than human scientists. Arguably, it would be impossible for humans of relatively minuscule intelligence to predict what it would come up with; hence the term singularity.
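As a toy illustration of why such growth is called exponential (a sketch under the bald assumption that each generation improves on its designer by a constant factor):

```python
# Toy model of recursive self-improvement: each generation designs a
# successor better than itself by a fixed factor. The starting level
# and the factor are illustrative assumptions, not predictions.
intelligence = 1.0        # define 1.0 as the human level
improvement_factor = 1.1  # each generation is 10% better than the last

for generation in range(1, 51):
    intelligence *= improvement_factor   # the successor improves on its designer
    if generation % 10 == 0:
        print(f"generation {generation}: {intelligence:.1f}x human level")
# A constant per-generation factor gives exponential growth:
# after n generations, intelligence = 1.1 ** n (about 117x at n = 50).
```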
The Arts
A strong AI may be able to produce original works of music, art, literature and philosophy. However, strong AI is not a necessary requirement for the creation of novel works of art. Weak AI painting programs have already been created that manipulate a paintbrush through external hardware to paint original, non-random and interesting pieces of art. AARON is one example of such software. More information can be found here: http://www.stanford.edu/group/SHR/4-2/text/cohen.html
Cognitive Robotics
Cognitive Robotics involves applying various fields of Artificial Intelligence to Robotics. Strong AI in particular would be a great asset to this field.
Comparison of computers to the human brain
Parallelism vs speed
The brain gets its power from performing many parallel operations, a computer from performing operations very quickly.
The human brain has roughly 100 billion neurons operating simultaneously, connected by roughly 100 trillion synapses. Although estimates of the brain's processing power put it at around 10^14 neuron updates per second,[2] it is expected that the first unoptimized simulations of a human brain will require a computer capable of 10^18 FLOPS. By comparison, a general-purpose CPU (circa 2006) operates at a few GFLOPS (10^9 FLOPS).
However, a neuron is estimated to spike at most 200 times per second (giving an upper limit on the number of operations).[citation needed] Signals between neurons are transmitted at a maximum speed of 150 metres per second. A modern 2 GHz processor operates at 2 billion cycles per second, about 10,000,000 times faster than a human neuron, and signals in electronic computers travel at roughly the speed of light (300,000 kilometres per second).
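A quick back-of-envelope check ties these figures together (assuming, for the upper bound, that every neuron fires at the 200 Hz limit):

```latex
% Back-of-envelope arithmetic for the figures above.
\begin{align*}
  \text{neuron updates per second} &\approx 10^{11}\ \text{neurons} \times 200\ \text{Hz}
      = 2 \times 10^{13} \approx 10^{14},\\[2pt]
  \text{speed ratio} &= \frac{2 \times 10^{9}\ \text{cycles/s}}{200\ \text{spikes/s}} = 10^{7}.
\end{align*}
```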
Perception
If nanotechnology allowed the construction of devices with the brain's size and parallelism but running at the speed of a modern computer, the human model within would experience time as passing more slowly than it really was (relative to how humans experience time). An artificial brain could thus experience the elapsing of one minute as taking much longer, perhaps as if it were several hours. However, since the perception of how long something takes differs from the actual duration, how the artificial brain perceives a given period would depend entirely on the calculations and the specific type of cognition occurring during it.
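In symbols (a sketch; the speedup factor S and the example value are assumptions):

```latex
% Subjective vs. wall-clock time for a mind simulated S times faster
% than biology; S and the example value below are assumptions.
\[
  t_{\mathrm{subjective}} = S \cdot t_{\mathrm{real}}
\]
% e.g. S = 200 stretches one real minute into 200 subjective minutes,
% i.e. roughly the "several hours" described above.
```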
External links
- World Lectures from Tokyo hosted by Rolf Pfeifer. — (English speakers please click the second link, "Film z wykladu", to download the .avi files.)
- Artificial General Intelligence Research Institute
- The Genesis Group at MIT's CSAIL — Modern research on the computations that underlie human intelligence
- Essentials of general intelligence, article at Adaptive AI.
- Problems with Thinking Robots
References
- ^ Searle, John (1980). "Minds, Brains and Programs". The Behavioral and Brain Sciences 3 (3): 417–457.
- ^ Russell, Stuart; Norvig, Peter (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
- Goertzel, Ben; Pennachin, Cassio (eds.) (2006). Artificial General Intelligence. Springer. ISBN 3-540-23733-X.
- Kurzweil, Ray. The Singularity Is Near. ISBN 0-670-03384-7.
- Singularity Institute for Artificial Intelligence
- Expanding Frontiers of Humanoid Robots
- McDermott, Drew (May 14, 1997). "How Intelligent is Deep Blue?". The New York Times.