A more or less flexible or efficient approach can be taken depending on the requirements established, which influences how artificial the intelligent behavior appears. AI is generally associated with computer science, but it has many important links with other fields such as mathematics, psychology, cognition, biology and philosophy, among many others. Our ability to combine knowledge from all these fields will ultimately benefit our progress in the quest to create an intelligent artificial being.
Introduction:
"Artificial intelligence is that field of computer usage which attempts to construct computational mechanisms for activities that are considered to require intelligence when performed by humans" - DEREK PARTRIDGE
The most widespread definition of AI is the so-called Turing test. Alan Turing was a British mathematician famous for the invention of the theoretical Turing machine and for deciphering German codes during World War II. The Turing test is quite simple: we place something behind a curtain and it speaks with us; if we cannot tell the difference between it and a human being, then it is AI.
However, this definition has existed for more than fifty years, so we are going to create a newer, more up-to-date one. Turing's definition suggests that an intellect is a person with knowledge gained through the years. If this is so, then what about a newly born baby? Is it an intellect? Our answer will be "yes". Our definition of an intellect will be: a thing that knows nothing but can learn.
What is AI?
Artificial intelligence is (somewhat circularly) defined as the field concerned with intelligent behavior in artifacts.
Intelligent behavior in turn involves perception, reasoning, learning, communicating, and acting in complex environments. One of AI's long-term goals is the development of machines that can do these things as well as humans can, or possibly even better. Another goal of AI is to understand this kind of behavior, whether it occurs in machines, in humans, or in other animals. Thus, AI has both engineering and scientific goals. AI has been surrounded by controversy, centered on the question "Can machines think?" Some people think that building a machine capable of complex thought is impossible.
By analogy, recreating a weather phenomenon consisting of sun, clouds, rain and all its richness is an impossible task; similarly, creating full-scale human intelligence is immensely complex. Whether human-level thinking machines can be built is still undecided, but progress toward that goal has been steady. When people think of a machine, the things that come to mind are gears, clutches, clanking sounds, metal, steam and the like. Consider, for example, a simple virus called the E6 bacteriophage, as shown in the diagram. The virus attaches itself to the cell wall of the bacterium with its tail fibers, punches through the wall and injects its DNA into the bacterium. Thousands of copies are then manufactured, forming new viruses that repeat the process. This natural assembly looks like an automated machine injecting DNA; why should it not be called a machine made of cells and proteins?
What is an artificial neural network?
An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system.
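As a concrete illustration, the basic unit of such a network can be sketched in a few lines of Python. This is a minimal sketch of a single artificial neuron (a perceptron); the weights, threshold, and the AND example are illustrative choices, not part of any particular library or of the text above:

```python
# A single artificial neuron: it weights its inputs, sums them, and
# "fires" (outputs 1) if the sum reaches a threshold, loosely mimicking
# how a biological neuron integrates incoming signals.

def neuron(inputs, weights, threshold):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Illustrative weights and threshold implementing a logical AND
# of two binary inputs.
and_weights = [1.0, 1.0]
print(neuron([1, 1], and_weights, threshold=1.5))  # fires: 1
print(neuron([1, 0], and_weights, threshold=1.5))  # stays off: 0
```

A full network chains many such units in layers; learning then amounts to adjusting the weights rather than reprogramming.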
Why would the implementation of artificial neural networks be necessary? Although computing these days is truly advanced, there are certain tasks that a program written for a common microprocessor is unable to perform; even so, a software implementation of a neural network can be made, with its own advantages and disadvantages.
Advantages:
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, the network can continue without any problem because of its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in virtually any application without any problem.
Disadvantages:
• A neural network needs training before it can operate.
• The architecture of a neural network differs from the architecture of microprocessors and therefore needs to be emulated.
• Large neural networks require long processing times.
Logical AI
Logical AI involves representing knowledge of an agent's world, its goals and the current situation as sentences in logic. The agent decides what to do by inferring that a certain action or course of action is appropriate for achieving its goals.
We briefly characterize a large number of concepts that have arisen in research in logical AI. Reaching human-level AI requires programs that deal with the common sense informatic situation. Human-level logical AI requires extensions to the way logic is used in formalizing branches of mathematics and physical science. It also seems to require extensions to the logics themselves, both in the formalism for expressing knowledge and in the reasoning used to reach conclusions. A large number of concepts need to be studied to achieve logical AI of human level.
This article presents candidates. The references to articles concerning these concepts, though numerous, are still insufficient, and I'll be grateful for more, especially for papers available on the web.
Search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem-proving program. Discoveries are continually made about how to do this more efficiently in various domains. Search is inherent to the problems and methods of artificial intelligence (AI). That is because AI problems are intrinsically complex.
Efforts to solve problems with computers, which humans can routinely solve by employing innate cognitive abilities, pattern recognition, perception and experience, invariably must turn to considerations of search. All search methods essentially fall into one of two categories: 1) exhaustive (blind) methods and 2) heuristic or informed methods. The non-optimal, uninformed approaches covered include state-space search, generate and test, means-ends analysis, problem reduction, and/or trees, depth-first search and breadth-first search.
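As an illustration of the uninformed methods just listed, breadth-first search can be sketched as follows. The graph and node names are invented for the example; this is a minimal sketch, not a library implementation:

```python
from collections import deque

# Breadth-first search: explore states level by level until the goal
# is reached. As a "blind" method it uses no domain knowledge, only
# the structure of the state space, so it examines many possibilities.
def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])        # paths awaiting expansion, FIFO
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                # first path found is shortest
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None                        # goal unreachable

# Hypothetical state space for the example.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "E": ["G"]}
print(breadth_first_search(graph, "A", "G"))  # ['A', 'C', 'E', 'G']
```

Depth-first search differs only in expanding the most recently added path first (a LIFO stack instead of the FIFO queue).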
Under the umbrella of heuristic (informed) methods fall hill climbing, best-first search, bidirectional search, the branch-and-bound algorithm, and bandwidth search. Tree-searching algorithms for games have proven to be a rich source of study and empirical data about heuristic methods; those covered include the minimax procedure, the alpha-beta algorithm, iterative deepening, the SSS* algorithm, and SCOUT.
Pattern recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern.
For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural-language text, in a chess position, or in the history of some event, are also studied. These more complex patterns require quite different methods than the simple patterns that have been studied the most.
Representation
Facts about the world have to be represented in some way; usually languages of mathematical logic are used. In mathematical terms, such a representation reads roughly like this: u(x) ⇔ a(x) ∧ b(x) ∧ c(x)
In simple terms this means that an object x is an instance of a concept u if the conditions a, b and c hold true simultaneously. So, for instance, suppose we are given a concept that says one is eligible to drive a car if one has a license, a car, and knowledge of driving. In propositional calculus this would be stated as: eligible ⇔ license ∧ car ∧ knowledge. However simple to denote, zero-order logic is only capable of describing concepts in a limited context and does not hold much descriptive power.
Inference
From some facts, others can be inferred.
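The driving-eligibility concept described above reduces to a plain conjunction of propositions, which can be sketched directly; the predicate names are invented for illustration:

```python
# Zero-order (propositional) logic: eligibility is the conjunction of
# three atomic propositions. There are no variables or quantifiers,
# which is exactly the limitation the text mentions.
def eligible_to_drive(has_license, has_car, knows_driving):
    return has_license and has_car and knows_driving

print(eligible_to_drive(True, True, True))   # True
print(eligible_to_drive(True, False, True))  # False: no car
```

To say something like "every licensed owner of some car may drive it," one already needs first-order logic with quantified variables.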
Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning.
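The bird example can be sketched as a default rule whose conclusion is withdrawn when contrary evidence arrives. The set-of-facts representation below is a deliberately simple invented one, not a real non-monotonic logic:

```python
# Non-monotonic default reasoning: conclude "flies" by default from
# "bird", but withdraw the conclusion if the known facts defeat it.
def can_fly(facts):
    if "penguin" in facts:        # contrary evidence defeats the default
        return False
    return "bird" in facts        # default rule: birds fly

print(can_fly({"bird"}))              # True  (default applies)
print(can_fly({"bird", "penguin"}))   # False (conclusion withdrawn)
```

Note that adding the fact "penguin" removed a conclusion; in monotonic logic, adding premises can never do that.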
Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
Common sense knowledge and reasoning
This is the area in which AI is farthest from human level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed.
The Cyc system contains a large but spotty collection of common sense facts.
Learning from experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
Planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
Epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
Ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are.
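The planning process described earlier, where general facts plus a goal yield a sequence of actions, can be sketched as forward search over states. The key-and-door domain, the action format (name, preconditions, additions, deletions), and all the names below are invented for the example:

```python
from collections import deque

# Forward state-space planning: starting from the initial facts, apply
# any action whose preconditions hold, until a state satisfying the
# goal is reached. Returns the sequence of action names.
def plan(initial, goal, actions):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # preconditions satisfied
                new_state = frozenset((state - delete) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None                                # no plan exists

# Tiny invented domain: pick up a key, then open a door.
actions = [
    ("pick-up-key", {"key-on-table"}, {"holding-key"}, {"key-on-table"}),
    ("open-door",   {"holding-key"},  {"door-open"},   set()),
]
print(plan({"key-on-table"}, {"door-open"}, actions))
# ['pick-up-key', 'open-door']
```

This add/delete style of describing action effects echoes the STRIPS language mentioned at the end of this article.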
Emphasis on ontology began in the 1990s.
Heuristics
A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates, which compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion]
Genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations. It is being developed by John Koza's group. http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/top.html http://www.dobrev.com/AI/definition.html http://www.a-i.com/alan1/
Theories of human intelligence: Psychometric
Psychometrics is the field of study concerned with the theory and technique of educational and psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits.
The field is primarily concerned with the construction and validation of measurement instruments, such as questionnaires, tests, and personality assessments. Multiple intelligences The theory of multiple intelligences was proposed by Howard Gardner in 1983 to explore and articulate various forms or expressions of intelligence available to cognition. Gardner argues that, as traditionally defined in psychometrics, intelligence does not sufficiently encompass the wide variety of abilities humans display.
In his conception, a child who masters multiplication easily is not necessarily more intelligent overall than a child who is stronger in another kind of intelligence and therefore 1) may best learn the given material through a different approach, 2) may excel in a field outside of mathematics, or 3) may even be looking at the multiplication process at a fundamentally deeper level, which can result in a seeming slowness that hides a mathematical intelligence that is potentially higher than that of a child who easily memorizes the multiplication table.
Other theories of human intelligence include the triarchic theory of intelligence, the developmental approach, emotional intelligence, PASS theory, and empirical, environmental and biological approaches.
Attention
Attention is the selection of important information. The human mind is bombarded with millions of stimuli and must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information.
Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are presented with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.
Knowledge and processing of language
The ability to learn and understand language is an extremely complex process.
Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in the theoretical linguistic field is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned? (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language? (3) How are humans able to understand novel sentences? The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonology and phonetics, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction. The study of language processing in cognitive science is closely tied to the field of linguistics.
Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech.
Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.
Learning and development
Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects.
Research in learning and development aims to explain the mechanisms by which these processes might take place. A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature versus nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment.
Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker) have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker’s claims are biologically unrealistic.
They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.
Memory
Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and a short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years). We do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes).
Memory is also often grouped into declarative and procedural forms. Declarative memory, grouped into subsets of semantic and episodic forms, refers to our memory for facts, specific knowledge, specific meanings, and specific experiences (e.g., "Who was the first president of the U.S.A.?" or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g. how to ride a bicycle) and is often dubbed implicit knowledge or memory.
Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. For example, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill in the blank")?
Programming languages used in artificial intelligence:
IPL was the first language developed for artificial intelligence.
LISP is a practical mathematical notation for computer programs based on lambda calculus. Prolog is a declarative language in which programs are expressed in terms of relations, and execution occurs by running queries over these relations; it is particularly useful for symbolic reasoning, database and language-parsing applications, and is widely used in AI today. STRIPS is a language for expressing automated-planning problem instances; it expresses an initial state, the goal states, and a set of actions. Planner is a hybrid between procedural and logical languages.
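The relation-and-query style that Prolog embodies can be hinted at even in Python. This is only a sketch of the idea, not Prolog itself, and the family facts below are invented for illustration:

```python
# Prolog-style programming: facts are stored as relations, and
# "execution" means querying those relations. In real Prolog this
# would be: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
parent = {("tom", "bob"), ("bob", "ann")}

def grandparent(x, z):
    """True if some y satisfies parent(x, y) and parent(y, z)."""
    return any((x, y) in parent and (y, z) in parent
               for y in {p[1] for p in parent})

print(grandparent("tom", "ann"))  # True: tom -> bob -> ann
print(grandparent("bob", "ann"))  # False: only one generation apart
```

Prolog goes further: its engine can also run such a rule "backwards," enumerating all bindings that satisfy a query, which is what makes it attractive for symbolic reasoning.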