Applied Sciences

Artificial Intelligence


ARTIFICIAL INTELLIGENCE: DEFINED AND EXPLAINED

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create this machine intelligence. AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy coined the term in 1956 and defined AI as "the science and engineering of making intelligent machines." McCarthy (1927-2011) was an American computer scientist and cognitive scientist. He developed the Lisp programming language family, significantly influenced the design of the ALGOL programming language, popularized timesharing, and was highly influential in the early development of AI. McCarthy received many accolades and honors, including the Turing Award for his contributions to AI, the United States National Medal of Science, and the Kyoto Prize.
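
The "intelligent agent" idea above can be made concrete in a few lines of code. The reflex vacuum-world agent below is a minimal, purely illustrative sketch (the two-square environment, the percept format and the action names are assumptions made for this example, not drawn from any particular textbook): the agent perceives only its current location and whether that square is dirty, and chooses the action expected to improve its performance measure.

def vacuum_agent(percept):
    """Simple reflex agent: percept is (location, is_dirty)."""
    location, is_dirty = percept
    if is_dirty:
        return "Suck"  # cleaning the square improves the performance measure
    return "Right" if location == "A" else "Left"

# Tiny two-square environment loop: the agent sees only percepts, never the full state.
state = {"A": "dirty", "B": "dirty"}
location = "A"
for _ in range(4):
    action = vacuum_agent((location, state[location] == "dirty"))
    if action == "Suck":
        state[location] = "clean"
    else:
        location = "B" if action == "Right" else "A"
print(state)  # both squares end up clean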

AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence, or "strong AI," is still among the field's long term goals.

The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.


SOME HISTORY: PART I

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshipped in Egypt and Greece and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

SOME HISTORY: PART II

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable(Imaginable) act of mathematical deduction.This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: Computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".


They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off funding for undirected exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called the "AI winter".

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.

ARTIFICIAL INTELLIGENCE: SEVERAL PAPERS

The Handbook of Artificial Intelligence gives the following definition of artificial intelligence (AI): "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior -- understanding language, learning, reasoning, solving problems, and so on." This characterization, and the technological future it imagines, are both products of recent decades, and highly dependent on the birth and development of the digital computer. But the broad aspirations and ambitions underlying AI existed long before. As a partially autonomous academic discipline, AI is very young, as evinced by its post-1950 computational tools, the distinctive set of human resources it employs, and the particular "world view" it largely adopts. But at a deeper level the roots of AI reach back over many centuries, not only in academic thinking, but also in the public imagination. Far from being new, the questions that AI aspires to answer have a long and distinguished career in the history of intellectual thought. Volume 4, Number 2 of the Stanford Humanities Review is devoted to the exploration of convergences and dissonances between Artificial Intelligence and the Humanities. For this series of papers go to: http://web.stanford.edu/group/SHR/4-2/text/toc.html

VOICE RECOGNITION PROGRAMS

Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs. A student team led by the computer scientist Geoffrey E. Hinton, for example, used deep-learning technology to design such software. The advances have led to widespread enthusiasm among researchers who design software to perform human activities like seeing, listening and thinking. They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers. For more of this story, published in The New York Times on 23 November 2012, go to: http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html
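
The technique referred to above, deep learning, trains layered networks of simple artificial neurons by adjusting connection weights from examples. The toy two-layer network below, which learns the XOR pattern using plain NumPy, is only a schematic sketch of that idea (the architecture, learning rate and data are assumptions made for illustration), not the software described in the article.

# Toy neural network: layers of simple units whose connection weights are
# adjusted from examples via gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR pattern

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)        # backpropagate the error
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]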

A QUESTION ANSWERING MACHINE

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine. The machine is able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide. For more of this article, published in The New York Times on 16 June 2010, go to: http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=all
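
The distinction the article draws, returning an answer rather than pointing to a document, can be sketched in a few lines. The toy example below is purely illustrative (the two-document "corpus", the keyword-overlap ranking and the naive extraction rule are assumptions made for the example); real systems such as Watson are vastly more sophisticated.

# Schematic contrast between keyword search (return a document) and
# question answering (return the answer itself).
corpus = {
    "doc1": "Mount Everest is the highest mountain on Earth.",
    "doc2": "The Nile is the longest river in Africa.",
}

def keyword_search(question):
    """Search-engine style: point to the best-matching document."""
    terms = set(question.lower().split())
    return max(corpus, key=lambda d: len(terms & set(corpus[d].lower().split())))

def answer(question):
    """QA style: pluck the answer (here, the sentence's subject) out of the document."""
    doc = corpus[keyword_search(question)]
    return doc.split(" is ")[0]  # deliberately naive extraction rule

print(keyword_search("What is the highest mountain?"))  # -> doc1
print(answer("What is the highest mountain?"))          # -> Mount Everest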

DEVELOPMENTS FROM THE 1990S TO 2012

Part 1:

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail. Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.

Part 2:

The leading-edge definition of artificial intelligence research changes over time. One pragmatic definition is: "AI research is that which computing scientists do not know how to do cost-effectively today." For example, in 1956 optical character recognition (OCR) was considered AI, but today sophisticated OCR software, with a context-sensitive spell checker and grammar checker, comes free with most image scanners. No one today would consider an already-solved computing-science problem like OCR to be "artificial intelligence".

Low-cost, entertaining chess-playing software is commonly available for tablet computers. DARPA no longer provides significant funding for chess-playing computing system development. The Kinect, which provides a 3D body-motion interface for the Xbox 360, uses algorithms that emerged from lengthy AI research, but few consumers realize the source of the technology.

Part 3:

AI applications are no longer the exclusive domain of U.S. Department of Defense R&D, but are now commonplace consumer items and inexpensive intelligent toys. In common usage, the term "AI" no longer seems to apply to off-the-shelf solved computing-science problems, which may have originally emerged out of years of AI research.


Metamagical Themas is an eclectic collection of articles that Douglas Hofstadter wrote for the popular science magazine Scientific American during the early 1980s. The anthology was published in 1985 by Basic Books. The subject matter of the articles is loosely woven about themes in philosophy, creativity, artificial intelligence, typography and fonts, and important social issues. For more on this book go to: http://en.wikipedia.org/wiki/Metamagical_Themas

HOW ROBOTS AND ALGORITHMS ARE TAKING OVER

Part 1:

In the 2 April 2015 issue of The New York Review of Books, Sue Halpern reviews The Glass Cage: Automation and Us by Nicholas Carr. It is a 300-page book, and she begins: "In September 2013, a year before Nicholas Carr published The Glass Cage: Automation and Us, his chastening meditation on the human future, a pair of Oxford researchers issued a report predicting that nearly half of all jobs in the United States could be lost to machines within the next twenty years. The researchers, Carl Benedikt Frey and Michael Osborne, looked at seven hundred kinds of work and found that among those occupations, the most susceptible to automation were: loan officers, receptionists, paralegals, store clerks, taxi drivers, security guards. Even computer programmers, the people writing the algorithms that are taking on these tasks, will not be immune. By Frey and Osborne’s calculations, there is about a 50 percent chance that programming, too, will be outsourced to machines within the next two decades." For more go to: http://www.nybooks.com/articles/archives/2015/apr/02/how-robots-algorithms-are-taking-over/?insrc=toc


Part 2:

In 1995 David Bromwich was appointed Housum Professor of English at Yale. In 2006 he became a Sterling Professor. Bromwich is a fellow of the American Academy of Arts and Sciences. He has published widely on Romantic criticism and poetry, and on eighteenth-century politics and moral philosophy. For his essay "Trapped in the Virtual Classroom" in The New York Review of Books (9 July 2015) go to: http://www.nybooks.com/articles/archives/2015/jul/09/trapped-virtual-classroom/  Bromwich writes: "The great social calamity of our time is that people are being replaced by machines. This is happening and it will go on happening. But we may want to stop or slow the process when we have a chance, in order to ask a large question. To what extent are the uniquely human elements of our lives, things not reproducible by mechanical or technical substitutes, the result of spontaneous or unplanned experience? Such experience, whatever we think of it, is made possible by the arts of give-and-take that we learn in the physical presence of human beings." He continues:

"American society is still on the near side of robotification. People who can’t conjure up the relevant sympathy in the presence of other people are still felt to need various kinds of remedial help: they are autistic or sociopathic, it may be said—those are two of a range of clinical terms. Less clinically we may say that such people lack a certain affective range. However efficiently they perform their tasks, we don’t yet think well of those who in their everyday lives maximize efficiency and minimize considerate, responsive, and unrehearsed interaction, whether they neglect such things from physiological incapacity or a prudential fear of squandering their energy on emotions that are not formally necessary."

CARS DRIVING THEMSELVES

The scientists and engineers at a recent Computer Vision and Pattern Recognition (CVPR) conference are creating a world in which cars drive themselves, machines recognize people and “understand” their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues. CVPR, as it is known, is an annual gathering of computer vision scientists, students, roboticists, software hackers and, increasingly in recent years, business and entrepreneurial types looking for another great technological leap forward.

The growing power of computer vision is a crucial first step for the next generation of computing, robotic and artificial intelligence systems. Once machines can identify objects and understand their environments, they can be freed to move around in the world. And once robots become mobile they will be increasingly capable of extending the reach of humans or replacing them. Self-driving cars, factory robots and a new class of farm hands known as ag-robots are already demonstrating what increasingly mobile machines can do. Indeed, the rapid advance of computer vision is just one of a set of artificial intelligence-oriented technologies — others include speech recognition, dexterous manipulation and navigation — that underscore a sea change beyond personal computing and the Internet, the technologies that have defined the last three decades of the computing world. For more on this story, published in The New York Times on 13 October 2013, go to: http://www.nytimes.com/2013/10/15/technology/the-rapid-advance-of-artificial-intelligence.html?_r=0