
Friday, January 5, 2018

The Next Frontier for Artificial Intelligence

Learning humans’ common sense


Spain’s artificial intelligence research institute is looking at teaching robots to know their limits, but thinks human-level AI is still a long way off.

Nearly half a century passed between the release of the films 2001: A Space Odyssey (1968) and Transcendence (2014), in which a quirky scientist’s consciousness is uploaded into a computer. Despite being almost 50 years apart, their plots are broadly similar: science fiction stories continue to imagine the arrival of human-like machines that rebel against their creators and gain the upper hand in battle.

In the field of artificial intelligence (AI) research, progress over the last 30 years has likewise been slower than expected.

While AI is increasingly part of our everyday lives – in our phones or cars – and computers process large amounts of data, they still lack human-level capacity to make deductions from the information they’re given. People can read different sections of a newspaper and understand them, grasping the consequences and implications of a story. Just by interacting with their environment, humans acquire experience that gives them tacit knowledge. Today’s machines simply don’t have that kind of ability. Yet.

Gargoyle. Illustration: Elena

As a result, common sense reasoning is still a challenge in AI research. “We have machines that are very good at playing chess, for example, but they cannot also play dominoes,” said Ramon López de Mántaras, director of the Spanish National Research Council’s Artificial Intelligence Research Institute. “In the last 30 years, research has focused on weak artificial intelligence – that is to say, making machines very good at a specific topic – but we have not progressed that much with common sense reasoning,” he said during a recent debate organized by the Catalan government’s ministry of telecommunications and information society.

Will this situation change with the development of smart city and cognitive computing systems, designed to be able to carry out human-like analysis of complex and diverse data sets? López de Mántaras, who has been exploring some of the most ambitious questions in AI since 1976, doesn’t think so. “Neither big data nor high performance computing are bringing us closer to robustness in AI,” he said.

Futurists who talk about ‘the singularity’, meaning the hypothetical advent of artificial general intelligence (also known as strong AI), predict it will occur between 2030 and 2045. López de Mántaras is skeptical, however: “If there is no big change in computer science, it won’t happen.”

The main difficulty in artificially reproducing the functioning of the human brain stems from the fact that the organ is analogue. Its ability to process information depends not only on the electrical activity of neurons, but also on many kinds of chemical activity, which can’t be modelled with current technologies. López de Mántaras speculates that non-silicon-based technologies – such as memristors, a type of passive circuit element that maintains a relationship between the time integrals of current and voltage across a two-terminal element, or DNA computing – will be needed to move forward. However, he notes that we need something more than a technological change to solve the problem: we also need new mathematical models and algorithms to artificially reproduce the human brain – algorithms that are as yet unknown.
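The memristor relation mentioned above can be stated compactly using the standard textbook definitions (these are general circuit-theory formulas, not from the article itself). Writing the flux linkage and the charge as time integrals of voltage and current,

```latex
\varphi(t) = \int_{-\infty}^{t} v(\tau)\,\mathrm{d}\tau,
\qquad
q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau,
```

a memristor imposes a fixed relation \(\varphi = f(q)\) between these two integrals, so its memristance \(M(q) = \mathrm{d}\varphi/\mathrm{d}q\) links voltage and current via \(v(t) = M(q(t))\,i(t)\) – a resistance that depends on the charge that has flowed through the device, which is what gives it its memory-like behaviour.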

By 2030, though, humanoid robots that interact with the environment and have somewhat more general intelligence may have been developed, he said, and businesses will take advantage of the trend. Social robots – as domestic assistants or to help elderly people or those with mobility problems – are being worked on, as are self-driving vehicles, though their AI isn’t yet anywhere near human-level.

Ethics of AI (Artificial Intelligence)


Robots that play music might not seem to raise much in the way of ethics questions. But what if AI were used by a musician to play better – does it matter if that musician wins a prestigious competition? And who should regulate AI improvements? Such questions were among those raised at the #èTIC debate, where López de Mántaras and Albert Cortina, a lawyer and the author of a book on singularity and posthumanism, shared their concerns about reducing the risks to society from intelligent machines.

Cortina said the debate is not only whether humans should improve their capabilities, but whether these improvements will generate inequality. It’s easy to see a situation where those with the means are able to augment their own physical or mental capacities with AI, leaving the rest of society with their all-too-human abilities. Should humans’ use of AI to improve themselves be capped to promote equality, or would society be better off if humans were able to add machine intelligence to their own?

Cyborg. Illustration : Elena

López de Mántaras said there are two key aspects of AI that need to be regulated: the use of lethal autonomous weapons systems, and privacy. In this sense, López de Mántaras signed, along with other AI experts around the globe, an open letter pledging to carefully coordinate progress in artificial intelligence so that it does not grow beyond humanity’s control.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the letter says.

Nevertheless, the question of who should establish the limits on the use of AI remains unanswered.

López de Mántaras


López de Mántaras’ team is working on a project to illustrate the problems a machine faces in understanding its own limitations.

Grasping what we can and can’t do may be obvious to humans, but not so to machines. The Artificial Intelligence Research Institute (IIIA) has teamed up with Imperial College London on the project, which uses an electronic musical instrument developed by Pompeu Fabra University (UPF) in Barcelona, called the Reactable. Inspired by modular analogue synthesizers such as those developed by Bob Moog in the early 1960s, the Reactable uses a large, round, multi-touch-enabled table, which performers ‘play’ by manipulating physical objects on the table, turning and connecting them to each other to make different sounds and create a composition.

The IIIA’s machine is learning how to play the Reactable, adjusting its movements using common sense reasoning. If the IIIA machine moves too quickly, it can’t perform the action correctly. The machine can learn when its actions will succeed, and develops the ability to foresee what will happen when and if it fails too. Such learning is trickier than it seems.

Robot and Robodog. Illustration: Elena

“We are now doing experiments to see what happens when you move the instrument around to see whether the robot is able to rediscover a sound position,” López de Mántaras said – finding where the object originated from and the sound it made there.

The learning process should be similar to the human way of doing things, known as developmental or epigenetic robotics. “It is basic research without an immediate application, but it is important for the future,” López de Mántaras said.

This research is necessary to allow future robots to develop common sense knowledge – for example, knowing that in order to move an object attached to a rope you have to pull the rope, not push it. This and other physical properties of an object can only be learned through experience. Ultimately, says López de Mántaras, for any real future artificial intelligence to be created, it will need to have such common sense knowledge at its disposal.


Elements


Since the time of the alchemists, more and more elements have been discovered, the latest to be found tending to be the rarest. Many are familiar – those that primarily make up the Earth; or those fundamental to life. Some are solids, some gases, and two (bromine and mercury) are liquids at room temperature. Scientists conventionally arrange them in order of complexity. The simplest, hydrogen, is element 1; the most complex, uranium, is element 92. Other elements are less familiar – hafnium, erbium, dysprosium and praseodymium, say, which we do not much bump into in everyday life. By and large, the more familiar an element is, the more abundant it is.

The Earth contains a great deal of iron and rather little yttrium. There are, of course, exceptions to this rule, such as gold or uranium, elements prized because of arbitrary economic conventions or esthetic judgements, or because they have remarkable practical applications.

The fact that atoms are composed of three kinds of elementary particles – protons, neutrons and electrons – is a comparatively recent finding. The neutron was not discovered until 1932. Modern physics and chemistry have reduced the complexity of the sensible world to an astonishing simplicity: three units put together in various patterns make, essentially, everything.

Electrons and protons have a dedicated mutual aversion to their own kind, a little as if the world were densely populated by anchorites and misanthropes. Image: © Megan Jorgensen (Elena)

The neutrons, as we have said and as their name suggests, carry no electrical charge. The protons have a positive charge and the electrons an equal negative charge. The attraction between the unlike charges of electrons and protons is what holds the atom together. Since each atom is electrically neutral, the number of protons in the nucleus must exactly equal the number of electrons in the electron cloud. The chemistry of an atom depends only on the number of electrons, which equals the number of protons, and which is called the atomic number. Chemistry is simply numbers, an idea Pythagoras would have liked. If you are an atom with one proton, you are hydrogen; two, helium; three, lithium; four, beryllium; five, boron; six, carbon; seven, nitrogen; eight, oxygen; and so on, up to 92 protons, in which case your name is uranium.
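The “chemistry is simply numbers” idea can be sketched as a lookup from proton count to element name – a toy illustration, not from the original text; the names are the standard ones the passage lists:

```python
# An atom's chemical identity depends only on its atomic number
# (the number of protons, which equals the number of electrons).
ELEMENTS = {
    1: "hydrogen", 2: "helium", 3: "lithium", 4: "beryllium",
    5: "boron", 6: "carbon", 7: "nitrogen", 8: "oxygen",
    92: "uranium",  # the most complex naturally occurring element
}

def element_name(protons: int) -> str:
    """Return the element name for a given proton count."""
    return ELEMENTS.get(protons, f"element {protons}")

print(element_name(6))   # an atom with six protons is carbon
print(element_name(92))  # ninety-two protons: uranium
```

The point of the sketch is that nothing else about the atom matters for its chemistry: change the proton count and you change the element, which is exactly the transmutation described at the end of the article.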

Like charges, charges of the same sign, strongly repel one another. We can think of it as a dedicated mutual aversion to their own kind, a little as if the world were densely populated by anchorites and misanthropes. Electrons repel electrons. Protons repel protons. So how can a nucleus stick together? Why does it not instantly fly apart? Because there is another force of nature: not gravity, not electricity, but the short-range nuclear force, which, like a set of hooks that engage only when protons and neutrons come very close together, overcomes the electrical repulsion among the protons. The neutrons, which contribute nuclear forces of attraction and no electrical forces of repulsion, provide a kind of glue that helps to hold the nucleus together. Longing for solitude, the hermits have been chained to their grumpy fellows and set among others given to indiscriminate and voluble amiability.

Grandeur and Intricacy of Nature


Johannes Kepler and Isaac Newton represent a critical transition in human history, the discovery that fairly simple mathematical laws pervade all of Nature; that the same rules apply on Earth as in the skies; and that there is a resonance between the way we think and the way the world works. They unflinchingly respected the accuracy of observational data. Their predictions of the motions of the planets to high precision provided compelling evidence that, at an unexpectedly deep level, humans can understand the Cosmos. Our modern global civilization, our view of the world and our present exploration of the Universe are profoundly indebted to their insights.

Newton was guarded about his discoveries and fiercely competitive with his scientific colleagues. He thought nothing of waiting a decade or two after its discovery to publish the inverse square law. But before the grandeur and intricacy of Nature, he was, like Ptolemy and Kepler, exhilarated as well as disarmingly modest. Just before his death he wrote: “I do not know what I may appear to the world; but to myself I seem to have been only like a boy, playing on the seashore, and diverting myself, in now and then finding a smoother pebble or a prettier shell than ordinary, while the great ocean of truth lay all undiscovered before me.”

The doors of Heaven and Hell are adjacent and identical (Nicos Kazantzakis, The Last Temptation of Christ). Image: © Megan Jorgensen (Elena)

There is a historical account which may in fact describe an impact on the Moon seen from Earth with the naked eye. On the evening of June 25, 1178, five British monks reported something extraordinary, which was later recorded in the chronicle of Gervase of Canterbury, generally considered a reliable reporter on the political and cultural events of his time, after he had interviewed the eyewitnesses, who asserted, under oath, the truth of their story. The chronicle reads: There was a bright New Moon, and as usual in that phase its horns were tilted towards the east. Suddenly, the upper horn split in two. From the midpoint of the division, a flaming torch sprang up, spewing out fire, hot coals, and sparks.

Googol and Googolplex


The American mathematician Edward Kasner once asked his nine-year-old nephew to invent a name for an extremely large number – ten to the power one hundred (10^100), a one followed by a hundred zeroes. The boy called it googol.

You, too, can make up your own very large numbers and give them strange names.

If a googol seems large, consider a googolplex. It is ten to the power of a googol – that is, a one followed by a googol zeros. By comparison, the total number of atoms in your body is about 10^28, and the total number of elementary particles – protons and neutrons and electrons – in the observable universe is about 10^80. If the universe were packed solid with neutrons, say, so there was no empty space anywhere, there would still be only about 10^128 particles in it – quite a bit more than a googol, but trivially small compared with a googolplex. Yet these numbers, the googol and the googolplex, do not approach, they come nowhere near, the idea of infinity.
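Python’s arbitrary-precision integers make the googol easy to inspect directly – a small illustration of the comparisons above (the googolplex itself, with a googol digits, can only ever be written symbolically):

```python
googol = 10 ** 100            # a one followed by a hundred zeros
atoms_in_universe = 10 ** 80  # rough count of elementary particles
packed_neutrons = 10 ** 128   # if the universe were solid neutrons

print(len(str(googol)))       # 101 characters: "1" plus 100 zeros
print(googol > atoms_in_universe)   # True: more than the particles we can count
print(packed_neutrons > googol)     # True: bigger still, yet nowhere near a googolplex

# A googolplex is 10**googol. Its decimal expansion would have a googol
# digits - far more digits than there are particles in the universe
# to write them on, so it can never be materialized, only denoted.
```

The last comment is the same argument the article makes about paper: the obstacle to writing out a googolplex is not notation but the sheer count of its digits.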

The spirit of this calculation is very old. The opening sentences of Archimedes’ The Sand Reckoner are: “There are some, King Gelon, who think that the number of the sand is infinite in multitude: and I mean by the sand not only that which exists about Syracuse and the rest of Sicily, but also that which is found in every region, whether inhabited or uninhabited. And again, there are some who, without regarding it as infinite, yet think that no number has been named which is great enough to exceed its multitude”. Archimedes then went on not only to name the number but to calculate it. Later he asked how many grains of sand would fit, side by side, into the universe that he knew. His estimate: 10^65, which corresponds, by a curious coincidence, to 10^83 or so atoms.

Transmute the elements: cut the atom! Image: © Megan Jorgensen (Elena)

A googolplex is precisely as far from infinity as is the number one. We could try to write out a googolplex, but it is a forlorn ambition. A piece of paper large enough to have all the zeroes in a googolplex written out explicitly could not be stuffed into the known universe. Happily, there is a simpler and very concise way of writing a googolplex – 10^(10^100) – and even of writing infinity: ∞ (pronounced “infinity”).

In a burnt apple pie, the char is mostly carbon. Ninety cuts and you come to a carbon atom, with six protons and six neutrons in its nucleus and six electrons in the exterior cloud. If we were to pull a chunk out of the nucleus – say, one with two protons and two neutrons – it would be not the nucleus of a carbon atom, but the nucleus of a helium atom. Such a cutting or fission of atomic nuclei occurs in nuclear weapons and conventional nuclear power plants, although it is not carbon that is split. If you make the ninety-first cut of the apple pie, if you slice a carbon nucleus, you make not a smaller piece of carbon, but something else – an atom with completely different chemical properties. If you cut an atom, you transmute the elements.