Ethics of AI (Artificial Intelligence)
Robots that play music might not seem to raise many ethical questions. But what if a musician used AI to play better – does it matter if that musician wins a prestigious competition? And who should regulate AI improvements? Such questions were among those raised at the #èTIC debate, where López de Mántaras and Albert Cortina, a lawyer and the author of a book on singularity and posthumanism, shared their concerns about reducing the risks intelligent machines pose to society.
Cortina said the debate is not only whether humans should improve their capabilities, but whether these improvements will generate inequality. It’s easy to see a situation where those with the means are able to augment their own physical or mental capacities with AI, leaving the rest of society with their all-too-human abilities. Should humans’ use of AI to improve themselves be capped to promote equality, or would society be better off if humans were able to add machine intelligence to their own?
Cyborg. Illustration: Elena
López de Mántaras said there are two key aspects of AI that need to be regulated: the use of lethal autonomous weapons systems, and privacy. To that end, López de Mántaras joined AI experts around the globe in signing an open letter that pledges to carefully coordinate progress on artificial intelligence so that it does not grow beyond humanity’s control.
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the letter says.
Nevertheless, the question of who should establish the limits on the use of AI remains unanswered.
López de Mántaras’ team is working on a project to illustrate the problems a machine faces in understanding its own limitations.
Grasping what we can and cannot do may be obvious to humans, but not to machines. The Artificial Intelligence Research Institute (IIIA) has teamed up with Imperial College London on the project, which uses Reactable, an electronic musical instrument developed at Pompeu Fabra University (UPF) in Barcelona. Inspired by modular analogue synthesizers such as those developed by Bob Moog in the 1960s, Reactable is a large, round, multi-touch-enabled table that performers ‘play’ by manipulating physical objects on its surface, turning and connecting them to one another to make different sounds and build a composition.
The IIIA’s machine is learning how to play the Reactable, adjusting its movements using common-sense reasoning. If it moves too quickly, it cannot perform the action correctly. By accumulating experience, the machine learns when its actions will succeed, and develops the ability to foresee what will happen when they fail. Such learning is trickier than it seems.
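As a rough illustration of that idea – not the IIIA’s actual system – the Python sketch below shows how a robot could estimate, purely from its own logged attempts, which movement speeds tend to succeed. The class name, bin width, and sample data are all hypothetical.

```python
# A minimal sketch (not the IIIA code) of learning from experience
# which movement speeds let a robot manipulate an object successfully.
from collections import defaultdict

class SpeedSuccessModel:
    """Estimates P(success) for a binned movement speed from logged trials."""

    def __init__(self, bin_width=0.05):
        self.bin_width = bin_width
        self.trials = defaultdict(lambda: [0, 0])  # bin -> [successes, attempts]

    def _bin(self, speed):
        return round(speed / self.bin_width)

    def record(self, speed, succeeded):
        stats = self.trials[self._bin(speed)]
        stats[0] += int(succeeded)
        stats[1] += 1

    def predict(self, speed):
        successes, attempts = self.trials[self._bin(speed)]
        # Laplace smoothing: stay uncertain about speeds rarely tried.
        return (successes + 1) / (attempts + 2)

model = SpeedSuccessModel()
# Imagined experience: slow moves tend to succeed, fast ones fail.
for speed, ok in [(0.1, True), (0.12, True), (0.4, False), (0.38, False)]:
    model.record(speed, ok)

print(model.predict(0.1))  # high estimate: slow moves worked before
print(model.predict(0.4))  # low estimate: fast moves failed before
```

The point of the sketch is that nothing is programmed in about speed limits; the robot’s ‘knowledge’ of its own limitations emerges entirely from recorded successes and failures.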
“We are now doing experiments to see what happens when you move the instrument around to see whether the robot is able to rediscover a sound position,” López de Mántaras said – that is, finding where the object originated and the sound it made there.
The learning process should resemble the human way of doing things, an approach known as developmental or epigenetic robotics. “It is basic research without an immediate application, but it is important for the future,” López de Mántaras said.
This research is necessary if future robots are to develop common-sense knowledge: for example, knowing that to move an object attached to a rope you have to pull the rope, not push it. This and other physical properties of an object can only be learned through experience. Ultimately, says López de Mántaras, for any real artificial intelligence to be created in the future, it will need to have such common-sense knowledge at its disposal.
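To make the rope example concrete, here is a hypothetical toy sketch of that kind of experience-driven learning. The `rope_world` function stands in for a real physical trial, and the learner records only which actions actually moved the object; none of this reflects real IIIA code.

```python
# A hypothetical sketch of learning a physical affordance from experience:
# a rope transmits pulls but not pushes. The "world" below stands in for
# real trial-and-error; the learner sees only action/outcome pairs.

def rope_world(action):
    """Toy physics: pulling moves the attached object, pushing does not."""
    return 1.0 if action == "pull" else 0.0  # observed displacement

learned = {}
for action in ("push", "pull"):
    displacement = rope_world(action)   # run the experiment
    learned[action] = displacement > 0  # remember what worked

# The robot now holds a piece of common-sense knowledge it was never told:
print(learned)  # {'push': False, 'pull': True}
```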
Robot and Robodog. Illustration: Elena