To prevent artificial intelligence from going rogue, here is what Google is doing

DeepMind and OpenAI propose to temper machine learning in AI development with human mediation: trainers give feedback that is built into the motivator software, in a bid to prevent the AI agent from performing an action that is possible but not desirable. (Reuters)

Against the backdrop of warnings about machine superintelligence going rogue, Google is charting a two-way course to prevent this. The company’s DeepMind division, in collaboration with OpenAI, a research firm, has brought out a paper that proposes human-mediated machine learning to avoid unpredictable behaviour when an AI learns on its own.

OpenAI and DeepMind looked at the problem posed by AI software that is guided by reinforcement learning and often does not do what is desired or desirable. In the reinforcement method, the AI agent figures out a task by performing a range of actions and sticking with those that maximise a virtual reward given by another piece of software, which works as a mathematical motivator based on an algorithm or a set of algorithms. But designing a mathematical motivator that precludes every undesirable action is quite a task: when DeepMind pitted two AI agents against each other in a fruit-picking game that allowed them to…
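The reinforcement loop described above can be sketched in a few lines. This is a deliberately toy illustration, not the method from the DeepMind/OpenAI paper: the action names, the feedback table, and the simple averaging update are all hypothetical, chosen only to show the idea of a human trainer's judgements replacing a hand-coded reward.

```python
import random

def train_with_human_feedback(actions, human_feedback, episodes=500, lr=0.1, seed=0):
    """Toy sketch: the agent keeps a learned reward estimate per action.
    Instead of a hand-designed mathematical motivator, a (simulated)
    human trainer scores each sampled action, and the estimates are
    nudged toward that feedback."""
    rng = random.Random(seed)
    reward_model = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Explore: sample actions at random so even undesirable ones get judged.
        action = rng.choice(actions)
        score = human_feedback(action)  # the trainer's judgement, not a coded reward
        # Move the estimate a small step toward the human's score.
        reward_model[action] += lr * (score - reward_model[action])
    # The agent then acts greedily on the human-shaped reward model.
    return max(reward_model, key=reward_model.get)

# Hypothetical fruit-picking choices: the trainer disapproves of "zap_opponent",
# an action that is possible in the game but not desirable.
feedback = {"pick_fruit": 1.0, "wander": 0.0, "zap_opponent": -1.0}
best = train_with_human_feedback(list(feedback), feedback.get)
```

After enough feedback, the agent's learned reward model steers it toward the action the trainer approved of, even though nothing in its environment explicitly forbids the others.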

