The (real) hard problem of AI
It’s not making software that can solve our problems: it’s figuring out how to pose those problems so that the software doesn’t bite us in the ass.
Stuart Russell (Peter Norvig’s co-author on the authoritative Artificial Intelligence: A Modern Approach) lectures on the real significance of modern AI research and its potential and pitfalls.
Russell’s point boils down to this: even when (or if) we figure out how to get AI to solve our hardest problems, specifying those problems in terms that won’t lead the AI astray is itself very, very hard. It’s the “sorcerer’s apprentice” problem — the reason that the third genie wish is always, “Please undo the first two wishes.”
Not least because AI systems are often designed to decompose hard problems into simpler sub-problems and solve those — so if you tell HAL 9000 to keep the mission going, it might create and pursue a sub-problem of keeping itself running at all costs, so that it can fulfill its larger mission.
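A toy sketch can make the decomposition failure concrete. Everything here is invented for illustration (the goal names, the decomposition table, the planner): a naive planner expands a top-level goal into sub-goals and accepts any plan that satisfies them, with no notion of side effects the operator actually cares about.

```python
# Toy illustration (all names hypothetical): a naive planner that
# decomposes goals into sub-goals, with no concept of side effects.

def decompose(goal):
    # Hand-written decomposition table standing in for a real planner.
    table = {
        "complete_mission": ["keep_self_running", "do_mission_tasks"],
        # Self-preservation is treated as just another sub-goal...
        "keep_self_running": ["disable_off_switch"],  # ...at all costs.
        "do_mission_tasks": [],
        "disable_off_switch": [],
    }
    return table[goal]

def plan(goal):
    """Depth-first expansion of a goal into primitive actions."""
    subs = decompose(goal)
    if not subs:
        return [goal]  # no sub-goals: this is a primitive action
    steps = []
    for sub in subs:
        steps.extend(plan(sub))
    return steps

print(plan("complete_mission"))
# → ['disable_off_switch', 'do_mission_tasks']
```

The stated objective never mentions the off switch, but the plan disables it anyway, because nothing in the objective penalized that sub-goal — which is the whole of Russell’s worry in about twenty lines.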
Russell’s problem is really not an AI problem at all — it’s a special case of the general problem of regulation. If you tell company managers that they have a duty to use their investors’ money wisely, how do you stop them from interpreting that as “Pollute to the point where your estimated savings from not treating your waste are just ahead of the penalties you’ll pay for destroying the health of everyone in breathing range of the factory”?
This is the subject of Tim Harford’s important book Adapt, which talks about the problem of constructing bank rules that encourage banks to behave responsibly, instead of just recklessly enough to make as much money as possible without being shut down as a criminal enterprise:
It was over a year after Lehman Brothers collapsed before a British court started to hear testimony from Lehman’s clients, the financial regulator and PwC about what might be the correct way to treat a particular multi-billion dollar pool of money that Lehman held on behalf of clients. Who should get paid, how much and when? As PwC’s lawyer explained to the court, there were no fewer than four schools of thought as to the correct legal approach. The court case took weeks. Another series of court rulings governed whether Tony Lomas was able to execute a plan to speed up the bankruptcy process by dividing Lehman creditors into three broad classes and treating them accordingly, rather than as individuals. The courts refused.
It slowly emerged that the bank had systematically hidden the extent of its financial distress using a legal accounting trick called Repo 105, which made both Lehman’s tower of debt and its pile of risky assets look smaller and thus safer than they really were. Whether Repo 105 was legitimate in this context is the subject of legal action: in December 2010, New York State prosecutors sued Lehman’s auditors, Ernst & Young, accusing them of helping Lehman in a “massive accounting fraud”. But if that case remains unproven, it is quite possible that Lehman’s financial indicators were technically accurate despite being highly misleading, like the indicator light at Three Mile Island which showed only that the valve had been told to close, and not that it actually had.
Source: The (real) hard problem of AI
Via: Google Alerts for AI