DailyDirt: Lethal Machines
Artificial intelligence is obviously far from gaining sentience, or even any kind of disturbingly smart general intelligence, but some of its advances are nonetheless impressive (e.g., beating human chess grandmasters, playing poker, driving cars). Software controls more and more stuff that comes into contact with people, so more people are starting to wonder when all of this smart technology might turn on us humans. It’s not a completely idle line of thinking. Self-driving cars and trucks are legitimate safety hazards. Autonomous drones might prevent firefighters from doing their jobs. There are plenty of not-entirely-theoretical situations in which robots could harm large numbers of people unintentionally (and possibly in a preventable fashion). Where should we draw the line? Asimov’s Three Laws of Robotics may be insufficient, so what kind of ethical coding should we adopt instead?
After you’ve finished checking out those links, take a look at our Daily Deals for cool gadgets and other awesome stuff.