
Saturday, March 17, 2012

What, Me Robot?

It seems a hot subject in moral philosophy these days is robot ethics, which partly means the ethics of humans employing robots, but also means the ethics with which autonomous artificial intelligences should be equipped.

When it comes to robots, the future isn't what it used to be -- or rather, the present isn't what the future used to be. The Jetsons had a humanoid robot maid complete with frilly apron and cap. Commander Data, apart from his sickly pallor and weird eyeballs, was indistinguishable from Brent Spiner. Isaac Asimov's universe was filled with humanoid robots employed in every sort of job, their autonomy constrained only by the famous Three Laws of Robotics.

As it turns out, robots, so far, don't look anything like people. In fact most of them don't look like anything at all; their existence is purely virtual. Siri's only physical manifestation is a voice. Watson is a voice plus a lever that presses a button. The Google driverless car doesn't have a humanoid machine sitting in the driver's seat. The Roomba doesn't consist of the Jetsons' maid pushing around a vacuum cleaner; it's just a vacuum cleaner that runs around the room on its own. Assembly line robots don't look like UAW members; they consist of specialized implements mounted on retractable turrets.

Most of these devices don't pose much of an ethical problem. Watson has already violated Asimov's First Law by beating humans at Jeopardy! and thereby depriving them of cash winnings. The Google car presumably must obey the traffic laws, but I don't know exactly what it will do at a four-way stop sign or in a dispute over a parking space. Anyway these are not profound problems.

I tried to think of an application for which a humanoid robot would actually make sense, and that seems feasible in the near future given current technology. The only thing I could come up with is playing golf. The rules of golf require that professionals under age 50 walk the course, so it does indeed need to be bipedal. You have to use approved clubs, so it has to swing from shoulder height. You might as well use two arms for esthetic reasons, and maybe two hands do give some mechanical advantage. It makes sense to put the visual and communicative apparatus on top, so it might as well have a head. So, a basically humanoid body plan.

The rules of golf give a huge advantage to the AI department because it's perfectly legal, indeed customary, to have a caddy who can provide information and advice. This greatly eases the demands on the machine as a human can help it find its way around the course and correct any mistakes in judgment. The bot would need learning capability. It would have the basic swing motions for the various kinds of shots programmed initially, and then it would work out on the range, trying modifications until it gained accuracy. It would need an understanding of the basic components of the game: the drive, where the object is generally to hit the ball as far as possible while ending up in the fairway; the approach shot using an iron, trying to end up as close to the flag as possible; and putting; plus the assortment of pitches and chips from various lies and sand, although I expect it would play so consistently it would never need those, except perhaps in high wind.
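For what it's worth, that range work is basically an iterative tune-and-test loop. Here is a minimal sketch in Python of the idea, with the swing parameters, the stand-in ball flight, and the error numbers all invented for illustration:

import random

# Hypothetical swing parameters the bot could tune on the range.
# Real actuator control would be far more involved; this is just the idea.
swing = {"backswing_angle": 100.0, "clubface_angle": 0.0, "tempo": 1.0}

def hit_ball(params):
    # Stand-in for actually swinging: returns the miss distance in yards.
    # On a real bot this number would come from its own shot tracking.
    miss = (abs(params["backswing_angle"] - 105.0)
            + 3 * abs(params["clubface_angle"])
            + 10 * abs(params["tempo"] - 1.1))
    return miss + random.uniform(0, 2)  # a little noise, like real golf

def practice_on_range(params, shots=200):
    # Trial-and-error refinement: nudge one parameter per trial and keep
    # the change only if the average shot ends up closer to the target.
    best = sum(hit_ball(params) for _ in range(5)) / 5
    for _ in range(shots):
        trial = dict(params)
        name = random.choice(list(trial))
        trial[name] += random.uniform(-1.0, 1.0)
        result = sum(hit_ball(trial) for _ in range(5)) / 5
        if result < best:
            params, best = trial, result
    return params, best

tuned, avg_miss = practice_on_range(swing)
print("average miss after practice: %.1f yards" % avg_miss)

A real bot would be adjusting servo trajectories rather than three numbers, but the keep-what-works logic is the same.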

It would need pattern recognition ability so it could distinguish the fairway, green, and flag; perceive the ball and the lie; and read the greens. All that, if you can already make a driverless car, seems quite doable. But the big challenge would be social skills. It would need to distinguish among various categories of human and interact with them appropriately, and in context, i.e. during a golf game, during a practice round or on the range, and while not playing golf.

The caddy is a unique individual whom the bot would have to recognize and from whom it would accept input, including instructions. The caddy's evaluation of shots and behavior would also be essential input for machine learning. It would have to recognize competing golfers and their caddies, and behave toward them appropriately: knowing whose turn it is to play, not creating distractions while the competitor is playing, not stepping in the line of the competitor's putt, etc. It should be programmed to congratulate good shots, but it needs criteria for judging them so it doesn't appear to be mocking. Then there are marshals and rules officials, and fans with whom it can slap hands or sign autographs while waiting for the fairway to clear.
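Most of that etiquette could plausibly be handled as a lookup: classify the person, check the state of the game, and pick from a list of allowed behaviors. A rough Python sketch, with all the categories and rules made up for illustration:

# Hypothetical etiquette table keyed by who the bot is facing and what is
# happening in the game; the categories and actions are purely illustrative.
ETIQUETTE = {
    ("caddy", "any"): ["accept_instructions", "report_yardage"],
    ("competitor", "their_turn"): ["stand_still", "stay_quiet"],
    ("competitor", "between_shots"): ["congratulate_good_shot"],
    ("official", "any"): ["comply_with_ruling"],
    ("fan", "waiting_to_play"): ["slap_hands", "sign_autograph"],
}

def allowed_actions(person, situation):
    # Prefer a rule for the specific situation, fall back to a general one,
    # and do nothing if the bot has no rule at all.
    return (ETIQUETTE.get((person, situation))
            or ETIQUETTE.get((person, "any"), ["do_nothing"]))

# The competitor is over the ball, so the bot freezes and keeps quiet.
print(allowed_actions("competitor", "their_turn"))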

Asimov's three laws clearly don't work here. The bot's very purpose could harm other humans, at least psychologically, by beating them at golf, although I don't suppose it would be eligible for prize money. It has to obey its caddy, but nobody else. Unfortunately, I think the world would quickly lose interest, because it would consistently shoot in the low 50s, hitting every fairway, knocking every iron stiff, and making every putt.

Okay, we've developed this great toy. Watson now goes on to provide expert advice to doctors, but what does our golfbot do? Sure, you could adapt it to work in the carwash or tend bar or hang seamless gutters, but I suspect it's cheaper in the long run to hire humans for those jobs. I can imagine that it's really only good for two things: being a soldier and killing people, or being a sex partner. And as you can see if you read the link at the top of this post, those are the main problems that seem to concern robot ethicists nowadays. For the first function, Asimov's laws won't work at all.
