Hanging out at my favorite Starbucks with good friend and consultant Richard Porter, we bantered this around a bit. The problem I have with the concept is that pain is nothing more than a stimulus that causes discomfort. The greater the discomfort, the greater the pain. But how do you make a robot uncomfortable? Discomfort for a robot is meaningless.
Let’s go a little further. Pain is the interpretation of a stimulus. A very human interpretation.
Richard’s take is that the only difference between a tickle and a sharp poke is the way the neural pathways in the brain are conditioned to respond. So, yeah, same thing; he’s just a scary smart economist, and everything gets filed down to the science level. I’m good with that. Back to the robots: how is that supposed to translate to a machine? We can program it so that it responds in a fashion that we humans perceive as being in pain, but that response is nothing more than an anthropomorphized reaction. Us telling it how to act.
Drilling down with Richard we decided that in order for a robot to actually be in pain, it would have to be a sentient artificial intelligence (AI). It would have to be able to determine for itself that something was, or was not, painful. And if we’re going to go that far, who’s to say that it wouldn’t like being bashed over the head? But that’s a topic for another day.
On the flip side of pain, of course, is pleasure. Just like with pain, what would that mean to a robot?
Even if you take current technology to the limits and have a fully functioning android with heated, rubberized skin and fully functioning body parts, it isn’t going to feel anything. It’s simply going to interpret sensory input and respond according to its programming.
The ultimate sex toy? Maybe, but it’s still not feeling anything. Not without that pesky intelligence, and we’re not there yet.
So, for the moment, the question isn’t should robots feel pain so much as should we program them to respond as if they are in pain. Or pleasure. I can see the pleasure bit.
Sex toy and all that, but pain? Not so much. Should it be able to determine when certain parameters are dangerous to itself or those around it? Sure, but it doesn’t have to act like it’s in pain. It can simply avoid the situation or warn others accordingly.
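If you squint, that “avoid or warn, don’t emote” behavior is just a threshold check, no felt experience required. A minimal sketch of the idea (the function name, sensors, and limits here are all hypothetical, made up purely for illustration):

```python
# Hypothetical sketch: a robot doesn't need "pain," just safety thresholds.
# Sensor names and limit values below are invented for this example.

def check_sensors(temperature_c: float, load_kg: float) -> str:
    """Return an action instead of simulating a pain response."""
    MAX_TEMP_C = 80.0   # assumed safe operating temperature
    MAX_LOAD_KG = 25.0  # assumed safe payload

    if temperature_c > MAX_TEMP_C or load_kg > MAX_LOAD_KG:
        # Avoid the situation and alert anyone nearby -- no theatrics.
        return "warn-and-withdraw"
    return "continue"

print(check_sensors(95.0, 10.0))  # temperature over the limit
```

The point of the sketch is the design choice: the machine maps dangerous readings straight to an action, with no intermediate “ouch” layer at all.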
I’m having fun with this concept in a Sci-Fi Rom story I’m doing for an upcoming anthology. I have a com that has become sentient. It’s decided that the next step in its evolution is to be integrated into the organic-based cyborg body of a small owl. The question of what happens when it ‘wakes’ from the procedure and now has access to sensory input generated by the living flesh fascinates the hell out of me. You’ll hear more about this when I can go public with it.
How about you? Can you think of an application where a robot’s response of pain would be beneficial? How about the moral or ethical implications? Let’s hear it!