Monday, December 7, 2015

Emotional Robots

From http://www.techrepublic.com/article/angelica-lim-flutist-global-roboticist-proud-master-of-a-robot-dalmatian-named-sparky/

Angelica Lim is a developer at Aldebaran Robotics in Paris. In this TechRepublic article, she is interviewed about her work with robots. She has been working on teaching robots to recognize emotions by teaching them facial expressions, much like how babies learn. Her work with emotional robots started with making robots that could play music. (An example of one such robot is here.) The researchers found that even when the robots played the notes perfectly, the performance did not match how a human player would play, so they have started working on adding emotion to robots. Some emotion-responsive robots have been developed already. One such robot is NAO, which helps teach autistic children about emotions and social interactions. (You can see a video about the project here.) Not everyone approves of emotional robots, though. Sherry Turkle objects to giving robots emotion because they cannot really feel emotion and so are just pretending. She believes that this pretending to care would be damaging to children and the elderly, the two age groups most often identified as benefiting from a robot companion. 


I thought that this was a very interesting topic. Thinking about how they would go about programming robots to recognize and react to emotions, it seems even more difficult than building a chatbot (a rough sketch of the idea is below). The idea of being able to use robots to teach autistic children about emotions is also interesting. The article talks about how robots work well for this because they present fewer signals that the children have to interpret. We often focus on making robots more human, but this shows that there are times when a robot's limitations are a benefit. As for Turkle's objection that having robots with emotions would be damaging because they are pretending, I don't really understand her point. She does not go into detail about her objection, and that may be part of it. To me the robot is not trying to trick the person; it is following its programming. Her fear seems connected to the fear shown in the first short story in I, Robot: that people will not be able to distinguish between a person and a robot, or will prefer robots to people. I think that children would still learn to interact with people, and robots could help with that. If the elderly feel like their only companion is a robot, then I think that says more about how we treat elderly people than about the dangers of robots. 
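As a thought experiment (this is entirely my own toy example, not how Aldebaran's actual software works, and all the feature names and numbers are invented), you could imagine emotion recognition from facial expressions as matching a few measured facial features against stored emotion "prototypes":

    # Toy emotion recognizer: compare measured facial features against
    # emotion "prototypes" and pick the closest one.
    # Features are (mouth_curve, eye_openness, brow_raise), each scaled 0 to 1.
    EMOTION_PROTOTYPES = {
        "happy":     (0.9, 0.6, 0.5),
        "sad":       (0.1, 0.4, 0.2),
        "surprised": (0.5, 1.0, 1.0),
        "neutral":   (0.5, 0.5, 0.4),
    }

    def classify_expression(features):
        """Return the emotion whose prototype has the smallest squared distance."""
        def distance(proto):
            return sum((f - p) ** 2 for f, p in zip(features, proto))
        return min(EMOTION_PROTOTYPES, key=lambda e: distance(EMOTION_PROTOTYPES[e]))

    # A wide smile, moderately open eyes, slightly raised brows:
    print(classify_expression((0.85, 0.55, 0.45)))  # prints "happy"

A real system would of course learn these mappings from data and interaction (the way Lim compares it to how babies learn) rather than using hand-picked numbers, which is part of why it seems so much harder than a chatbot.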

4 comments:

  1. I really like the concepts you presented here! The idea of robots having emotions is a major concept in sci-fi movies. I think it is interesting to have robots attempt to use face recognition in order to detect emotion. However, I agree that this is even more complex than the chatbot.

    I share your confusion about Turkle's objection, but I do understand her concern about robots working with children with autism. I could see difficulties if the robot were to leave the child, perhaps causing a breakdown depending on the child's form of autism. However, I still do not think this is a big enough problem to rule out research on emotional robots. I'd love to learn more about the potential of robots with emotions and how that would influence the creation of artificial intelligence.

    Replies
    1. Have you read Chapter 1 of "I, Robot" yet? Seems apropos to your point here...

  2. In my opinion, the fear that people will soon not be able to tell the difference between robots and human beings, or will prefer robots to humans, is invalid. Think of robots as pets: people have had pets for a long time, and humans have not feared becoming inadequate because their pets do some things better than they do. Gloria and Robbie seem to have a normal relationship, like that of a pet and owner.
    Who is to say that when humans react to our situations we are not also pretending? After years of practice, our brains get programmed to react a certain way in a given situation. If we get robots to do the same, I am sure we will be open to using them for emotional comfort. I am sure there are already apps out there meant to cheer you up when you are having a bad day, and so on.

    Replies
    1. I can't completely agree with the robot-to-pet analogy.

      If the goal of achieving artificial intelligence is to create an agent that possesses the cognitive/intellectual abilities of humans (or better), then why must it be reduced to a pet? At this point, it's an agent that can think on our level. Whether or not the agent looks like us, we shouldn't treat something that can think like us as inferior (I'm assuming a pet would be considered inferior to a human in a societal sense); it just wouldn't be fair.
