Tuesday, December 8, 2015

Malevolent AI: Fact or Fiction? Far Away or Soon to Come?

[Image: Ultron, the antagonist robot from "Avengers: Age of Ultron"]
As of right now, it seems that most of our class discussions about A.I. have been geared toward looking at A.I. in a positive frame. We regularly see A.I. misbehaving in a multitude of sci-fi movies such as The Terminator series, The Matrix trilogy, and even "Avengers: Age of Ultron," whose antagonist robot is pictured above. Since these movies are disappointingly unrealistic by today's standards (unless we are currently living in the Matrix, and you are all robots...), it's hard to relate to much of what we see. So this made me wonder: what would a so-called malevolent A.I. look and act like?

While looking for answers to this question, I stumbled across an article titled "AI gone wrong: Cybersecurity director warns of 'malevolent AI'." In it, author Hope Reese interviews Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, about the possible dangers of creating A.I. devices. It's a great read on what 'malevolent A.I.' is and why it no longer seems so far-fetched.


After reading through this article, two things impressed me. First, how far A.I. has come: think about it, nearly every electronic device made today has a touch of A.I. in it, whether primitive or complex in use and design. Second, how easy it would be for a malevolent device to take over many other devices and use them for itself.

In a different article on Scientific American, the writer goes deeper into how A.I. is created and argues that we are too far away from advanced A.I. for it to be the threat that some mainstream scientists fear. However, I disagree with his view. We have already seen how hackers are able to dig through code, find ways into other devices, and take control of them, and I believe it would be possible to program bots to do this on their own as well. The point is that although we may not yet have built an A.I. system with all the attributes of something malevolent, we are close to having all of the individual pieces. It would only take one programmer compiling these into one device, similar to the supercomputer in the movie 'Eagle Eye.' This article supports my view as well. My question is which will happen first: humans creating malevolent A.I. intentionally, or unintentionally? We have all the pieces, but who will be the first person to begin putting them together?

8 comments:

  1. What exactly would you say are the pieces that need to be put together to make an A.I. dangerous? From the argument you make here, I feel the main concern should still rest with humans. If humans are the ones programming an A.I. to use another bot for its own desires, then humans are still the ones causing the issues. The bots have just become their weapons.

    I think this issue really comes down to the debate over whether computers could become self-aware. If they had the capacity to know of their own existence, then I think that is when a problem may arise. As we see in all the films you refer to, everything goes south when a robot sees itself as superior to its human creators.

    1. It seems to me that we need to separate three ideas here:
      * robots being self-aware
      * robots believing they are superior to humans
      * robots intentionally harming humans
      Even if robots are self-aware and believe themselves to be superior to humans, why would they harm us? Would it be "rational" according to some kind of utility function? Why wouldn't they merely "pity" us for our limited intellectual capacities?

      On the other hand, even if robots believed they were inferior to humans, might they still try to intentionally harm us? Could they experience "jealousy" over the things our minds can still do that theirs can't?

    2. What about the Three Laws of Robotics? Or is that just a concept from I, Robot?

    3. Excellent question. Of course the "Three Laws of Robotics" was just something Isaac Asimov made up for his story -- but *would* it be possible to engineer a robot that had to obey these laws? Or not? Would it be possible to have human-level intelligence *without* the freedom to choose certain actions that violate certain rules? (We don't seem to be able to get humans to stop killing each other, but maybe there's more hope for robots?)

    4. From what I remember of the LSFY section covering the morality and ethics of A.I., Asimov's laws follow a deontological system, where rules are set to be followed without weighing the consequences of an action. Examples include: don't kill, don't steal, etc. This means the rules are followed in a black-and-white manner, without any grey area of thought.

      But what if a robot has to choose between two of Asimov's laws? For instance, ordering a robot to jump off a cliff, ending its "life," forces it to choose between the second rule and the third.

      I'm concluding that Asimov's laws, while probably a great start granted they were conceived over 70 years ago, shouldn't be the rules followed today when researching A.I.
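      One way to picture the cliff conflict: Asimov actually ranked the laws, with the First Law outranking the Second and the Second outranking the Third, so in that example the order wins and the robot jumps. Here is a minimal sketch of that priority ordering in Python; the action names and the violation table are hypothetical, made up just for illustration.

      ```python
      # Toy sketch of Asimov's laws as a strictly prioritized rule system.
      # Purely illustrative: the actions and the violation table below are
      # hypothetical, not drawn from any real robotics framework.

      LAWS = [
          "do not injure a human or allow one to come to harm",  # First Law
          "obey orders given by humans",                         # Second Law
          "protect your own existence",                          # Third Law
      ]

      def violations(action):
          """Return the indices of the laws this action would violate."""
          table = {
              "jump off cliff": {2},  # sacrifices itself: violates Third Law
              "refuse order":   {1},  # disobeys a human: violates Second Law
          }
          return table.get(action, set())

      def choose(actions):
          """Pick the action whose worst violation is the lowest-priority law."""
          def cost(action):
              v = violations(action)
              # Lower index = higher-priority law = worse to violate.
              return min(v) if v else len(LAWS)
          return max(actions, key=cost)

      # The cliff dilemma: refusing violates the Second Law, while jumping
      # violates only the Third, so under a strict ordering the robot jumps.
      print(choose(["jump off cliff", "refuse order"]))  # -> jump off cliff
      ```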

  2. We computer scientists are very generous people. We only care about making the world a better place by making life easier and solving problems. It is therefore an optimistic conclusion that we won't create malevolent robots, at least not intentionally.

  3. I agree that many fear robots may become dangerous, if you consider it possible for a robot to become self-aware and then come to feel superior. One use of robots I would like to understand more about is robots as a military force. If we could create A.I., I believe it would only be a matter of time before someone thought of using robots in war to save human lives. Then there may be a chance for malevolent robots, and even more dangerous ones. I don't think we can fully understand the capabilities, both positive and negative, of robots if A.I. becomes more successful.
