Tuesday, February 16, 2016

Technological Singularity



I think I have the honor of being the last student to post an article to the AIAI blog this term. Here is a new term for y'all to enjoy: the Technological Singularity. A Google search will lead you to this wiki page, and I am sure as hell that you will be bored by the flood of words on it (not like you guys have any time to read the blog anyway).

To sum it up, it is a hypothetical event in which technology grows too fast, artificial intelligence outsmarts humans, and humans can no longer predict what happens next because everything is beyond human understanding. You know, stuff like Terminator and The Matrix.



Here is an article about what could lead to the singularity and how we could prevent it. There are four ways a technological singularity could happen:

- Scientists could develop advancements in artificial intelligence (AI)
- Computer networks might somehow become self-aware
- Computer/human interfaces could become so advanced that humans essentially evolve into a new species
- Biological science advancements could allow humans to physically engineer human intelligence

And the first three lead to machines taking over.

And here are some more articles discussing these issues (they are shorter than the wiki page and the previous article).

5 comments:

  1. Which event do you find most likely? I am a bit skeptical about the third option because it's hard for me to imagine how you could modify a huge number of people with computer interfaces (then again, smartphones are so ubiquitous nowadays that perhaps implanted technology will become commonplace instead of an external device).

    Interesting point in the first article about finding ways to bypass Asimov's rules. Why might they aim to do that?

    Replies
    1. I think that is interesting too! The only reason I originally thought of for bypassing the laws would be to harm humans, i.e. potentially using AI as weapons. However, I think there may be other reasons, such as judgment issues. This is the whole Robin Hood way of thinking: trying to find justice for the right people. However, since who the good guys are is a matter of perspective, I still do not think that is a good enough reason to get rid of the laws...

    2. It doesn't make a lot of sense to talk about "getting rid of" the three laws in the real world, since these three laws have never existed, except in science fiction. It's really not clear to me whether it would even be *possible* to program these laws into an intelligent system... perhaps if the system were entirely deduction/logic-based... but the most promising current methods in A.I. seem to be more neural-network-based, and they learn how to act from millions of example data points rather than having "laws" built into them (there's a toy sketch of this contrast below the comments).

  2. I found it interesting that the second article thought the thing most likely to prevent this from happening is Moore's Law no longer holding. I think that the most likely cause would be a powerful AI that then evolves other AIs. We saw with the penguin wizards that the evolved penguins were much better than the ones we designed.

  3. Several parameters go into evolving artificial intelligence. Parameters like the fitness function, utility tests, and so on are all designed by the developers trying to evolve a particular algorithm. Do you think they would carelessly let the evolution get out of hand? (A minimal genetic-algorithm sketch below the comments shows just how many of these knobs the developers hold.)

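On the neural-network reply above: here is a toy sketch of the contrast between a hand-coded rule and behavior learned from examples. Everything in it (the function names, the 1.0 distance threshold, the training data, the perceptron-style update) is invented for illustration, not taken from any of the articles.

```python
# Toy contrast: a hard-coded "law" vs. behavior learned from examples.
# All names, thresholds, and data here are invented for illustration.

def rule_based_action(distance_to_human):
    # The "law" lives on one explicit, auditable line of code.
    if distance_to_human < 1.0:
        return "stop"
    return "proceed"

# A learned system has no such line -- only weights fit to example data.
# (distance, label): 0 = should stop, 1 = safe to proceed
examples = [(0.2, 0), (0.5, 0), (0.8, 0), (1.5, 1), (2.0, 1), (3.0, 1)]

w, b = 0.0, 0.0
for _ in range(1000):  # simple perceptron-style training loop
    for x, y in examples:
        pred = 1 if w * x + b > 0 else 0
        w += 0.1 * (y - pred) * x
        b += 0.1 * (y - pred)

def learned_action(distance_to_human):
    # The behavior emerges from the weights w and b, not from a written rule,
    # so there is no single line where a "law" could be bolted on.
    return "proceed" if w * distance_to_human + b > 0 else "stop"

print(rule_based_action(0.4), learned_action(0.4))   # stop stop
print(rule_based_action(2.5), learned_action(2.5))   # proceed proceed
```

The rule-based version could have Asimov-style constraints added as more if-statements; the learned version offers no obvious place to put them.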
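
And on the fitness-function comment: here is a minimal genetic-algorithm sketch showing how many knobs stay in the developers' hands. The target, population size, mutation rate, and stopping condition below are made-up illustrative choices, not anything from the articles or the penguin-wizards project.

```python
import random

# Minimal genetic algorithm: evolve a list of digits toward a target.
# Every knob -- the fitness function, population size, mutation rate,
# and stopping condition -- is chosen by the developer.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]
POP_SIZE = 50
MUTATION_RATE = 0.1
GENERATIONS = 200

def fitness(individual):
    # Developer-designed fitness: negative distance from the target.
    return -sum(abs(g - t) for g, t in zip(individual, TARGET))

def mutate(individual):
    # Nudge each gene up or down with probability MUTATION_RATE.
    return [g + random.choice([-1, 1]) if random.random() < MUTATION_RATE else g
            for g in individual]

def crossover(a, b):
    # One-point crossover between two parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 9) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 0:       # developer-chosen stopping condition
        break
    parents = population[:POP_SIZE // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("generation", gen, "best", population[0], "fitness", fitness(population[0]))
```

Nothing here evolves outside the loop the developers wrote, which is the commenter's point.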