Sunday, December 27, 2015

Rearranging Asimov's Laws?

Image: Tufts Human-Robot Interaction Lab (from linked article)

You have all been reading I, Robot by Isaac Asimov...

Some of you may also have seen the movie 2001: A Space Odyssey. This recent research from Tufts University, which is attempting to make robots interact more intelligently with humans, touches on both of these...

Nifty, right?  What could possibly go wrong?

Friday, December 18, 2015

Debugging Gender Binary

Source: https://cdn3.dogonews.com/pictures/3626/content/kobianxin_0220606250913406301818.jpg?1285699722


Have you ever yelled at a navigation A.I. for giving you misleading or otherwise poor directions? I have fond memories of hearing my mother yell at what she dubbed the "Nav Lady" back when the screen in our Toyota Sienna was working. I can still hear the smooth, well-articulated words of the Nav Lady's voice emanating from the speakers to alert the driver of an impending turn, only to be met by a witty retort from whoever was at the wheel.

It was kinda weird. No, not my mom's name for this disembodied voice, as if it were a sentient being. I meant the voice of the nav system. The tone was so...sultry. Euuugh. Trust me. Android's nav voice and Siri definitely sound more impersonal, which I appreciate.

But I digress.

In an article on The Atlantic's website titled "We Need a New Pronoun for Artificial Intelligence," author Kaveh Waddell imagines a future where the existence of artificial entities demands a third-person pronoun with which to refer to these A.I. Waddell suggests that the implied gender of an A.I. helps humans acclimate to the presence of A.I., because identifying as male or female is a quality generally shared with humans. As humans become more intimately familiar with technology and artificially intelligent technology becomes more prevalent, the assignment of a male or female gender to an A.I. will become obsolete. A.I. will be commonplace enough to merit its own third-person pronoun. Waddell believes 'it' to be inadequate, and 'he' and 'she' aren't quite correct.

The author points out two examples from popular media that I find interesting. In the case of A.I. featured in films such as Her, the A.I. is clearly a female stock character meant to entice a male protagonist. However, when we encounter non-specifically gendered A.I. such as R2-D2, who arguably shows more intelligence than some sentient beings in the universe (lookin' at yousa, Jar-Jar), the default is often to reach for male pronouns. When you think about it, it's not a very informed decision; R2-D2 speaks in nonspecific bloops and bleeps, and there aren't any real anatomical clues to speak of (I hope).

I believe the author's assertion holds merit, but it will depend on the extent to which humans can develop thoughtful A.I. If we indeed see anthropoid robots become more prevalent in the home, at work, or on the street, it seems fitting to be able to refer to a highly functioning "thing" in recognition of [its] greater capacity. However, I find it unlikely that it will matter a great deal to the A.I. how someone recognizes [its] gender, unless some programmer finds him- or herself quite bored one day and decides, "You know what? I need to blow off some steam. I think I'll program something I can sit here and offend all day." More likely, I believe, is that more people will grow up around artificial intelligence as it plays a larger role throughout their lives, and a new pronoun designation will naturally take hold.

That is, if we can decide on one. Hey, if the Swedes can do it, so can we.

[Extra Post] Fun "Evolution" Game

(This is not supposed to be my graded blog post.)

Considering the second half of class today, I wanted to share this neat little game I know of. The goal of the program is to "evolve" a car that can successfully traverse a particular track to obtain a high score in a short amount of time. Users can influence factors such as the generation size and mutation rate. It is possible to save a car and then drop it into a new environment to see what happens (try it with some of the cars on the 'Best Cars' page). Alternatively, you can design a car yourself, but that's not as fun in my opinion.
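For anyone curious what's happening under the hood, here is a minimal sketch of the kind of genetic algorithm these games run. To be clear, this is not the game's actual code: the car representation, the fitness function, and the parameter names are my own simplifications, though "generation size" and "mutation rate" are the same knobs the game lets you turn.

```python
import random

# A "car" is just a list of numbers (think wheel sizes, body shape).
GENOME_LENGTH = 8
GENERATION_SIZE = 20   # user-tunable, like in the game
MUTATION_RATE = 0.05   # chance each gene gets randomly perturbed

def random_car():
    return [random.uniform(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(car):
    # Stand-in for "distance traveled on the track"; the real game
    # runs a physics simulation here instead.
    return sum(gene * (i + 1) for i, gene in enumerate(car))

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(car):
    return [random.uniform(0, 1) if random.random() < MUTATION_RATE else gene
            for gene in car]

population = [random_car() for _ in range(GENERATION_SIZE)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:GENERATION_SIZE // 2]  # keep the best half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(GENERATION_SIZE - len(parents))]
    population = parents + children

print("Best car:", max(population, key=fitness))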

There's also this one, but I haven't yet toyed with it. This is based on the program above with the same physics engine, but otherwise rewritten. It's kind of cool to have more than one car running at once and see them race each other. You can also check it out on Github here.

Feel free to comment with neat car designs, or links to other games you know of. Happy December!

Artificial Intelligence Is Likely Affecting Your Life, Daily

            The use of big data to analyze media ratings and profitable markets is nothing new, but how that information is retrieved has not seemed to evolve much in recent memory.  However, with an already nearly immeasurable number of devices capable of streaming and downloading media content, the ratings for a TV show can no longer be limited to simply the number of televisions tuned to a broadcast at any one time.  The number of people streaming a show, downloading it for later viewing, or mentioning it on a social media site makes for a vastly more complex and larger-scale data set to sift through than the traditional TV ratings we have seen up until this point.  This is where Artificial Intelligence comes in, to make sense of all this data.  According to http://recode.net/2015/12/16/bbc-uses-artificial-intelligence-to-track-down-new-audiences-for-sherlock/, Parrot Analytics, the company the BBC has hired to analyze viewer data for its show Sherlock, reports that the data to go over for this one show is on the order of several petabytes.


Image credited to recode.net; specifically this article 

To put in perspective how large a petabyte (PB) is: it is the equivalent of 1,000,000 gigabytes, or 1,000 terabytes, of information.  And that is just for one show, on one network.  The “big four” tech giants in the world, Google, Amazon, Microsoft and Facebook, are estimated to hold 1,200 PB of information altogether, across every service they offer, according to http://www.sciencefocus.com/qa/how-many-terabytes-data-are-internet.  When we start talking about the amount of data to sift through for media ratings, we are talking about data sets so large that a human, or even a team of well-trained statisticians, may never be able to comb through them all, much less boil them down into useful information an average reader could make sense of.  Parrot Analytics, however, employs artificially intelligent agents to comb through all of this information, and lo and behold, a viable, useful set of data is produced as a result.

In the article, Parrot is described as calculating what it calls a “demand rating,” a measurement of the interest in a specific media broadcast in a specific area of the world.  In a sense, it is the result of analyzing the total social media mentions a show receives.
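The article does not publish Parrot's actual formula, but conceptually a demand rating could be as simple as a weighted sum of engagement signals for one region. Here is a purely hypothetical sketch; the signal names and the weights are mine, not Parrot's.

```python
# Hypothetical weights: a download is presumably a stronger demand
# signal than a passing social media mention.
WEIGHTS = {"social_mentions": 1.0, "streams": 5.0, "downloads": 8.0}

def demand_rating(signals):
    """Toy demand score for one show in one region of the world."""
    return sum(WEIGHTS[name] * count for name, count in signals.items())

# Made-up numbers for a made-up region:
seoul = {"social_mentions": 120000, "streams": 40000, "downloads": 9000}
print(demand_rating(seoul))  # higher score = more regional demand
```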

Truthfully, this is where the analysis of data has to go: into the hands of an artificially intelligent program that can make sense of it all.  In Probability and Statistics, a course taught in the Math department (MATH 315), one of the first lessons you are taught is that a data set of even 20 values looks different when viewed as a list than when it is made into a graph.  Students and teachers alike identify patterns in data sets far more readily when those sets are given a visual representation.  When that same data is presented as a list, such as

 1     1.00015
 2     1.00016
 3     1.00099
 4     1.00127
 5     1.00019
 6     1.00018
 7     1.00017
 8     1.00003
 9     1.00019
10     1.00053
11     1.00137

people are much less likely to identify the pattern (here, that the values start out low, increase, drop back to their original level, then increase again).  The end result of those thousands of terabytes of information is the discovery of new markets for TV shows.  In the case of Sherlock, the unlikely market of Seoul, South Korea proved to hold a very large number of fans of the show.  After analytics from Parrot tipped them off that they might have a large fan base in that part of the world, the BBC included the city in a promotional world tour, according to the news article this writing is based on.  The tour involved getting a selfie in front of a Tardis replica and being able to buy tickets to meet Benedict Cumberbatch, the actor who plays Sherlock Holmes in the series.  Only 4,000 tickets could be sold, and 50,000 people physically left their homes and lined up for them within a few minutes.
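To see the point concretely, here is a short matplotlib sketch that plots the list above. The rise-fall-rise pattern jumps out of the graph in a way it never does from the raw numbers.

```python
import matplotlib.pyplot as plt

# The same eleven values from the list above.
values = [1.00015, 1.00016, 1.00099, 1.00127, 1.00019,
          1.00018, 1.00017, 1.00003, 1.00019, 1.00053, 1.00137]

plt.plot(range(1, len(values) + 1), values, marker="o")
plt.xlabel("Observation")
plt.ylabel("Value")
plt.title("The same data, as a graph")
plt.show()
```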


To me, this confirms the suspicion that Artificial Intelligence will be not just an important tool in the years to come, but an indispensable one for analyzing trends.  These kinds of numbers, 2-3 PB of information for one television show, are the result of the many different outlets people now have to express interest in a given form of media, and as time has shown us, the number of available outlets tends to increase exponentially.  An article from 2014 in the Wall Street Journal, http://www.wsj.com/articles/SB10001424052702303640604579296580892973264, records technology giant Cisco as saying “the number of devices connected to the Internet will swell from about 10 billion today [in 2014] to 50 billion by 2020, as wireless links spread beyond smartphones and PCs to many other kinds of devices.”  This is known as the Internet of Things, or IoT.

The data sets we use to analyze television shows are growing, and the data we can collect about cities and governments is increasing exponentially as well.  In this article, http://mic.com/articles/130132/this-high-tech-city-is-showing-the-rest-of-the-world-what-the-future-looks-like#.4EjQngQLF, the city of Glasgow, Scotland is reported to have placed sensors under the pavement of its streets to detect oncoming traffic, so as to better time traffic lights, for example, and to let users of a mobile app report potholes in their neighborhood.  Crime is now tracked using technology, data is collected on where more electricity is used, and citizens can be notified when it might be faster to ride a bicycle to work than to drive.  The result of all of this information is a data set orders of magnitude larger than any gathered in history, and it is growing rapidly.

Search algorithms set to find the proper time to go fix a pothole, or agents designed to find a good market for a TV show, are now, or will soon be, required to analyze data sets that may become too large for most humans to comprehend.  A more intelligent world demands more intelligent tools to work with it, and for that, artificially intelligent programs are not only valuable but indispensable, both for the kinds of big data sets they can examine and for how they can affect our lives as a result.

Thursday, December 17, 2015

Just sharing a story.

I thought you all might enjoy this story I randomly found because it involves a robot and is somewhat funny. ^\_['_']_/^ (This is a robot shrugging.)

Thursday, December 10, 2015

Autonomous Vehicles, a Matter of Simplifying the Equation

    Google has been developing its autonomous car for years. Talk of the Google Car that can completely drive itself has spread across the nation, even though the vehicles have never left the comfort of Google's home in Mountain View, California. However, over the past year, the company has shifted its focus from developing a standard, full-speed, four-passenger vehicle to a simplified, souped-up golf cart. A recent article by MIT Technology Review's Mark Harris revealed the thinking behind Google's new car, simply referred to as "Prototype."

Photo Credit: MIT Technology Review (http://www.technologyreview.com/news/537556/why-googles-self-driving-bubble-cars-might-catch-on/)

    The premise for Prototype lies in Google's need to simplify the set of possible outcomes and situations that one of its autonomous vehicles could encounter while driving down the road. These vehicles are examples of planning agents. As discussed in class, these are agents that must ask "what if?" and plan their possible responses accordingly. However, it requires an immense amount of computing power and work on the agent's part to predict how the entire complex world state will look in the future so that it can plan its responses. In essence, Prototype is Google's effort to limit the world state of an autonomous vehicle by only allowing it to encounter certain state spaces within the vast world that is all the roads on earth. In a simplified way, it is similar to a person starting to find the optimal solution to a game of Pac-Man by only attempting to maximize the number of dots Pac-Man eats, without even considering the enemy ghosts.
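As a rough illustration of why shrinking the state space matters, here is a minimal breadth-first planner over an abstract road graph. The road network and names are made up, not anything from Google; the point is just that the number of "what if?" states the agent has to expand grows with the world it is allowed to drive in, so restricting the allowed states makes planning tractable.

```python
from collections import deque

# A tiny, made-up road network: each intersection lists its neighbors.
ROADS = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def plan_route(start, goal, allowed):
    """Breadth-first search restricted to an 'allowed' set of states,
    the way Prototype restricts itself to slow city streets."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in ROADS[path[-1]]:
            if nxt in allowed and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route exists inside the restricted world

print(plan_route("A", "E", allowed=set(ROADS)))            # full world
print(plan_route("A", "E", allowed={"A", "B", "D", "E"}))  # restricted
```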
    Prototype has been limited to a top speed of 25 miles per hour and may only drive on roads with a speed limit of at most 35 miles per hour within the city limits of Mountain View. This cuts out the complications of highway commutes and high-speed driving altogether, while allowing the car to be placed in the same classification as a golf cart according to the National Highway Traffic Safety Administration (NHTSA). Google claims that this will allow its products to get out on the road and continue testing as the company works toward solving the complex problem of self-driving cars. Yet, after years of work on the Google Car without any significant confidence that it will be capable of handling all the problems the road could throw its way in the near future, this seems to be a step backward for Google.
    The real question seems to be whether anyone will be able to create a completely autonomous vehicle in the near future. All of a sudden, a race has developed between many companies, including Tesla and Lexus, to figure out who will be the first to answer yes to that question. Even the Chinese search engine company Baidu has jumped into the mix. Baidu has taken a slightly different approach than Google, delving straight into testing on busy streets under strict observation by operators to see where the car needs help and could use improvements.
    A common theme runs through all of these projects, though, one that makes me doubt any of these companies' ability to produce such a car in the next few years. They have all underestimated the kinds of extraneous situations a driver faces behind the wheel of a car. They all seem to be stuck dealing with scenarios such as a biker hand-signaling while riding ahead of the car, another driver craning their neck to see that no one is driving this car, a police car pulled to the side of the road that the car has to slow down and move over for, or even a drunk driver swerving dangerously down the road. Each of these possibilities normally requires a driver to think, calculate, and weigh options on the fly in order to determine the safest course of action. This turns the discussion to a topic also discussed in class: can machines think the way humans do? Can a self-driving car sense and predict a situation requiring a new course of action, one it may never have encountered before, in order to protect its passengers?

    That is a whole argument in itself, but I believe the true limiting factor in the whole dilemma is much different. The limiting factor is us. Humans may be the reason autonomous cars are not already on the roads. This is not because we cannot create such a machine, but because we cause many of the problems that self-driven cars face. If we really consider all of the difficult situations that could cause problems for these automobiles, they are all rooted in the danger and unpredictability of humans. If all vehicles on the road were autonomous, they could communicate and inform one another about where they will be at what time, and potentially avoid accidents and problematic situations altogether. This may be optimistic thinking, but one thing is certain: it is hard to imagine a company releasing a completely autonomous, full-speed passenger vehicle for consumer purchase in the near future.

Tuesday, December 8, 2015

Malevolent AI: Fact or Fiction? Far Away or Soon to Come?

                


                As of right now, it seems that most of our class discussions about A.I. have been geared toward looking at A.I. in a positive frame. We regularly see A.I. misbehaving in a multitude of sci-fi movies such as The Terminator series, The Matrix trilogy, and even "Avengers: Age of Ultron," whose antagonist robot is pictured above. Since these movies are disappointingly unrealistic by today's standards (unless we are currently living in the matrix, and you are all robots...), it's hard to relate to much of what we see. So this made me wonder: what would a so-called malevolent A.I. look and act like?

                When looking for answers to this question, I stumbled across an article titled “AI gone wrong: Cybersecurity director warns of 'malevolent AI'.” In this article, author Hope Reese interviews the director of the Cybersecurity Lab at the University of Louisville, Roman Yampolskiy, in order to discuss the possible dangers of creating A.I. devices. This article is a great read about what ‘malevolent A.I.’ is and how it seemingly isn’t so farfetched anymore.


                After reading through this article I was impressed in two separate respects. First, by how far A.I. has come. Think about it: nearly every electronic device made today has a touch of A.I. in it, whether primitive or complex in use and design. Second, I noticed how easy it would be for a malevolent device to take over many other devices and use them for itself. In a different article on Scientific American, the writer goes more deeply into how A.I. is created and why he believes we are too far out for A.I. to be the threat that some mainstream scientists fear. However, I disagree with his views. We have already seen how hackers are able to dig through code, find ways into other devices, and take control of them. I believe it would be possible to program bots to do this on their own as well. The point here is that although we may not yet have made an A.I. system with all the attributes of something malevolent, we are close to making all of the individual pieces. It will only take one programmer compiling all of these into one device, similar to the ‘super computer’ in the movie ‘Eagle Eye.’ This article further supports my views as well. My question is which will happen first: humans creating malevolent A.I. intentionally, or unintentionally? We have all the pieces, but who will be the first person to begin putting them together?

Monday, December 7, 2015

Emotional Robots

From http://www.techrepublic.com/article/angelica-lim-flutist-global-roboticist-proud-master-of-a-robot-dalmatian-named-sparky/

Angelica Lim is a developer at Aldebaran Robotics in Paris. In this article, TechRepublic interviewed her about her work with robots. She has been working on teaching robots to recognize emotions by teaching them facial expressions, much the way babies learn. Her work with emotional robots started with making robots that could play music (an example of one such robot is here). The researchers found that even when the robots played the notes perfectly, it did not match how a human player would play, so they started working on adding emotion to robots. Some emotion-responsive robots have been developed already; one such robot is NAO, a robot that helps teach autistic children about emotions and social interactions. (You can see a video about the project here.) Not everyone approves of emotional robots, though. Sherry Turkle objects to giving robots emotion because they cannot really feel emotion and so are just pretending. She believes this pretending to care would be damaging to children and the elderly, the two age groups most often identified as benefiting from a robot companion.
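To get a feel for the programming side, a facial-expression recognizer is at heart a supervised classifier: features extracted from a face go in, an emotion label comes out. Here is a toy scikit-learn sketch on made-up landmark features; the feature names and numbers are mine, and real systems like Lim's are vastly more sophisticated.

```python
from sklearn.neighbors import KNeighborsClassifier

# Made-up training data: [mouth_curvature, eyebrow_raise, eye_openness]
features = [
    [0.9, 0.2, 0.6], [0.8, 0.3, 0.7],    # smiles
    [-0.7, 0.1, 0.4], [-0.8, 0.0, 0.3],  # frowns
    [0.1, 0.9, 0.9], [0.0, 0.8, 1.0],    # wide-eyed surprise
]
labels = ["happy", "happy", "sad", "sad", "surprised", "surprised"]

# Classify a new face by the majority label of its 3 nearest neighbors.
model = KNeighborsClassifier(n_neighbors=3).fit(features, labels)
print(model.predict([[0.7, 0.25, 0.65]]))  # -> ['happy']
```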


I thought that this was a very interesting topic. Thinking about how one would program robots to recognize and react to emotions, it seems that it would be even more difficult than a chat bot. The idea of using robots to teach autistic children about emotions is also interesting. The article talks about how the robots work well for this because they present fewer signals that the children have to interpret. We often focus on making robots more human, but this shows there are times when a robot's limitations are a benefit. As for Turkle's objection that robots with emotions would be damaging because they are pretending, I don't really understand her point. She does not go into detail on her objection, and that may be part of it. To me, the robot is not trying to trick the person; it is following its programming. Her fear seems connected to the fear shown in the first short story in I, Robot: that people will not be able to distinguish between a person and a robot, or will prefer robots to people. I think children would still learn to interact with people, and robots could help with that. And if the elderly feel like their only companion is a robot, then I think that says more about how we treat elderly people than about the dangers of robots.

Sunday, December 6, 2015

Hour of Code

Hey Everyone,
Who is interested in having an hour of code on Saturday? What times work best for everyone? Do we want to open it up to all Computer Science majors/211-212 classes?

Sunday, November 29, 2015

Not Only Can Computers Not Think, They Never Will.

Photo Credit: DNews
Our very first debate of the class, on November 20th, 2015, was on the motion "Can Computers Think?" One of the claims given by the proposing team was drawn from paralleling the development of computers and technology with human development. Whether we look at the trajectory of evolution or the growth of human beings from infancy to adulthood, we see advancement from primitive beginnings to more complex and articulate creatures. The proponents of the motion claimed that technology has shown the same progress toward complexity and articulateness, at such a rate that if computers are not thinking now, they soon will be. In his paper "Can Computers Think?", John Searle says no. He argues that digital computers don't, and never will, have mental functioning like that of human beings. His refutation rests on the fact that computers are only syntactical, while human minds are more than syntactical; they are also semantical; they have content. To better understand what he means by this assertion, he compares the operations of digital computers with human minds.
The distinction he draws between the two faculties, mental processes and program processes, goes as follows. The operations of digital computers can be specified purely formally; we specify the steps of a computation in terms of abstract symbols, sequences of zeros and ones, but the symbols have no meaning; they are not about anything! The zeros and ones, for example, are just numerals; they don't even stand for numbers. This feature of programs, that they are defined purely formally or syntactically, is fatal to the view that mental processes and program processes are identical. There is more to having a mind than having formal or syntactical processes: our mental states, by definition, have certain sorts of content; they have both syntax and semantics.
Going back to the Turing Test and the Chinese Room experiment, it is important to note that producing the desired output is not enough to classify a process as a thought process. If the man in the Chinese Room is replaced with a digital computer that correctly recognizes the given symbols and produces the right output, it may fool the people outside the room into thinking it understands Chinese, but we know it doesn't. Producing the right output is not enough; interpreting and understanding the meaning of these symbols is what constitutes being a Chinese speaker. The same goes for the Turing Test: the main idea is to mimic mental processes, keyword mimic. More efficient programs may be designed, and some will ultimately pass the Turing Test, but none of these programs will ever have semantic content. None of them will find meaning in what the person on the other side is saying and produce the desired output by virtue of understanding the content, rather than by following an algorithm specified only syntactically.
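Searle's point is easy to make concrete in code. Here is a toy "Chinese Room" as a pure lookup table; the phrases are just stand-in strings I chose. The program would fool an outside observer exactly as far as its rulebook reaches, while understanding nothing at all.

```python
# A tiny "Chinese room": the rulebook maps input symbols to output
# symbols. Nothing in this program knows what any symbol means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols):
    # Purely syntactic: match the shape of the input, emit the
    # associated output. There is no semantics anywhere in here.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # looks fluent, understands nothing
```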


Saturday, November 28, 2015

Artificial Intelligence? 97.4 Percent Of Computers Say They Still Need Us Humans




Photo Credit: www.redlambda.com
The article by Forbes answers the question of whether computers still need humans, or whether computers can learn without human input. It cites a survey by Evans Data Corporation (EDC): EDC interviewed 529 software developers, and only 2.6% reported that machine learning software does not require human input. The survey also revealed that human input does not stop after software has been deployed.

I think the article raises a good question, but the way it arrives at the answer is flawed. I agree with Forbes, but I am not content with how they came to the 97.4% figure. They only interviewed 529 developers, a pretty small sample considering that there were 3.6 million software developers in the US as of 2013. And in order to access the data, I would have to sign up with the company, which makes it seem like they may be hiding something.

A second article speaks to the same issue, that humans are still needed, and its authors believe that even in 2035 we will not have achieved AI. The reason is that the problem is not a hardware problem but a software problem: the software still needs to be developed in order to achieve AI. I agree with the article; the better the software, the closer we are to achieving AI.

This links to our class through the debate on the question “Can computers think?”, and directly to the “No” side of that discussion, as well as to the overall discussion of what AI is.