Thursday, February 18, 2016

Just for Fun

I just found this post about a potential presidential candidate and thought you guys might enjoy:

Forget Trump or Bernie, how about Watson for President?

Robots and Human Values

While reading I, Robot and after watching the film, the question arises whether the Three Laws of Robotics suggested by Asimov would really be the saving grace of artificial intelligence and humanity. When the Three Laws were modified slightly, the robots in the story were able to find loopholes, such as risking one human's life in order to save more humans in the future. Though the movie did not follow the plot of the book in the slightest, I found it interesting that the one robot that ended up the "good guy" was the one that was not programmed to follow the three specific laws. Perhaps one of the things overlooked is robots learning human values.
In this article written by Alison Flood and posted on The Guardian website, she addresses a new strategy for teaching robots human values. And what is the new method attempting to teach human values to AI? Storybooks. We all remember classic stories from growing up that taught patience and respect for others. Now Mark Riedl and Brent Harrison, from the School of Interactive Computing at the Georgia Institute of Technology, have created a prototype system that is able to learn human values and social conventions from those childhood stories. They call this system Quixote. Quixote runs virtual simulations and gets rewarded for taking actions similar to those in the story (see chart below).
I believe this is a very interesting way to attempt to solve some of the issues and concerns facing AI. However, I began to wonder if it might also be dangerous depending on the type of story fed to the robot. I am sure that as AI continues to advance, we shall see an increase in the human values encoded into robots and artificial intelligence.
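To make the reward idea concrete, here is a toy score over action sequences. This is purely my own illustration (the pharmacy scenario is the example Riedl and Harrison use, but the action names and point values here are made up), not their actual system:

```python
# Toy sketch: reward behavior that follows a "story," penalize bad actions.
STORY = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]
BAD_ACTIONS = {"steal_medicine"}

def score_behavior(actions):
    """Reward actions that match the story's order; penalize bad ones."""
    reward, next_step = 0, 0
    for a in actions:
        if a in BAD_ACTIONS:
            reward -= 5                      # violates a social convention
        elif next_step < len(STORY) and a == STORY[next_step]:
            reward += 1                      # matches the next plot point
            next_step += 1
    return reward

polite_score = score_behavior(["enter_pharmacy", "wait_in_line",
                               "pay_for_medicine", "leave"])   # 4
rude_score = score_behavior(["enter_pharmacy", "steal_medicine",
                             "leave"])                          # -4
```

An agent trained to maximize a score like this would learn to wait in line and pay rather than grab the medicine and run, which is the behavior the Quixote work is after.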

Tuesday, February 16, 2016

Technological Singularity



I think I have the honor of being the last student to post an article to the AIAI blog this term. Here is a new term for y'all to enjoy: technological singularity. A Google search will lead you to this wiki page. And I am sure as hell that you will be bored by the flood of words on that page (not like you guys have any time to read the blog anyway).

To sum it up, it is a hypothetical event in which technology grows too fast, artificial intelligence outsmarts humans, and humans can no longer predict what happens in the future because everything is beyond human understanding. You know, stuff like Terminator and The Matrix.



Here is an article about what could lead to the singularity and how we could prevent it. There are four ways that a technological singularity could happen:

- Scientists could develop advancements in artificial intelligence (AI)
- Computer networks might somehow become self-aware
- Computer/human interfaces become so advanced that humans essentially evolve into a new species
- Biological science advancements allow humans to physically engineer human intelligence

And the first three lead to machines taking over.

And here are some more articles discussing this issue (they are shorter than the wiki page and the previous articles).

Saturday, February 13, 2016

Robots in Elderly Care and Nursing


In this article (see my quick translation here) we are told about a little robot in Japan named PALRO (see picture, from the PALRO website) that's being used to lead group recreational activities and exercises for the elderly at a particular nursing home. The article itself doesn't directly discuss anything hard-hitting, but as I read it I had to grapple with some of my own concerns about robots as a replacement for human interactions, the same kind of worries that came to mind when I read Asimov's "Robbie."

In summary, the article describes how PALRO's introduction has been enjoyable for the elderly patrons of this "nursing home" (actually called a "day service," it's a kind of outpatient senior center or home-visiting nursing service) as well as helpful for the staff. The benefits to the staff were ones I hadn't expected; I thought that PALRO's presence might leave the staff only trivial things to do, but in reality PALRO gave them the chance to focus on the more time-intensive but also more-important work of one-on-one rehabilitation, which is really also a plus for the patrons.

As I recently commented on the "How do you really feel about AI?" post on the blog, I have some reservations about using robots as a replacement for human interaction, as I don't think it's healthy. (Even interaction with other people through the computer is a questionable replacement for face-to-face interaction.) But this particular article made me reconsider that, because my experience with the elderly in America is that they don't get the kind of interaction that they need from other people. (I'm not sure if this is the case in Japan, however.) So, while it may not be ideal that they have interactions with robots instead of people, it is better than some of the things that happen in nursing homes here, especially if the robot allows the staff to give more personalized attention to those who need it.

It's hard to connect this to much of our class discussion, because we didn't address human-robot interactions much except when we discussed the imperfections of chatbots, but I think it's a worthy topic. It pops up in a lot of sci-fi, not just I, Robot, so people have been thinking about it for some time. I'm curious to know my classmates' thoughts on the matter--either on this particular use of robots or on robots as "friends" in general.

(This robot does bear some similarities to Pepper, but as far as I know PALRO is a little less sophisticated and is meant to engage people in chatting and activities, not to recognize and react to emotions. Maybe this affects your thoughts on the subject.)

Wednesday, February 10, 2016

Japanese company opens all-robot factory; some insight on economic decisions.

This article about the Japanese cosmetic company Shiseido opening an all-robot factory has some interesting information on the economics behind their decision. For example, "in other countries, automation often fuels concerns about layoffs, yet Japan’s shrinking labor force justifies a move to more robots."

Tuesday, February 9, 2016

Are you hazy on I, Robot details??

Hello! If you HAVE read I, Robot but are confused about some of the details, this will assist you in clearing up the haze. Click here.

For fun

Here is a robot solving a Rubik's Cube. Just a little fun before the exam. Go here.

Sunday, February 7, 2016

Robotic politics...

I recently ran across this article about A.I. composing political speeches, and immediately thought about Chris's short story...

I, Robot Audio Book

If you are like me, then sitting down and reading any kind of novel sounds like absolute torture. Here is a link to the I, Robot audio book. It is beautifully read and narrated. I always find that I become more productive when I listen to audio books during my day-to-day routines, runs, chores, math homework, etc. It is only eight hours long and really fun. Enjoy!

Here is another good video of Pepper the emotional robot. Just see how articulate she is...YAY!!

Monday, February 1, 2016

Developers Make Progress On Self Learning Code

http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/


This article, published a few days ago, talks about the game of Go and how Google developers in Britain developed code and then ran it against itself so it could learn on its own.  We talked in class about how we have figured out checkers but not chess, because chess is too complicated, and the game of Go is even more complicated than chess.  The developers believe the system used for learning can apply to almost anything, and in 2014 they thought it would take almost ten years to get it to work.  I think this is a large breakthrough for AI, and it makes me wonder if we are finally evolving code quicker than expected.
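The "playing against itself" idea can be sketched with a much simpler game than Go. Below is a toy self-play learner for Nim (take 1-3 stones; whoever takes the last stone wins). Everything here, the game, the value table, and the numbers, is my own illustration of the self-play concept, not DeepMind's actual method:

```python
import random

def train(episodes=20000, stones=10, seed=0):
    """Learn Nim move values purely by having the agent play itself."""
    rng = random.Random(seed)
    q = {}  # (stones_left, move) -> estimated value for the player to move

    for _ in range(episodes):
        history = []  # (state, move) pairs, players alternating
        s = stones
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            if rng.random() < 0.1:                       # explore sometimes
                m = rng.choice(moves)
            else:                                        # else play greedily
                m = max(moves, key=lambda mv: q.get((s, mv), 0.0))
            history.append((s, m))
            s -= m
        # The player who took the last stone won; credit moves backwards,
        # flipping the sign since the two "players" alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = q.get((state, move), 0.0)
            q[(state, move)] = old + 0.1 * (reward - old)
            reward = -reward
    return q

q = train()
# From 10 stones, taking 2 (leaving a multiple of 4) is the winning move,
# and the learned values should come to prefer it.
```

The same loop structure (self-play, then updating value estimates from the outcome) is the seed of what the article describes, just scaled up enormously with deep neural networks.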

Sunday, January 31, 2016

How do YOU *really* feel about A.I.?

Evan asked about this on the Q&A, and I wanted to re-post it here since people might miss seeing it on the Q&A.

http://lovelace.augustana.edu/q2a/index.php/1582/what-are-your-actual-feelings-about-ai

It's an excellent question.  Naturally, you don't have to answer... but it could be interesting to discuss.

You can either talk about it on the Q&A post, or (in case you fear being voted up/down on Lovelace), you can discuss it in the comments section of this blog post here...

NXT Maze Solver that Remembers the Path Back to the Start

I found this video helpful when working on my final project. Check it out

https://www.youtube.com/watch?v=apQhBppWDLw

The blog is informative too....https://decibel.ni.com/content/blogs/ILabVIEW/2012/04/22/nxt-maze-solver-that-remembers-the-shortest-path#comment-47189

Thursday, January 28, 2016

Evolving Mario's Brain?

For another application of BOTH genetic algorithms AND artificial neural networks (much like my swarm robotics project), check out this awesome video about evolving a brain to play Super Mario.



It should also help reinforce the idea about how artificial neural networks work, which I explained briefly during class...
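As a refresher on that idea, here is a tiny forward pass through a fully connected network: each neuron takes a weighted sum of its inputs plus a bias, then applies a nonlinearity. The weights below are made-up numbers for illustration only:

```python
import math

def sigmoid(x):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weights[i] holds neuron i's weights."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 3 hidden neurons -> 1 output
hidden = layer([0.5, -1.0],
               [[0.1, 0.8], [-0.4, 0.2], [0.7, -0.6]],
               [0.0, 0.1, -0.1])
output = layer(hidden, [[1.2, -0.3, 0.5]], [0.2])
```

Training (whether by backpropagation or, as in the Mario video, by a genetic algorithm) is just the search for weights that make outputs like this one useful.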

Sunday, January 24, 2016

Robots Taking Our Jobs

In this article by Ivana Kottasova, she talks about the likelihood of A.I. and robots taking jobs from humans.  The robots we currently have perform mostly manual labor, but experts say that as A.I. advances, the more skilled jobs will start to disappear as well.  She says a study done by Bank of America stated that robots are likely to be performing 45% of manufacturing tasks by 2025, compared to the 10% performed today.  The study also looked at the falling prices of computers and robots over the past decade and how they will continue to get cheaper over the next decade, which is one of the main reasons an artificial workforce is so appealing to employers.  Another study, done by Oxford University, stated that nearly half of all U.S. jobs will be at high risk of being taken over by computers, with an additional 20% facing medium risk.

Ivana also said another reason our jobs are becoming more at risk of being taken over by computers is the current advances in voice and facial recognition and machine learning, and simply that these systems are getting easier to use.  According to experts, economic inequality will only increase as more jobs are taken by robots.  This is not the first time technology has caused a change in the global economy (just look at the Industrial Revolution), but this time it is much different.  Workers will have the choice to take jobs that they are overqualified for or simply stay unemployed.  Ivana also wrote another article in late 2015 in which she talked with the chief economist of the Bank of England, Andy Haldane, who said, "These machines are different.  Unlike in the past, they have the potential to substitute for human brains as well as hands."

Our last debate in class was about whether or not the development of advanced A.I. would be beneficial for humans, and these articles only provide evidence that it would not.  When the A.I. we create becomes more intelligent than we are, who knows how it will act.  We can try to put safeguards in place to make sure we are always the most advanced creatures on the planet, but as we evolve programs with genetic algorithms, who knows what will remain and what will be taken out.  I personally think that advanced A.I. will most likely cause the collapse of our economy and possibly humankind.  Once the job market is taken by computers and the economic gap is widened even further, the majority of the population will become poor and out of work.  Once that happens, I'm not entirely sure what will happen to society.  We are trying to create robots to help us complete tasks and make our lives easier, but what happens when we create A.I. that can do every task we can?  At that point, what need is there for humanity?

Sunday, January 17, 2016

If you think Google's/Facebook's image recognition is impressive, that's only the start.



I'm fairly appalled - almost creeped out - that every time I upload a picture on Facebook, the site gives correct tag suggestions for almost everyone featured in the photo. Even now, I'm still not used to being able to use Google as a reverse image search engine, and that has been around for quite a while. These are some pretty ground-breaking features that came about within the last 4-5 years. I would just sit there and try to imagine how a search algorithm could trace through photos and return a result. It's already hard enough for me to comprehend the search algorithms Google uses for basic text-based search. There are so many factors, like relevancy to keywords, domain names, exact names, or determining worthy sources. But apparently, that's only scratching the surface.

computer vision from http://www.mathworks.com

A recent Wired article calls this image/facial recognition "computer vision." The identification process these search algorithms use is part of what's known as deep learning, a "breed," as Wired calls it, of artificial intelligence. Deep learning is a branch of machine learning used to model highly abstract concepts. The article goes on to talk about a historic 2012 image recognition competition for computers called ImageNet being won by the University of Toronto, which introduced the use of deep neural nets: technology that learns from massive collections of images how to identify new ones. A deep neural net sets up its own rules for finding a result instead of using human-crafted rules.

example of a neural net from http://e-lab.github.io


The article features a more recent breakthrough. A team of researchers from Microsoft found a way to expand on that concept. They recently won the next ImageNet competition with their new approach, called the deep residual network, which is essentially their version of a complex neural net spanning 152 layers of mathematical operations. This is tremendous, since most nets use 6-7 layers, and even those few layers are often difficult for programmers to get communicating within their networks. With 152 layers, the Microsoft researchers resolved the problem by letting the signal skip across layers deemed unnecessary, saving them for when they are needed later. This alone allows the signal to stay much stronger and span more layers than in any other network.
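The skip idea can be sketched in a few lines. Below is a generic residual block (the tiny vector sizes and zero weights are my own illustration, not Microsoft's 152-layer model): each block computes f(x) + x, so when a layer's transformation isn't needed, the input can pass straight through.

```python
def relu(v):
    """Elementwise nonlinearity: negative values become zero."""
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Multiply matrix W by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def residual_block(x, W1, W2):
    out = relu(matvec(W1, x))                      # first transformation
    out = matvec(W2, out)                          # second transformation
    return relu([o + xi for o, xi in zip(out, x)]) # skip: add the input back

x = [1.0, -2.0, 0.5]
Z = [[0.0] * 3 for _ in range(3)]   # zero weights: the block does "nothing"
identity_out = residual_block(x, Z, Z)
# With zero weights the block just passes relu(x) through unchanged,
# which is why very deep stacks of these blocks remain trainable.
```

Stacking 152 of these blocks still lets the original signal flow to the end, whereas 152 plain layers would distort it beyond use.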

I'm impressed by Microsoft's findings. Their research is going to affect not only the future of image recognition, but also areas of A.I. such as speech recognition and language understanding. Even so, I can't imagine where this deep residual network might lead us.