Sunday, November 29, 2015

Not Only Can Computers Not Think, They Never Will.

Photo Credit: DNews
Our very first class debate, on November 20th, 2015, was on the motion "Can Computers Think?" One claim made by the proposing team was drawn from paralleling the development of computers and technology with human development. Whether we look at the trajectory of evolution or the growth of human beings from infancy to adulthood, we see advancement from primitive beginnings to more complex and articulate creatures. The proponents of the motion claimed that technology, too, has shown progress toward becoming more complex and articulate at such a rate that if computers are not thinking now, they soon will be. In his paper "Can Computers Think?", John Searle says no. He argues that digital computers do not, and never will, have mental functioning like that of human beings. His refutation rests on the fact that computers are only syntactical, while human minds are more than syntactical: they are also semantical; they have content. To better understand what he means by this assertion, he compares the operations of digital computers with those of human minds.
The distinction he draws between the two faculties, mental processes and program processes, goes as follows. The operations of digital computers can be specified purely formally: we specify the steps of a computer's operations in terms of abstract symbols, sequences of zeros and ones, but the symbols have no meaning; they are not about anything. The zeros and ones, for example, are just numerals; they do not even stand for numbers. This feature of programs, that they are defined purely formally or syntactically, is fatal to the view that mental processes and program processes are identical. There is more to having a mind than having formal or syntactical processes; our mental states, by definition, have certain sorts of content. They have both syntax and semantics.
Going back to the Turing Test and the Chinese room experiment, it is important to note that producing the desired output is not enough to classify a process as a thought process. If the man in the Chinese room is replaced with a digital computer that correctly recognizes the given symbols and produces the right output, it may fool the people outside the room into believing it understands Chinese, but we know it doesn't. Producing the right output is not enough; interpreting and understanding the meaning of these symbols is what constitutes being a Chinese speaker. The same goes for the Turing Test: the main idea is to mimic mental processes, keyword mimic. More efficient programs may be designed, and some will ultimately pass the Turing Test, but none of these programs will ever have semantic content. None of them will find meaning in what the person on the other side is saying and produce the desired output by virtue of understanding the content, rather than by following algorithms specified only syntactically.
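Searle's point about purely syntactic processing can be illustrated with a minimal sketch (the rule-book entries below are made-up examples, not from any real system): a program that pairs input symbols with output symbols by lookup alone, with nothing in it representing what any of the symbols mean.

```python
# A toy "Chinese room": the rule book is a hypothetical lookup table
# mapping symbol strings to symbol strings. The program manipulates
# them purely formally; no part of it attaches meaning to the symbols.

RULE_BOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",      # "Can you speak Chinese?" -> "Yes"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book pairs with the input.

    This is pure syntax: string matching and retrieval. Nothing here
    understands the questions or the answers.
    """
    # Default reply means "please say that again" -- still just a symbol.
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # prints 我很好
```

From the outside, the room's replies may look like those of a Chinese speaker, which is exactly Searle's point: correct output by symbol manipulation alone carries no semantics.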


Saturday, November 28, 2015

Artificial Intelligence? 97.4 Percent Of Computers Say They Still Need Us Humans




Photo Credit: www.redlambda.com
An article by Forbes addresses the question of whether computers still need humans, or whether they can learn without human input. It cites a survey by Evans Data Corporation (EDC), which interviewed 529 software developers; only 2.6% reported that machine learning software does not require human input. The survey also revealed that human input does not stop after software has been deployed.

I think the article raises a good question, but the way it arrives at the answer is flawed. I agree with Forbes, but I am not content with how they came to the 97.4% figure. They interviewed only 529 developers, a sample size that seems pretty small considering there were 3.6 million software developers in the US as of 2013. Also, in order to access the data, I would have to sign up with the company, which suggests they may be hiding something.

A second article speaks to the same issue, that humans are still needed, and argues that even by 2035 we will not have achieved AI. The reason is that the obstacle is not a hardware problem but a software problem: the software needs to be developed in order to achieve AI. I agree with the article; the better the software, the closer we are to achieving AI.

This links to our class debate on the question "Can computers think?", and directly to the "No" side of that discussion, as well as to our overall discussion of what AI is.

Friday, November 20, 2015

Welcome!

Welcome to the A.I.A.I. (Augustana Insider's Artificial Intelligence) blog!

Here you will find the intellectual (and occasionally artificially intellectual?) musings of students from CSC 320, Principles of Artificial Intelligence, at Augustana College.