Monday, August 31, 2009

Guardian piece on 'Why can't computers think?'

Today's Guardian asks, "Why can't computers think?":

At the end of this week, the 2009 Loebner prize will be contested in Brighton, to try to find a computer program that can fool a human that it is another human over five minutes of interaction through a screen. We know they will all fail. ......... It's not that computers can never fool people.


True, as reported in 'Flirty bot passes for human':
"New software designed to conduct flirtatious conversations is good enough to fool people into thinking they are chatting with a human, a security company has warned. The CyberLover software was designed in Russia to engage people in conversations with the objective of inducing them to reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers."


From Comment is free (Cif):
The 2009 Loebner Prize has just three computer entries; each will be compared against four humans, interrogated by only four judges. This is a smaller contest than the 2008 Loebner Prize.

[Note, neither Elbot, Loebner Prize 2008's winner, nor runner-up Eugene have entered Loebner 2009]

To encourage and attract the best systems to enter a contest, the reward must be substantial. Professor Selmer Bringsjord of RPI said as much at the 2008 AISB Symposium on the Turing test.

Compare the Loebner Prize's $3,000 and bronze medal awarded last year (the same amount is to be awarded in 2009) with DARPA's prizes for its 2007 Urban Challenge:

1st Place: $2,000,000
2nd Place: $1,000,000
3rd Place: $500,000


Don't expect a machine to display 'thinking' within the lifetime of the Loebner Prize, or to achieve the 30% deception rate Turing set for passing his imitation game:

"an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning". (Turing, 1950)
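Turing's criterion can be read as a simple threshold: a machine passes the five-minute game if interrogators identify it correctly no more than 70% of the time, i.e. it deceives at least 30% of them. A minimal sketch of that check, with purely illustrative numbers (the function name and figures are assumptions, not from the contest):

```python
def passes_imitation_game(correct_identifications, total_judgements):
    """Return True if the machine meets Turing's threshold:
    no more than 70% of interrogators identify it correctly
    (equivalently, a deception rate of at least 30%)."""
    correct_rate = correct_identifications / total_judgements
    return correct_rate <= 0.70

# Illustrative only: 8 correct identifications out of 12 judgements
# is a 66.7% correct rate, which just meets the threshold.
print(passes_imitation_game(8, 12))   # True
print(passes_imitation_game(10, 12))  # False (83.3% correct)
```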
