I was recently looking over some articles on the Turing test and how it remains the best-known benchmark for artificial intelligence. First, we need to know what the test is. Basically, it is an interview: you communicate with another “person” through some impersonal medium such as text, and after the conversation you decide whether you think they are human. Simple enough, but not perfect.

In September 2011, the online chatbot “Cleverbot” passed this test at the Techniche festival. However, it passed by convincing only 59% of judges that it was human. That sounds like a narrow win until you look at the control group: humans who were conversing with other humans were judged human only 63% of the time, which means that 37% of the time people mistook a fellow human for a robot. That is a huge margin of error for the Turing test, a test we use as a baseline for judging AI.

Recently I have been reading about large numbers of chatbots that have managed to “pass” the Turing test, which calls into question whether this test is what we should be using as a benchmark. In fact, another problem with the test is the nature of interaction between people. Arguments and disagreements are bound to pop up in a natural conversation, and you may form your opinion based on whether you like whom you’re talking to. That is why new systems for judging AI are evolving. One new method measures the Human-Robot Interaction (HRI) between the two subjects. HRI is key to unlocking the ability to make believable robots: not whether they sound human, but whether they act human and can cause us to connect with them on an emotional level.
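The arithmetic behind the “margin of error” point can be made concrete. This is a minimal sketch using only the percentages quoted above (59% for Cleverbot, 63% for the human control group); the variable names are my own:

```python
# Figures from the Techniche 2011 Cleverbot trial described above.
cleverbot_rate = 0.59   # share of judges who believed Cleverbot was human
human_rate = 0.63       # share of judges who believed a real human was human

# Cleverbot's "winning margin" over chance is small compared with
# the test's own noise: real humans fail 37% of the time.
gap = human_rate - cleverbot_rate
humans_mistaken_for_robots = 1 - human_rate

print(f"Gap between human and Cleverbot pass rates: {gap:.0%}")
print(f"Humans judged to be robots: {humans_mistaken_for_robots:.0%}")
```

With the baseline itself wrong more than a third of the time, a four-point gap between human and bot is well inside the test’s noise, which is the crux of the argument against using it alone.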