ChatGPT

Does the latest AI chatbot really know everything?

The latest AI that everyone is talking about is called ChatGPT. I came up with my own alternative Turing test and put the chatbot through it.

Alexander Klöpping recently tweeted about the new artificial intelligence chatbot ChatGPT.

Alexander was impressed, so I decided to give it a go myself. It turned out to be easy: I just created a free account and started asking away. First impression: good… And as the chatbot got more and more questions right, I of course started asking harder questions. Pretty soon, this led me to wonder about the hardest question one could possibly ask…

Suddenly I remembered something. My partner and I have had our disagreements about technology in the past. For example: he loves YouTube. He says you can find anything on there. Even difficult mathematics! Perhaps not at the most advanced level, but he claimed that anything would be well explained in layman's terms. So I challenged him: I bet I can come up with a topic that is not well explained on YouTube, not even in layman's terms.

My counterexample? Inverse problems. In case you are wondering what the definition of an inverse problem is: it's the process of calculating, from a set of observations, the causal factors that produced them. This was not a satisfactory answer for my partner (fair enough); he wanted real-world examples. And it turned out that YouTube could not provide him with those.
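To make that definition a bit more tangible, here is a tiny sketch of my own (a made-up toy example, not something from YouTube or from the ChatGPT conversation): the forward problem blurs a signal, and the inverse problem tries to recover the original signal from noisy, blurred observations. The naive inversion also hints at why these problems are hard: a tiny bit of noise in the observations gets amplified enormously unless you regularize.

```python
import numpy as np

# Toy inverse problem: the forward model blurs a signal ("cause" -> "effect"),
# and we try to recover the original signal from noisy, blurred observations.
rng = np.random.default_rng(0)
n = 50
x_true = np.zeros(n)
x_true[20:30] = 1.0                      # the "cause": a simple block signal

# Forward model: a Gaussian blur written as a matrix A, so observations y = A @ x + noise
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
y = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy observations (the "effect")

# Naive inversion: solve A x = y directly. The blur matrix is badly conditioned,
# so the tiny noise gets hugely amplified -- this is what makes inverse problems hard.
x_naive = np.linalg.solve(A, y)

# Tikhonov-regularized inversion: trade a little bias for a lot of stability.
lam = 1e-2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("error of naive inverse:      ", np.linalg.norm(x_naive - x_true))
print("error of regularized inverse:", np.linalg.norm(x_reg - x_true))
```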

Since then, I've come to think of inverse problems as an excellent way of testing any platform that claims to have the answer to everything. An alternative Turing test, if you will. A Turing test checks whether a computer is capable of thinking like a human being, and humans can provide surprising, correct answers to questions; a chatbot is typically not able to do that. So, it is time to put ChatGPT to the test. I ask what inverse problems are, and why they are difficult. Somewhat surprisingly, it answers both questions to my satisfaction. Then I ask for an easy real-world example of an inverse problem, and it says:

Inverse kinematics: Given a desired end effector position and orientation, find the joint angles that would allow a robot arm to reach that position.
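That answer happens to be one you can work out with a bit of trigonometry. Below is a minimal sketch for a toy two-link planar arm (my own illustration; the link lengths and target are made up, and this is not code produced by ChatGPT):

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Forward problem: joint angles -> end-effector position."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    """Inverse problem: desired position -> joint angles (one of the two elbow solutions)."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target is out of reach")
    theta2 = np.arccos(c2)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Round trip: pick a reachable target, solve the inverse problem, check with the forward model.
target = (1.2, 0.8)
t1, t2 = inverse_kinematics(*target)
print(forward_kinematics(t1, t2))   # approximately (1.2, 0.8)
```

Note that a reachable target generally has two solutions (elbow up or elbow down), and an unreachable one has none: exactly the kind of behaviour that makes inverse problems tricky.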

My partner will be happy with this! I next ask for three examples, and it can do that as well. Bloody hell. Is this machine able to generate an infinite number of inverse problems? I now ask for six examples, and it says:

I’m sorry, but I am not able to provide more than three examples of inverse problems. As a large language model, I do not have the ability to generate new examples on my own.

In summary, ChatGPT seems to have outsmarted some humans. But thankfully, not all of us yet.

Credits

Main image: ChatGPT

Caroline Jagtenberg

Operations Research: I can't make it any more fun than this. I use OR both in academia and in industry. And I don't believe that data is the solution to everything.
