
AI knows things no one told it

Raphaël Millière, a philosopher at Columbia University, offered another example of what LLMs can do at a New York University conference in March. These models had already shown they could write computer code, which is impressive but not surprising, given how much example code exists on the Internet. Millière went a step further and showed that GPT can also execute code. He entered a program that computes the 83rd Fibonacci number, a task that involves multistep reasoning to a high degree, and the bot got it right. Yet when Millière asked for the 83rd Fibonacci number directly, GPT got it wrong. This suggests the system was not simply parroting the Internet; it was actually carrying out its own computation to reach the correct answer.
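The article does not reproduce Millière's exact program, but a program that computes the 83rd Fibonacci number might look like the following minimal sketch (the indexing convention F(1) = F(2) = 1 is an assumption on our part):

```python
def fib(n):
    # Iteratively compute the n-th Fibonacci number, with F(1) = F(2) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(83))  # -> 99194853094755497
```

Executing such a program requires tracking intermediate state across 83 loop iterations, which is why running it correctly is evidence of multistep reasoning rather than memorized recall.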

Yet an LLM is not a computer; it lacks essential computational elements such as working memory. In a tacit acknowledgment that GPT on its own should not be able to run code, OpenAI, GPT's inventor, has since introduced a special plug-in, a tool ChatGPT can use when answering a query, that lets it do so. That plug-in, however, was not in use during Millière's demonstration. He hypothesizes instead that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context, much as nature repurposes existing capacities for new functions.

This ability to improvise on the fly shows that LLMs have an internal complexity that goes well beyond shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In a study presented last week at the International Conference on Learning Representations, Kenneth Li, a doctoral student at Harvard University, and his AI-researcher colleagues Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas and Hanspeter Pfister, both at Harvard, spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.
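The study's actual data pipeline is not shown in the article, but serializing Othello games into token sequences for next-token training could be sketched roughly as follows (the square naming, vocabulary, and `encode_game` helper are all illustrative assumptions, not the authors' code):

```python
# Sketch: turn Othello game transcripts into integer token sequences.
# Each move is a board square such as "d3". The vocabulary holds the 60
# playable squares; the four center squares (d4, e4, d5, e5) are occupied
# at the start of every game and so never appear as moves.
COLS = "abcdefgh"
SQUARES = [c + str(r) for r in range(1, 9) for c in COLS
           if c + str(r) not in ("d4", "e4", "d5", "e5")]
TOKEN_ID = {sq: i for i, sq in enumerate(SQUARES)}

def encode_game(moves):
    """Map a list of move strings to integer token ids."""
    return [TOKEN_ID[m] for m in moves]

game = ["d3", "c5", "f6", "f5"]  # a hypothetical opening
print(encode_game(game))
```

A network trained only on such sequences never sees the board itself, which is what makes the question of whether it builds an internal model of the game state interesting.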

Source:
https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/
