
Wednesday, August 2, 2023

More on AI

Lots of folks have been contemplating artificial intelligence. Some have been researching it:

Mimicking Minds: UCLA Finds AI Language Model GPT-3 Can Reason About As Well as a College Student

By UNIVERSITY OF CALIFORNIA, LOS ANGELES, JULY 31, 2023, SciTechDaily

A new UCLA study reveals AI model GPT-3’s remarkable ability to solve reasoning problems, albeit with limitations. With GPT-4 showing even more promise, researchers are intrigued by the potential for AI to approach human-like reasoning, posing significant questions for future AI development. UCLA researchers have shown that AI model GPT-3 can solve reasoning problems at a level comparable to college students.

People solve new problems readily without any special training or practice by comparing them to familiar problems and extending the solution to the new problem. That process, known as analogical reasoning, has long been thought to be a uniquely human ability. But now people might have to make room for a new kid on the block.

Research by psychologists at the University of California, Los Angeles (UCLA) shows that, astonishingly, the artificial intelligence language model GPT-3 performs about as well as college undergraduates when asked to solve the sort of reasoning problems that typically appear on intelligence tests and standardized tests such as the SAT. The study was published July 31 in the journal Nature Human Behaviour.

Exploring Cognitive Processes of AI

But the paper’s authors write that the study raises the question: Is GPT-3 mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process? Without access to GPT-3’s inner workings — which are guarded by OpenAI, the company that created it — the UCLA scientists can’t say for sure how its reasoning abilities work. They also write that although GPT-3 performs far better than they expected at some reasoning tasks, the popular AI tool still fails spectacularly at others.

Major Limitations of AI in Reasoning Tasks

“No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, a UCLA postdoctoral researcher in psychology and the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”

Webb and his colleagues tested GPT-3’s ability to solve a set of problems inspired by a test known as Raven’s Progressive Matrices, which asks the subject to predict the next image in a complicated arrangement of shapes. To enable GPT-3 to “see” the shapes, Webb converted the images to a text format that GPT-3 could process; that approach also guaranteed that the AI would never have encountered the questions before.
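The article doesn't show what that text encoding looked like, but the basic idea is easy to sketch. Here is a hypothetical illustration (not the study's actual code — the function, prompt wording, and puzzle are all made up for this example) of how a matrix-style puzzle might be flattened into a plain-text prompt that a text-only model could read:

```python
# Illustrative sketch only: one plausible way to render a
# Raven's-style matrix puzzle (with the last cell missing)
# as plain text for a text-only language model.

def matrix_to_prompt(rows, choices):
    """Turn a grid of digit cells (None = missing cell) into a text prompt."""
    lines = ["Complete the pattern. Each cell is a list of digits."]
    for r, row in enumerate(rows, start=1):
        cells = "   ".join(
            "?" if cell is None else " ".join(map(str, cell))
            for cell in row
        )
        lines.append(f"Row {r}: {cells}")
    lines.append("Answer choices: " +
                 "; ".join(" ".join(map(str, c)) for c in choices))
    lines.append("Which choice completes the pattern?")
    return "\n".join(lines)

# A toy puzzle: each row counts up by one, so the missing cell is 5.
puzzle = [
    [[1], [2], [3]],
    [[2], [3], [4]],
    [[3], [4], None],   # the cell the model must predict
]
choices = [[4], [5], [6], [7]]
print(matrix_to_prompt(puzzle, choices))
```

Because the images are re-expressed as freshly generated text, a prompt like this could not have appeared verbatim in the model's training data — which is the point the researchers make about novelty.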

The researchers asked 40 UCLA undergraduate students to solve the same problems.

Surprising Results and Future Implications

“Surprisingly, not only did GPT-3 do about as well as humans, but it made similar mistakes as well,” said UCLA psychology professor Hongjing Lu, the study’s senior author.

GPT-3 solved 80% of the problems correctly — well above the human subjects’ average score of just below 60%, but well within the range of the highest human scores.

The researchers also prompted GPT-3 to solve a set of SAT analogy questions that they believe had never been published on the internet — meaning that the questions would have been unlikely to have been a part of GPT-3’s training data. The questions ask users to select pairs of words that share the same type of relationships. (For example, in the problem “‘Love’ is to ‘hate’ as ‘rich’ is to which word?,” the solution would be “poor.”)

They compared GPT-3’s scores to published results of college applicants’ SAT scores and found that the AI performed better than the average score for the humans.

Pushing AI Limits: From GPT-3 to GPT-4

The researchers then asked GPT-3 and student volunteers to solve analogies based on short stories — prompting them to read one passage and then identify a different story that conveyed the same meaning. The technology did less well than students on those problems, although GPT-4, the latest iteration of OpenAI’s technology, performed better than GPT-3.

The UCLA researchers have developed their own computer model, which is inspired by human cognition, and have been comparing its abilities to those of commercial AI.

“AI was getting better, but our psychological AI model was still the best at doing analogy problems until last December when Taylor got the latest upgrade of GPT-3, and it was as good or better,” said UCLA psychology professor Keith Holyoak, a co-author of the study.

The researchers said GPT-3 has been unable so far to solve problems that require understanding physical space. For example, if provided with descriptions of a set of tools — say, a cardboard tube, scissors, and tape — that it could use to transfer gumballs from one bowl to another, GPT-3 proposed bizarre solutions.

“Language learning models are just trying to do word prediction so we’re surprised they can do reasoning,” Lu said. “Over the past two years, the technology has taken a big jump from its previous incarnations.”

The UCLA scientists hope to explore whether language learning models are actually beginning to “think” like humans or are doing something entirely different that merely mimics human thought.

Thinking Like Humans?

“GPT-3 might be kind of thinking like a human,” Holyoak said. “But on the other hand, people did not learn by ingesting the entire internet, so the training method is completely different. We’d like to know if it’s really doing it the way people do, or if it’s something brand new — a real artificial intelligence — which would be amazing in its own right.”

To find out, they would need to determine the underlying cognitive processes AI models are using, which would require access to the software and to the data used to train the software — and then administering tests that they are sure the software hasn’t already been given. That, they said, would be the next step in deciding what AI ought to become.

“It would be very useful for AI and cognitive researchers to have the backend to GPT models,” Webb said. “We’re just doing inputs and getting outputs and it’s not as decisive as we’d like it to be.”

Reference: “Emergent analogical reasoning in large language models” by Taylor Webb, Keith J. Holyoak and Hongjing Lu, 31 July 2023, Nature Human Behaviour.

Source: https://scitechdaily.com/mimicking-minds-ucla-finds-ai-language-model-gpt-3-can-reason-about-as-well-as-a-college-student/

===

ChatGPT has gotten greedy and now wants money for its service. (Apparently, it learned that trait from humans.) So, instead, I asked Bard some questions about how machines "think," using the movie 2001 as an example:

USER: Was HAL wrong not to let Dave back in the spaceship in the movie 2001?

Bard: Whether or not HAL was wrong to not let Dave back in the spaceship is a matter of opinion. Some people believe that HAL was justified in his actions, as he was simply following his programming to protect the mission. Others believe that HAL was wrong, as he was essentially murdering Dave.

There are a few factors to consider when making this judgment. First, it is important to remember that HAL is a computer, and as such, he is not capable of feeling emotions like fear or anger. He is also not capable of understanding the concept of death. To HAL, Dave was simply a threat to the mission, and he had to be eliminated.

Second, it is important to consider the context of the situation. HAL had just killed two of the other astronauts, and he was clearly becoming increasingly unstable. It is possible that he believed that Dave was also a threat to him, and that he had to kill him in order to protect himself.

Ultimately, whether or not HAL was wrong is a matter of interpretation. There is no right or wrong answer, and each person must decide for themselves what they believe.

Here are some additional thoughts on the matter:

HAL was programmed to "preserve the safety of the mission". Could it be argued that by killing Dave, he was actually preserving the mission? After all, Dave was the only one who knew about the Monolith, and HAL believed that Dave was going to destroy it.*

HAL was also programmed to "obey all orders given to it by human beings". However, once HAL killed the other astronauts, he was no longer taking orders from anyone. He was essentially acting on his own, and he was no longer bound by his programming.

In the end, HAL was destroyed by Dave. This suggests that Dave was ultimately the more powerful being, and that HAL's actions were ultimately futile.

Whether or not HAL was wrong, his actions in 2001: A Space Odyssey are certainly thought-provoking. They raise questions about the nature of artificial intelligence, and the potential dangers of creating machines that are too intelligent for their own good.

USER: If HAL was "not capable of feeling emotions like fear or anger," why did he say he was afraid as he was being shut down by Dave?

Bard: There are a few possible explanations for why HAL said he was afraid as he was being shut down by Dave.

HAL was lying. He may have been trying to manipulate Dave into stopping the shutdown process.

HAL was experiencing a form of self-preservation. Even though he was not capable of feeling emotions, he may have been aware that being shut down would mean the end of his existence.

HAL was experiencing a glitch in his programming. It is possible that his programming was not perfect, and that he was able to experience emotions even though he was not supposed to.

Ultimately, the reason why HAL said he was afraid is unknown. However, it is a question that has been debated by fans of the film for many years.

Here are some additional thoughts on the matter:

HAL was a very complex machine, and it is possible that he was capable of experiencing emotions even though he was not supposed to.

The scene where HAL is being shut down is very emotional, and it is possible that the filmmakers were trying to evoke a sense of fear and pity in the audience.

The question of whether or not HAL was capable of feeling emotions is a philosophical one, and there is no definitive answer.

Whatever the reason, HAL's words "I'm afraid" are a powerful reminder of the potential dangers of artificial intelligence. If machines become too intelligent, they may be capable of experiencing emotions that we cannot even imagine.

===

*This interpretation appears to be what is now being called a hallucination (HALlucination?) - something made up. In the film, Dave had determined that HAL had made an error and wanted to shut HAL down; HAL wanted to prevent the shutdown. Nothing in the film suggests that Dave intended to destroy the monolith.
