Author Topic: ChatGPT, Lobster Gizzards, and Intelligence

Offline Kamaji

ChatGPT, Lobster Gizzards, and Intelligence
« on: May 03, 2023, 01:45:08 pm »

Chat knows more, gizzards are more complex, and you’re more intelligent.

Frederick R. Prete
1 May 2023

There’s currently a lot of anxious chat about ChatGPT-4 here in the Academy. Some professors worry that it’s about to take their jobs (a development that might lead to more interesting lectures). Others are breathlessly predicting the annihilation of humanity when AI spontaneously morphs into something malevolent and uncontrollable. Mostly, however, professors are worried that students will get Chat to do their homework, and some of them are genuinely confused about what to do about it.

But these concerns tend to misunderstand how Chat works and take the “Intelligence” part of “Artificial Intelligence” too literally. The reassuring truth is that Chat isn’t really that smart. To clear things up, I asked ChatGPT-4 to give me a high-level explanation of how it works. It did a pretty good job, but it left some important stuff out, which I’ll fill in.

*  *  *

During its initial training, GPT-4 acquired a massive amount of information—raw data from material already on the Internet or from information fed to it by its developers. That’s what makes it seem so smart. It has access to about one petabyte (1,024 terabytes) of data (about 22 times more than GPT-3) and uses about 1.8 trillion computational parameters (10 times more than GPT-3). To draw a rough comparison, a petabyte of data, printed out, would amount to about 500 billion pages of text.
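That page estimate is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 2,000 characters of plain text per printed page (the per-page figure is an assumption for illustration, not a number from the article):

```python
# Rough sanity check of the "500 billion pages" figure.
# Assumption (not from the article): ~2,000 characters (one byte each) per page.
BYTES_PER_PAGE = 2_000

petabyte = 1024 ** 5            # 1 PB = 1,024 TB = 1,125,899,906,842,624 bytes
pages = petabyte / BYTES_PER_PAGE

print(f"{pages:.2e} pages")     # on the order of 5-6 x 10^11, i.e. roughly 500 billion
```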

But here’s the important part that many people don’t know. After its initial training, Chat was “fine-tuned” with what’s called “supervised” training. This means that developers and programmers (that is, real people) and some other AI programs refined Chat’s responses so that they’d meet “human alignment and policy compliance” standards. Developers continue to monitor Chat’s behavior—a bit like helicopter parents—and reprimand it when it gets out of line (so to speak) to ensure that it doesn’t violate company standards by using “disallowed” speech or making stuff up. Apparently, all of this parenting has paid off (from the developers’ point of view). GPT-4 has been much better behaved than its older sibling, GPT-3. Its “trigger rate” for disallowed speech is only about 20 percent of GPT-3’s, and it makes fewer mistakes.
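In spirit, that compliance layer is a filter sitting between the model's raw output and the user. A minimal, hypothetical sketch (the rule list and refusal message here are invented for illustration; OpenAI's actual pipeline relies on learned reward models and human feedback, not a simple lookup table):

```python
# Hypothetical sketch of a "policy compliance" check -- not OpenAI's real system.
# People choose the rules; the program just enforces them on candidate outputs.
DISALLOWED = {"how to build a bomb"}  # stand-in for a human-curated policy list

def policy_filter(candidate: str) -> str:
    """Return the candidate response, unless it violates a human-written rule."""
    if candidate.lower() in DISALLOWED:
        return "I can't help with that."  # the model is "reprimanded" into a refusal
    return candidate

print(policy_filter("Lobster gizzards are fascinating."))
```

The point of the sketch is only that the standards are chosen by people, then mechanically applied.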

So, from the outset, Chat and other AI systems are shaped by the peculiarities of a select group of people and their idiosyncratic, subjective points of view, assumptions, biases, and prejudices. Consequently—and contrary to what many people think—AI systems like Chat are not “objective” thinking machines any more than you or I are. They’re not even really thinking. They’re manipulating bits of information that people have chosen for them, as directed (either explicitly or implicitly) by the structure of their programs.
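What "manipulating bits of information" means in practice is next-token prediction: given the words so far, emit a statistically likely next word. A toy sketch of the idea, using a simple bigram counter (vastly simpler than GPT-4's architecture, but the same principle of statistics over human-chosen text; the corpus here is invented for illustration):

```python
import random
from collections import defaultdict

# Toy "language model": count which word followed which in the training text,
# then sample the next word from those counts. No thinking -- just statistics
# over whatever text people chose to feed it.
corpus = "the lobster has a gizzard and the lobster has claws".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word: str) -> str:
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# In this corpus, "the" was only ever followed by "lobster",
# so that is all the model can predict.
print(next_word("the"))  # -> lobster
```

Scale the counts up by a dozen orders of magnitude and add a far cleverer statistical machine, and you have the flavor of what Chat does.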

*  *  *

It’s important to note that, while ChatGPT can generate impressive human-like text, it may sometimes produce incorrect or nonsensical answers. This is because its knowledge is based on the data it was trained on and it lacks the ability to reason like a human. Additionally, it may also inherit biases present in the training data.

That is why AI isn’t really “intelligent,” and why it isn’t ever going to develop a superhuman intelligence—“Artificial General Intelligence”—that allows it to take over the world and destroy humankind.

Math and lobsters

As scientist Jobst Landgrebe and philosopher Barry Smith have argued in their recent book Why Machines Will Never Rule the World, AI systems like ChatGPT won’t ever develop human-like intelligence due to the limitations of their foundational mathematical modeling. Although we can accurately model small (often highly abstracted) real-world phenomena, we simply don’t have the computational ability to model large natural systems—like intelligence—with current mathematics.

*  *  *