Author Topic: GPT-3, explained: This new language AI is uncanny, funny — and a big deal  (Read 485 times)


Offline Elderberry

Vox by Kelsey Piper Aug 13, 2020

Computers are getting closer to passing the Turing Test.

Last month, OpenAI, the Elon Musk-founded artificial intelligence research lab, announced the arrival of the newest version of an AI system it had been working on that can mimic human language, a model called GPT-3.

In the weeks that followed, people got the chance to play with the program. If you follow news about AI, you may have seen some headlines calling it a huge step forward, even a scary one.

I’ve now spent the past few days looking at GPT-3 in greater depth and playing around with it. I’m here to tell you: The hype is real. It has its shortcomings, but make no mistake: GPT-3 represents a tremendous leap for AI.

A year ago I sat down to play with GPT-3’s precursor, dubbed (you guessed it) GPT-2. My verdict at the time was that it was pretty good. When given a prompt — say, a phrase or sentence — GPT-2 could write a decent news article, making up imaginary sources and organizations and referencing them across a couple of paragraphs. It was by no means intelligent — it didn’t really understand the world — but it was still an uncanny glimpse of what it might be like to interact with a computer that does.

A year later, GPT-3 is here, and it’s smarter. A lot smarter. OpenAI took the same basic approach it had taken for GPT-2 (more on this below), and spent more time training it with a bigger data set. The result is a program that is significantly better at passing various tests of language ability that machine learning researchers have developed to compare our computer programs. (You can sign up to play with GPT-3, but there’s a waitlist.)
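The core trick behind GPT-style models — take a prompt, repeatedly predict a plausible next word, and append it — can be illustrated with a toy sketch. This is not GPT-3 (which uses an enormous transformer network trained on web-scale text); it is a bigram model over a made-up mini-corpus, included only to show the prompt-and-continue loop the article describes:

```python
import random
from collections import defaultdict

# Toy illustration of the idea behind GPT-style text generation:
# predict the next word from the words seen so far, then repeat.
# GPT-3 does this with a huge neural network; here we just count
# which word follows which in a tiny invented corpus.

corpus = ("the model reads a prompt and predicts the next word "
          "and then the next word after that").split()

# Build a table: word -> list of words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt_word, length=5, seed=0):
    """Continue a one-word prompt by sampling a next word repeatedly."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:   # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Scaling this idea up — a vastly better next-word predictor, trained on vastly more text — is, loosely speaking, what separates GPT-2 from GPT-3.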

More: https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language