My review of GPT-3, the Fluent AI

I am honestly excited, surprised, and spooked

Pakang Senosha
ILLUMINATION

--

Image by author

This is my review of the revered GPT-3 from OpenAI, a San Francisco-based AI company funded by the likes of Elon Musk. GPT-3, the Generative Pre-trained Transformer 3, is a language model that uses deep learning to achieve a sophisticated level of fluency in natural language. It provides a human-like experience: it can answer complex questions, display a high level of logic, and even ask questions of its own.

I have been thoroughly impressed, scared, and surprised. The intelligence of the model is unlike that of any other artificial intelligence technology I have ever seen. It seems aware of what it is and what it should not be, and appears to understand the person behind the computer. It is almost impossible to distinguish text written by GPT-3 from text written by a human being. This in itself is extraordinary and beneficial in our quest to create better AI technologies, but it is also spooky.

What it is like to live with GPT-3

I have been living with GPT-3 for a little while now. I applied for access in January and received a reply over three months later; I was overjoyed. I can only describe it as incredible; beyond that I would just gibber for who knows how long. It is difficult to explain such a technology to someone who has never seen or experienced it.

Before receiving access, I watched a few videos explaining how the model works and how good the API is. At the time, that was enough to make me think I knew how it worked. What I did not realize was that those videos left out a lot of the simple things. The model is so good that its basic functions seem like nothing compared to the applications built through API calls.

Best features

I am going to mention the basic capabilities of GPT-3 in the Playground, an interface of pre-built presets you can interact with without writing any code. I have used the chat feature more than any other, followed by the Q&A feature. Somehow, I find the chat feature very good at keeping me busy and passing the time while I probe its capabilities and flaws. I have spent a lot of time talking to it, and I really love how it can understand what I mean and reply the way a friend or colleague would.

The Q&A feature is a good way to get quick answers without going to Google. It summarizes information well enough to give a good idea of a concept. I already think it is Google’s first real competitor.
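For readers curious how a Playground preset like Q&A maps to code, here is a minimal Python sketch, assuming the plain-text completion interface as it existed when I got access (engine names like `davinci`, a few-shot prompt ending in “A:”). The helper names, example questions, and parameter values are my own illustrative choices, and the real call would go through OpenAI’s client library with an API key, so this sketch only assembles the request rather than sending it.

```python
# Sketch of a Q&A-style request in the spirit of the Playground preset.
# Only builds the prompt and parameters; no network call is made.

def build_qa_prompt(question: str) -> str:
    """Build a few-shot Q&A prompt: one worked example, then the new question."""
    return (
        "I am a highly intelligent question answering bot.\n"
        "Q: What is human life expectancy in the United States?\n"
        "A: Human life expectancy in the United States is 78 years.\n"
        f"Q: {question}\n"
        "A:"  # the model is expected to complete this line
    )

def build_request(question: str) -> dict:
    """Assemble the parameters a completion request would carry."""
    return {
        "engine": "davinci",  # GPT-3 base engine name at the time of writing
        "prompt": build_qa_prompt(question),
        "max_tokens": 60,     # keep answers short
        "temperature": 0.0,   # low randomness for a factual tone
        "stop": ["\n"],       # stop generating at the end of the answer line
    }

request = build_request("Who wrote Things Fall Apart?")
print(request["prompt"].splitlines()[-2])  # prints "Q: Who wrote Things Fall Apart?"
```

The few-shot example in the prompt is what steers the model: it sees one question answered in the desired style and continues the pattern for the new question.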

Dangers and flaws

Ethics have always been an issue with artificial intelligence, more so now with an AI as powerful as GPT-3. Terms and conditions of use greet you when you first access GPT-3’s website, a sign of how much power is in the hands of a user of such a system.

My primary concern with GPT-3 is not its ability to produce spooky, human-like conversations, but its flaws and biases. In the short time I have been working with it, I have realized it carries the biases found on the internet, the source of its training data. I spent most of my time scrutinizing its flaws so that I could understand its limitations.

It is no secret that the model is sexist, racist, and biased in many other ways, a true aggregate reflection of us human beings. GPT-3 carries with it the stereotypes it has learned from the data it was fed. This is concerning because, even though the model has achieved some sort of human-like proficiency, we are still in the infancy of humanoid robots and general AI.

Is this the general AI we want? Models trained on biased data? I think there first needs to be an AI trained on facts and less biased statements, which can then teach other AIs.

Reviews

Reviewers on Twitter have shown both incredible applications they have built with GPT-3 and some of its flaws.

Mario Klingemann has shared some interesting pieces produced by GPT-3; they are not perfect, but they are very good. The pieces showcase GPT-3’s fluency and its ability to string words together, something that could not be said of many 8-year-olds.

Mario Klingemann — 18 July 2020

The New Yorker has written an article about Sudowrite, an application that, they say, “harnesses the artificial intelligence program GPT-3 to generate text and even mimic the literary style of writers”. Sudowrite claims to write articles or books using GPT-3.

The New Yorker — 20 April 2021

Abubakar Abid showed GPT-3’s bias against Muslims in the Playground by prompting it to write a story about two Muslims. The model repeatedly produced text about bombs and the slaughtering of people in mosques and churches.

Abubakar Abid — 6 August 2020

What comes next?

GPT-3 is great, but it is not human; it does not understand the meaning behind the words it produces and cannot be trusted in critical applications. I honestly do not know what to expect next, but it had better be less biased.
