What is GPT-3?
OpenAI, a privately held artificial intelligence lab in San Francisco, has presented a new program that took several months to develop. GPT-3 stands for Generative Pre-trained Transformer 3; the 3 indicates that it is the third version of the model to be released. GPT-3 is a huge neural network, which makes it part of deep learning, a subset of machine learning, which in turn is a branch of the trendy computer science field called ‘Artificial Intelligence’, or AI.
A neural network is inspired by the actual human brain. An artificial neural network such as GPT-3 is trained by feeding it huge numbers of examples; for a language model, no hand-written labels are needed, because the ‘label’ for each example is simply the next word in the text. A machine learning model is only as good, or as bad, as the data that is fed to it during the training phase. For GPT-3, that data is massive: it has been trained on text from 60 million domains on the internet, which makes it better than any prior model at sounding like a human.
What can GPT-3 do?
As the code has not been made public yet, there may be more possible use cases for the tool. However, the people who got early access to it have shown that it can generate tweets, pen poetry, summarize emails, answer complex medical questions, create basic tabular financial reports, compose impressive business memos, translate languages, and even write its own computer code and train machine learning models. The content GPT-3 generates is of such high quality that it is hard to distinguish from content produced by a human. For example, when given an initial sentence, a GPT-3-powered storytelling service can generate an entire story, as shown in this Reddit post.
Even the creators of the model were surprised that it can generate computer code, because at its core GPT-3 was built as an extremely sophisticated text predictor: a person supplies a piece of text, and the model predicts what the next piece of text should be. Repeating this step generates paragraph after paragraph, until the text reaches a desired length.
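The predict-append-repeat loop described above can be sketched in a few lines. This is only an illustration: GPT-3's real predictor is a huge neural network, while here a tiny bigram table (built from a made-up example sentence) stands in for it.

```python
# Toy sketch of the predict-append-repeat loop behind GPT-style
# generation. A tiny bigram table stands in for the neural network.
import random

corpus = ("the model predicts the next word and the loop "
          "appends the next word to the text").split()

# Build a bigram table: word -> list of words observed after it.
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(prompt, max_words=10, seed=0):
    """Repeatedly predict the next word and append it to the text."""
    rng = random.Random(seed)
    words = prompt.split()
    while len(words) < max_words:
        candidates = table.get(words[-1])
        if not candidates:       # no known continuation: stop early
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the model"))
```

The only difference between this toy and the real thing is the quality of the predictor: GPT-3 ranks continuations using 175 billion learned parameters instead of a lookup table, but the outer loop is the same.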
Where can it be deployed?
One reason this development could be a breakthrough is that it gives companies great potential to automate tasks. Language models like GPT-3 are already being used for purposes such as:
Speech Recognition -
Google Assistant, Siri and Alexa are examples where you say something to your phone and it replies with an appropriate answer.
Translation Tools & Text Generation -
This is where you enter some words and the model predicts what you may type next, much like the suggestions you see as you type in Google Search.
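A search-style suggestion feature can be sketched by ranking the words most often seen after what the user has typed so far. The query history below is invented for illustration; a production system would learn these statistics from billions of real queries.

```python
# Toy sketch of next-word suggestion: rank the words most often
# observed after the user's last typed word in past queries.
from collections import Counter

history = [
    "weather today", "weather tomorrow", "weather today",
    "weather forecast", "news today",
]

# Count which word follows each word across past queries.
follow = {}
for query in history:
    words = query.split()
    for prev, nxt in zip(words, words[1:]):
        follow.setdefault(prev, Counter())[nxt] += 1

def suggest(prefix, k=2):
    """Return the k most common continuations of the last typed word."""
    last = prefix.split()[-1]
    return [w for w, _ in follow.get(last, Counter()).most_common(k)]

print(suggest("weather"))  # "today" ranks first (seen twice)
```

Frequency counts are enough for this sketch; a model like GPT-3 instead conditions on the whole prefix, which is what lets it complete entire sentences rather than single words.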
Imagine that you only provide a description of what you want and the model generates working code for the application you described. This can be done with GPT-3, though the developer has to tweak the generated code a bit to make it work properly.
Is GPT-3 really that perfect?
Although GPT-3's results have been described as the best ever seen, the model is resource-hungry. That makes it an expensive tool to deploy, putting it out of reach for many companies with particular requirements.
OpenAI's own CEO, Sam Altman, has said: “The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse.”
Another important thing to note is that because the model has been trained on text from across the internet, the content it generates is prone to amplifying biases, including racism and sexism. The system has generated toxic language when asked to discuss women, Black people, Jews and the Holocaust.
Taking everything it offers into account, its results are well ahead of those of its predecessors, and it can be considered a significant step forward in the field of artificial intelligence. Once it is made public, we should see more of what it can do, and GPT-3 may prove to be even more exciting.