December 6, 2023

OpenAI has released GPT-4, the latest version of the artificial intelligence model behind its very popular chatbot, ChatGPT.

The new model can respond to pictures, for example by writing captions and descriptions or by suggesting recipes based on a photo of ingredients.

It can also handle up to 25,000 words of text, about eight times as many as ChatGPT.

Since it came out in November 2022, millions of people have used ChatGPT.

It’s often asked to write songs, poems, ads, and computer code, and to help with homework, even though teachers say students shouldn’t use it.

ChatGPT answers questions with language that sounds like it was written by a person. It can also imitate the writing styles of songwriters and authors, using a snapshot of the internet from 2021 as its knowledge base.

Some people worry that it could one day take over many of the jobs now done by humans.

OpenAI said that it spent six months working on GPT-4’s safety features and trained the model using human feedback. It warned, though, that GPT-4 may still spread false information.

The first people who can use GPT-4 will be ChatGPT Plus subscribers, who pay $20 per month for extra features.

It already powers Microsoft’s Bing search engine. The big tech company has invested $10 billion in OpenAI.

During a live demo, GPT-4 answered a complicated tax question, though there was no way to check whether the answer was correct.

Like ChatGPT, GPT-4 is a type of generative artificial intelligence. Generative AI uses algorithms and predictive text to create new content based on the prompts it is given.

OpenAI said that GPT-4 has “more advanced reasoning skills” than ChatGPT. For example, the model can find meeting times that work across three different schedules.

OpenAI also announced new partnerships with Duolingo, an app for learning languages, and Be My Eyes, an app that helps people who are blind or visually impaired. The goal is to build AI chatbots that can assist their users through natural language.

OpenAI has warned, however, that GPT-4 is still not 100% reliable and may “hallucinate,” meaning it can invent facts or make errors in its reasoning.