
OpenAI's presentation of GPT-4o has been impressive

GPT-4o has arrived without warning, packed with new features and determined to revolutionize the world of AI

I am really enjoying this period: dark in some respects, but full of light and paths to explore in technology, and more specifically in artificial intelligence. The news just doesn't stop coming.

Just a few hours ago, during its Spring Update, OpenAI showed off its new LLM (large language model) based on GPT-4 in a live demo with some of its employees. From today it will be known as GPT-4o (the "o" stands for "omni").

The video, which runs under 30 minutes, is a fantastic showcase of how GPT-4o works.

When will you be able to use GPT-4o?

Let's get to the important part: ever since I started watching the presentation I have been trying to update the app on my phone and on my PC to try out everything you see in the video. Don't worry, it will be available very soon.

In fact, although the rollout will be staggered, it can already be tested on many ChatGPT accounts simply by closing and reopening the application, without even having to update it, so within a few hours everyone should be able to use it.

How much does GPT-4o cost?

This is probably the best news of the day: GPT-4o, via ChatGPT, will be free. Accounts that do not pay for the premium subscription will also be able to enjoy the advantages of this new model.

The only difference so far is that those of us who pay the €22-per-month ChatGPT fee will get up to 5 times more interactions than free accounts, although in absolute terms we don't yet know what that difference amounts to.

What they showed in the video

In this GPT-4o demo they were like little children with a new toy, and I have to admit I felt the same way watching it: I had the sense of getting a new toy to have fun with.

They spent almost half an hour running tests to show off the new capabilities of this model, and the truth is that the improvement in everything related to human-machine communication left me stunned.

Conversational interaction in real time:

It was shown how GPT-4o can handle real-time conversations without noticeable latency, responding instantly and naturally. And I have to admit that, having tried earlier models, what I saw today seemed like fiction.

In fact, the presenters clearly interrupted the AI and swapped speakers, and GPT-4o kept the conversation going even as the person and the pace of speech changed, just like a human would.

Response to emotions and context:

In one part of the demonstration, GPT-4o responded appropriately to emotional cues given by the presenters, adjusting its tone and content in response to each person's emotional and physical signals.

And not only that: through the camera (which I will talk about later) it could interpret the emotions the presenter was showing; he was smiling and, according to GPT-4o, full of joy. It was amazing.

Real-time translation:

The model's ability to function as a real-time translator was demonstrated, facilitating a fluid conversation between people who spoke different languages, instantly translating from English to Italian and vice versa.

So much so that right after being given the prompt to act as a translator, it responded "perfect", showing that it had not only understood the instruction but had started carrying it out immediately.
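For those curious about what that setup might look like outside the app, here is a minimal sketch of a similar translator prompt sent through OpenAI's public chat completions API in Python. The system prompt is my own guess, not the wording used on stage, and the live demo used the real-time voice pipeline rather than plain text.

```python
# Minimal sketch: a text-only approximation of the translator demo using
# the official openai Python SDK. The system prompt is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a live interpreter between two speakers. "
                "When you receive English, repeat it in Italian; when you "
                "receive Italian, repeat it in English. Translate only."
            ),
        },
        {"role": "user", "content": "Hey, how has your week been going?"},
    ],
)
print(response.choices[0].message.content)  # the Italian rendering
```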

Visual and audio analysis:

GPT-4o also demonstrated its ability to integrate and process information from multiple sensory sources. For example, it was asked to solve a mathematical problem written on paper, which involved visually recognizing the handwritten content and then helping to solve it.

In fact, and for me this is the most interesting part, not only did it solve the problem, it guided the presenter step by step through how to do it, correcting him when he made a mistake and encouraging him when he understood the procedure to follow.

I can already imagine this being implemented in education, and I'm almost afraid to think how far we can go once this technology becomes normalized.
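As a rough idea of how an image like that reaches the model, here is a hedged sketch using the image input of the chat completions API. The URL and the tutoring prompt are placeholders of mine, and the demo itself used the live camera feed rather than an uploaded photo.

```python
# Hedged sketch: sending a photo of a handwritten equation to GPT-4o and
# asking for step-by-step guidance instead of a direct answer.
# The image URL below is a placeholder, not a real resource.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Don't give me the solution. Guide me step by step "
                        "through solving the equation in this photo, and "
                        "correct me if I go wrong."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/equation.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```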

Interaction with multimedia content:

Another demonstration showed how GPT-4o can interact with visual content, such as photos and documents, allowing users to upload images and receive relevant information or have tasks performed based on that visual content.

While this is not new (the Rabbit r1 and its vision mode have been trying to perfect it for months), it must be recognized as a step forward by one of the large technology companies in this area.

An artificial intelligence that is capable of joking

I found its ability to contextualize, understand jokes, make its own, and respond with irony or sarcasm amazing; I think it is a truly important step toward eliminating the human-machine barrier that exists today.

How GPT-4o is better than its competitors

I'll leave you with the different benchmark tests run on this new model against the other LLMs currently on the market, including those from Google and Meta; you will see that, except for one test, it wins them all.

The implementation of GPT-4o in Rabbit r1

It only took Jesse Lyu a few minutes to show up on Discord and Twitter and share his excitement after this launch, and it's no wonder: part of the Rabbit r1 ecosystem is built on ChatGPT, and the gadget uses OpenAI's API.

So we can expect the device to integrate the functions this new OpenAI model brings. We will see how these alliances evolve, but I would love to be here when it happens...
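To picture how little a ChatGPT-based integration would need to change, here is a purely illustrative sketch; the helper function is my own invention, not Rabbit's actual code. Adopting the new model can be as simple as swapping the model identifier.

```python
# Purely illustrative: a device backend that already calls the OpenAI API
# could adopt GPT-4o by changing little more than the model name.
from openai import OpenAI

client = OpenAI()

def ask_assistant(user_text: str, model: str = "gpt-4o") -> str:
    """Send one user turn to the chosen model and return the reply text."""
    response = client.chat.completions.create(
        model=model,  # previously e.g. an older GPT-4 model identifier
        messages=[{"role": "user", "content": user_text}],
    )
    return response.choices[0].message.content

print(ask_assistant("Summarize today's tech news in one sentence."))
```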


Do you have questions about technology? Write to me on my social networks.