
This is how Rabbit Inc. has trained its AI on more than 800 applications so far


Apparently, the Rabbit R1 will be ready to support hundreds of applications and websites from the moment it launches


For several years now, Jesse Lyu and his team have been working on the idea of turning Artificial Intelligence into an executor capable of carrying out the tasks and actions we have been doing ourselves until now.

And in the latest interview with the company's CEO at StrictlyVC, hosted by TechCrunch.com, he confirmed something that many of us expected but had not heard until now.

As he said, the process was as follows:

Problem statement: What do we want?

Jesse had been thinking about the LAM for years as the possibility of an artificial intelligence acting the way we would, but from the beginning he had trouble finding the solution. In his own words:

Trying to solve the "actions", we had several options: we could do it through GPTs, plugins or APIs, but as a startup, it was obviously not the best way to move forward...

So the answer was to take something people already did instinctively and study the way we did it. The data was already there; you simply had to understand how it worked and use it.

"Neurosymbolic", the solution to AI learning and action

This is where the idea of a neurosymbolic approach arises: using what we already do to show the AI, so that it can perform those actions for us.

Neurosymbolic Artificial Intelligence: A Fusion of Neural Networks and Symbolic AI

In summary, Neurosymbolic Artificial Intelligence is an approach that combines neural networks with symbolic AI techniques, pairing learned pattern recognition with explicit, rule-based reasoning so that the system can plan and execute tasks.
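To make the idea a little more concrete, here is a minimal Python sketch of how a neurosymbolic agent could be structured: a (stubbed) neural scorer ranks candidate UI actions, while explicit symbolic rules decide which actions are even valid in the current app state. All names, screens and rules here are hypothetical illustrations, not Rabbit's actual implementation.

```python
# Hypothetical neurosymbolic sketch: symbolic rules constrain the action space,
# a neural component (stubbed here) ranks the remaining candidates.
from dataclasses import dataclass

@dataclass
class AppState:
    screen: str          # e.g. "login", "search", "checkout"
    logged_in: bool

# --- Symbolic part: explicit, human-readable rules about valid actions ---
RULES = {
    "login":    lambda s: ["enter_credentials"],
    "search":   lambda s: ["type_query", "tap_result"],
    "checkout": lambda s: ["confirm_purchase"] if s.logged_in else ["go_to_login"],
}

def allowed_actions(state: AppState) -> list[str]:
    return RULES.get(state.screen, lambda s: [])(state)

# --- Neural part (stubbed): in a real system this would be a trained model
# scoring actions given the screen contents and the user's goal. ---
def neural_score(action: str, goal: str) -> float:
    return 1.0 if goal.split()[0] in action else 0.1   # toy stand-in heuristic

def choose_action(state: AppState, goal: str) -> str:
    candidates = allowed_actions(state)                              # symbolic filter
    return max(candidates, key=lambda a: neural_score(a, goal))      # neural ranking

if __name__ == "__main__":
    state = AppState(screen="checkout", logged_in=False)
    print(choose_action(state, "confirm my order"))   # -> "go_to_login"
```

The point of the split is that the rules stay inspectable and safe (the agent can never "invent" an action that the current screen does not allow), while the learned part only has to decide which of the allowed actions best serves the user's goal.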

This is where the people at Rabbit saw the solution to the problem: what if they used all the data they could gather by collecting people's interactions with applications from different companies? And that is how, since 2020, they have been working with third-party applications.

It was enough to know how people interact with different applications, collect the data and use it, and that is precisely what they have done, WITH MORE THAN 800 DIFFERENT APPLICATIONS! In this way, they had almost everything in their favor.

We started by asking those reviewers to help us get real interactions of people with Spotify, Uber, Expedia or whatever, so we have 800 of the most popular apps, and then we collected those recordings of real humans interacting with different interfaces.

Thus, Lyu explains, after this data collection they began asking the AI to analyze the recordings of human interactions step by step, in order to understand how humans reached the final goal.

In this way, the LAM, or Large Action Model, is trained so that it can reach the same goal we would, by pressing the same buttons we would.
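As a rough illustration of learning from recorded demonstrations, the sketch below turns each recorded session into (screen, action) pairs and "trains" a trivial model that replays the most common human action for each screen. The data and function names are made up for the example; the real LAM pipeline has not been published.

```python
# Hypothetical sketch of learning from demonstrations: recordings become
# (screen, action) pairs, and a toy model predicts the next action per screen.
from collections import Counter, defaultdict

# Recorded sessions: which screen the person was on and what they did there.
demonstrations = [
    [("home", "tap_search"), ("search", "type_query"), ("results", "tap_first_result")],
    [("home", "tap_search"), ("search", "type_query"), ("results", "tap_play")],
]

# "Training": count which action humans most often took on each screen.
action_counts: dict[str, Counter] = defaultdict(Counter)
for session in demonstrations:
    for screen, action in session:
        action_counts[screen][action] += 1

def predict_action(screen: str) -> str:
    """Replay the most common human action for this screen."""
    return action_counts[screen].most_common(1)[0][0]

print(predict_action("home"))     # -> "tap_search"
print(predict_action("results"))  # -> "tap_first_result" (ties keep insertion order)
```

A production system would obviously replace the frequency count with a large model conditioned on the user's goal and the actual screen contents, but the basic loop is the same: observe what humans did, then reproduce those steps to reach the same outcome.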

What do I think at this moment?

Well, it's still early, but if they manage to accomplish half of what they seem to have already achieved, this device will be a real "MUST" for me and for many other people.
