Apparently, the Rabbit R1 will support hundreds of applications and websites from the moment of its launch
It seems that for several years now, Jesse Lyu and his team have been working on the concept of turning Artificial Intelligence into an executor capable of carrying out the tasks and actions that, until now, we have been doing ourselves.
And in a recent interview with the company's CEO at StrictlyVC, reported by TechCrunch.com, he confirmed something that many of us suspected but had not heard until now.
As he said, the process was as follows:
Problem statement: What do we want?
Jesse had been thinking about LAM for years as the possibility of an artificial intelligence that acts as we would, but from the beginning he struggled to find the solution. In his own words:
So the answer was to do something that people already did instinctively, and try to study the way we did it. The data was already there, and you simply had to understand how it worked and use it.
"Neurosymbolic", the solution to AI learning and action
This is where the idea of neurosymbolism comes in: using what we already do to show the AI how, and having it perform those actions for us.
Neurosymbolic Artificial Intelligence: A Fusion of Neural Networks and Symbolic AI
In short, neurosymbolic artificial intelligence is an approach that combines neural networks with symbolic AI techniques, pairing learning from examples with explicit, rule-based reasoning so that tasks can actually be executed, not just described.
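To make the idea concrete, here is a minimal Python sketch of the neurosymbolic pattern: a neural component (mocked here as a keyword classifier) interprets a free-form request, and a symbolic component (explicit rules) expands the recognized intent into an ordered, auditable action plan. This is not Rabbit's actual code; every name, intent, and step below is invented for illustration.

```python
# Minimal neurosymbolic sketch (illustrative, NOT Rabbit's implementation):
# a "neural" component guesses the user's intent from free text, and a
# symbolic component expands that intent into an explicit, verifiable plan.

INTENT_KEYWORDS = {  # stand-in for a trained neural classifier
    "play": "play_music",
    "order": "order_food",
}

SYMBOLIC_PLANS = {  # hand-written rules: intent -> ordered UI steps
    "play_music": ["open_app:music", "tap:search", "type:{query}", "tap:first_result"],
    "order_food": ["open_app:delivery", "tap:reorder_last", "tap:confirm"],
}

def neural_intent(utterance: str) -> str:
    """Pretend neural step: map free text to a discrete intent."""
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in utterance.lower():
            return intent
    return "unknown"

def symbolic_plan(intent: str, query: str) -> list[str]:
    """Symbolic step: expand the intent into explicit, checkable actions."""
    return [step.format(query=query) for step in SYMBOLIC_PLANS.get(intent, [])]

if __name__ == "__main__":
    intent = neural_intent("Play some jazz for me")   # neural: understanding
    print(intent, symbolic_plan(intent, "jazz"))      # symbolic: action plan
```

The appeal of the split is that the fuzzy part (understanding the request) stays learned, while the part that touches real apps stays explicit and inspectable.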
This is the moment where the people at Rabbit saw the solution to the problem: what if they used all the data they could gather from recording interactions with different companies' applications? And that is how, since 2020, they have been working with third-party applications.
It was enough to learn how people interact with different applications, collect that data and use it, and that is precisely what they have done, WITH MORE THAN 800 DIFFERENT APPLICATIONS! With that, they had almost everything in their favor.
Thus, Lyu explains, after this data collection they began asking the AI to analyze the recordings of human interactions step by step, to understand how humans reached their final goal.
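As a hedged guess at what such a recording might look like as data: each session could be a timestamped list of UI events, and an analysis pass reduces it to the ordered steps that led to the goal. The schema and events below are made up for illustration; Rabbit has not published its recording format.

```python
from dataclasses import dataclass

@dataclass
class UIEvent:
    """One recorded human interaction (hypothetical schema)."""
    t: float        # seconds from session start
    element: str    # UI element the person touched
    action: str     # what they did: "tap", "type", "scroll", ...

# A made-up recording of someone booking a ride in a third-party app.
session = [
    UIEvent(0.0, "app_icon:rideshare", "tap"),
    UIEvent(2.1, "field:destination", "tap"),
    UIEvent(3.4, "field:destination", "type"),
    UIEvent(6.0, "button:confirm_pickup", "tap"),
]

def step_sequence(events: list[UIEvent]) -> list[str]:
    """Reduce a raw recording to the ordered steps that reached the goal."""
    return [f"{e.action} -> {e.element}" for e in sorted(events, key=lambda e: e.t)]

print(step_sequence(session))
```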
In this way, the LAM, or Large Action Model, is trained so that it can reach the same goal we would, pressing the same buttons we would press.
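Once a model has learned such a sequence, execution is conceptually just replaying it against the app's interface, pressing those same buttons. A toy sketch, with an invented UIDriver standing in for whatever automation layer Rabbit actually uses (which is not public):

```python
class UIDriver:
    """Invented stand-in for an app-automation layer; only illustrates
    the replay idea, not Rabbit's real mechanism."""
    def tap(self, element: str) -> None:
        print(f"tapping {element}")
    def type(self, element: str, text: str) -> None:
        print(f"typing {text!r} into {element}")

def replay(driver: UIDriver, steps: list[tuple]) -> None:
    """Execute a learned action sequence step by step, as a human would."""
    for step in steps:
        name, *args = step
        getattr(driver, name)(*args)   # dispatch "tap"/"type" onto the driver

# A sequence the model might have learned from recordings like the one above.
learned_steps = [
    ("tap", "app_icon:rideshare"),
    ("tap", "field:destination"),
    ("type", "field:destination", "Airport"),
    ("tap", "button:confirm_pickup"),
]
replay(UIDriver(), learned_steps)
```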
What do I think at this moment?
Well, it's still early days, but if they manage to deliver even half of what they seem to have achieved already, this device will be a real "MUST" for me and for many other people.