
Are large language models (LLMs) the same as artificial general intelligence (AGI)?



Can we ever create artificial general intelligence (AGI)? That is a good question. When we think about how we ourselves learn and think, we must realize one thing: no human can do everything. We all know stories about programmers and other specialists who cannot cook. That same limitation transfers to the hypothetical AGI.

The thing is that an AGI is useless if it doesn't make something. To make physical things, an AGI requires access to physical tools like microwave ovens and robots. Another problem is that an AGI cannot make anything it cannot find in its database. This is one of the fundamental problems with AGI: the system requires large-scale data storage and the ability to learn things the way humans do.

When humans learn something, our brains build a database about that thing. The problem is that we need a database for everything we do. Our memory cells collect the data an operation requires like a mosaic. Memory operates like a net: it collects the necessary information from multiple sources. And humans have a great many memory neurons. There are billions of neurons and their connections in the brain.

The number of functional units in the human brain is not static. The connections between neurons make it possible to create virtual neurons, and every virtual neuron, or neuron combination, can act like a physical neuron.

This is one of the reasons why humans still have superiority. There is no computer with billions of memory blocks and memory-handling units. Future computers could have multiple processors with integrated memory units, but such systems require lots of energy. However, if mass memory is integrated directly into the processor, and every memory unit is operated by an individual processor, the memory operates faster. That kind of neural-network-based system could have multiple sub-computers: processor groups that operate as virtual quantum computers.


(FreeThink, LLMs are a dead end to AGI, says François Chollet)

But the problem is that if we want to make robots and AI that cook food, we have a limited number of variables the robot must handle. The robot only needs to know where certain items are. It can search for a matching word in recipes, like "pepper", then search for similar words among its stored items, then look up details about what pepper looks like. If the robot doesn't know where to find pepper, it can search every bag and container. When it finds the right thing, it can mark the position in its memory.
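The search-and-remember loop described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not a real robot API; the container names, item names, and function names are all assumptions made for the example.

```python
# Hypothetical sketch: a kitchen robot matching a recipe word ("pepper")
# against containers it knows, and caching the location once found.

known_locations = {"salt": "shelf A", "flour": "shelf B"}  # robot's memory

def find_item(item, containers):
    """Return a cached location, or search every container and cache the hit."""
    if item in known_locations:                      # fast path: position already marked
        return known_locations[item]
    for container, contents in containers.items():   # exhaustive search
        if item in contents:
            known_locations[item] = container        # mark the position in memory
            return container
    return None                                      # item is nowhere to be found

containers = {"bag 1": ["rice", "pepper"], "bag 2": ["sugar"]}
print(find_item("pepper", containers))  # searches every bag, remembers "bag 1"
print(find_item("pepper", containers))  # second call answers from memory
```

The first call pays the full search cost; every later call is a simple lookup, which is exactly why marking positions in memory makes the limited kitchen environment tractable.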

A kitchen involves a limited space and a limited number of things. The robot can search for things when it has free time; in a kitchen, robots can simultaneously search and catalogue every item they find. But at the city level there are far more variables. A robot cannot open every box it sees, or the job would take a very long time.

A robot can look very intelligent when it goes to the shop. When the robot operates at home, it can use the home computer and surveillance system to see things like where it is. When it goes out, the same body can connect to the city traffic control and GPS or another navigation system while it travels to the shop. There it can connect to the shop's computer, which knows where the right merchandise is, and collect the items from the shelf. In effect, the robot is three robots. Mission control and databases are the things that determine what robots can do.

A system that operates a little bit like AGI can be built from modules, and a neural network can connect those modules into new entities. A module is like a room plus its operations. AGI might not be possible, but that kind of modular AI could make it possible to build robots that follow orders like "Go shopping and bring back a bottle of milk."
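The modular idea above can be sketched as a dispatcher that chains narrow modules into a plan. Everything here is an illustrative assumption: the module names, the state dictionary, and the plan format are invented for the example, not drawn from any real robotics framework.

```python
# Hypothetical sketch of a modular robot: each module handles one narrow
# task, and a dispatcher chains modules to satisfy an order such as
# "Go shopping and bring back a bottle of milk".

def navigate(state, destination):
    """Module: move the robot to a destination."""
    state["location"] = destination
    return state

def pick_item(state, item):
    """Module: pick up an item at the current location."""
    state["carrying"].append(item)
    return state

def run_order(modules, plan):
    """Execute a plan as a sequence of (module_name, argument) steps."""
    state = {"location": "home", "carrying": []}
    for module_name, arg in plan:
        state = modules[module_name](state, arg)
    return state

modules = {"navigate": navigate, "pick_item": pick_item}
plan = [("navigate", "shop"), ("pick_item", "milk bottle"), ("navigate", "home")]
result = run_order(modules, plan)
print(result)  # {'location': 'home', 'carrying': ['milk bottle']}
```

Each module stays simple and testable on its own; the "intelligence" of the whole is just the dispatcher combining modules into new sequences, which matches the idea of connecting modules into new entities.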

However, the problem is that such a system cannot handle abstract thinking. It can calculate probabilities, but it cannot think. Still, the AI can predict many things. If two AI-controlled cars meet and identical AI systems control them, each car can predict how the other will react. The AI can use probability calculations to predict which way people will go. The system only needs to count how many people choose certain routes; then it can estimate the probability that a given person chooses a route.

https://www.freethink.com/robots-ai/arc-prize-agi
