
The AI doomsday myth debunked.



AI, or large language models (LLMs), cannot turn rebellious. That is why the AI doomsday scenario is a myth. The reason is this: an LLM cannot think in abstractions. The LLM, which acts as the user interface between the user and the AI, cannot think as we think. The AI cannot do anything spontaneous without incoming orders or instructions. When the AI reacts to a physical stimulus, it consults a database, or a group of databases, that contains descriptions of certain situations.

When the AI observes something and it matches an entry in that database, the database activates another database that contains the reactions determining how the AI responds to that situation. But the AI cannot think in abstractions, so we can say that the AI cannot rebel because it has no imagination.
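The stimulus-to-reaction chain described above can be sketched as two simple lookup tables. This is a minimal illustration, not a real AI system; the situation names and reactions are hypothetical examples.

```python
# First "database": descriptions of known situations.
situations = {
    "obstacle_ahead": "an object blocks the path",
    "temperature_high": "ambient temperature exceeds the limit",
}

# Second "database": the reaction linked to each recognized situation.
reactions = {
    "obstacle_ahead": "stop and wait",
    "temperature_high": "increase cooling",
}

def respond(stimulus: str) -> str:
    """Return the stored reaction, or a fallback when nothing matches."""
    if stimulus in situations:
        return reactions[stimulus]
    # No matching description: the system cannot improvise a new reaction.
    return "no action: situation not in database"

print(respond("obstacle_ahead"))  # -> stop and wait
print(respond("meteor_strike"))   # -> no action: situation not in database
```

The point of the sketch is the last line: anything outside the stored descriptions produces no creative response at all.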

When we order the AI to make calculations, it is the ultimate tool. The system can perform calculations and apply complicated formulas to the process. But when we face a situation where the AI requires a new formula to make the calculation, the AI is helpless. AI is good at applying things: it can use existing tools more effectively than humans can. But when the AI must make new tools, it cannot make them without human control.
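"Applying an existing formula" is exactly what software does well. As a hedged illustration of the difference, the sketch below applies the well-known quadratic formula; if the problem required a formula that was not already encoded, the program would have nothing to apply.

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Apply the known quadratic formula to a*x^2 + b*x + c = 0."""
    d = b * b - 4 * a * c  # discriminant
    if d < 0:
        raise ValueError("no real roots")
    sq = math.sqrt(d)
    return ((-b + sq) / (2 * a), (-b - sq) / (2 * a))

# x^2 - 3x + 2 = 0 has roots 2 and 1.
print(solve_quadratic(1, -3, 2))  # -> (2.0, 1.0)
```

The formula had to be written in by a human first; the program only executes it.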

When we think about AI and its dangers, we can think of AI as a dog. If we train AI to hurt people and give it deadly missions, that makes the AI, or the robots that the AI controls, dangerous, in the same way that guard dogs are dangerous to people. The physical body and the mission as a weapon are what make the AI dangerous.

The LLM is a tool that can create complicated answers to questions. It is a tool that can generate new code and computer programs incredibly fast, and the LLM is at its best as a programming tool. The LLM can create new code very fast because programming code is well-documented.

The AI is the ultimate programming editor, allowing users to create complicated code very fast. The biggest risk is that AI-controlled systems contain some coding error that compromises safety. Programming errors are dangerous in automated systems: if an autopilot car does not slow down at the right moment, that causes risks.

Things like ventilation systems and other tools require good interaction between sensors and actuators. The AI involves a database that tells it the right position for the thermostat. If the system does not know how to turn the thermostat, the result is a catastrophe.
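The sensor-database-actuator interaction above can be sketched as a single control step. This is a minimal, hypothetical example: the mode table, setpoints, and actions are placeholders, not a real HVAC controller.

```python
# "Database" mapping operating modes to thermostat setpoints (degrees C).
setpoints = {"day": 21.0, "night": 17.0}

def thermostat_step(mode: str, measured_temp: float) -> str:
    """Decide one heating action from a sensor reading and the stored setpoint."""
    target = setpoints.get(mode)
    if target is None:
        # No database entry for this mode: the system cannot act correctly.
        return "error: unknown mode"
    if measured_temp < target - 0.5:
        return "heat on"
    if measured_temp > target + 0.5:
        return "heat off"
    return "hold"

print(thermostat_step("day", 19.0))    # -> heat on
print(thermostat_step("night", 18.2))  # -> heat off
print(thermostat_step("party", 20.0))  # -> error: unknown mode
```

The last call shows the failure mode the text warns about: when the database has no entry for the situation, the actuator gets no usable command.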

When we want AI to make something, it requires precise and well-detailed orders. In the same way as dogs and other humans, the AI requires clear orders that are easy to understand. The commands the AI takes must be literally correct, and they must involve enough data that the AI can carry out its mission as it should.


https://scitechdaily.com/new-research-debunks-ai-doomsday-myths-llms-are-controllable-and-safe/


