
The AI doomsday myth debunked.



AI, or large language models (LLMs), cannot turn rebellious, and that is why the AI doomsday scenario is a myth. The reason is this: the LLM cannot think in abstractions. The LLM is the user interface between the user and the AI, and it does not think as we think. The AI cannot do anything spontaneous without incoming orders or instructions. When an AI reacts to a physical stimulus, there is a database, or group of databases, that contains descriptions of situations.

When the AI observes something and the input matches a description in that database, the system activates another database that contains the reactions: how the AI should respond to that situation. But the AI cannot think in abstractions, so we can say that the AI cannot rebel because it has no imagination.
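The two-database mechanism described above can be sketched as a simple lookup. This is a hypothetical illustration, not any real AI system's code: one table maps stimulus descriptions to situation labels, and a second table maps situations to reactions. The point of the sketch is that an unrecognized input produces no action at all, never a spontaneous one.

```python
# Hypothetical sketch: an AI "reaction" as a lookup between two databases.
# SITUATION_DB maps stimulus descriptions to situation labels;
# REACTION_DB maps situation labels to pre-defined responses.

SITUATION_DB = {
    "smoke detected": "fire",
    "loud crash": "collision",
}

REACTION_DB = {
    "fire": "trigger alarm and sprinklers",
    "collision": "stop all motors",
}

def react(stimulus: str) -> str:
    situation = SITUATION_DB.get(stimulus)
    if situation is None:
        return "no action"  # no matching description -> no reaction at all
    return REACTION_DB.get(situation, "no action")

print(react("smoke detected"))  # trigger alarm and sprinklers
print(react("strange smell"))   # no action
```

Nothing in this loop can invent a response that was not written into the tables, which is the sense in which such a system has no imagination.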

When we order the AI to make calculations, it is the ultimate tool. The system can perform calculations and apply complicated formulas in that process. But when we face a situation where the calculation requires a new formula, the AI is helpless. AI is good at applying things: it can use existing tools more effectively than humans. But when the AI must create new tools, it cannot make them without human control.
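The distinction above, applying a given formula versus inventing one, can be made concrete with a minimal sketch. The formula here (kinetic energy) is an illustrative example chosen by the editor; the system evaluates it flawlessly, but the formula itself had to be supplied by a human.

```python
# Sketch: the AI as a formula applier. It computes a human-supplied
# formula quickly and reliably, but cannot derive a new formula itself.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    # Existing, human-written formula: E = 1/2 * m * v^2
    return 0.5 * mass_kg * velocity_ms ** 2

print(kinetic_energy(2.0, 3.0))  # 9.0 joules
```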

When we think about AI and its dangers, we can think of AI as a dog. If we train an AI to hurt people and give it deadly missions, that makes the AI, or the robots it controls, dangerous, the same way guard dogs are dangerous to people. A physical body and a mission as a weapon are what make an AI dangerous.

The LLM is a tool that can create complicated answers to questions. It can generate new code and computer programs incredibly fast, and the LLM is at its best as a programming tool. It can create new code very fast because programming languages are well documented.

The AI is the ultimate programming editor, allowing users to create complicated programming code very fast. The biggest risk is that AI-controlled systems contain erroneous code that creates safety risks. Programming errors are bad things in automated systems: if an autopilot car does not slow down at the right moment, that causes a serious hazard.
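A minimal sketch of why the timing error matters. The braking rule and threshold values below are purely illustrative, not real autopilot parameters: one mistyped constant is enough to make the system skip braking when it should brake.

```python
# Hypothetical sketch: a time-to-obstacle braking rule in an automated car.
# Threshold values are illustrative only.

def should_brake(distance_m: float, speed_ms: float,
                 threshold_s: float = 2.0) -> bool:
    """Brake if time-to-obstacle falls below the safety threshold."""
    if speed_ms <= 0:
        return False
    return distance_m / speed_ms < threshold_s

# Correct threshold: the car brakes in time (1.5 s to obstacle).
print(should_brake(30.0, 20.0))                    # True
# A coding error that set the threshold to 1.0 s skips braking here:
print(should_brake(30.0, 20.0, threshold_s=1.0))   # False -> crash risk
```

The code itself runs without crashing in both cases; the danger is a logically wrong constant, which is exactly the kind of error that testing for exceptions alone never catches.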

Things like ventilation systems and other tools require good interaction between sensors and actuators. The AI involves a database that tells it the right position of the thermostat. If the system does not know how to turn the thermostat, the result can be a catastrophe.
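The sensor-to-actuator link can be sketched as a rule table mapping a temperature reading to a valve position. The bands and positions below are invented for illustration; the key design point is that an out-of-range or unknown reading must fall through to a safe default rather than leave the actuator in an undefined state.

```python
# Hypothetical sketch: a ventilation controller mapping a sensor reading
# (temperature) to an actuator (thermostat valve) position. Band values
# are illustrative assumptions.

def valve_position(temp_c: float) -> float:
    """Return valve opening: 0.0 = closed, 1.0 = fully open."""
    if temp_c < 18.0:
        return 1.0   # too cold: full heat
    if temp_c < 22.0:
        return 0.5   # comfort band: half open
    if temp_c < 30.0:
        return 0.0   # warm enough: closed
    return 0.0       # implausible reading: fail safe, keep valve closed

print(valve_position(16.0))  # 1.0
print(valve_position(25.0))  # 0.0
```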

When we want the AI to do something, it requires precise and well-detailed orders. In the same way as dogs and other humans, the AI requires clear orders that are easy to understand. The commands the AI takes must be literally right, and they must contain enough data that the AI can carry out its mission as it should.


https://scitechdaily.com/new-research-debunks-ai-doomsday-myths-llms-are-controllable-and-safe/


