AI, or large language models (LLMs), cannot turn rebellious. That means the AI doomsday myth is debunked. The reason is this: the LLM cannot think in abstractions. The LLM, which is the user interface between the user and the AI, cannot think as we think. The AI cannot do anything spontaneous without incoming orders or instructions. When the AI reacts to a physical stimulus, there is a database, or a database group, that contains descriptions of certain situations.
When the AI observes something and that observation matches an entry in the database, the system activates another database that contains the reactions determining how the AI responds to that situation. But the AI cannot think in abstractions, and we can say that the AI cannot rebel because it has no imagination.
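A minimal sketch of that two-stage lookup in Python; the tables and names here are hypothetical, not from any real system:

```python
# Hypothetical two-stage lookup: one table maps an observation to a
# situation label, a second table maps that label to a reaction.
SITUATION_DB = {
    "obstacle ahead": "hazard",
    "temperature rising": "overheat",
}

REACTION_DB = {
    "hazard": "stop and wait",
    "overheat": "reduce power",
}

def react(observation: str) -> str:
    """Return the pre-programmed reaction, or nothing if unmatched."""
    situation = SITUATION_DB.get(observation)
    if situation is None:
        return "no action"  # no match: the system cannot improvise
    return REACTION_DB[situation]

print(react("obstacle ahead"))   # stop and wait
print(react("something novel"))  # no action
```

An observation that matches nothing produces no action at all: the system has no way to invent a response that isn't already in its tables.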
When we order the AI to make calculations, it is the ultimate tool. The system can perform calculations and apply complicated formulas in the process. But when we face a situation where the AI would need a new formula to make the calculation, the AI is helpless. AI is good at applying things: it can use existing tools more effectively than humans can. But when the AI must make new tools, it cannot make them without human control.
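A sketch of the same limitation, assuming a hypothetical formula registry: the system applies known formulas quickly, but when a formula is missing it cannot derive one on its own:

```python
import math

# Hypothetical formula registry; the names and formulas are
# illustrative, not from any real system.
FORMULAS = {
    "circle_area": lambda r: math.pi * r ** 2,
    "kinetic_energy": lambda m, v: 0.5 * m * v ** 2,
}

def calculate(name: str, *args: float) -> float:
    """Apply a known formula quickly and precisely."""
    if name not in FORMULAS:
        # There is no mechanism here for deriving a new formula;
        # a human must add it to the registry first.
        raise KeyError(f"no formula named {name!r}; a human must supply it")
    return FORMULAS[name](*args)

print(calculate("kinetic_energy", 1000.0, 27.8))  # ~386,420 J
try:
    calculate("brand_new_formula", 1.0)
except KeyError as err:
    print(err)  # the system cannot invent the missing formula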
When we think about AI and its dangers, we can think of AI as a dog. If we train the AI to hurt people and give it deadly missions, that makes the AI, or the robots the AI controls, dangerous, in the same way guard dogs are dangerous to people. The physical body, and a mission as a weapon, are what make the AI dangerous.
The LLM is a tool that can create complicated answers to questions. It's a tool that can generate new code and computer programs incredibly fast, and the LLM is at its best as a programming tool. The LLM can create new code very fast because programming code is well documented.
The AI is the ultimate programming editor, one that allows users to create complicated programming code very fast. The biggest risk is that AI-controlled systems contain some error in their code that compromises safety. Programming errors are bad things in automated systems: if an autopilot car doesn't slow down at the right moment, that causes a risk.
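A sketch of how small such a dangerous error can be, using an assumed braking model and illustrative numbers; nothing here comes from a real autopilot:

```python
# One inverted comparison in a braking check turns a safe system
# into a dangerous one. Deceleration value is an assumption.
MAX_DECELERATION = 7.0  # m/s^2, assumed braking capability

def stopping_distance(speed_kmh: float) -> float:
    """Distance (m) needed to stop from the given speed: v^2 / (2a)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2 * MAX_DECELERATION)

def should_brake(speed_kmh: float, distance_m: float) -> bool:
    """Correct check: brake when the obstacle is within stopping range."""
    return distance_m <= stopping_distance(speed_kmh)

def should_brake_buggy(speed_kmh: float, distance_m: float) -> bool:
    """Buggy check: the inverted comparison never brakes in time."""
    return distance_m >= stopping_distance(speed_kmh)

# At 100 km/h an obstacle 40 m ahead is inside the ~55 m stopping range:
print(should_brake(100, 40))        # True: the correct code slows down
print(should_brake_buggy(100, 40))  # False: the error causes a crash
```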
Things like ventilation systems and other kinds of tools require good interaction between sensors and actuators. The AI involves a database that tells it the right position for the thermostat. If the system doesn't know how to turn the thermostat, the result is a catastrophe.
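A minimal sketch of that sensor-to-actuator lookup, with hypothetical temperature ranges and valve positions:

```python
# Hypothetical thermostat database: each row maps a measured
# temperature range to a prescribed valve position.
THERMOSTAT_DB = [
    # (min_temp_c, max_temp_c, valve_position_percent)
    (float("-inf"), 18.0, 100),  # too cold: open fully
    (18.0, 22.0, 50),            # comfortable: hold halfway
    (22.0, float("inf"), 0),     # too warm: close
]

def set_thermostat(sensor_temp_c: float) -> int:
    """Return the valve position the database prescribes."""
    for low, high, position in THERMOSTAT_DB:
        if low <= sensor_temp_c < high:
            return position
    # If no entry matched, the system would not know what to do;
    # this is exactly the failure mode the text warns about.
    raise RuntimeError("no database entry for this reading")

print(set_thermostat(16.5))  # 100: open the valve fully
print(set_thermostat(25.0))  # 0: close the valve
```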
When we want the AI to do something, it requires precise and well-detailed orders. In the same way as dogs, and as other people, the AI requires clear orders that are easy to understand. The commands the AI takes must be literally right, and they must contain enough data that the AI can carry out its mission as it should.
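One way to picture that requirement is a command schema that refuses vague orders; the fields and names below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical command schema. The point is that every parameter the
# machine needs must be stated explicitly before it can act.
@dataclass
class Command:
    action: str        # what to do, e.g. "move"
    target: str        # where or to what, e.g. "warehouse shelf B3"
    speed_mps: float   # how fast, in metres per second
    deadline_s: float  # how much time the mission may take, in seconds

def validate(cmd: Command) -> None:
    """Refuse a command that leaves anything for the machine to guess."""
    if not cmd.action or not cmd.target:
        raise ValueError("ambiguous order: action and target are required")
    if cmd.speed_mps <= 0 or cmd.deadline_s <= 0:
        raise ValueError("incomplete order: speed and deadline are required")

validate(Command("move", "warehouse shelf B3", 1.2, 60.0))  # accepted
try:
    validate(Command("move", "", 0.0, 0.0))
except ValueError as err:
    print(err)  # the vague order is refused instead of guessed at
```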
https://scitechdaily.com/new-research-debunks-ai-doomsday-myths-llms-are-controllable-and-safe/