Can we control the AI anyway?


"An in-depth examination by Dr. Yampolskiy reveals no current proof that AI can be controlled safely, leading to a call for a halt in AI development until safety can be assured. His upcoming book discusses the existential risks and the critical need for enhanced AI safety measures. Credit: SciTechDaily.com" (ScitechDaily, Risk of Existential Catastrophe: There Is No Proof That AI Can Be Controlled)


Lab-trained AI makes mistakes, and the reason lies in the data used in the laboratory. In a laboratory environment, everything is well-documented and cleaned. In the real world, dirt, lighting, and other conditions are far less controlled than in a laboratory.

We face the same problem with humans when they are trained in schools. Schools are like laboratory environments: there is no hurry, there is always space around the work, everything is dry, and there are no outsiders or bystanders in danger.

When a person goes to real work, there are always time limits, and outdoors there is icy ground, slippery surfaces, and other obstacles. The lab environment is simply different from real-life situations, and the same thing that makes humans make mistakes also causes the AI's mistakes.

AI works best when everything is well-documented. When AI uses pre-processed datasets, highly trained professionals have analyzed and sorted the data beforehand. But when the AI collects data from the open net or from sensors, the data it gets is not sterile: the dataset is not well-documented, and there is a much larger data mass that the AI must sift through when it selects data for its solutions.
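A minimal sketch of that difference, with invented record fields and thresholds: a curated lab dataset can be used as-is, while a raw sensor stream has to be validated and filtered before the AI can select data from it.

```python
# A sketch of why raw field data needs more work than a curated lab dataset.
# The record fields and thresholds here are hypothetical.

def is_valid(record: dict) -> bool:
    """Reject records that a curated lab dataset would never contain."""
    if record.get("value") is None:          # missing measurement
        return False
    if not (0.0 <= record["value"] <= 1.0):  # sensor glitch, out of range
        return False
    if record.get("label") is None:          # undocumented sample
        return False
    return True

lab_dataset = [{"value": 0.42, "label": "clean"}]          # already curated
raw_stream  = [{"value": 0.42, "label": "dirty"},
               {"value": 7.3,  "label": None},             # glitch
               {"value": None, "label": "dusty"}]          # dropout

usable = [r for r in raw_stream if is_valid(r)]
print(f"{len(usable)} of {len(raw_stream)} raw records survived cleaning")
```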

What is creative AI? Creative AI doesn't create information from nowhere. It sorts existing data into a new order, or it reconnects different data sources. That is what makes it a so-called learning or cognitive tool.
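As a toy illustration (both data sources are invented), "creative" output can be nothing more than reconnecting and reordering existing records:

```python
# Toy illustration: "creative" output as recombination of existing data.
# The two source dictionaries are invented for the example.

weather = {"Helsinki": "icy", "Oslo": "clear"}
traffic = {"Helsinki": "heavy", "Oslo": "light"}

# Reconnecting two data sources produces a "new" combined view,
# but no fact in it was created from nothing.
combined = {city: (weather[city], traffic[city]) for city in weather}
for city, (w, t) in sorted(combined.items()):
    print(f"{city}: weather={w}, traffic={t}")
```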

In machine learning, the cognitive AI connects data from sensors to static datasets, and then the tool builds new models or action profiles by following certain parameters. The system stores the best results in its database, and those results become the new model for the operation.
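A sketch of that loop, with a hypothetical scoring rule and invented values: a sensor reading is combined with static data, candidate action profiles are scored against preset parameters, and the best result is kept as the new model.

```python
import random

# Sketch of the loop described above: combine a sensor reading with a
# static dataset, score candidate action profiles against preset
# parameters, and store the best result as the new operating model.
# The scoring rule and all values are hypothetical.

STATIC_DATA = {"speed_limit": 1.0}       # fixed, pre-loaded knowledge

def score(profile: float, reading: float) -> float:
    """Closer to the sensor reading is better; never above the static limit."""
    if profile > STATIC_DATA["speed_limit"]:
        return float("-inf")
    return -abs(profile - reading)

reading = random.uniform(0.0, 2.0)       # stand-in for a live sensor value

best_profile, best_score = None, float("-inf")
for _ in range(100):                     # generate candidate action profiles
    candidate = random.uniform(0.0, 2.0)
    s = score(candidate, reading)
    if s > best_score:                   # keep only the best result
        best_profile, best_score = candidate, s

print(f"stored model: {best_profile:.3f}")  # the "new model" kept in the database
```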

Fuzzy logic means that the program contains some static points, and the system then gets its variables from sensors. At an airfield, things like runways and taxi routes are static data, while aircraft, ground vehicles, and their positions are the variables.

The system detects whether there is a dangerous situation on a landing route, and then it simply orders the other planes to positions that programmers preset for the system. The idea behind this kind of so-called pseudo-intelligence is that only a certain number of airplanes fit in a holding pattern, and that pattern has multiple layers.
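A sketch of that pseudo-intelligence, with invented runways, callsigns, and altitudes: the airfield layout is static data, the aircraft positions are variables, and excess planes on a blocked landing route are sent to preset layers of the holding stack.

```python
# Sketch of the airfield logic: static map data plus variable aircraft
# positions, with preset holding-pattern layers. All values are invented.

RUNWAYS = {"04L": "clear"}                    # static data
STACK_LAYERS = [3000, 4000, 5000, 6000]       # preset holding altitudes (ft)

aircraft_positions = {"AY123": "final 04L",   # variables from sensors
                      "DL456": "final 04L"}

def assign_holding(inbound: list[str]) -> dict[str, int]:
    """Send excess aircraft to the next free layer of the stack."""
    if len(inbound) > len(STACK_LAYERS):
        raise RuntimeError("stack full: more planes than preset layers")
    return {plane: alt for plane, alt in zip(inbound, STACK_LAYERS)}

# Two planes on the same landing route is the "dangerous situation":
on_final = [p for p, pos in aircraft_positions.items() if pos == "final 04L"]
if len(on_final) > 1:
    holding = assign_holding(on_final[1:])    # first plane keeps landing
    print("hold:", holding)
```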


"A study reveals AI’s struggle with tissue contamination in medical diagnostics, a problem easily managed by human pathologists, underscoring the importance of human expertise in healthcare despite advancements in AI technology." (ScitechDaily, A Reality Check – When Lab-Trained AI Meets the Real World, “Mistakes Can Happen”)



In an emergency, the other aircraft dodge the plane that has problems. In that situation, there are sending and receiving holding patterns.


Certain points determine whether it is safer to continue the landing or pull up. In an emergency, the idea is that the other aircraft turns sideways, and when it moves into another holding pattern, all planes in that pattern pull up or turn away from the incoming aircraft in the same way if they are at the same level or in the same risk position as the dodging aircraft.

Because all aircraft turn together like ballet dancers, the possibility that planes fly into each other is minimized. The holding pattern that receives the other planes moves them upward in order, so that the highest aircraft pulls up first. This logic minimizes sideways movements and removes the possibility that some plane ends up on a collision course from above.
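A sketch of that ordering rule, with invented callsigns and altitudes: the receiving stack climbs from the top down, so no aircraft ever has traffic descending onto it.

```python
# Sketch of the dodge ordering: in the receiving holding pattern the
# highest aircraft pulls up first, so nothing descends onto a climber.
# Callsigns and altitudes are invented.

stack = {"AY1": 3000, "DL2": 5000, "BA3": 4000}   # plane -> altitude (ft)
CLIMB_STEP = 1000

# Highest first: process the stack in descending altitude order.
for plane in sorted(stack, key=stack.get, reverse=True):
    stack[plane] += CLIMB_STEP
    print(f"{plane} climbs to {stack[plane]} ft")
```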

So can we ever control the AI? The AI itself can live on multiple servers all around the world. That is called non-centralized data processing. In a non-centralized model, the data that makes up the AI is in multiple locations, and those pieces connect to each other into a whole by using certain markers. The non-centralized data processing model is taken from the internet and ARPANET.

The system involves multiple central computers or servers in different locations. That protects the system against local damage and guarantees its ability to operate even during a nuclear attack. But that kind of system is vulnerable to computer viruses. The problem is that shutting down one server will not end the AI's task: the AI can write itself into the RAM of other computers and devices.
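A toy model of that resilience, with invented node names: the task is replicated on every node, so shutting one server down does not end it.

```python
# Toy model of non-centralized operation: the task lives on every node,
# so killing one node does not stop it. Node names are invented.

nodes = {"eu-1": True, "us-1": True, "ap-1": True}   # node -> alive?

def task_alive(nodes: dict[str, bool]) -> bool:
    """The task survives as long as any replica is still running."""
    return any(nodes.values())

nodes["eu-1"] = False                # local damage: one server shut down
print("task still running:", task_alive(nodes))   # -> True
```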

The way the AI interacts is what makes it dangerous. The language model itself is not dangerous, but it creates so-called sub-algorithms that can interact with things like robots. In other words, the language model creates a customized computer program for every situation. When an AI-based antivirus operates, it searches web-scale virus databases and then creates algorithms that destroy the virus.
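A heavily simplified sketch of that idea (the signature database and removal steps are invented): the system looks a threat up and emits a small, situation-specific routine.

```python
# Simplified sketch of a per-situation "sub-algorithm": look a threat up
# in a signature database and generate a removal routine for it.
# The database contents and actions are invented.

SIGNATURE_DB = {"worm.x": ["kill process", "delete dropper", "patch service"]}

def build_removal_routine(threat: str):
    """Return a customized step list for this specific threat."""
    steps = SIGNATURE_DB.get(threat)
    if steps is None:
        return None                      # unknown threat: escalate to a human
    def routine():
        for step in steps:
            print(f"{threat}: {step}")   # stand-in for the real action
    return routine

routine = build_removal_routine("worm.x")
if routine:
    routine()
```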

The problem is that the AI makes mistakes. If its observation tools are not what they should be, the result can be a destructive process. The most problematic thing about AI is that it is superior in weapons control. The purpose of weapons in war is to destroy enemies, so the AI that controls them must remain under the control of friendly forces while the opponent must not get access to that tool.

Creative AI can make unpredicted movements, and that makes it dangerous. Using creative AI in things like cruise missiles and other equipment helps them reach their targets, but there are also risks. The "Orca" is the first publicly known large-scale AUV (autonomous underwater vehicle). That uncrewed submarine can perform the same missions as manned submarines.

There is the possibility that in a crisis the AUV overreacts to some threat. The system can interpret things like sea animals or magma eruptions as an attack, and then the submarine attacks its targets. The system works like this: when the international situation grows tighter, the submarine moves into a "yellow" state. That means it will make counter-attacks, and in that state the system can attack unknown vehicles.
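A sketch of that escalation risk, with an invented classifier and thresholds: once the readiness state is "yellow", a misread contact such as a whale is enough to trigger a counter-attack.

```python
# Sketch of the escalation risk: a readiness state machine where a
# misclassified contact triggers a counter-attack in the "yellow" state.
# The classifier and thresholds are invented.

readiness = "yellow"                       # set when the situation tightens

def classify(contact: dict) -> str:
    # A crude classifier: loud and fast looks like an attack,
    # even if the contact is really a whale or a magma eruption.
    return "attack" if contact["noise"] > 0.8 and contact["speed"] > 10 else "benign"

def decide(contact: dict) -> str:
    if readiness == "yellow" and classify(contact) == "attack":
        return "counter-attack"            # no human in the loop here
    return "track"

whale = {"noise": 0.9, "speed": 12}        # biological contact, misread
print(decide(whale))                       # -> "counter-attack"
```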


https://scitechdaily.com/a-reality-check-when-lab-trained-ai-meets-the-real-world-mistakes-can-happen/


https://scitechdaily.com/risk-of-existential-catastrophe-there-is-no-proof-that-ai-can-be-controlled/
