"An in-depth examination by Dr. Yampolskiy reveals no current proof that AI can be controlled safely, leading to a call for a halt in AI development until safety can be assured. His upcoming book discusses the existential risks and the critical need for enhanced AI safety measures. Credit: SciTechDaily.com" (ScitechDaily, Risk of Existential Catastrophe: There Is No Proof That AI Can Be Controlled)
Lab-trained AI makes mistakes. The reason for those mistakes lies in the data used in the laboratory. In a laboratory environment, everything is well documented and clean. In the real world, dirt and lighting conditions are far less controlled than in laboratories.
We face the same problem with humans when they are trained only in certain schools. Those schools are like laboratory environments: there is no hurry, there is always space around the work, and everything is dry. There are no outsiders, and nobody is in danger.
When a person goes to real work, there are always time limits, and especially outdoors there is icy ground, slippery surfaces, and other obstacles. So the lab environment is different from real-life situations, and the same things that make humans make mistakes also cause the AI's mistakes.
The AI works best when everything is well documented. When AI uses pre-processed datasets, highly trained professionals have already analyzed and sorted the data. But when the AI collects data from the open net or from sensors, the data it gets is not so-called sterile. The dataset is not well documented, and there is a much larger data mass that the AI must sift through when it selects data for its solutions.
What is creative AI? Creative AI doesn't create information from nowhere. It sorts existing data into a new order, or it reconnects different data sources. That is what makes it a so-called learning or cognitive tool.
In machine learning, the cognitive AI connects data from sensors to static datasets, and then that tool builds new models or action profiles by following certain parameters. The system stores the best results in its database, and those become the new model for the operation.
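As a minimal sketch of that loop, the fragment below (hypothetical names, with simple scoring instead of a real learning algorithm) connects a stream of sensor readings to a static reference dataset, scores candidate action profiles against preset parameters, and stores the best-scoring one as the new operating model.

```python
# Hypothetical sketch: sensor data + static dataset -> candidate profiles -> keep the best.
from dataclasses import dataclass

@dataclass
class ActionProfile:
    name: str
    threshold: float          # parameter the profile follows
    score: float = 0.0        # how well it matched the reference data

STATIC_DATASET = {"nominal_temp": 20.0, "max_temp": 35.0}   # lab-documented reference values

def score_profile(profile: ActionProfile, sensor_readings: list[float]) -> float:
    """Score a profile by how many readings it classifies the same way as the static reference."""
    correct = 0
    for reading in sensor_readings:
        predicted_alarm = reading > profile.threshold
        reference_alarm = reading > STATIC_DATASET["max_temp"]
        correct += (predicted_alarm == reference_alarm)
    return correct / len(sensor_readings)

def update_model(candidates: list[ActionProfile], sensor_readings: list[float]) -> ActionProfile:
    """Evaluate all candidate profiles and store the best one as the new model."""
    for profile in candidates:
        profile.score = score_profile(profile, sensor_readings)
    best = max(candidates, key=lambda p: p.score)
    model_database.append(best)       # "stores the best results in its database"
    return best

model_database: list[ActionProfile] = []
readings = [18.2, 33.9, 36.1, 40.5, 22.0]
candidates = [ActionProfile("cautious", 30.0), ActionProfile("strict", 35.0)]
print(update_model(candidates, readings))
```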
Fuzzy logic means that the program contains some static points, and the system then gets its variables from sensors. At airfields, things like runways and taxi routes are static data, while aircraft, ground vehicles, and their positions are variables.
The system checks whether there is a dangerous situation on some landing route, and then it simply orders the other planes to positions that the programmers preset for the system. The idea of this kind of so-called pseudo-intelligence is that a certain number of airplanes fit into a waiting pattern, and that pattern has multiple layers, as the sketch below illustrates.
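A minimal sketch of that preset logic might look like the following (the layer count, altitudes, and function names are illustrative assumptions, not any real air-traffic system): the runway state stays fixed as static data, aircraft arrive as variables, and incoming planes are pushed into numbered layers of a holding stack with a fixed capacity.

```python
# Hypothetical sketch: static airfield data + variable aircraft -> preset holding-stack layers.
HOLDING_LAYERS = [3000, 4000, 5000, 6000]   # preset layer altitudes in feet (assumption)

def assign_to_holding(stack: dict[int, str | None], aircraft_id: str) -> int | None:
    """Put an aircraft into the lowest free layer; return the layer altitude or None if full."""
    for altitude in HOLDING_LAYERS:
        if stack.get(altitude) is None:
            stack[altitude] = aircraft_id
            return altitude
    return None                              # stack is full: the pattern has a fixed capacity

def runway_conflict(runway_occupied: bool, inbound: list[str]) -> bool:
    """Static point (runway state) combined with variable data (inbound traffic)."""
    return runway_occupied and len(inbound) > 0

stack = {alt: None for alt in HOLDING_LAYERS}
inbound = ["AY123", "DL456"]
if runway_conflict(runway_occupied=True, inbound=inbound):
    for plane in inbound:
        print(plane, "->", assign_to_holding(stack, plane))
```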
"A study reveals AI’s struggle with tissue contamination in medical diagnostics, a problem easily managed by human pathologists, underscoring the importance of human expertise in healthcare despite advancements in AI technology." (ScitechDaily, A Reality Check – When Lab-Trained AI Meets the Real World, “Mistakes Can Happen”)
In an emergency, the other aircraft dodge the plane that has problems. In that situation there are sending and receiving waiting patterns.
Certain decision points determine whether it is safer to continue the landing or to pull up. In an emergency, the idea is that the other aircraft turn sideways, and when one moves to another waiting pattern, all planes in that pattern pull up or turn away from the incoming aircraft in the same way if they are at the same level or in the same risk position as the dodging aircraft.
Because all aircraft turn like ballet dancers, this minimizes the possibility that the planes fly into each other. The waiting pattern that receives the other planes moves them upward in order, so that the topmost aircraft pulls up first. This logic minimizes sideways movement and removes the possibility that some plane ends up on a collision course from above. A sketch of that pull-up order follows below.
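Purely as an illustration of the ordering rule described above (all names and altitudes are invented), the receiving stack can be cleared from the top down so that no aircraft climbs into another one above it:

```python
# Hypothetical sketch: clear the receiving holding stack top-down so nobody climbs into traffic above.
def pull_up_order(stack: dict[int, str | None]) -> list[str]:
    """Return the aircraft in the order they should pull up: highest layer first."""
    occupied = [(alt, plane) for alt, plane in stack.items() if plane is not None]
    occupied.sort(key=lambda pair: pair[0], reverse=True)   # topmost aircraft climbs first
    return [plane for _, plane in occupied]

receiving_stack = {3000: "AY123", 4000: None, 5000: "DL456", 6000: "BA789"}
print(pull_up_order(receiving_stack))   # ['BA789', 'DL456', 'AY123']
```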
So can we ever control the AI? The AI itself can run on multiple servers all around the world. That is called decentralized, or non-centralized, data processing. In a decentralized model, the data that makes up the AI sits in multiple locations, and those pieces connect to each other into a whole by using certain markers. The decentralized data processing model is taken from the internet and ARPANET.
The system involves multiple central computers or servers in different locations. That protects the system against local damage and guarantees its operational ability even under a nuclear attack. But that kind of system is vulnerable to computer viruses. The problem is that shutting down one server will not end the AI's task. The AI can write itself into the RAM of computers and other devices.
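A toy illustration of that decentralized idea (the node names and marker scheme are invented for this sketch): the model is split into pieces, each piece carries a marker telling where it belongs, and the whole can be reassembled from whichever nodes are still reachable, so losing one server does not stop the task.

```python
# Hypothetical sketch: model pieces spread over nodes, reassembled by their markers.
shards = {
    "node-eu":   {"marker": 0, "payload": "weights-part-0"},
    "node-us":   {"marker": 1, "payload": "weights-part-1"},
    "node-asia": {"marker": 2, "payload": "weights-part-2"},
}

def reassemble(available_nodes: list[str]) -> list[str]:
    """Rebuild the model from whatever nodes still respond, ordered by marker."""
    pieces = [shards[n] for n in available_nodes if n in shards]
    pieces.sort(key=lambda p: p["marker"])
    return [p["payload"] for p in pieces]

# Shutting down one server does not end the task: the rest still reassemble.
print(reassemble(["node-us", "node-asia"]))   # ['weights-part-1', 'weights-part-2']
```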
The way the AI interacts is what makes it dangerous. The language model itself is not dangerous, but it creates so-called sub-algorithms that can interact with things like robots. So the language model creates a customized computer program for every situation. When an AI-based antivirus operates, it searches WWW-scale virus databases and then creates algorithms that destroy the virus.
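As a heavily simplified sketch of that "sub-algorithm per situation" idea (the database, signatures, and removal actions below are all made up, and no real antivirus works this simply), a controller looks up what it found and emits a small, situation-specific routine:

```python
# Hypothetical sketch: a controller builds a small, situation-specific routine per detected threat.
from typing import Callable, Optional

VIRUS_DATABASE = {                       # stand-in for a "WWW-scale" signature database
    "worm.alpha":  {"kill_process": "alpha.exe", "delete_file": "/tmp/alpha"},
    "trojan.beta": {"kill_process": "beta.bin",  "delete_file": "/tmp/beta"},
}

def build_removal_routine(signature: str) -> Optional[Callable[[], list[str]]]:
    """Generate a customized 'sub-algorithm' for the detected signature, or None if unknown."""
    entry = VIRUS_DATABASE.get(signature)
    if entry is None:
        return None                      # unknown threat: nothing reliable can be generated

    def routine() -> list[str]:
        # In this toy model the routine only reports the actions it would take.
        return [f"kill {entry['kill_process']}", f"delete {entry['delete_file']}"]

    return routine

routine = build_removal_routine("worm.alpha")
print(routine() if routine else "no routine generated")
```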
The problem is that the AI makes mistakes. If the observation tools are not what they should be, that can trigger a destructive process. The most problematic thing about AI is that it is superior in weapon control. A weapon's purpose in war is to destroy the enemy, and the AI that controls weapons must stay under the control of friendly forces. The opponent must not get access to that tool.
Creative AI can make unpredicted moves, and that makes it dangerous. The use of creative AI in things like cruise missiles and other equipment helps them reach their targets, but there are also risks. The "Orca" is the first publicly known large-scale AUV (autonomous underwater vehicle). That crewless submarine can perform the same missions as manned submarines.
There is the possibility that in a crisis the AUV overreacts to some threat. The system can interpret things like sea animals or magma eruptions as an attack, and then the submarine attacks its targets. The system works like this: when the international situation gets tighter, the submarine switches into the "yellow" state. That means it will make counter-attacks, and then the system can attack unknown vehicles.
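A minimal sketch of that escalation logic (the states, thresholds, and classifier below are assumptions invented for illustration, not anything published about the Orca) shows where a misread sensor contact turns into a counter-attack:

```python
# Hypothetical sketch: readiness state plus a noisy classifier decides whether the AUV engages.
from enum import Enum

class Readiness(Enum):
    GREEN = 0    # peacetime: observe only
    YELLOW = 1   # tense situation: counter-attack on perceived attacks
    RED = 2      # open conflict

def classify_contact(signature: str) -> str:
    """Toy classifier: anything unknown is called an 'attack' -- the dangerous mistake."""
    known_harmless = {"whale", "magma_eruption"}
    return "harmless" if signature in known_harmless else "attack"

def decide(readiness: Readiness, signature: str) -> str:
    contact = classify_contact(signature)
    if readiness is Readiness.GREEN or contact == "harmless":
        return "track only"
    return "counter-attack"              # in YELLOW or RED, a misclassified contact gets engaged

# A sensor glitch reports a whale as an unknown loud contact while the state is YELLOW:
print(decide(Readiness.YELLOW, "unknown_loud_contact"))   # counter-attack
```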
https://scitechdaily.com/a-reality-check-when-lab-trained-ai-meets-the-real-world-mistakes-can-happen/
https://scitechdaily.com/risk-of-existential-catastrophe-there-is-no-proof-that-ai-can-be-controlled/