
Can we control the AI anyway?


"An in-depth examination by Dr. Yampolskiy reveals no current proof that AI can be controlled safely, leading to a call for a halt in AI development until safety can be assured. His upcoming book discusses the existential risks and the critical need for enhanced AI safety measures. Credit: SciTechDaily.com" (ScitechDaily, Risk of Existential Catastrophe: There Is No Proof That AI Can Be Controlled)


Lab-trained AI makes mistakes, and the reason lies in the data used in the laboratory. In a laboratory environment, everything is well documented and cleaned. In the real world, dirt, lighting, and other conditions are far less controlled than in a laboratory.

We face the same problem with humans when they are trained in schools. A school is like a laboratory environment: there is no hurry, there is always space around the work, everything is dry, and there are no outsiders or bystanders in danger.

When a person goes to real work, there are always time limits, and outdoors there may be icy ground, slippery surfaces, and other obstacles. The lab environment is simply different from real-life situations, and the same gap that makes humans make mistakes causes the AI's mistakes.

The AI works best when everything is well documented. Pre-processed datasets have been analyzed and sorted by highly trained professionals. But when the AI pulls data from the open net or from sensors, what it gets is not so-called sterile: the dataset is poorly documented, and there is a much larger mass of data from which the AI must select what it needs for its solutions.
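
As a rough illustration of that difference, here is a minimal sketch of the kind of validation step a curated lab dataset gets for free but raw net or sensor data does not. All field names and thresholds here are hypothetical.

```python
# Minimal sketch: filtering raw, undocumented sensor records before use.
# All field names and thresholds are hypothetical, for illustration only.

def is_usable(record: dict) -> bool:
    """Keep only records that look complete and physically plausible."""
    required = ("sensor_id", "timestamp", "value")
    if any(key not in record for key in required):
        return False                             # incomplete record
    return -50.0 <= record["value"] <= 50.0      # reject implausible readings

raw_stream = [
    {"sensor_id": "cam-1", "timestamp": 1000, "value": 3.2},
    {"sensor_id": "cam-1", "value": 9.9},                        # missing timestamp
    {"sensor_id": "cam-2", "timestamp": 1001, "value": 999.0},   # implausible value
]

clean = [r for r in raw_stream if is_usable(r)]
print(clean)   # only the first record survives
```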

What is creative AI? Creative AI does not create information from nowhere. It sorts existing data into a new order, or it reconnects different data sources. That is what makes it a so-called learning, or cognitive, tool.
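
A toy way to picture that "reordering and reconnecting" is below. This is purely illustrative and not any real model's internals; every "new" output is just a novel pairing of known pieces.

```python
# Toy illustration: "creative" output as recombination of existing data,
# not creation from nothing. Purely illustrative.
from itertools import product

materials = ["carbon fiber", "titanium", "graphene"]
components = ["wing", "hull", "rotor"]

# Each line is a combination that may never appear in the source data,
# yet contains nothing that was not already in it.
for material, component in product(materials, components):
    print(f"{material} {component}")
```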

In machine learning, a cognitive AI connects data from sensors to static datasets and then builds new models or action profiles by following certain parameters. The system stores the best results in its database, and those become the new model for the operation.
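
A minimal sketch of that select-and-store loop is below. The score() function, the candidate generator, and all numbers are hypothetical placeholders; real pipelines are far more involved.

```python
import random

# Minimal sketch: combine static data with a sensor reading, try candidate
# parameter sets, and store the best-scoring one as the new action model.
# score() and all numbers here are hypothetical placeholders.

STATIC_DATA = {"runway_length_m": 3000}      # fixed, well-documented facts
sensor_reading = {"wind_mps": 7.5}           # variable, from sensors

def score(params: dict) -> float:
    """Hypothetical fitness: prefer an approach speed matched to the wind."""
    target = 70 + sensor_reading["wind_mps"]
    penalty = 0.0 if STATIC_DATA["runway_length_m"] >= 2500 else 10.0
    return -abs(params["approach_speed"] - target) - penalty

best_model, best_score = None, float("-inf")
for _ in range(100):                         # evaluate 100 candidate models
    candidate = {"approach_speed": random.uniform(60, 90)}
    if (s := score(candidate)) > best_score:
        best_score, best_model = s, candidate

database = {"current_action_model": best_model}   # stored as the new model
print(database)
```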

Fuzzy logic means that the program contains some static points, while the system gets its variables from sensors. At an airfield, runways and taxi routes are static data; aircraft, ground vehicles, and their positions are variables.

The system detects whether there is a dangerous situation on a landing route, and then it simply orders the other planes to positions that programmers preset for it. The idea of this kind of so-called pseudo-intelligence is that a certain number of airplanes fit in a holding pattern, and that pattern has multiple layers.
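
Here is a sketch of that static-plus-variable split: the field geometry and pattern capacity are hard-coded, aircraft positions arrive as variables, and surplus traffic is stacked into preset layers. All coordinates, callsigns, and limits are made up.

```python
# Sketch: static airfield data vs. variable traffic, with a layered holding
# pattern of fixed capacity. All coordinates and limits are made up.
RUNWAY_THRESHOLD = (0.0, 0.0)   # static: a fixed point on the field
HOLDING_LAYERS = 3              # static: the pattern fits three aircraft

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def sequence(aircraft):
    """Clear the closest plane to land; stack the rest into preset layers."""
    ordered = sorted(aircraft.items(),
                     key=lambda kv: distance(kv[1], RUNWAY_THRESHOLD))
    cleared, rest = ordered[0][0], [name for name, _ in ordered[1:]]
    holding = rest[:HOLDING_LAYERS]          # preset positions, layer by layer
    diverted = rest[HOLDING_LAYERS:]         # the pattern is full
    return cleared, holding, diverted

traffic = {"AY123": (3.0, 4.0), "DL456": (10.0, 2.0), "BA789": (6.0, 8.0)}
print(sequence(traffic))   # ('AY123', ['BA789', 'DL456'], [])
```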


"A study reveals AI’s struggle with tissue contamination in medical diagnostics, a problem easily managed by human pathologists, underscoring the importance of human expertise in healthcare despite advancements in AI technology." (ScitechDaily, A Reality Check – When Lab-Trained AI Meets the Real World, “Mistakes Can Happen”)



In an emergency, the other aircraft dodge the plane that has problems. In that situation, there are sending and receiving holding patterns.


Certain points determine whether it is safer to continue the landing or to pull up. In an emergency, the idea is that the other aircraft turns sideways, and as it moves to another holding pattern, all planes in that pattern pull up or turn away from the incoming aircraft in the same way if they are at the same level or in the same risk position as the dodging aircraft.

Because all aircraft turn together like ballet dancers, the chance that planes fly toward each other is minimized. The holding pattern that receives the other planes transfers them upward in order: the topmost aircraft pulls up first. This logic minimizes sideways movement and rules out the possibility that a plane ends up on a collision course from above.
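
In code terms, that "topmost first" rule is just an ordering over altitude layers. A toy sketch, with invented callsigns and altitudes:

```python
# Toy sketch of the "topmost aircraft pulls up first" rule: planes in the
# receiving pattern climb in descending altitude order, so nothing descends
# onto traffic from above. Callsigns and altitudes are invented.

pattern = [("LH100", 5000), ("AF200", 4000), ("KL300", 3000)]  # (callsign, ft)

# Sort so the highest plane acts first; each climbs one layer (1000 ft).
for callsign, altitude in sorted(pattern, key=lambda p: p[1], reverse=True):
    print(f"{callsign}: climb {altitude} -> {altitude + 1000} ft")
```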

So can we ever control the AI? The AI itself can live on multiple servers all around the world. This is called decentralized data processing. In a decentralized model, the data that makes up the AI sits in multiple locations, and those pieces are linked into a whole by certain identifying marks. The decentralized data processing model is inherited from the internet and ARPANET.

The system involves multiple central computers or servers in different locations. That protects the system against local damage and guarantees its ability to operate even through a nuclear attack. But that kind of system is vulnerable to computer viruses. The problem is that shutting down one server will not end the AI's task, and the AI can write itself into the RAM of other computers and devices.
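
A minimal sketch of that resilience, with invented shard IDs: the model is split into tagged pieces replicated across several nodes, so losing one node does not remove the pieces held elsewhere.

```python
# Minimal sketch: a model split into tagged shards replicated across nodes.
# Losing one node leaves enough replicas to reassemble. IDs are invented.

nodes = {
    "eu-1": {"shard-a": "weights-a", "shard-b": "weights-b"},
    "us-1": {"shard-b": "weights-b", "shard-c": "weights-c"},
    "ap-1": {"shard-c": "weights-c", "shard-a": "weights-a"},
}

def can_reassemble(nodes, needed=("shard-a", "shard-b", "shard-c")):
    found = {}
    for shards in nodes.values():
        found.update(shards)          # collect whatever each node still holds
    return all(s in found for s in needed)

print(can_reassemble(nodes))          # True: all shards reachable
del nodes["eu-1"]                     # one server shut down
print(can_reassemble(nodes))          # still True: the replicas survive
```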

The way the AI interacts makes it dangerous. The language model itself is not dangerous, but it creates so-called sub-algorithms that can interact with things like robots. In effect, the language model creates a customized computer program for every situation. When an AI-based antivirus operates, it searches web-scale virus databases and then creates algorithms that destroy the virus.
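
A heavily simplified sketch of that lookup-then-act pattern follows. The signature database and the quarantine routine are hypothetical; no real antivirus works this simply.

```python
import hashlib

# Heavily simplified sketch: look a file's hash up in a signature database,
# then produce a remediation routine for the match. The database and the
# action are hypothetical; real AV engines are vastly more complex.

SIGNATURE_DB = {
    # SHA-256 of an empty payload, used here as a harmless demo signature.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855":
        "empty-file-demo",
}

def scan(payload: bytes):
    digest = hashlib.sha256(payload).hexdigest()
    return SIGNATURE_DB.get(digest)          # None means no known signature

def remediate(name: str) -> str:
    return f"quarantine routine generated for '{name}'"

match = scan(b"")                            # empty payload hits the demo entry
print(remediate(match) if match else "clean")
```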

The problem is that the AI makes mistakes. If its observation tools are not what they should be, the result can be a destructive process. The most problematic thing about AI is that it is superior at weapon control. A weapon's purpose in war is to destroy the enemy, so the AI that controls weapons must remain under the control of friendly forces while the opponent must have no access to it.

Creative AI can make unpredicted movements, and that makes it dangerous. Using creative AI in things like cruise missiles and other equipment helps them reach their targets, but there are also risks. The "Orca" is the first public large-scale AUV (Autonomous Underwater Vehicle); that small submarine can perform the same missions as manned submarines.

There is the possibility that in a crisis the AUV overreacts to some threat. The system could interpret things like sea animals or magma eruptions as an attack, and then the submarine attacks its targets. The system works like this: when the international situation gets tighter, the submarine shifts into a "yellow" state. That means it will make counter-attacks, and in that state the system can attack unknown vehicles.
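
A toy state machine makes that false-positive risk concrete. The states, thresholds, and the "yellow" rule are invented for illustration only.

```python
# Toy escalation state machine: in the "yellow" state the system may engage
# unknown contacts on a noisy classifier score, so a whale or a magma
# eruption can cross the threshold. States and numbers are invented.

THRESHOLDS = {"green": 0.99, "yellow": 0.7}   # lower bar when tensions rise

def decide(state: str, contact: str, threat_score: float) -> str:
    if threat_score >= THRESHOLDS[state]:
        return f"ENGAGE {contact} (score {threat_score:.2f} in {state})"
    return f"track {contact}"

print(decide("green", "unknown-contact", 0.8))    # tracked only
print(decide("yellow", "unknown-contact", 0.8))   # engaged: possibly a whale
```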


https://scitechdaily.com/a-reality-check-when-lab-trained-ai-meets-the-real-world-mistakes-can-happen/


https://scitechdaily.com/risk-of-existential-catastrophe-there-is-no-proof-that-ai-can-be-controlled/
