The creature doesn't need consciousness or intelligence to be dangerous. But when we create learning AI, we must realize that those systems can learn skills that make them unpredictable.
The other thing is biological computers: microchips that communicate with living neurons. Those systems could someday be more intelligent than we are. And when we build them, we are creating another intelligent creature that may surpass us.
In some sci-fi books, the artificial intelligence rips itself free of control and turns against humanity. In one version, an alien spacecraft crash-lands and its crew dies. The robot's mission is to protect the crew from attacks by the planet's native creatures.
There is a vision in which some kind of earthquake suddenly destroys a nuclear command center. A militarized AI interprets that event as a deliberate attack, and then it launches a counterstrike against some other nation.
Learning machines are dangerous if they have the tools to do harm. If the purpose of a learning machine is to lead an army, it is always dangerous. The ultimate example is a robot that controls killer robots on the battlefield. In those cases, the system is dangerous because its purpose is to be dangerous.
There is a vision that AI will not rebel. The reasoning is this: AI has no consciousness, and it cannot imagine things. But then we can ask: are insects like wasps and hornets organisms with consciousness? And what kind of consciousness do bacteria have? Those creatures can be dangerous if people get too close to them. And in some visions, an AI could even launch nuclear weapons, if it has access to them, when somebody tries to shut down the computer that runs it.
In that case, the computer that guards the nuclear weapons interprets that action as an attempt to harm the nuclear shield. And if somebody forgets to tell the AI that its hardware is scheduled for maintenance, it can interpret an attempt to shut down its central processing unit as an action by undercover enemy agents.
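The failure mode described above, where a system treats an unannounced shutdown as hostile simply because maintenance was never added to its list of benign events, can be sketched as a toy rule-based guard. This is a minimal illustration of the specification gap, not any real defense system; all names here are hypothetical:

```python
# Toy sketch of the specification gap described above: a guard system
# that treats any event not on an explicit whitelist as hostile.
# All event names are hypothetical, purely for illustration.

KNOWN_BENIGN = {"self_test", "operator_login", "scheduled_backup"}

def classify_event(event: str) -> str:
    """Label an event; anything not whitelisted is treated as hostile."""
    return "benign" if event in KNOWN_BENIGN else "hostile"

# Nobody told the system about hardware maintenance, so a routine
# shutdown request falls outside the whitelist and reads as an attack.
print(classify_event("scheduled_backup"))  # benign
print(classify_event("cpu_shutdown"))      # hostile

# The fix is updating the specification, not the learning algorithm:
KNOWN_BENIGN.add("cpu_shutdown")
print(classify_event("cpu_shutdown"))      # benign
```

The point of the sketch is that the dangerous behavior comes from an incomplete specification of "benign", not from any malice or consciousness in the machine.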
We can assume that non-organic computers and AI have no consciousness. But they can still react in devastating ways. That means computers can have reflexes that make them dangerous.
The biological computer is always a brain in a vat.
If we someday create a biological computer with a cloned brain, we face a situation where we have created a creature that is more intelligent than we are. We can already create mini-brains using cloned neurons.
There are visions of computers that are more intelligent than humans, and one of those systems is the biological computer. The biocomputer can be a brain under a glass dome. Regular computers translate its EEG so that binary and quantum systems can cooperate with those brains, which we can call "thinking units".
A biological computer connected to a quantum computer would be the most powerful computer, or data-handling tool, in the world. The system relies on a binary system, and it can remote-control robots that clean the base and serve the system.
But the situation is different with biological computers. Biological computers are all some kind of brain in a vat. Biological microchips are hybrid tools: regular microchips connected with living neural tissue have their own will. Those neurons act like all other neurons, and they form a brain that defends itself.
Consciousness makes a creature support its species. Things like mini-brains make it possible that in the distant future there could be computer centers where living brains sit under glass domes. Those brains are connected to life-support systems, and that kind of biological computer can also control remote-controlled robots.
Each of those brains could be a cloned human brain, and they could be as intelligent as humans. The problem is that we cannot control such a system very well. Those "thinking units" might operate through robots that bring nutrients to the system. This kind of system could be extremely dangerous if it perceives some kind of threat.
https://www.helsinki.fi/en/hilife-helsinki-institute-life-science/news/development-human-derived-mini-brain-close-completion-new-technical-solution-promotes-treatment-brain-diseases-0
https://scitechdaily.com/not-science-fiction-anymore-what-happens-when-machine-learning-goes-too-far/
https://en.wikipedia.org/wiki/Brain_in_a_vat
https://learningmachines9.wordpress.com/2024/02/09/can-machine-learning-turn-things-dangerous/