
New artificial intelligence learns by using the "cause and effect" methodology.



Image I


The cause-and-effect methodology means that the AI tests the models stored in its memory against the case at hand. When a model fits a case the AI must solve, the AI applies that model to other similar cases. In that way, the AI finds a suitable solution for the things it must solve and selects the most suitable way to act. The most beneficial action is the one in which the system uses the minimum force needed to reach the goal.
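A minimal sketch of this selection idea in Python follows. All names here (CandidateModel, fits, estimated_effort) are hypothetical illustrations, not from the article or any real library; the point is only to show "test stored models, keep the ones that fit, pick the least-effort one."

```python
# A sketch of the "cause and effect" selection idea: match stored models
# against the current case and pick the fitting one with minimum effort.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class CandidateModel:
    name: str
    fits: Callable[[dict], bool]   # does this stored model match the case?
    estimated_effort: float        # rough "force" or cost of applying it


def choose_action(case: dict, memory: list[CandidateModel]) -> Optional[CandidateModel]:
    """Test the stored models against the case and pick the one that
    fits while requiring the least effort (the minimum-force rule)."""
    fitting = [m for m in memory if m.fits(case)]
    if not fitting:
        return None                # no stored cause-effect model applies
    return min(fitting, key=lambda m: m.estimated_effort)


# Usage: a door-opening case matched against two stored models.
memory = [
    CandidateModel("pull_handle", lambda c: c["has_handle"], estimated_effort=1.0),
    CandidateModel("push_door",   lambda c: True,            estimated_effort=1.5),
]
print(choose_action({"has_handle": True}, memory).name)   # -> pull_handle
```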

The "cause and effect method" in the case that the AI-controlled robot will open the door might be that the first robot is searching marks about things that help to determine which way the door is opening. Then the robot first just pulls the door and turns the handle. Then the robot tries the same thing but it pushes the door. Then the robot can note that the door is locked and find another way to get in. 

But if the robot must get in, it might use a tiered, or circular, programming architecture. If the robot cannot open the door using the methods found on the first level, it steps to the next level and uses more force: it may try to force the lock, kick the door in, or break the door in some other way. The idea is that the robot always uses minimum force. The problem is how to determine what the robot is allowed to do in the situations where it faces a locked door.
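A sketch of that escalation loop, under the assumption that forceful actions are only permitted when entry is mandatory; the action names and the must_get_in flag are hypothetical:

```python
# Tiered "use more force only when gentler methods fail" loop.

def try_action(action: str) -> bool:
    """Placeholder for the robot actually attempting the action."""
    print(f"trying: {action}")
    return False  # pretend every attempt fails, so the loop escalates


# Levels ordered from least to most force; the robot only climbs a level
# when everything on the current level has failed.
levels = [
    ["turn handle and pull", "turn handle and push"],   # level 0: normal use
    ["search for another unlocked entrance"],           # level 1: alternatives
    ["force the lock", "kick the door in"],             # level 2: breaking in
]


def open_door(must_get_in: bool) -> bool:
    for level, actions in enumerate(levels):
        # Forceful levels are only allowed when the mission requires entry.
        if level >= 2 and not must_get_in:
            return False
        if any(try_action(a) for a in actions):
            return True
    return False


open_door(must_get_in=True)
```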

There are cases where the cause-and-effect methodology is not suitable. A robot operating alone on ice cannot safely test the strength of the ice. But if a group of robots operates under the control of the same AI, which handles them as a single entity, the system can use the cause-and-effect methodology.

One possibility is that the artificial intelligence is located in a computer center and operates radio-controlled cars through a remote-control link. The moving robots are then dummies that work under the control of the central computer. It is possible that this kind of robot system will someday be sent to another planet.




Image II: 


The model for large robot groups is taken from ants. The ants are the moving robots, and the anthill is the central computer of the entity.

The cause-and-effect methodology would suit groups of simple robots operating under the same AI. Those cheap and simple moving robots are easy to replace if they are damaged. The AI that operates those sub-robots can sit in the computer center and control them over an ordinary data link, like any other remote-control system.

The supercomputer that runs the AI would be in a separate capsule or on an orbital trajectory, while the simple robot cars operate on the ground. The system might have two stages: in the first stage, the main computer that orbits the planet sends instructions to ground-based computers in the landing capsules, and those capsules then control the robot cars and quadcopters. Keeping the moving robots as simple as possible makes it easy to replace destroyed individuals in the group.
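A minimal sketch of that two-stage command chain: an orbiting main computer sends instructions to landing capsules, and each capsule relays simple commands to its robot cars and quadcopters. The class and method names here are hypothetical illustrations of the architecture, not any real flight software.

```python
# Two-stage command relay: orbiter -> capsules -> simple ground robots.
from dataclasses import dataclass, field


@dataclass
class SimpleRobot:
    robot_id: str

    def execute(self, command: str) -> None:
        print(f"{self.robot_id}: executing '{command}'")


@dataclass
class LandingCapsule:
    capsule_id: str
    robots: list[SimpleRobot] = field(default_factory=list)

    def relay(self, instruction: str) -> None:
        # The capsule turns one high-level instruction into per-robot
        # commands; the robots themselves stay dumb and cheap.
        for robot in self.robots:
            robot.execute(instruction)


@dataclass
class OrbitingMainComputer:
    capsules: list[LandingCapsule]

    def plan_and_dispatch(self, instruction: str) -> None:
        # Stage 1: orbiter -> capsules. Stage 2: capsules -> robots.
        for capsule in self.capsules:
            capsule.relay(instruction)


orbiter = OrbitingMainComputer([
    LandingCapsule("capsule-1", [SimpleRobot("car-1"), SimpleRobot("quad-1")]),
])
orbiter.plan_and_dispatch("survey grid A3")
```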


The AI sends the robots onto a route over the icy terrain, and each robot reports its condition all the time. If the ice breaks under a robot, it can send data about the strength of the ice to its mates. The robots also send information about their location continuously.

The system knows the last position of each robot, and the strength of the ice can be measured from that robot's last images. The system learns to avoid the place where the ice collapsed, and the next robot knows to avoid it. This means the cause-and-effect methodology is suitable for large groups of robots in which the individual robots are not very complicated.
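A sketch of that shared "ice map" idea: every robot reports its position, and when one falls through, its last known position becomes a no-go zone the rest of the group avoids. The grid model and all names here are hypothetical simplifications.

```python
# Shared hazard map: lost robots mark weak ice for the rest of the swarm.
Position = tuple[int, int]


class SwarmController:
    def __init__(self) -> None:
        self.last_known: dict[str, Position] = {}
        self.no_go: set[Position] = set()

    def report_position(self, robot_id: str, pos: Position) -> None:
        self.last_known[robot_id] = pos

    def report_ice_break(self, robot_id: str) -> None:
        # The last reported position of the lost robot marks weak ice.
        if robot_id in self.last_known:
            self.no_go.add(self.last_known[robot_id])

    def is_safe(self, pos: Position) -> bool:
        return pos not in self.no_go


controller = SwarmController()
controller.report_position("robot-7", (12, 4))
controller.report_ice_break("robot-7")        # ice collapsed under robot-7
print(controller.is_safe((12, 4)))            # -> False: others avoid it
```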

Artificial intelligence can operate remote-controlled robots, which means the robots that form the group can be simple. They are closer to remote-control cars than to complicated robots; the central computer that operates the entity is the intelligent part. The reason those robots carry only the necessary sensors is that this keeps them easy to replace, and perhaps robot factories could even build them in the operational area.


https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image I: https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image II: https://upload.wikimedia.org/wikipedia/commons/thumb/1/1d/AntsStitchingLeave.jpg/800px-AntsStitchingLeave.jpg


https://thoughtandmachines.blogspot.com/
