
Can the AI tell lies?



The AI doesn't lie, but it can give false information.


Telling a lie requires knowing that the information is false. That's why computers don't lie, even though they can give false information.

The AI doesn't think, and that means the AI doesn't tell lies. Telling a lie requires knowing that the information is false. The AI can give false information, but it doesn't lie, because it only produces what it is programmed to produce. AI is a tool, and the one who tells lies is the human who uses the AI.

AI is perfect for well-documented sectors. But if the documentation is poor, the AI can produce false answers. The AI doesn't think as we think, and that means it cannot lie as we do. It can give false or inaccurate information if the data it uses is inaccurate or wrong, but telling a lie would require the AI to know that it is lying.

The AI has no awareness similar to a human's. It does nothing outside its programming. In that sense, the AI is like a dog: it is dangerous if it is programmed to act as a guard dog or to control a bodyguard robot. If the AI controls military robots, it is dangerous.

That is because those robots and systems are made to be dangerous. The AI itself is not dangerous if it is not made for that purpose.

And the AI must have tools before it can turn dangerous. When the AI notices that something has happened, it searches its databases for a match. If there is a match, that match activates the action connected to the database entry.
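The match-then-act mechanism described above can be sketched as a simple lookup table. This is only an illustration; the event names and actions here are invented, not taken from any real system:

```python
# Illustrative sketch: a hypothetical event-to-action database.
# A detected event is looked up; a match triggers its connected action.

def sound_alarm():
    return "alarm sounded"

def log_event():
    return "event logged"

# The "database": each known event pattern maps to an action.
ACTION_DATABASE = {
    "intruder detected": sound_alarm,
    "door opened": log_event,
}

def react(observed_event):
    """Search the database for a match; if found, run the connected action."""
    action = ACTION_DATABASE.get(observed_event)
    if action is None:
        return "no match, no action"
    return action()

print(react("intruder detected"))  # -> alarm sounded
print(react("bird landed"))        # -> no match, no action
```

The point the essay makes follows directly from the sketch: the system only ever does what its table connects an event to. It has no opinion about whether the action is right.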

The accuracy and trustworthiness of the AI depend on the quality of the information and datasets it can use. If the data the AI uses is of poor quality or inaccurate, the AI becomes something other than trustworthy. The AI uses sources to create answers, and the quality of those sources determines whether the answers are right or wrong.

The AI cannot think. That makes it easy to cheat the AI by changing a database or an Internet page that the AI uses. If we think that the AI tells lies, we are wrong: the AI just interconnects databases and texts into a new entirety. That means we must have the ability to check the data. If we don't have the training or knowledge to check the data that the AI creates by connecting texts, we cannot see whether that data is wrong.

The AI can give false information if it uses a dataset that involves false information. And datasets are the AI's weak point: if somebody changes the data on an Internet page that the AI uses, the AI can be manipulated through that dataset.
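One common defense against a quietly changed dataset is to record a checksum while the data is known to be good, and refuse to use data that no longer matches it. The snippet below is a minimal sketch of that idea; the dataset contents are invented for the example:

```python
# Illustrative sketch: detect tampering by comparing a dataset's
# SHA-256 hash against a value recorded when the data was trusted.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical dataset, hashed at a moment when it was known good:
trusted_data = b"temperature,reading\n2024-01-01,21.5\n"
KNOWN_GOOD_HASH = sha256_of(trusted_data)

def is_untampered(data: bytes) -> bool:
    """Return True only if the data still matches the recorded hash."""
    return sha256_of(data) == KNOWN_GOOD_HASH

print(is_untampered(trusted_data))                               # True
print(is_untampered(b"temperature,reading\n2024-01-01,99.9\n"))  # False
```

A checksum only proves the data hasn't changed since it was recorded; it says nothing about whether the original data was correct in the first place, which is the essay's deeper point.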

A mistake is not dangerous; an undetected mistake is. Researchers can fix an error if they notice it. Undetected mistakes are dangerous, because undetected mistakes will wind up in the product.

The AI's users must have the ability to analyze the texts that the AI makes. If the user doesn't know anything about the topic that the AI writes about, a mistake can go undetected, and undetected mistakes are dangerous mistakes.

The other thing is this: the accuracy of the AI depends on the dataset that it can use. If we think of the AI as a medical doctor, the AI can see whether there are changes in X-ray images and blood samples. The AI can compile a person's X-ray images from their entire lifetime and then see if something has changed. And the AI can search for abnormal cells in blood samples.
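The lifetime-comparison idea above amounts to flagging a new measurement that deviates strongly from the patient's own history. A toy version, using a simple z-score threshold on invented numbers (real medical screening is far more complex than this):

```python
# Illustrative sketch: flag a new value that deviates strongly
# from a patient's own historical measurements (z-score test).
from statistics import mean, stdev

def flag_abnormal(history, new_value, threshold=3.0):
    """Return True if new_value is a strong outlier against history."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical white-cell counts from earlier checkups:
history = [6.1, 5.9, 6.3, 6.0, 6.2]
print(flag_abnormal(history, 6.1))   # False: consistent with history
print(flag_abnormal(history, 14.0))  # True: a clear change
```

This is exactly the kind of concrete, measurable comparison the essay says the AI is good at, and exactly the kind of judgment-free output that still needs a human to interpret.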

But in psychiatry, the AI is not such a good tool. The AI can detect things like dopamine levels, and that makes it a good tool for analyzing concrete information. It can see whether dopamine levels are high or low, but it cannot estimate a person's psychiatric condition. It can detect abnormal cells or changes in X-ray images, but a human is still the better actor in cases that require discussion.

This is the reason why somebody says that the AI talks bullshit. The AI is the ultimate tool if the user knows the topics that the AI discusses. But if the topic is new to the user, the AI's mistakes go unnoticed. And at this point we might say that all of us make mistakes. The thing that makes a mistake dangerous is that nobody notices it.

So, if we think like this: the mistake itself is not dangerous. The thing that makes mistakes dangerous is that we cannot detect them. The AI is at its best in business when highly trained people make the parameters and datasets that the AI uses for system control. With a fully controlled environment and raw materials, the AI becomes a tool that does in a day what traditional systems do in months.

The AI is the ultimate tool for researchers and for system supervision. The AI is only as trustworthy as the information it gets. If the AI uses limited databases with fully controlled datasets that highly trained professionals created, it can control chemical and temperature environments with ultimate accuracy. And that makes it a tool for next-generation manufacturing processes.
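Controlling a temperature environment from trusted parameters can be sketched as a basic feedback loop. Everything below is invented for illustration: the gain, the toy plant model, and the setpoint are not from any real process, and a real controller would be tuned and safety-certified:

```python
# Illustrative sketch: a minimal proportional (P) control loop
# holding a simulated vessel near a setpoint temperature.

def control_step(temp, setpoint, gain=0.5):
    """Return heating power proportional to the error (setpoint - temp)."""
    return gain * (setpoint - temp)

def simulate(temp=20.0, setpoint=80.0, steps=50):
    for _ in range(steps):
        power = control_step(temp, setpoint)
        temp += power * 0.2          # toy plant: temperature follows power
        temp -= 0.01 * (temp - 20)   # small heat loss to the environment
    return temp

print(round(simulate(), 1))  # climbs from 20 toward the 80-degree setpoint
```

Note that a pure P-controller settles slightly below the setpoint because of the heat loss; the parameters and datasets the essay mentions are exactly the gains, setpoints, and plant models that trained professionals would have to supply.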


https://scitechdaily.com/revolutionizing-lens-design-ai-cuts-months-of-work-down-to-a-single-day/

