
Can the AI tell lies?



The AI doesn't lie. But it can give false information. 


Telling a lie requires knowing that the information is false. That's why computers don't lie. But they can still give false information.

The AI doesn't think, and that means the AI doesn't tell lies. Telling a lie requires knowing that the information is a lie. The AI can give false information, but it doesn't tell lies, because the AI only does what is programmed into it. AI is a tool, and the one who tells lies is the human who uses the AI.

AI is excellent in well-documented fields. But if the documents are poor, the AI can produce false answers. The AI doesn't think as we think, and that means the AI cannot lie as we do. It can give false or inaccurate information if the data it uses is inaccurate or wrong. But telling a lie would require that the AI knows it is lying.

AI has no awareness like humans have. The AI does nothing outside its programming. In that sense, the AI is like a dog: it is dangerous if it is programmed to act as a guard dog or to control some bodyguard robot. If the AI controls military robots, it is dangerous, because those robots and systems are made to be dangerous. The AI itself is not dangerous if it is not made for that purpose.

And the AI must have tools to become dangerous. When the AI notices that something has happened, it searches for a match in its databases. And if there is a match, that match triggers the action connected to the database entry.
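The match-then-act idea above can be sketched in a few lines. This is a minimal hypothetical illustration, not any real system: the event names and actions are invented.

```python
# Hypothetical sketch of the "match and act" idea: an observed event is
# looked up in a table of known patterns, and a match triggers the action
# connected to it. All names and data here are invented for illustration.

ACTIONS = {
    "temperature_high": "open cooling valve",
    "door_opened": "log entry and notify guard",
}

def react(event: str) -> str:
    """Return the action linked to the event, or do nothing on no match."""
    return ACTIONS.get(event, "no action")

print(react("temperature_high"))  # -> open cooling valve
print(react("unknown_event"))     # -> no action
```

The point is that the system has no judgment of its own: whatever action sits in the table is what runs when the pattern matches.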

The accuracy and trustworthiness of the AI depend on the quality of the information and datasets it can use. If the data that the AI uses is low-quality or inaccurate, the AI becomes something other than trustworthy. The AI uses sources to create answers, and the quality of those sources determines whether the answers are right or wrong.

The AI cannot think. That makes it easy to cheat the AI by changing a database or internet page that the AI uses. If we think that the AI tells lies, we are wrong: the AI just interconnects databases and texts into a new entirety. That means we must have the ability to check the data. If we don't have the training or knowledge to check the text that the AI creates by connecting sources, we cannot see whether it is wrong.

The AI can give false information if it uses a dataset that contains false information. And datasets are the AI's weak point: if somebody changes the data on an internet page that the AI uses, the AI can be manipulated through its dataset.
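The dependence on source data can be shown with a minimal hypothetical sketch: the same lookup logic gives a wrong answer the moment its dataset is tampered with. The question and values are invented.

```python
# Hypothetical sketch: identical "AI" logic, two datasets. Changing the
# source data changes the answer, with no lying involved on the AI's part.

def answer(question: str, dataset: dict) -> str:
    """Look the answer up in whatever dataset the system was given."""
    return dataset.get(question, "unknown")

clean = {"boiling point of water (C)": "100"}
poisoned = {**clean, "boiling point of water (C)": "50"}  # tampered copy

print(answer("boiling point of water (C)", clean))     # -> 100
print(answer("boiling point of water (C)", poisoned))  # -> 50
```

The system faithfully reports what its data says, which is exactly why manipulated data is the real danger.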

A mistake is not dangerous. An undetected mistake is dangerous. Researchers can fix errors once they notice a mistake; it is the undetected mistakes that are dangerous, because they wind up in the product.

The AI's users must have the ability to analyze the texts that the AI makes. If the user knows nothing about the topic the AI writes about, a mistake can go undetected. And undetected mistakes are dangerous mistakes.

The other thing is this: the accuracy of the AI depends on the dataset it can use. If we think of the AI as a medical doctor, the AI can see whether there are changes in X-ray images and blood samples. The AI can compile X-ray images from a person's entire lifetime and then see whether something has changed. The AI can search for abnormal cells in blood samples.
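The lifetime-comparison idea can be sketched as a simple change detector over a series of measurements. This is only an illustration of the principle; the values and the threshold are invented, not medical parameters.

```python
# Hypothetical sketch: compare measurements collected over the years and
# flag sudden changes from one record to the next. Values are invented.

def flag_changes(series, threshold=2.0):
    """Return indices where a value jumps more than `threshold` from the previous one."""
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) > threshold]

yearly_measurements = [5.1, 5.0, 5.2, 9.0, 5.1]  # one abnormal spike
print(flag_changes(yearly_measurements))  # -> [3, 4]
```

Detecting *that* something changed is the easy, mechanical part; interpreting *why* it changed is where the human doctor is still needed.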

But in psychiatry, the AI is not such a good tool. The AI can detect concrete things like dopamine levels, and that makes it a good tool for analyzing measurable information. The AI can see whether dopamine levels are high or low, but it cannot estimate a person's psychiatric condition. It can detect abnormal cells or changes in X-ray images, but a human is still the better actor in cases that require discussion.

This is the reason why some people say that the AI talks bullshit. The AI is the ultimate tool if the user knows the topics that the AI should discuss. But if the topic is new to the user, the AI's mistakes pass unnoticed. We might say that all of us make mistakes; the thing that makes a mistake dangerous is that nobody notices it.

So, if we think like this: the mistake itself is not dangerous. The thing that makes a mistake dangerous is that we cannot detect it. The AI is at its best when highly trained people create the parameters and the dataset that the AI uses for system control. Things like a fully controlled environment and controlled raw materials make the AI a tool that does in a day what traditional systems do in months.

The AI is the ultimate tool for researchers and for system supervision. The AI is only as trustworthy as the information it gets. If the AI uses limited databases with fully controlled datasets that highly trained professionals created, it can control chemical and temperature environments with ultimate accuracy. And that makes it a tool for next-generation manufacturing processes.


https://scitechdaily.com/revolutionizing-lens-design-ai-cuts-months-of-work-down-to-a-single-day/

