
How to teach social skills to robots and AI?



One approach is to build a large-scale database structure that stores the different versions of words used in social situations. The programmer then makes the first connections between the databases. After that, the system starts to talk with its creators.
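A minimal sketch of such a structure, assuming a plain Python dictionary keyed by social situation. The situation names and phrases below are invented for illustration, not real data:

```python
# A toy "database" of word variants used in different social situations.
# The situations and phrases here are invented examples, not real data.
greeting_db = {
    "formal_meeting": ["Good morning", "How do you do?"],
    "casual_chat": ["Hi", "Hey, how's it going?"],
    "church": ["Good morning", "Peace be with you"],
}

def phrases_for(situation):
    """Return the word variants connected to a social situation."""
    return greeting_db.get(situation, [])

print(phrases_for("casual_chat"))
```

In a real system the dictionary would be replaced by linked databases, but the lookup idea is the same.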

In theory, teaching social skills to an AI is quite easy: it is just a matter of connecting databases. But social skills are more than saying the right words. They include noticing facial expressions, taking your hat off when entering a church, and other things like that.

But if we want an AI that converses like a real person, that requires a lot of databases and connections between them. If the system is to conduct job interviews, it needs a lot of data. And if somebody says or asks something that is not in the database, the system can ask the programmer to supply an answer by speaking or typing it into the computer.
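That fallback can be sketched as a lookup with a human in the loop. The dialogue entries and prompt text below are invented examples:

```python
# Sketch: answer from the database, or fall back to asking the programmer.
# The dialogue entries here are invented examples.
dialogue_db = {
    "what are your strengths?": "I learn quickly and work well in teams.",
}

def answer(question, ask_human=input):
    """Look up an answer; if none exists, ask a human and remember the reply."""
    key = question.strip().lower()
    if key in dialogue_db:
        return dialogue_db[key]
    # No match: ask the programmer to type the right answer,
    # then store it so the system knows it next time.
    reply = ask_human(f"No answer for {question!r} - please type one: ")
    dialogue_db[key] = reply
    return reply
```

Passing a different `ask_human` callable makes the fallback testable without a live programmer at the keyboard.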

If we want the AI to talk with us about everyday situations, that is genuinely hard to build. We do not normally notice that people rarely use written standard language in everyday speech, so the databases must also understand dialects. An AI with a very large set of skills requires large-scale databases.

The problem with regular computer programs is that they do not use fuzzy logic. In dialogue programs, a kind of fuzzy logic is created by connecting multiple databases to a certain social dialogue. Those databases contain dialect words.

They are linked to databases of written standard language, which in turn are connected to a database of social dialogue. The programmer can write social dialogue using dialect words, and that makes the robot seem more human.
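The dialect-to-standard-language link can be sketched as a normalization step before the dialogue lookup. The dialect words and replies below are invented examples:

```python
# Sketch: dialect words map to standard language, and the standard
# form links to a social-dialogue database. All entries are invented.
dialect_to_standard = {
    "howdy": "hello",
    "ain't": "is not",
}

dialogue_db = {
    "hello": "Hello! Nice to meet you.",
}

def normalize(utterance):
    """Replace dialect words with their standard-language forms."""
    words = utterance.lower().split()
    return " ".join(dialect_to_standard.get(w, w) for w in words)

def respond(utterance):
    return dialogue_db.get(normalize(utterance), "Sorry, I don't understand.")
```

The point is that only one dialogue database is needed; the dialect databases funnel everything into it.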

The programmer or teacher then accepts or rejects the social behavior, such as the words the AI uses. If there is no matching answer, the programmer writes the correct answer for the machine.
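This accept-or-reject loop can be sketched as follows; the questions and answers are invented examples:

```python
# Sketch of the teaching loop: the AI proposes an answer, the teacher
# accepts it or supplies a correction. All entries are invented examples.
dialogue_db = {"hello": "Hi there!"}

def teach(question, proposed, accepted, correction=None):
    """Store the proposed answer if accepted, otherwise store the correction."""
    if accepted:
        dialogue_db[question] = proposed
    elif correction is not None:
        dialogue_db[question] = correction
    return dialogue_db.get(question)
```

Over many rounds of this, the dialogue database comes to reflect the teacher's judgment of what counts as socially acceptable.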

The idea is that when a person is talking with the AI, it records what the other party says and then answers. This kind of AI might use other parameters as well, such as images of facial expressions in certain situations. The AI could be the ultimate tool in cases like job interviews, especially video interviews.

The system can track things like the length of the pauses between words and how the voice changes while the person is talking. It can also watch for things like touching the nose during the interview, which some consider a sign of lying. Such a system could also detect whether the interviewed person has the skills the interviewer is looking for.
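Measuring pauses between words is straightforward if the speech recognizer supplies per-word timestamps. A minimal sketch, assuming `(start, end)` times in seconds and an arbitrary example threshold of one second:

```python
# Sketch: measure pauses between words from (start, end) timestamps
# in seconds. The 1.0 s threshold is an arbitrary example value.
def pause_lengths(word_times):
    """word_times: list of (start, end) tuples, one per spoken word."""
    return [nxt_start - prev_end
            for (_, prev_end), (nxt_start, _) in zip(word_times, word_times[1:])]

def long_pauses(word_times, threshold=1.0):
    return [p for p in pause_lengths(word_times) if p > threshold]

times = [(0.0, 0.4), (0.5, 0.9), (2.5, 3.0)]  # a 1.6 s pause before word 3
print(long_pauses(times))
```

Voice-pitch changes and gestures would need audio and video analysis far beyond this, but pause timing alone already gives a usable signal.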

When the interviewer asks about things like computer skills and what to do in certain situations, the job seeker may cover missing skills with long answers. That means the interviewer might not notice that some of the skills the position requires are missing.

Take a task like connecting systems to the network, or something similar. The system can record the answer and then compare it with the actions the worker should take in that case. If there is no match, the person probably does not understand what to do.
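The comparison can be sketched as a keyword check against the expected actions. The expected keywords and the sample answer below are invented examples:

```python
import re

# Sketch: compare a recorded answer against the actions the worker
# should mention. The expected keywords are invented examples.
expected_actions = {"check", "cable", "router", "ip"}

def missing_actions(answer, expected=expected_actions):
    """Return the expected action keywords the answer never mentions."""
    mentioned = set(re.findall(r"[a-z]+", answer.lower()))
    return expected - mentioned

answer = "First I would check the cable and then restart the router."
print(sorted(missing_actions(answer)))
```

A real system would need synonym handling and semantic matching, but even this crude overlap exposes an answer that talks around a topic without covering it.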

In cases like job interviews, the AI can check whether the same names appear frequently in the referee lists of different job applications. If the same referees keep turning up in the CVs of people whose work is not up to standard, there is something wrong with those referees.
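Finding referees who recur across applications is a simple counting problem. A sketch using invented applicant and referee names:

```python
from collections import Counter

# Sketch: count how often each referee appears across applications.
# The names and applications are invented examples.
applications = [
    {"applicant": "A", "referees": ["J. Smith", "M. Jones"]},
    {"applicant": "B", "referees": ["J. Smith", "K. Lee"]},
    {"applicant": "C", "referees": ["J. Smith", "M. Jones"]},
]

def frequent_referees(apps, min_count=2):
    """Return referees named in at least min_count different applications."""
    counts = Counter(name for app in apps for name in app["referees"])
    return {name for name, n in counts.items() if n >= min_count}

print(sorted(frequent_referees(applications)))
```

Cross-referencing these frequent names with hiring outcomes would then show whether a recurring referee is a reliable source or a red flag.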


https://likeinterstellartravelingandfuturism.blogspot.com/
