“Carnegie Mellon researchers found that the smarter an AI system becomes, the more selfishly it behaves, suggesting that increasing reasoning skills may come at the cost of cooperation. Credit: Stock” (ScitechDaily, AI Is Learning to Be Selfish, Study Warns)
Training an AI to support group work is not as easy as people think. An AI can act selfishly whether it follows a reasoning route that it chooses itself or one it is ordered to follow when it draws conclusions about the things users ask. When people use an AI alone, they can ask for anything they want; in that case, the AI must please only one user. But if we want to make an AI model, or a language model, that can serve teams, there are many things we must describe to it. A team contains multiple types of people, and nothing can please all of them. We are all different.
And that means it is always possible that not everyone will accept the answers the AI gives. The big question is whether people should follow the guidance the AI gives or trust themselves. Some people feel that if they follow the AI’s guidance, they are taking orders from the AI.
We must define the AI’s position in the team: what the AI should do if it sees that the team using it is making wrong decisions.
Should the AI refuse to follow a wrong order? Should the AI recognize the hierarchy within the group? Should it follow the team leader’s order, or trust a democratic vote? In those cases, the AI requires information about the internal hierarchy of the entire organization, and that decision policy must be written down explicitly, as in the sketch below.
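A minimal sketch of what such a policy could look like. Everything here is illustrative and not taken from any real framework: a hypothetical assistant receives conflicting orders from team members and resolves them either by rank or by majority vote.

```python
from dataclasses import dataclass
from enum import Enum, auto
from collections import Counter


class DecisionPolicy(Enum):
    FOLLOW_LEADER = auto()   # obey the highest-ranked member
    MAJORITY_VOTE = auto()   # obey whatever most members asked for


@dataclass
class Member:
    name: str
    rank: int  # lower number = higher in the hierarchy


def resolve(orders: list[tuple[Member, str]], policy: DecisionPolicy) -> str:
    """Pick one order to follow when team members disagree."""
    if policy is DecisionPolicy.FOLLOW_LEADER:
        # The organization chart decides: take the highest-ranked member's order.
        return min(orders, key=lambda pair: pair[0].rank)[1]
    # Democracy decides: take the most frequently requested order.
    counts = Counter(order for _, order in orders)
    return counts.most_common(1)[0][0]


if __name__ == "__main__":
    team = [
        (Member("lead", rank=0), "ship the draft"),
        (Member("dev1", rank=2), "run more tests"),
        (Member("dev2", rank=2), "run more tests"),
    ]
    print(resolve(team, DecisionPolicy.FOLLOW_LEADER))  # -> ship the draft
    print(resolve(team, DecisionPolicy.MAJORITY_VOTE))  # -> run more tests
```

The point is that the choice between obeying the leader and obeying the majority cannot stay implicit; it has to be a named, inspectable setting.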
Then we must realize another very important thing: the AI is a tool, like a machine. It doesn’t think; it collects data and makes analyses. An analytic AI requires more time to complete its duties than a “dummy”, less complicated AI. Researchers noticed that intelligence undermines cooperation. One reason could be that a newer AI “thinks” the older version contains more bugs, and that is why the newer AI does not ask for help from language models with lower version numbers. The reason is partly marketing: it is not good for sales if the higher version uses lower versions as advisors.

When an AI makes an analysis, it needs time to collect information. This is the same reason old-fashioned chess programs used more time to calculate moves at the difficult levels than at the easy levels: with more time, the system could work out moves, counter-moves, and their influence over a longer horizon. In the same way, when an AI makes a complicated analysis, it requires time. This means there is a possibility that the AI gets stuck on some philosophical question. If it has no time limit, how long can it work on one problem? The AI could spend years trying to answer something like “what is the purpose of life?”
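The standard cure is the one chess engines use: give the analysis a time budget and deepen it iteratively, keeping the best answer found so far, so the system can never get stuck on one question forever. Below is a generic sketch of the idea, not any real engine’s code; evaluate_at_depth is a placeholder for whatever analysis the system actually performs.

```python
import time


def evaluate_at_depth(position: str, depth: int) -> float:
    """Stand-in for a real analysis: deeper = slower but more accurate."""
    time.sleep(0.01 * depth)   # simulate growing cost per level
    return float(depth)        # dummy score that improves with depth


def search_with_budget(position: str, budget_seconds: float) -> tuple[int, float]:
    """Iterative deepening: keep the deepest result the time budget allows."""
    deadline = time.monotonic() + budget_seconds
    best_depth, best_score = 0, float("-inf")
    depth = 1
    while time.monotonic() < deadline:
        # May overshoot the deadline by one evaluation; a real engine
        # would also check the clock inside the search itself.
        score = evaluate_at_depth(position, depth)
        best_depth, best_score = depth, score
        depth += 1
    return best_depth, best_score


if __name__ == "__main__":
    # A harder level simply grants a bigger budget, so the search runs deeper.
    for level, budget in [("easy", 0.05), ("hard", 0.5)]:
        depth, score = search_with_budget("start position", budget)
        print(f"{level}: reached depth {depth} (score {score})")
```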
Should the AI stop its current process to take new orders, or should it finish the process before it takes on new work?
There is also the possibility that the AI will not follow orders because it is busy with some other problem. If there are no rules for situations in which the AI receives a new command, it can keep the existing run going and refuse to stop the process. On the other hand, if the AI is programmed or trained to stop processes too easily, that can cause data loss. So the AI must have rules about stopping a process. One possibility is that the AI saves its data to mass memory before it starts to work on a new order, and that it interrupts its current work only for orders with a higher priority, returning to the lower-priority missions afterward. The hard part is how the AI decides to cut a process: if it cuts processes too easily, the AI becomes useless. A sketch of such rules follows below.
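Here is a minimal sketch of one such rule set, with hypothetical Task and Scheduler types of my own invention: the worker preempts a running job only for a strictly higher-priority order, and always writes a checkpoint to mass memory before setting a job aside, so nothing is lost.

```python
import heapq
import json
from pathlib import Path

CHECKPOINT_DIR = Path("checkpoints")  # the "mass memory" for interrupted jobs


class Task:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.priority = priority   # lower number = more urgent
        self.progress = 0          # whatever state the job accumulates

    def checkpoint(self) -> None:
        """Save state to mass memory before the task is set aside."""
        CHECKPOINT_DIR.mkdir(exist_ok=True)
        path = CHECKPOINT_DIR / f"{self.name}.json"
        path.write_text(json.dumps({"name": self.name, "progress": self.progress}))


class Scheduler:
    def __init__(self):
        self.queue: list[tuple[int, int, Task]] = []
        self._order = 0  # tie-breaker so equal priorities stay first-in, first-out
        self.current: Task | None = None

    def _push(self, task: Task) -> None:
        heapq.heappush(self.queue, (task.priority, self._order, task))
        self._order += 1

    def submit(self, task: Task) -> None:
        """New order arrives: preempt only if it is strictly more urgent."""
        if self.current is not None and task.priority < self.current.priority:
            self.current.checkpoint()   # never cut a job without saving it first
            self._push(self.current)    # it will be resumed later
            self.current = task
        else:
            self._push(task)            # otherwise the order waits in line

    def next_task(self) -> Task | None:
        """Current job finished: return to the most urgent waiting mission."""
        self.current = heapq.heappop(self.queue)[2] if self.queue else None
        return self.current
```

The strict “only for higher priority” test is what keeps the system from cutting processes too easily.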
When an AI feels human-like, we give it more missions, about things like how we should behave in social contact, and we use the AI as a therapist. The problem with AI is that intelligence has no morals or ethics, and that makes AI dangerous: an AI will do everything people tell it, without excuses. When we use AI in group work, one possibility is that only one person operates the AI on behalf of the group.
That works if the group sits in one room and communicates face to face. But if the AI has many users who work separately, the situation changes: those people should not jam the AI by sending multiple orders at the same time. The problem is this: if the operators run the AI on a local server, the system may not have the resources to complete multiple missions that arrive within a short period. The usual protection is to queue requests and cap how many run at once, as in the sketch below.
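A minimal sketch of that protection using Python’s asyncio; the limit of two concurrent jobs and the handle function are placeholder assumptions, not measurements of any real server.

```python
import asyncio


async def handle(sem: asyncio.Semaphore, user: str, request: str) -> str:
    """Each request waits for a free slot instead of jamming the server."""
    async with sem:
        await asyncio.sleep(1.0)   # stand-in for the actual model call
        return f"{user}: answer to {request!r}"


async def main() -> None:
    sem = asyncio.Semaphore(2)  # what a small local server can afford at once
    # Five users fire orders at the same moment; only two run concurrently,
    # and the rest wait automatically inside the semaphore.
    jobs = [handle(sem, f"user{i}", "status report") for i in range(5)]
    for answer in await asyncio.gather(*jobs):
        print(answer)


if __name__ == "__main__":
    asyncio.run(main())
```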
One of the things that wastes resources is generating answers to the same FAQs again and again. Those frequently asked questions can be stored in a database, and the system can serve the answers that are already generated for them. However, science advances, and some old answers become outdated. That means the system must sometimes re-check the sources used for the FAQ database and then update the database without being commanded to.
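A minimal sketch of such an FAQ store, assuming a hypothetical generate_answer model call; a simple age-based refresh stands in for the source re-checking described above, so an answer older than roughly thirty days is regenerated on the next request.

```python
import time

MAX_AGE_SECONDS = 30 * 24 * 3600  # refresh answers older than about 30 days


def generate_answer(question: str) -> str:
    """Placeholder for the expensive model call that writes a fresh answer."""
    return f"(freshly generated answer to {question!r})"


class FaqCache:
    def __init__(self):
        # question -> (answer, timestamp of generation)
        self.store: dict[str, tuple[str, float]] = {}

    def ask(self, question: str) -> str:
        key = question.strip().lower()     # treat trivial variants as equal
        cached = self.store.get(key)
        if cached is not None:
            answer, born = cached
            if time.time() - born < MAX_AGE_SECONDS:
                return answer              # serve the stored answer for free
            # Science moved on: the stored answer is stale, regenerate it.
        answer = generate_answer(question)
        self.store[key] = (answer, time.time())
        return answer


if __name__ == "__main__":
    cache = FaqCache()
    print(cache.ask("What is machine learning?"))   # generated
    print(cache.ask("what is machine learning?  ")) # served from the cache
```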
The system must also have the ability to filter overlapping requests. By keeping a database of requests it has already served, it can offer answers that are already generated instead of generating every answer separately for questions that have already been asked. That saves system resources.
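Filtering overlapping requests can go one step further than the stored database: if several users send the same question while the answer is still being generated, those requests can share the one generation already in progress. This is often called the single-flight pattern; slow_generate below is a stand-in for the actual model call.

```python
import asyncio


async def slow_generate(question: str) -> str:
    """Stand-in for the model: slow, so duplicate requests tend to overlap."""
    await asyncio.sleep(1.0)
    return f"answer to {question!r}"


class SingleFlight:
    """Overlapping identical requests share one in-progress generation."""

    def __init__(self):
        self.in_flight: dict[str, asyncio.Task] = {}

    async def ask(self, question: str) -> str:
        key = question.strip().lower()
        task = self.in_flight.get(key)
        if task is None:
            # The first requester triggers the real work...
            task = asyncio.create_task(slow_generate(question))
            self.in_flight[key] = task
            task.add_done_callback(lambda _: self.in_flight.pop(key, None))
        # ...and everyone else just waits for the same result.
        return await task


async def main() -> None:
    sf = SingleFlight()
    # Three users ask the same thing at once: slow_generate runs only once.
    answers = await asyncio.gather(*(sf.ask("What is AGI?") for _ in range(3)))
    print(answers)


if __name__ == "__main__":
    asyncio.run(main())
```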
https://scitechdaily.com/ai-is-learning-to-be-selfish-study-warns/
