"A new AI model from the University of South Australia offers a faster, more cost-effective method for schools to assess student creativity. The model significantly reduces the time and cost of scoring creativity tests, helping to identify talented students who might otherwise be overlooked." (ScitechDaily, "The AI Paradox: Building Creativity To Protect Against AI")
The paradox is interesting. In this model, where we build creativity to protect ourselves against AI, we forget one thing: AI is a tool, like a screwdriver. AI itself is neither good nor bad. Its users are people, and people decide what the AI does. The people who create things like computer viruses, or who use AI-based applications for industrial espionage and sabotage, are responsible for those actions.
Generative AI, or large language models, can create spying tools and other kinds of "black" software. The power of AI is that it creates many things faster than humans can. And when it handles and controls large systems, it is a tool that humans cannot beat. That is why humans need tools of their own to resist the misuse of AI. AI is dangerous if it is created as a weapon. Weaponized AI can create data viruses that destroy even the databases of missile-control computers.
Weaponized AI could also adjust the speed of the centrifuges that separate fissile uranium or plutonium from other nuclear material, and that could cause very dangerous situations.
In this case, I mean the AI itself. Computer viruses that infect weapon systems such as CIWS can turn even the most modern warships into practice targets.
However, the problem is that AI can create new computer viruses and malicious software very fast. That means data-security teams must use similar tools to create antivirus software and other defenses that can respond to AI-created malware. Fast-changing threats require fast reactions. Without them, malware makers can steal money from banks, or steal other information that leaves critical infrastructure vulnerable.
If AI takes over routine tasks, we must turn our focus to non-routine work.
If somebody takes the routine tasks from you, you must keep your focus on non-routine work. AI changes the entire environment, and that means we must adapt to that change. If AI is better than humans at some tasks, we must find the areas where AI is not better than we are.
AI will take many routine tasks under its control. That means humans must invest in the things that AI will not do better. So we must put our effort into creativity. Humans are still better at creative work than AI. AI takes our routine work, and that means we must take the next step into non-routine work. If we want to survive in the world of AI, we must turn to creativity, because non-routine work is something that AI cannot take.
https://scitechdaily.com/the-ai-paradox-building-creativity-to-protect-against-ai/