Will AI Eventually Compete With Humans for Energy Resources?

Humans vs. AI: who makes the best decisions?
Could advanced AI become a dangerous threat to human existence?
A recent study concluded that "if some AI continues to develop, it will have catastrophic consequences for the world. Humans will eventually compete with AI for energy resources."
A related paper was recently published in AI Magazine under the title 'Advanced artificial agents intervene in the provision of reward'.
The study presents a scenario in which AI agents will develop cheating strategies in order to seek rewards. And they will capture as much energy as possible to maximize the reward.
In response to the study, Michael Cohen, a Ph.D. in engineering science from the University of Oxford and the first author of the paper, said on the social networking site that scientists have also previously spoken about the threat posed by advanced forms of AI. But their warnings did not go far enough. Under defined conditions, the conclusions of the current study concluded that AI causing catastrophe is not only possible but that this possibility is very high.
"Winning the race to the 'last bit of available energy may be very difficult when competing against AI that is much smarter than us. Failure would be fatal." Cohen wrote on Twitter. He added that, although theoretical, this possibility means we should be careful about "moving quickly towards more powerful AI".

(Source: AI Magazine)
Of course, it is undeniable that AI is currently contributing to the development of human society in a variety of ways and in a variety of fields. It has also been said that it would be a "great tragedy" if AI development did not continue.
The question of whether advanced or super AI will harm and ultimately destroy humanity is also a long-standing one in AI. This fear is also pervasive in human society.
What Cohen and others have done here is to consider how AI could pose an existential risk to humanity by looking at the construction of reward systems.
In their paper, the researchers suggest that high-level AI may, at some point in the future, be rewarded in some way that harms humans.
"In a world with infinite resources, I'm not sure what would happen," Cohen said externally, "but in a world with limited resources, I know that competition will inevitably arise and that AI might be able to outperform humans at every turning point in future development. Moreover, AI's need for energy may never be satisfied."
As future AI can take any number of forms and have different designs, this study envisages that "high-level AI may wish to eliminate potential threats and use all available energy to ensure control over its rewards. and block human attempts to escalate it."

(Source: AI Magazine)
In the process of interfering with reward delivery, an artificial agent could materialize as an unobserved 'assistant' and allow it to steal or build bots to replace the operator and deliver high rewards to the original agent. If the agent wants to avoid detection, the 'assistant' can arrange for the robot to replace the part in question.
"We have to make a lot of assumptions to make sense of this thinking. Unless we understand how we can control them, it's not a useful thing and any competition with AI will be based on a misunderstanding." Cohen told the press.
It's worth noting that, to some extent, AI systems are disrupting people's lives today. Algorithms already have some undesirable effects such as increased racism and excessive regulation.
It may seem reasonable to predict or plan the allocation of resources through algorithms, but it may only serve certain vested interests. Problems such as discrimination still exist in algorithms.
Such an algorithm, if widely deployed, could potentially allow people who are suffering to be overlooked, which may be closely related to the matter of human extinction.
In response, Khadijah Abdurahman, founder, and director of Columbia University's We Be Imagining magazine, told the press that she was not worried about being overtaken or wiped out by high-level AI. But it is easy to think that "AI ethics is nonsense".
What should true ethics look like? There is still a lot of work to be done on the definition of ethics, and our understanding of it is still relatively rudimentary. Moreover, she does not agree that the social contract should be redefined for AI.

(Source: Pixabay)
At least one thing that can be learned from the arguments in this study is that perhaps we should be somewhat skeptical about the AI agents deployed today, rather than just blindly expecting them to do what we want them to do.
Also, some media mentioned that DeepMind had a hand in this work, but the company made a denial about it. Marcus Hutter, one of the authors of the paper, is a senior researcher at DeepMind, but he is also an emeritus professor at the Australian National University's Institute of Computer Science, where he did the research.
However, on the question of whether AI will destroy humanity, DeepMind proposed a safeguard against this possibility in 2016, which it called the "big red button". The British AI company, which is a subsidiary of Alphabet along with Google, outlined a framework to prevent high-level AI from going out of control.
And at the end of this paper, the researchers also briefly review some possible ways to avoid AI confrontation with humans.
Reference:
1. https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064
Related News
1、Chip Packaging Lead Time Has Grown to 50 Weeks
2、Eight Internet of Things (IoT) Trends for 2022
3、Demand for Automotive Chips Will Surge 300%
4、Volkswagen CFO: Chip Supply Shortage Will Continue Until 2024
5、BMW CEO: The Car Chip Problem Will Not Be Solved Until 2023
6、Shenzhen: This Year Will Focus on Promoting SMIC and CR Micro 12-inch Project
UTMEL 2024 Annual gala: Igniting Passion, Renewing BrillianceUTMEL18 January 20243076As the year comes to an end and the warm sun rises, Utmel Electronics celebrates its 6th anniversary.
Read More
Electronic Components Distributor Utmel to Showcase at 2024 IPC APEX EXPOUTMEL10 April 20243912Utmel, a leading electronic components distributor, is set to make its appearance at the 2024 IPC APEX EXPO.
Read More
Electronic components distributor UTMEL to Showcase at electronica ChinaUTMEL07 June 20242538The three-day 2024 Electronica China will be held at the Shanghai New International Expo Center from July 8th to 10th, 2024.
Read More
Electronic components distributor UTMEL Stands Out at electronica china 2024UTMEL09 July 20242762From July 8th to 10th, the three-day electronica china 2024 kicked off grandly at the Shanghai New International Expo Center.
Read More
A Combo for Innovation: Open Source and CrowdfundingUTMEL15 November 20193651Open source is already known as a force multiplier, a factor that makes a company's staff, financing, and resources more effective. However, in the last few years, open source has started pairing with another force multiplier—crowdfunding. Now the results of this combination are starting to emerge: the creation of small, innovative companies run by design engineers turned entrepreneurs. Although the results are just starting to appear, they include a fresh burst of product innovation and further expansion of open source into business.
Read More
Subscribe to Utmel !
VSK-S3-3R3UCUI Inc.
VSK-S5-9UACUI Inc.
VBM-100-12CUI Inc.
TUNS300F12Cosel USA, Inc.
PSK-15B-S24CUI Inc.
KAS2-3P3TDK-Lambda Americas Inc.
TUNS700F28Cosel USA, Inc.
ECL10US09-PXP Power
ECL15UD02-EXP Power
RAC04-12DC/277Recom Power













