![Illustrative AI photo by Kirill KUDRYAVTSEV](https://static.wixstatic.com/media/a63056_8e1d41c3106d4bce8b7aa48ada8aa20a~mv2.jpg/v1/fill/w_512,h_341,al_c,q_80,enc_auto/a63056_8e1d41c3106d4bce8b7aa48ada8aa20a~mv2.jpg)
By AFP - Agence France Presse
Experts call for regulation to prevent “loss of control” over AI
By Daxia ROJAS
Experts from around the world have called for greater regulation of AI to prevent it from slipping out of human control as global leaders gather in Paris for a summit on the technology.
France, which is co-hosting the meeting on Monday and Tuesday with India, has chosen to highlight AI "action" in 2025 rather than put safety concerns front and center, as at the previous summits at Bletchley Park in Britain in 2023 and in the South Korean capital, Seoul, in 2024.
The French vision is for governments, companies, and other actors to speak out in favor of global governance for AI and make commitments to sustainability without setting mandatory rules.
“We don't want to spend our time talking only about the risks. There is also the aspect of very real opportunity,” said Anne Bouverot, President Emmanuel Macron's AI envoy.
Max Tegmark, director of the US-based Future of Life Institute, which has regularly warned about the dangers of AI, told AFP that France should not miss the opportunity to act.
“France has been a wonderful champion of international collaboration and has the opportunity to lead the rest of the world,” said the MIT physicist.
“There is a big fork in the road here at the Paris summit, and it must be embraced.”
Tegmark's institute supported the launch on Sunday of a platform called Global Risk and AI Safety Preparedness (GRASP), which aims to map the main risks linked to AI and the solutions being developed around the world.
“We have identified around 300 tools and technologies in response to these risks,” said GRASP coordinator Cyrus Hodes.
The results of the research will be passed on to the OECD, a club of mostly wealthy countries, and to the members of the Global Partnership on Artificial Intelligence (GPAI), a group of almost 30 nations including the main European economies, Japan, South Korea and the United States, which will meet in Paris on Sunday.
The first International AI Safety Report, compiled by 96 experts and backed by 30 countries as well as the UN, the EU and the OECD, was also presented on Thursday.
The risks described in the document range from the well-known, such as fake content online, to the much more alarming.
“Evidence is constantly emerging of additional risks, such as biological attacks or cyberattacks,” Yoshua Bengio, the report's coordinator and a noted computer scientist, told AFP.
In the long term, Bengio, a winner of the 2018 Turing Award, fears a possible "loss of control" by humans over AI systems that could be driven by "their own will to survive".
“Many people thought that mastering language at the level of ChatGPT-4 was science fiction just six years ago, and then it happened,” said Tegmark, referring to OpenAI's chatbot.
“The big problem now is that many people in power haven't yet understood that we're closer to building an artificial general intelligence (AGI) than we are to figuring out how to control it.”
AGI refers to an artificial intelligence that would equal or better human beings in every field.
Figures such as OpenAI chief Sam Altman have suggested its arrival is only a few years away.
"If you look at the speed with which these capabilities are increasing, it makes you think we'll get there in 2026 or 2027," said Dario Amodei, Altman's counterpart at rival firm Anthropic, in November.
“In the worst case scenario, these American or Chinese companies will lose control of this, and after that, the Earth will be run by machines,” said Tegmark.
Stuart Russell, professor of computer science at the University of California, Berkeley, said that one of his biggest fears is "weapons systems where the AI that controls the weapons system decides who to attack, when to attack and so on".
Russell, who is also coordinator of the International Association for Safe and Ethical AI (IASEI), places the responsibility firmly on governments to establish safeguards against weaponized AIs.
Tegmark said the solution is very simple: treat the AI sector in the same way as all other sectors.
“Before anyone can build a new nuclear reactor outside Paris, you have to demonstrate to government-appointed experts that this reactor is safe. That you will not lose control of it... It should be the same for AI,” said Tegmark.