
Another day, another AI warning that authorities won’t care

DATE POSTED: June 5, 2024

On Tuesday, a group of current and former employees from leading artificial intelligence firms issued an open letter highlighting the absence of safety oversight within the industry and advocating for stronger protections for whistleblowers.

OpenAI and Google insiders highlight AI dangers, call for change

This letter, advocating for a “right to warn about artificial intelligence”, stands out as one of the most publicly expressed concerns regarding AI risks from insiders of this typically secretive sector. Among the signatories are eleven current and former employees of OpenAI, as well as two current or former employees of Google DeepMind, one of whom had also worked at Anthropic.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily,” the letter reads.

OpenAI, in response, defended its practices, highlighting that it has mechanisms such as a tipline for reporting issues within the company and asserting that new technologies are not released until appropriate safeguards are in place. Google, however, did not immediately comment.

Concerns regarding the potential dangers of artificial intelligence have been present for decades (Image credit)

“We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we’ll continue to engage with governments, civil society and other communities around the world,” an OpenAI spokesperson stated.

Concerns regarding the potential dangers of artificial intelligence have been present for decades. However, the rapid expansion of AI in recent years has heightened these fears and left regulators struggling to keep pace with technological advancements. While AI companies have publicly pledged to develop technology responsibly, researchers and employees have raised alarms about the lack of oversight, pointing out that AI tools can amplify existing social issues or introduce new ones.

The letter from current and former employees of AI companies, initially reported by the New York Times, advocates for stronger protections for workers at advanced AI firms who raise safety concerns. It urges adherence to four principles focused on transparency and accountability, including a commitment not to compel employees to sign non-disparagement agreements that prevent them from discussing AI-related risks, and establishing a system for employees to anonymously share their concerns with board members.

This recent open letter from AI industry employees is not an isolated incident (Image credit)

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,” the letter reads.

Companies like OpenAI have reportedly employed stringent measures to prevent employees from discussing their work openly. According to a Vox report from last week, OpenAI required departing employees to sign highly restrictive non-disparagement and non-disclosure agreements or risk losing their vested equity. In response to the backlash, OpenAI’s CEO, Sam Altman, issued an apology and promised to revise the company’s off-boarding procedures.

The open letter follows the recent resignations of two prominent OpenAI employees: co-founder Ilya Sutskever and leading safety researcher Jan Leike. After his departure, Leike criticized OpenAI, claiming that the company had shifted its focus from safety to pursuing “shiny products.”

An ongoing issue

This recent open letter from AI industry employees is not an isolated incident. In March 2023, the Future of Life Institute published a similar letter, signed by approximately 1,000 AI experts and tech executives, including notable figures like Elon Musk and Steve Wozniak. That earlier letter urged AI laboratories to pause the development of AI systems more powerful than GPT-4, citing “profound risks” to human society. It called for a public, verifiable halt to the training of such systems for at least six months, including all key actors.

The group highlighted that AI systems with human-competitive intelligence pose significant dangers to society and humanity, as supported by extensive research and acknowledged by leading AI labs. They warned that these advanced AI systems could bring about a monumental shift in the history of life on Earth, one that demands careful and well-resourced planning and management. However, they argued that such meticulous oversight is lacking, with AI labs instead racing to create increasingly powerful digital minds that their creators cannot understand, predict, or reliably control.

In May 2023, Geoffrey Hinton, often referred to as the godfather of artificial intelligence, left Google and voiced his regrets about his contributions to AI development. Hinton, who helped pioneer the neural network research underlying systems like ChatGPT, warned of the significant risks posed by AI chatbots.

The mounting concerns and calls for action from within the AI community underscore the urgent need for robust safety measures and transparent, responsible development practices in the rapidly evolving field of artificial intelligence.

Featured image credit: Google DeepMind/Unsplash