OpenAI, the creator of ChatGPT, has revealed that last year it deliberated over alerting Canadian law enforcement to the concerning behavior of an individual who months later carried out one of Canada’s most devastating school shootings — an intervention that could potentially have prevented the tragedy.
The revelation raises important questions about the role of AI in identifying and addressing potential threats. Although OpenAI’s deliberation did not lead to direct intervention, the episode underscores the ethical dilemmas surrounding the use of AI for preemptive risk assessment.
By raising the possibility that technology could have been used to prevent a catastrophic event, the incident prompts debate over what responsibility tech companies bear for flagging potential dangers identified by algorithmic assessment, and over the privacy and civil-liberties costs of doing so.
Source: Tech-Economic Times