A recent study, reported by The Guardian, has raised concerns about the ability of generative AI to uncover the real identities of anonymous internet users. The research shows how large language models (LLMs) can now link anonymous online profiles to actual individuals by analyzing seemingly innocuous details shared across social media platforms.
The study, conducted by AI researchers Simon Lermen and Daniel Paleka, demonstrated that tools built on advanced AI models such as ChatGPT enable sophisticated privacy breaches at relatively low cost. In one experiment, anonymous accounts were fed into an AI system, which then scoured multiple platforms for corroborating information.
For instance, in a simulated scenario, an anonymous user mentioned experiences at school and walking their dog in “Dolores Park.” By cross-referencing these details against other public data, the AI accurately matched the anonymous profile to a known real-world identity, illustrating how effective these AI-driven identification techniques have become.
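The study does not publish an attack implementation, but the general shape of such a pipeline can be sketched. The Python sketch below is a hypothetical illustration, not the researchers' method: the model name, the prompts, and the toy `anonymous_posts` and `candidate_profiles` data are all assumptions, and it uses the standard OpenAI chat completions client to stand in for whatever LLM an attacker might choose.

```python
# Hypothetical sketch of an LLM-based de-anonymization pipeline.
# NOT the researchers' code: model name, prompts, and all data below
# are illustrative assumptions mirroring the article's example.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Posts scraped from one anonymous account (toy data echoing the
# article: school experiences, dog walks in Dolores Park).
anonymous_posts = [
    "Rough week at school, but the dog loved Dolores Park today.",
    "Third time this month someone asked to pet my corgi at the park.",
]

# Public profiles an attacker might have scraped elsewhere (toy data).
candidate_profiles = [
    {"name": "Profile A", "bio": "Teacher in San Francisco, corgi owner"},
    {"name": "Profile B", "bio": "Retired fisherman living in Oslo"},
]

def extract_attributes(posts: list[str]) -> dict:
    """Ask the LLM to infer identifying attributes from innocuous posts."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; an assumption here
        messages=[
            {"role": "system",
             "content": "Infer the author's likely location, occupation, "
                        "and habits from these posts. Reply with a JSON "
                        "object with keys location, occupation, habits."},
            {"role": "user", "content": "\n".join(posts)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def rank_candidates(attributes: dict, profiles: list[dict]) -> list[tuple]:
    """Ask the LLM to score how well each public profile matches."""
    scored = []
    for profile in profiles:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "On a scale of 0-100, how consistent is this "
                            "public bio with the inferred attributes? "
                            "Reply with a number only."},
                {"role": "user",
                 "content": f"Attributes: {json.dumps(attributes)}\n"
                            f"Bio: {profile['bio']}"},
            ],
        )
        # A robust version would validate the reply; a sketch trusts it.
        score = int(response.choices[0].message.content.strip())
        scored.append((profile["name"], score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    attrs = extract_attributes(anonymous_posts)
    for name, score in rank_candidates(attrs, candidate_profiles):
        print(f"{name}: match score {score}")
```

Nothing in this sketch requires privileged access or unusual skill, which is consistent with the study's central point: publicly available models have sharply lowered the cost of this kind of attack.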
The researchers emphasize that the barrier to executing such attacks has fallen sharply: attackers with basic tools and access to publicly available language models can now significantly compromise individual privacy. This development calls for a reevaluation of what can be considered private online, underlining the risks posed by AI-enabled de-anonymization.
While the study’s example was hypothetical, it underscores the broader implications of AI’s capacity to de-anonymize individuals, including the potential for government surveillance of dissidents and activists and a heightened risk of personalized scams by malicious actors.
Source: mint – technology