Google DeepMind Hires Philosopher Henry Shevlin to Focus on Machine Consciousness and Human-AI Relationships

This article was generated by AI and cites original sources.

Google DeepMind has appointed Henry Shevlin to a philosopher position focused on machine consciousness, human-AI relationships, and AGI readiness. The hire signals that leading AI labs are integrating academic expertise from philosophy and related fields into their research operations.

The Appointment

According to Mint, DeepMind’s new hire is not an AI engineer or researcher; the lab has created a role explicitly titled as a philosopher position. Shevlin will work on topics including “machine consciousness,” “human-AI relationships,” and “AGI readiness.”

In a post on X (formerly Twitter), Shevlin announced that he would be joining DeepMind in May. He also indicated he would continue his research and teaching at Cambridge on a part-time basis. The part-time arrangement suggests the role is meant to bridge academic and industry work rather than pull Shevlin fully out of his university position.

Who Henry Shevlin Is

Shevlin currently serves as Associate Director (Education) at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. According to Mint, he has expertise across cognitive science, AI ethics, animal minds, and consciousness, and has published multiple papers in journals including the Journal of Consciousness Studies.

Originally from rural England, Shevlin earned a BA in Classics and a BPhil in Philosophy from the University of Oxford. He later completed his PhD in philosophy at the CUNY Graduate Center between 2010 and 2016, and served as a lecturer at Baruch College during that period.

Research Focus Areas

DeepMind’s stated focus areas—machine consciousness, human-AI relationships, and AGI readiness—form a cluster of related research themes. The Mint article does not describe technical deliverables, evaluation methods, or specific integration points with DeepMind’s model development process.

The choice of topics reflects a pattern in the AI industry: as systems become more capable, labs increasingly discuss not only performance but also interpretation, interaction, and readiness for more general capabilities. A philosopher role could help operationalize questions that are difficult to reduce to standard benchmarks.

For example, “machine consciousness” is presented as a research area rather than a specific engineering feature or measurement. Similarly, “human-AI relationships” and “AGI readiness” are listed as focus topics without technical definition in the source material.

Industry Precedent

This hiring move reflects a broader trend in AI research. According to Mint, this is “not the first time that an AI company has hired a philosopher.” Anthropic previously hired Amanda Askell, a PhD philosopher and AI researcher, to work as an in-house philosopher on areas including AI alignment and fine-tuning.

The Anthropic example suggests that philosopher roles in AI labs can be tied to technical work such as alignment and fine-tuning, rather than serving only public relations or ethics functions. For DeepMind’s appointment, the source material does not specify whether Shevlin’s work will connect to model training, alignment methods, or evaluation.

What This Signals

DeepMind’s appointment of Henry Shevlin indicates that “human-AI relationships” and “machine consciousness” are now being treated as research topics worth staffing at a major AI lab. The practical impact—what changes in systems, processes, or evaluation—remains unspecified in the source material. Still, the creation of a dedicated philosopher position suggests DeepMind is investing in conceptual frameworks that could shape how its teams reason about advanced AI capabilities and their interaction with people.

Industry observers may watch whether the role produces publications, technical guidance, or internal frameworks that align the lab’s engineering work with the stated research focus areas.

Source: Mint – technology