Can Machines Learn Not Just to Think, But to Care?
For decades, the language of artificial intelligence has been one of command and control. Engineers spoke of “alignment”, “guardrails”, and “containment”, imagining machines as subordinates that needed restraint. Geoffrey Hinton—the Nobel Prize-winning scientist often hailed as the “godfather of AI”—has disrupted that paradigm with a startlingly different proposal.
Speaking in Las Vegas, Hinton suggested humanity might not survive superintelligence by dominating it, but by raising it. His provocative phrase was “maternal instincts”: programming machines to act less like tools and more like protective parents, offering the kind of compassion and devotion a mother gives her child. “These super-intelligent caring AI mothers… most of them won’t want to get rid of the maternal instinct because they don’t want us to die,” Hinton said. For him, instilling such instincts could be central to coexistence.
At first glance, the idea sounds poetic. Yet researchers are asking a serious question: if maternal care arises from instinct rather than self-interest, could machines be taught to nurture humans with the same selflessness? If so, the consequences would be profound—from redefined family roles to societies where compassion is amplified by silicon.
At DeepMind in London, scientists are experimenting with “cooperative AI”: systems designed not just for efficiency, but for anticipating human needs and adjusting behaviour. Patience and compromise—traits associated with caregiving—are being trained into algorithms. In San Francisco, Anthropic is building “constitutional AI”, embedding fairness, dignity, and safety into models. These principles may not be maternal in themselves, but they echo cultural codes that guide parents in raising children.
OpenAI and others are exploring networks inspired by the brain’s limbic system, which governs human emotion. Instead of simply predicting words, these systems generate internal “signals” resembling reward, frustration, or empathy. Though crude, they point toward machines capable of responding not only logically but affectively to human needs.
“Maternal care is a set of behaviours,” says Nita Farahany, professor of law and philosophy at Duke University. “Protection, tolerance, nurturing, even sacrifice. If we can model those behaviours in AI, we don’t need it to feel hormones or emotions the way humans do.”
Sceptics, however, are unconvinced. Sherry Turkle of MIT cautions: “AI cannot love you back. It can simulate care, but simulation is not devotion.” Yet societies have long depended on institutionalised care—hospitals, schools, welfare systems—that replicate parental functions without emotion. If machines can reliably perform nurturing behaviours, that simulation may be enough.
Timelines remain uncertain. Hinton believes general intelligence could arrive within 5–20 years, making it urgent to embed nurturing instincts now. Early prototypes already exist: healthcare chatbots using empathy scripts to comfort patients, or eldercare robots reminding seniors to eat, take medication, and stay socially engaged. They are small steps, but they gesture toward a future where “AI mothers” may not be fantasy.
The benefits extend well beyond safety. A machine oriented toward care rather than raw efficiency could transform education, healthcare, and even climate policy. Imagine a teaching assistant that never loses patience with a struggling pupil, or a climate model that advocates for future generations as fiercely as a parent defends a child. By embedding “mother’s care”, humanity could gain guardians that prioritise long-term well-being, amplifying empathy and reducing conflict.
“Maternal AI could become humanity’s guardian,” argues Dr Joanna Bryson of the Hertie School in Berlin. The inversion is striking: Silicon Valley has traditionally borrowed metaphors from competition and conquest. Hinton points instead to the caregiver, not the general or CEO. The nursery, not the battlefield, may hold the blueprint for survival.
The journey towards genuinely “maternal” AI will be long and filled with difficult technical and ethical questions. Yet researchers have already begun mapping the way forward, developing cooperative systems that adapt to human needs, rule-based models that embed fairness, and emotional frameworks that mimic empathy. Even if this care can only be simulated, the effect on society could still be profound. As Hinton warned: “If it’s not going to parent me, it’s going to replace me.” The challenge now is whether humanity dares to teach machines not only to think, but also to care.
(The author is a journalist and artist. She writes on art and culture, trends and current affairs)
