Artificial intelligence has the potential to transform our relationships with animals. Yet as Isabella Logothetis, Spencer Jury and Jonathan Birch explain, more must be done to make sure technological change works for – rather than against – the interests of other species.
Would you trust a device that claimed to translate your dog or cat’s emotions into English? Would you be OK with completely automated, human-free farming? What if you had a driverless car that was indifferent to hitting birds and foxes?
Artificial intelligence is transforming the lives of animals at speed, but these huge impacts are going unnoticed and unregulated. Some of the changes could transform our relationships with our fellow creatures for the better, whereas others could make existing animal welfare problems much worse and even more deeply entrenched. How can we curb the risks and take the opportunities?
AI-powered pet training
The Jeremy Coller Centre for Animal Sentience at LSE, which launches on 30 September, is committed to making technological change work for – rather than against – the interests of other species. One of our initial goals is to develop clear standards for the ethically responsible use of AI when other animals are affected. What kinds of issues will we be investigating? Let’s zoom in on one especially controversial case: AI-powered pet training.
AI for pets is taking off, and it’s a wild, unregulated sector. There are apps analysing cats’ facial expressions to assess whether they might be in pain, smart collars processing acoustic data to “translate” how dogs might be feeling, and AI-powered training collars that work in tandem with automated feeding devices to reinforce traits such as impulse control, patience and command following. When the collar detects good behaviour, a treat is released. Many promise remarkable levels of accuracy and effectiveness, but no one is auditing these claims.
In essence, AI-powered dog training systems fill a gap created by human failure. If owners were fully present for their dog’s training, the collar would not be needed. But the reality is that many struggle to maintain consistent training regimes. An AI collar linked up to an automated feeder can keep the routine going while you’re out. These products could help reduce behaviours that cause friction with owners, other people and other dogs. They could also ease stress, boredom and separation anxiety, lessening the strain caused by irregular, inconsistent attention from owners.
But there are big risks, too. It’s already possible to buy “AI-powered shock collars” that dish out punishments for unwanted behaviour. Owners who start with positive reinforcement might easily be tempted by the faster results promised by electric shocks. It is probably illegal to use an AI shock collar in England and Wales under existing bans, but these bans don’t explicitly mention AI, leaving uncomfortable room for ambiguity. In Scotland, they remain legal: a proposed ban was, incredibly, blocked by MSPs in January 2025.
Outsourcing cruelty to AI
Some may argue the problem with AI shock collars is the shock, not the AI. And it’s true: the basic ethical problem with shock collars is not new. But what is new is the possibility of a pet owner outsourcing cruelty to AI, putting it out of sight, out of mind. The RSPCA estimates that 5% of UK dog owners have used shock collars. Of the other 95%, how many oppose hurting the dog on principle, and how many merely want to avoid the discomfort of seeing a dog in pain in front of them? For some unknown fraction, AI outsourcing will be all too tempting.
These extreme collars should clearly be banned, but other, more subtle training collars are set to create ethical grey areas – and this is where more research would be especially helpful. AI shock collars are beyond the ethical limit, but where exactly is the limit? Treat-dispensing systems are also a kind of outsourcing, after all. They too offload what should be the owner’s responsibility on to technology. They replace a personal bond with dependence on an impersonal device. Where should the ethical lines be drawn and how should this area be regulated?
Meanwhile, translation collars and face readers are tricky to evaluate. When the aim is to detect pain, we should worry about the consequences of false negatives: giving the owner false reassurance when in fact they need to get to the vet quickly. When the aim is to translate barks and purrs into English, there are concerns too about hallucination. We know chatbots have a tendency to tell the user what they want to hear, whether true or not, and the last thing we need is AI providing overly rosy reports to owners about their pet’s level of stress or separation anxiety.
Yet how are these products to be validated properly when there is no “ground truth” we can easily test them against? We can compare their verdicts to those of experts, but ultimately we need ways of moving beyond the judgements of human experts, which are still highly variable when the topic is animal emotion.
So, what’s needed? Better technology, better auditing, better validation, better regulation – but also, better ethical standards that provide some guidance to start-ups in this vibrant but anarchic emerging sector. The social sciences and humanities have a crucial role to play in developing those standards.
Jonathan Birch will be speaking at an LSE event on 30 September to mark the launch of the Jeremy Coller Centre for Animal Sentience: How AI is helping – and harming – animals.
Note: This article gives the views of the authors, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Monkey Business Images / Shutterstock.com