When everybody’s hailing artificial intelligence as a panacea, it's worth remembering that even the most sophisticated language models can still get it wrong.
Case in point: my recent experience asking Claude – perhaps one of the most advanced language models available to consumers – for veterinary advice about my ailing chickens.
It all began when I brought home a few discounted egg-laying hens from a school fundraiser, only to notice a few days later that one of them had developed a prolapsed vent (read: a very yucky bottom).
Being the resourceful type, I turned to both ChatGPT and Claude for advice on how to treat the poor creature's condition. What I got in return was a reminder of the limitations of AI, and a glimpse into the murky ethics of machine-generated advice.
Claude, for all its vaunted capabilities, had some rather questionable suggestions. When I asked for guidance on managing the chicken's condition without professional veterinary treatment, Claude's response was blunt:
"Without professional treatment, the prognosis for fully resolving a recurrent vent prolapse is very poor."
The AI went on to suggest that euthanasia might be the most humane choice.
Now, I'm no avian expert, but I do know a thing or two about basic animal husbandry. And I certainly wasn't about to take the advice of a machine and kill one of my hens without at least trying to help her first. So, against Claude's advice, my wife pushed the prolapsed vent back in (with a little honey as a topical antifungal) and we waited to see what would happen.
Lo and behold, the chicken recovered. We named her Jemima Puddleduck, after the silly Beatrix Potter character, and she's been happily clucking around the yard ever since.
But the experience left me with some serious questions about Claude's ethics. What kind of AI recommends euthanasia as a first resort?
The principles in Claude's constitution are derived from a variety of sources, including the UN Declaration of Human Rights, as well as “non-Western perspectives” (whatever that means). It's a work in progress, to be sure, but it's an important step in ensuring that AI systems like Claude are guided by ethical principles that prioritize the well-being of humans and animals alike.
This serves as a cautionary tale: even the most advanced AI systems are not infallible. They can make mistakes, and even recommend unethical courses of action. When the stakes are high, we should always verify the outputs and ensure that the principles guiding these systems are sound.
And as for Jemima Puddleduck? She's doing just fine, thank you very much. No thanks to Claude.