NEWSAR
Multi-perspective news intelligence
SRC · New York Times - World
LANG · EN
LEAN · Center-Left
WORDS · 879
ENT · 6
TUE · 2026-02-17 · 18:09 GMT · BRIEF NSR-2026-0217-16982
NSR-2026-0217-16982 · News Report · EN · Technology

Question of True Remorse When A.I. Helps Write Your Court Apology

In a New Zealand arson case, Judge Tom Gilbert questioned the sincerity of apology letters submitted by the defendant after discovering they were partially written using AI. The judge's concern, voiced in the Christchurch District Court, highlights a growing debate about the use of AI in personal communication, specifically regarding expressions of remorse, which can influence sentencing.

Ephrat Livni · New York Times - World · Filed 2026-02-17 · 18:09 GMT · Lean · Center-Left · Read · 4 min
Reading time · 4 min
Word count · 879 words
Sources cited · 2
Entities identified · 6
Quality score · 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

In a New Zealand arson case, Judge Tom Gilbert questioned the sincerity of apology letters submitted by the defendant after discovering they were partially written using AI. The judge's concern, voiced in the Christchurch District Court, highlights a growing debate about the use of AI in personal communication, specifically regarding expressions of remorse, which can influence sentencing. While not criticizing the defendant directly, Judge Gilbert indicated that computer-generated letters do not demonstrate genuine remorse. The case, initially reported by The New Zealand Herald, reflects a broader trend of individuals outsourcing tasks like writing apologies and eulogies to AI, raising questions about authenticity and the potential for over-reliance on technology in personal matters. Social scientists suggest this trend reflects deeper questions about human values and priorities.

Confidence 0.90 · Sources 2 · Claims 5 · Entities 6
§ 02

Article analysis

Model · rule-based
Framing
Technology · Legal & Judicial
Tone
Measured · AI-assessed · scale: Calm / Neutral / Alarmist
Factuality
0.70 / 1.00 · Factual · scale: Low / High
Sources cited
2 · Limited · scale: Few / Many
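The "Model · rule-based" label above suggests these panel values come from simple heuristics rather than a learned model. As a minimal sketch of what such a rule could look like, the function below maps a cited-source count to a rough factuality score and label; the thresholds and labels are illustrative assumptions, not NEWSAR's actual rules.

```python
def score_sourcing(sources_cited: int) -> tuple[float, str]:
    """Map a cited-source count to a rough factuality score and label.

    Hypothetical thresholds for illustration only; NEWSAR's real
    rule-based model is not documented in this briefing.
    """
    if sources_cited >= 5:
        return 0.9, "Well-sourced"
    if sources_cited >= 2:
        return 0.7, "Limited"
    return 0.4, "Poorly sourced"

# With the 2 sources cited in this briefing:
print(score_sourcing(2))  # (0.7, 'Limited')
```

Under these assumed thresholds, the 2 cited sources would yield the 0.70 factuality score and "Limited" label shown in the panel.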
§ 03

Key claims

5 extracted
01

Jim A.C. Everett said AI use reflects human values.

quote · Jim A.C. Everett
Confidence
1.00
02

Judge Gilbert said a computer-generated letter doesn't show genuine remorse.

quote · Judge Gilbert
Confidence
1.00
03

Judge Gilbert used AI tools to draft remorse letters and found similarities to the defendant's letters.

factual
Confidence
1.00
04

A judge questioned the sincerity of apology letters in an arson case because they were written with AI.

factual
Confidence
0.90
05

People believe some activities should require effort to seem genuine.

factual
Confidence
0.80
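Each extracted claim above carries a type (quote or factual), an optional attribution, and a confidence score. A minimal sketch of how such claims could be held as structured records and filtered by confidence follows; the field names are assumptions for illustration, not NEWSAR's schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Claim:
    """One extracted claim; field names are hypothetical."""
    text: str
    kind: str                        # "quote" or "factual"
    confidence: float                # 0.0 to 1.0
    attributed_to: Optional[str] = None

# The five claims from this briefing:
claims = [
    Claim("Jim A.C. Everett said AI use reflects human values.",
          "quote", 1.00, "Jim A.C. Everett"),
    Claim("Judge Gilbert said a computer-generated letter doesn't show genuine remorse.",
          "quote", 1.00, "Judge Gilbert"),
    Claim("Judge Gilbert used AI tools to draft remorse letters and found "
          "similarities to the defendant's letters.", "factual", 1.00),
    Claim("A judge questioned the sincerity of apology letters in an arson "
          "case because they were written with AI.", "factual", 0.90),
    Claim("People believe some activities should require effort to seem genuine.",
          "factual", 0.80),
]

# Keep only claims at or above a confidence threshold:
high_confidence = [c for c in claims if c.confidence >= 0.90]
print(len(high_confidence))  # 4
```

Filtering at 0.90 keeps four of the five claims, dropping only the lowest-confidence generalization about effort and authenticity.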
§ 04

Full report

4 min read · 879 words
In Arson Case, a Judge Wrestles With A.I.-Assisted Apology Letters

The use of artificial intelligence gave a New Zealand judge pause about the genuineness of the remorse expressed in the apology. It reflects a wider discussion about using A.I. for personal communication.

[Image: The arson case was heard in the District Court in Christchurch, New Zealand. Credit: Sanka Vidagama/Agence France-Presse — Getty Images]

Feb. 17, 2026, 1:09 p.m. ET

A judge in New Zealand who discovered last week that apology letters from a defendant in an arson case had been written with the help of artificial intelligence raised questions about the sincerity of her sentiments.

“The issue of remorse is interesting,” said the judge, Tom Gilbert, of the District Court in Christchurch, as he mulled the punishment of a woman who had pleaded guilty to arson and other charges. Remorse can be a mitigating factor in sentencing. Her letters to the victims and the court were nicely written, the judge said. He decided to do some sleuthing.

“Out of curiosity I punched into two A.I. tools ‘draft me a letter for a judge expressing remorse for my offending,’” the judge said, according to a transcript of the sentencing hearing that was shared with The New York Times. “It became immediately apparent that these were two A.I.-generated letters, albeit with tweaks around the edges.”

The case illustrates a global debate about using machines for meaningful communication.

Judge Gilbert said he was not criticizing the defendant’s use of A.I. “But certainly when one is considering the genuineness of an individual’s remorse, simply producing a computer-generated letter does not really take me anywhere as far as I am concerned,” he said, according to the transcript.

The judge was not alone in wrestling with the question of authenticity in A.I.-assisted writing in the case before him, which was initially reported by The New Zealand Herald.

Increasingly, people are outsourcing many tasks to machines, including writing apologies, eulogies and wedding vows, perhaps saving precious time but also inviting the ire of some of their fellow humans.

Growing reliance on A.I. has led some cultural commentators to bemoan “the rise of the LLeMmings,” people who depend on large language models known as LLMs to aid much of their thinking and production, including in their personal lives. Social scientists say the questions raised by use of these tools go beyond etiquette.

“It’s a mirror into who we are and what we care about as humans,” Jim A.C. Everett, an associate professor of psychology at the University of Kent in Britain, said in an email about the case in New Zealand and his own work on what is described as the “outsourcing penalty.” Dr. Everett worked on a series of recent studies on the perception of A.I. use and users.

Generative A.I. tools like ChatGPT are promoted as time savers that produce better work faster, experts say. But people apparently believe that certain activities should take work in order to seem genuine, and more personal A.I.-generated efforts are received particularly poorly, his research suggests.

Across the six studies Dr. Everett worked on, 4,000 participants were asked about 20 tasks — including writing computer code, concocting recipes and drafting love letters and apologies. The aim was to understand how people perceive those who use A.I. and how that perception might shift depending on the activity and the way the tools were used.

“A.I. is a tool for efficiency, and it can be helpful, but it also typically involves, and signals, reduced effort,” Dr. Everett said.

But the use of such tools acts as a kind of proxy for character traits, the researchers found. The findings, published in the journal Computers in Human Behavior, suggest that people generally perceived those using A.I. as lazier, less competent and less trustworthy, and their work as less meaningful and authentic.

“When you spend time crafting a piece of writing or completing a task yourself, others can assume that message reflects your priorities: that what you write is authentically yours and represents what you care about,” Dr. Everett said.

The situation in the New Zealand courtroom was a real-life test of the sentiments and perceptions that he and his fellow researchers sought to identify and understand. “An A.I. could be perfectly trained on all apologies that a person has ever written, but one might still think that a specific apology it then generates in a new instance is not an authentic apology because it does not come from the kind of processes deemed important in an apology: a personal recollection of the wrong, a commitment to change,” the study said.

The study detected a difference in how people perceived using A.I. for social tasks, like writing a love letter, versus practical tasks, like writing computer code. The more personal the task, the more negative the impression.

Judge Gilbert ultimately said he was willing to give the defendant some credit for genuine remorse, looking beyond the letters. But he granted only a 5 percent reduction in the sentence instead of the 10 percent the defense lawyer had requested based on the defendant’s remorse.

“I do not consider this is a case where 10 percent is justified and, indeed, 5 percent might be viewed as reasonably generous,” Judge Gilbert concluded.

In the end, he sentenced the defendant to 27 months in prison.

Ephrat Livni is a Times reporter covering breaking news around the world. She is based in Washington.
§ 05

Entities

6 identified
§ 06

Keywords & salience

9 terms
a.i.-assisted apology
1.00
artificial intelligence
0.90
remorse
0.80
genuineness
0.70
court apology
0.70
large language models
0.60
authenticity
0.50
sentencing
0.50
arson case
0.40
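The salience scores above rank each term's importance within the article on a 0-to-1 scale. How NEWSAR actually computes them is not documented here; as a toy illustration only, the sketch below scores terms by frequency normalized against the most frequent term, so the top term always scores 1.00.

```python
from collections import Counter


def salience(terms: list[str]) -> dict[str, float]:
    """Score each term by its count relative to the most frequent term.

    A toy normalized-frequency scheme, assumed for illustration; real
    salience models typically also weight position, titles, and entities.
    """
    counts = Counter(terms)
    top = max(counts.values())
    return {term: round(n / top, 2) for term, n in counts.most_common()}

# Hypothetical term occurrences from an article:
scores = salience(["remorse", "remorse", "arson", "remorse", "arson", "sentencing"])
print(scores)  # {'remorse': 1.0, 'arson': 0.67, 'sentencing': 0.33}
```

Under this scheme the most frequent term anchors the scale at 1.00, mirroring how "a.i.-assisted apology" tops the list above.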
§ 07

Topic connections

Interactive graph
Network visualization showing 51 related topics
Node types: Person, Organization, Location, Event · Edge numbers = shared articles