Can AI avatars of dead loved ones ease the grief of death?

24 July 2023

Special relativity and cosmic consciousness may one day, possibly, maybe, precipitate interactions (of who knows what sort) with deceased family and friends. But that day, if it ever arrives, lies in the far, far distant future. The idea of making contact with the dead, though fantastical, is nonetheless one that has probably preoccupied people since the dawn of time.

And seeking comfort after the death of someone close may be why some people give the idea serious thought. Perhaps the dearly departed could somehow alleviate the grief of those left behind, if only there were a way to reach them. And some people may have found a way to make this happen, by way of LLM chatbots such as ChatGPT, says Aimee Pearcy, writing for The Guardian:

At the peak of the early buzz surrounding ChatGPT in March, [Sunshine] Henle, who works in the artificial intelligence industry, made a spur-of-the-moment decision to feed some of the last phone text messages and Facebook chat messages she had exchanged with her mother into the platform. She asked it to reply in Linda’s voice. It had been a few months since her mother had died, and while Henle had previously connected with a local therapist to help her cope with her grief, she found it disappointing. “It felt very cold and there was no empathy,” she says.
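The article doesn't give Henle's exact prompt, but the general pattern it describes is simple: supply the model with real messages from the deceased as examples of their voice, then ask it to reply in character. Here is a minimal sketch of that pattern done programmatically, assuming the OpenAI Python client (openai>=1.0); the model name, the sample messages, and the persona instructions are all illustrative, not Henle's actual setup, and she reportedly used the ChatGPT interface rather than the API.

```python
# Sketch of the "reply in a loved one's voice" pattern described above.
# Assumes the OpenAI Python client (openai>=1.0); sample messages and
# persona wording are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Past messages from the deceased, used as style and voice examples.
past_messages = [
    "Don't forget your umbrella, sweetheart. Love you!",
    "Call me when you land, okay? Proud of you always.",
]

system_prompt = (
    "You are role-playing as Linda, the user's late mother. "
    "Match the tone, warmth, and phrasing of these messages she sent:\n"
    + "\n".join(f"- {m}" for m in past_messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hi Mom, I had a hard day today."},
    ],
)
print(response.choices[0].message.content)
```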

Indeed, some people have found solace in AI interactions with deceased family members, but for others the experience has been anything but comforting. The concept also gives rise to numerous ethical and legal problems. Can we create AI avatars of the dead without the permission of the person in question? And what of the potential for misuse of the technology, and for misrepresenting the thoughts of the deceased?

Last year, the Israeli AI company AI21 Labs created a model of the late Ruth Bader Ginsburg, a former associate justice of the Supreme Court. The Washington Post reported that her former clerk, Paul Schiff Berman, said the chatbot misrepresented her views on a legal issue when he asked it a question, and that it did a poor job of replicating her distinctive speaking and writing style.
