Unit 3: Synchronous Session Questions for Joanna Stern
Questions
In your documentary How Tech Can Bring Our Loved Ones to Life After They Die, you discuss the two distinct parts of digital legacy: what the person wants to leave behind, and what survivors want from the experience of being left behind. Many current services walk a line between remembrance and emulation. Hereafter’s Dadbot is clearly positioned as a product of remembrance, while Storyfile’s videos aim for convincing emulation. You were one of the first reporters to experience the Apple Vision Pro. Do you see these technological threads of remembrance and emulation converging in such a device? And if so, what ethical questions would a hypothetical HereafterVR raise?
We’re already beginning to see generative AI appear in political campaign materials ahead of next year’s election. With little legislation or enforceable guidance in place, and building on the weaponized digital misinformation of the 2016 and 2020 elections, what role do you see platforms such as MidJourney or ChatGPT4 playing over the next 12-18 months?
Much of the normalization of digital presence beyond death comes from the movies, a natural extension of the digital de-aging we’ve grown accustomed to in recent years. But now that we can see actors from bygone eras performing in new films, should we? What responsibilities do performers’ estates have in empowering or hindering such projects, especially for those who lived before this was possible and couldn’t have had the foresight to agree?
While those who trade in images are struggling to put the genie back into the generative bottle, the music industry has come down much harder, much faster. With artists like Grimes open-sourcing their voices for sampling, and the possibility of a new AI-assisted Beatles record later this year, is it really just a matter of time before audio and video go the same way as still images?
Comment
One of the things we’ve recently been reading about is generative model collapse: the point at which so much of the web is the output of generative AI that human-generated content becomes the minority. As models begin to train on their own outputs over time, they magnify their mistakes, but also misrepresent and misunderstand less common data, irreversibly amplifying discriminatory defects inside these arguably already biased systems. Researchers suggest this is 2-3 years away, but given that prediction, I’d love to hear your thoughts on what the web looks like a year from now.