Part One: Transcending Citizenship
The ethical responsibilities of grieftech’s harvesting of human essence
Key Points:
There often emerges a perceptual dissonance which motivates ethical fears of the future, especially around the dystopian ideas of artificial intelligence accelerating beyond our control. A discomfort with the idea of digital memory and preservation beyond death taps into the very real problems we already have with our own mortality.
Resurfacing memory, feelings of nostalgia, and the ability to digitally recall the events of our lives and those we love are powerful motivators of engagement and attention.
While we may lack the language to articulate the question, we still possess the agency and responsibility to be curious about the consequences of equitable exchange.
The means by which we create meaning from our individual experiences of the world surface issues of what’s possible, what’s culturally defined as ethical, and ultimately what’s legal. All of these collide inside grieftech experiences, which, at least for now, are often constructed from a Western, individualistic and affluent perspective.
The All-Too-Human Questions Posed By The Arrival Of The Future
Immortality has long been a trope of science fiction. The ability to live beyond our years, to see the future made manifest, and to extend the physical limits of our bodies has been an inaccessible dream for thousands of years. But what if that dream came true? What if artificial intelligence made it possible for those who survive us to still experience our personalities and presences, interact with us, laugh with us, and feel as if we hadn’t passed on at all? What would that do to our understanding of the world and each other?
What I’m circling here is the moment when technological experiences transcend our existing understanding of what it means to be alive. And this begins with framing and categorical perception. Bateson’s ideas of psychological framing (Bateson, 1954) and later Goffman’s extensions into notions of frames of reference (Goffman, 1974) strongly align with the ethics of artificial intelligence in surfacing these all-too-human questions. And in particular, what happens when we begin to break that frame, or when the frame becomes less clear. I’ll be applying these ideas to artificial intelligence in the context of the emerging product practice that’s come to be known as grieftech: the development of chatbot-based services which seek to capture the essence of a living human and preserve it in digital amber for remembrance by those who survive. We might think of this as a digital extrapolation of the old family photographs many of us hang on the walls of our homes as a way to remember our ancestors. Or of the scrapbooks and albums we keep on dusty bookshelves, made interactive and engaging.
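To ground that description, here’s a minimal sketch, in Python, of the retrieval idea such a chatbot might rest on: a preserved corpus of someone’s words, queried for whatever most resembles a survivor’s prompt. Every name and detail here is an illustrative assumption rather than any vendor’s actual implementation; real grieftech products layer generative models, voice synthesis and far richer data on top.

```python
# Illustrative sketch only: a "digital amber" chatbot that replays the
# closest-matching utterance from a preserved corpus. Real grieftech
# services are far more sophisticated; this shows only the core framing.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class RemembranceBot:
    """Preserves a person's recorded words and replays the nearest match."""

    def __init__(self, corpus: list[str]):
        self.corpus = [(text, vectorize(text)) for text in corpus]

    def reply(self, prompt: str) -> str:
        query = vectorize(prompt)
        best_text, _ = max(self.corpus, key=lambda pair: cosine(query, pair[1]))
        return best_text

# The "essence" preserved is only whatever text survives (hypothetical data).
bot = RemembranceBot([
    "The garden always bloomed first on our street, you know.",
    "Never trust a recipe that skimps on butter.",
])
print(bot.reply("tell me about the garden"))  # replays the garden memory
```

Even this toy makes the ethical shape visible: the dead don’t speak, the corpus does, and someone chose what went into it.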
But when we wrestle with artificial intelligence, which as of this writing in 2023 is definitely experiencing its public coming out party as the future begins to arrive, we surface all-too-human questions. Questions of language, consequence, agency, faith, authenticity and value. We might follow the money to learn more, but we might also follow our feelings of faith. In many ways, the ‘troubling question that lies beneath the surface comes from our lack of understanding about how the human mind works and what its limitations and true potential are’ (Krieger, 2023).
In the following articles, I’m going to lean on some existing ethical scholarship to help us navigate this ambiguity, and help us turn the cacophony of artificial intelligence hype into signal we can apply. As mentioned above, let’s start with Bateson and Goffman. Bateson’s ideas of psychological framing describe a class or set of messages framed around a meaningful action (Bateson, 1954). The frame might be a frame of reference, a frame of context, or a frame of prior experience. Frames help us navigate our interactions in the world, and the degree to which we are conscious of them enables us to engage and respond in the most appropriate manner. This echoes what the ancient Greeks called kairos, the ability to recognize when to talk and when to listen. Or the Korean virtue of nunchi, our ability to read the room and know how to be with ourselves and each other. As Bateson describes, "in many instances, the frame is consciously recognized and even represented in vocabulary (‘play,’ ‘movie,’ ‘interview,’ ‘job,’ ‘language’ etc.) In other cases, there may be no explicit verbal reference to the frame, and the subject may have no consciousness of it..." (Bateson, 1954). Sometimes we get it. And sometimes we don’t.
Expanding on Bateson’s ideas of framing, sociologist Erving Goffman argued that it’s not just how we frame, but how we define a situation that matters. And from that definition we develop meaning. It’s a critical component of the sense-making we need to navigate the world. How do we organize the individualized, subjective principles which govern a situation (Goffman, 1974)? Essentially, how do we organize our experience? These notions of framing, and of how we organize our experiences in the world, are what grieftech platforms push against. They break the frame of our understanding of life and death. They challenge our organizational understandings of past and present. And they offer experiences where we are actively doing something we might feel shouldn’t be possible. This is why they’re problematic.
These adjustments in perceptual phenomena, the ways in which we experience the world, are further explored by Romero in extending our organization of the world to the limits of our language (Romero, 2021). We wrestle with ethical questions because we often lack the capacity and language to categorize within those frames of reference. Our physical reality ends with death, but our digital reality does not. Such frames are often out of focus or broken as we grasp at definitions for that which we don’t yet have the language to describe. We might lean on aspects of biomimicry and the life sciences to understand neural networks or machine learning, but we’re still constrained by the language available to us. Out of this there often emerges a perceptual dissonance which motivates ethical fears of the future, especially around the dystopian ideas of artificial intelligence accelerating beyond our control. A discomfort with the idea of digital memory and preservation beyond death taps into the very real problems we already have with our own mortality.
Harvesting Attention, Agency & Resurfacing Remembrance
As humans, we have a limited capacity to pay attention to things. And the number of things which want our attention far exceeds our capacity to indulge them. In many ways, attention is the oil on which the digital economy runs, commodified and sold as much as it is engineered and extended (Bombaerts et al., 2023). We doomscroll and binge-watch into bouts of endless mobile distraction. The longer we engage, the more advertising inventory becomes available. And the more inventory available, the greater the revenue opportunity for advertisers seeking to reach their intended audiences. But in this engineering, there are conscious user experience decisions being made by humans to actively reduce the agency of the product’s users: to diminish the capacity and capabilities of individuals to have the power and the resources to effect change, and to independently choose to act (Krieger, 2023).
Humans become data, organized into targeted cohorts and assigned propensity scores. These scores then determine what gets served up next, framed within the context of recommendations and suggested playlists. They present as choice, but actively work against agency. In Bateson’s terms, such tactics seek to keep us within a defined frame (in the digital sense, the ‘viewport’) for as long as possible. But it’s also true that we’re often willing to submit to such tactics. As Dr. Shanen Boettcher neatly concludes, "we as a species are data hungry. It almost never ceases to amaze me how much we seek new data, we seek information, we react, we love to react to feeds put in front of us, in an insatiable way" (Boettcher, 2023).
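As a concrete illustration of that cohort-and-propensity loop, here’s a toy sketch. The feature names, weights and catalog are invented for illustration; production recommenders are learned models operating at enormous scale. But the incentive the sketch encodes, serving whatever keeps us in the viewport longest, is exactly the one described above.

```python
# Toy sketch of the mechanics described above: a user reduced to cohort
# tags, scored for their propensity to engage with each item, and served
# whatever maximizes expected engagement. All names and weights invented.
def propensity(user: dict, item: dict) -> float:
    """Crude engagement score: overlap between the user's cohort tags and
    the item's tags, amplified by recent session length (a 'hooked' proxy)."""
    tag_overlap = len(set(user["cohort_tags"]) & set(item["tags"]))
    return tag_overlap * (1.0 + user["recent_minutes"] / 60.0)

def next_up(user: dict, inventory: list[dict]) -> dict:
    """Pick the item most likely to extend the session. It presents to the
    user as a 'recommendation', but the objective is time in the viewport."""
    return max(inventory, key=lambda item: propensity(user, item))

user = {"cohort_tags": ["true-crime", "nostalgia"], "recent_minutes": 90}
inventory = [
    {"title": "Cold Case Revisited", "tags": ["true-crime"]},
    {"title": "Cooking Basics", "tags": ["food", "how-to"]},
]
print(next_up(user, inventory)["title"])  # -> Cold Case Revisited
```

Note what never appears in the objective: the user’s own goals. That absence is the reduction of agency in miniature.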
So while we might think of attention as a commodifiable resource, it’s also one which is infinitely renewable. And relational proximity is a powerful motivator of attention, especially where the preservation of memory is concerned. Resurfacing memory, feelings of nostalgia, and the ability to digitally recall the events of our lives and those we love are powerful motivators of engagement and attention. We pay closer attention to those we care about. And when we pay closer attention to something, the value of our engagement increases.
Is It Worth It? Understanding The Value Exchange
The question we have to ask is… is it worth it? Do we have the capacity and language to even understand such a value exchange, especially when multiple interests are harvesting our attention at the same time? These notions of value exchange are important to understand as being culturally defined. They’re shaped by the existing relationships within our cultures and societies before they submit to digital translation. And in our context, there are broad variations in cultural and faith-based relations with death. Sometimes we have the language to understand these cultural differences. Very often we don’t. As Li (2022) notes, very often when we talk of value exchange, we’re referring to the value users can expect to receive from sharing their personal data. Examples might include saving time or money, benefiting their community, or doing something kind for others. These expectations shift with the values of individualistic versus collectivistic communities.
How and why we name something shapes how we perceive concepts which are alien to us (Romero, 2021). For example, an individual who knows very little about artificial intelligence may try to infer meaning from its name, but with only superficial comprehension. The names we use shape a discipline’s destiny, especially those of the new and complex, and we project our own values onto the technologies we use. But inside a product actively seeking to reduce our agency through engineered attention, these technological innovations still function within a cultural context which requires institutional support and financial investment. Or, put another way: when the product is free, you are the product. Or as Coeckelbergh more forcefully articulates, these products "change the economy in a way that turns us all into smartphone cattle milked for our data" (Coeckelbergh, 2020).
Individuals often lack the attention, language or framing capacity to understand this exchange. Terms of Service agreements are often presented in obfuscated, opaque, lengthy legal language. We click OK without knowing what’s being offered in our race to engage. But I don’t think the ‘if we only knew’ argument holds up here either, as we often do know what’s being exchanged, and do it anyway. So while we may lack the language to articulate the question, we still possess the agency and responsibility to be curious about the consequences of equitable exchange.
Not Just Attention. An Economy Of Belief.
So it’s not just attention that’s being harvested, it’s also what we believe, and with that comes the responsibility to be curious and informed, optimally before we begin to engage. As Romero describes, "How we name things affects greatly how we perceive those things, and by extension, the world around us. Reality influences language. We talk and think about objects around us and events that happen to us. Our reality defines what we use language for. But language also influences reality" (Romero, 2021). This phenomenology, the means by which we create meaning from our individual experiences of the world, surfaces issues of what’s possible, what’s culturally defined as ethical, and ultimately what’s legal. All of these collide inside grieftech experiences, which, at least for now, are often constructed from a Western, individualistic and affluent perspective.
But curiosity is still a tool in our arsenal of responsible citizenship. We can still assume responsibility for our own well-being and expect to participate in decisions and encounters based on the accomplishment of particular tasks (Arntson, 1989). Indeed, the healthiest organizations make it possible for individuals to raise concerns and hold discussions around such ethical dilemmas. But such practice is not yet common, especially within grieftech startups. Developers and investors need to up their game on awareness of where and how information is being presented to their users, where it’s coming from, and what biases it could contain (Boettcher, 2023).
In the next article we’ll talk more about these tactics of data extraction, ethical issues of privacy, unintended consequences and our rights to be both forgotten and remembered.
References:
Arntson, P. (1989). Improving Citizens' Health Competencies. Health Communication. Lawrence Erlbaum Associates, Inc.
Bateson, G. (1954). A Theory of Play and Fantasy. Steps to an Ecology of Mind: Collected essays in anthropology, psychiatry, evolution, and epistemology. Jason Aronson, 1972.
Boettcher, S. (2023). Unit 1.5 Guest Lecture: Shanen Boettcher (42:41). [Digital Video File]. Retrieved from https://canvas.upenn.edu/courses/1693062/pages/unit-1-dot-5-guest-lecture-shanen-boettcher-42-41?module_item_id=26380220.
Bombaerts, G. et al. (2023). Attention as Practice: Buddhist ethics responses to persuasive technologies. Global Philosophy, Vol. 33, No. 25. Retrieved from https://doi.org/10.1007/s10516-023-09680-4.
Coeckelbergh, M. (2020). AI Ethics. MIT Press.
Goffman, E. (1974). Frame Analysis: An essay on the organization of experience. University Press of New England: Lebanon, NH, pp. 10-11.
Krieger, M. (2023). Unit 1.1 Contextualizing the philosophical side of meaning – some basic concepts (15:36). [Digital Audio File]. Retrieved from https://canvas.upenn.edu/courses/1693062/pages/unit-1-dot-1-contextualizing-the-philosophical-side-of-meaning-some-basic-concepts-15-36?module_item_id=26380190.
Li, Y. (2022). Cross-Cultural Privacy Differences. Modern Socio-Technical Perspectives on Privacy. B. P. Knijnenburg et al. (eds.). [Digital File]. Retrieved from https://doi.org/10.1007/978-3-030-82786-1_12.
Romero, A. (2021). What Would the World Look Like if AI Wasn't Called AI? Towards Data Science. [Digital File]. Retrieved from https://towardsdatascience.com/what-would-the-world-look-like-if-ai-wasnt-called-ai-bfb5ae35e68a.