Unit 2 Takeaways: Can vs. Should
When the founders of Facebook originally spoke of ‘making the world more open and connected’, did they conceive of a global tool with the capacity to influence voting behavior, harm health, and fuel widespread communal division? When Google speaks of ‘organizing the world's information and making it universally accessible and useful’, did its leaders understand that their services would be used to optimize remote drone strikes? Who decides what constitutes ‘useful’, and to what end? When Dr. Chris Gilliard talks of unintended consequences, he’s speaking of outlier use cases that creators either don’t see or don’t care about. That second piece is the most concerning: the platforms understand the risks but take them anyway, whether for financial reasons, competitive share, or simply because they believe the risk is one they can absorb by managing outcomes after the fact rather than mitigating problems in advance. The ethical problem is that they do it anyway.
My research is coalescing around grieftech: the use of artificial intelligence to absorb the emotional distress of death for those still living. It brings together many questions of unintended consequence. By uploading our digital essence to a platform, we make it possible for loved ones to still hear us. Preserved in digital amber and locked into a system of annual recurring revenue, we finally begin to transcend death. But should we? This week I’ve been thinking a lot about the faith-based implications of such technology, and how speaking to those who are no longer physically present is already common across many religions. I am very much wrestling with this from the perspective of my own faith. I plan to record an interview with a Catholic priest about these questions, not just to get his perspective, but to help me make sense of my own path forward as both a believer and a digital ethicist.