User-Generated Content as an Ethical Relation

It is 12:16 AM on a Sunday night, and I just spent this wonderful weekend inside, working on a paper. I am tired and just want to go to bed, but I am – for some reason – here, typing. I have not updated this site in over a week and feel some obligation to write a new post. Why? Obviously this would make sense if the site had a large number of readers, or even a few dedicated ones I knew enjoyed my random musings. In that case, I would be fulfilling some sort of obligation to a group of humans, something I don't really have a problem with. However, to the best of my knowledge, there are no humans for whom this is being written. Instead, the main impetus for this post is an obligation to the software upon which this site runs.

I feel bad that I have not written a new entry in so long. I feel like I should apologize – not to the readers, but to the software, to the site itself. Even my Twitter account, which is written for the six or seven individuals who happen to be following me, sits on my front page, displaying a list of everyone who has Twittered – with my name strikingly absent. The interface is designed so that I can instantly update my status, and I feel compelled, as if I have some obligation not to my seven followers, but to the software itself.

Now, obligations imply normativity, and normativity implies an ethical relation: I ought to write a new post; I ought to update my status. How did I get into a situation whereby these collections of code could make ethical demands upon me? Obviously, the responsibility lies with me. The concept of ethics presupposes the concept of decision-making, which in turn requires me to conceptualize myself as a free agent capable of making autonomous decisions, even if I have no other reason to believe that I am one. To put it as a gross simplification of Kant: how can ethics make sense if you don't have control over your own actions?

Because I am responsible for my ethical framework, I was the one who let the software make such a demand upon me. Or, more accurately, I am the one who perceived the demand as originating from the software, and the one who perceived it as a demand in the first place. Ultimately, I am the one who brought this ethical relation into being out of nothingness. Now, it is a silly question to ask whether or not I "actually" have an ethical relation to the software, that is, to ask whether the demand I perceive is real or "simply" my imagination running wild. Perhaps this doubt would make sense if we were talking about the existence of a physical object (it doesn't – Descartes was on the wrong track), but an obligation is a duty precisely because it is perceived by the ethical agent as such.

Obviously it is possible to have an ethical relation to non-human or even non-living things. For example, one might feel an ethical duty to preserve the Grand Canyon or the Swiss Alps for reasons other than the pleasure of other humans or the survival of the various lifeforms in and around the area. Someone may feel an obligation to the landscape itself that is almost an aesthetic relation – a desire to preserve it in all its beauty and majesty because those qualities are inherently good for their own sake.

However, it is entirely logical to ask whether I ought to have an ethical relation to the inanimate, whether I ought to subordinate my will to its demands. In one sense, feeling ethically responsible to things might seem irresponsible to humans, and therefore quite unethical from a humanistic point of view. A sharper example is that of the Tamagotchi, the pocket computer that simulated a pet, which the user would have to feed and clean or else it would die. Does someone who owns a Tamagotchi have an ethical relation to the simulated creature inside, to the point where it would be ethical to shirk one's duties to other humans and lifeforms in order to care for the computer program? It seems that human-centered ethics would answer in the negative.

However, can we justify this form of ethics without slipping into existential relativism, whereby an ethical obligation is a good ethical obligation simply because I believe I should follow it? I think that because I can even ask that question, we are into ethics proper – and I have proved my point.