Machine Learning

Machine Learning Algorithms in Social Media: the Emergence of a Split Subject?

by Keldt Schoeman [1]

[1]Department of Philosophy, University of Pretoria

AI Ethics Journal 2021, 2(1)-1, https://doi.org/10.47289/aiej20210910

Received 1 September 2020 || Accepted 1 April 2021 || Published 31 May 2021

Keywords: Machine learning, split subject, the subject, virtual reality, artificial intelligence, social media.

0.1 Abstract

Machine learning algorithms are the most common way in which people interact with artificial intelligence. Wide-scale usage of machine learning has grown dramatically during the last decade, particularly within social media platforms. Considering the almost three billion monthly active users at Facebook and that most of its services rely heavily on machine learning, the aim of this essay is to investigate some of the social and moral implications of ML algorithms employed in social media. Guided by the adage ‘we shape our tools and then they shape us’, the common thread among the several varied effects of social media is the outsourcing of important social actions from our physical reality to a virtual one. And, with current ML algorithms being successfully utilised to increase user time expenditure, they are likely to operate as amplifiers of social media’s effects, i.e., greater time expenditure leads to greater amounts of important social actions being outsourced to virtual reality. Now, considering that such extraordinary change as could be wrought by a fourth industrial revolution has historically been accompanied by change in the philosophical subject, it is not unreasonable to consider the possibility that change is occurring once more. Yet, I posit the view that we are currently in an intermediary phase between the physical and virtual realities, that we stand today as split subjects. For, while devices like our phones, consoles, watches and computers mean we are always on, many important social actions remain in the physical real. Even so, the effects of a partial transformation of the subject are substantial, as the kind of splitting many of us do today is reminiscent of compartmentalization, a psychologically significant coping mechanism known for its corrosion of moral agency. As such, with a potentially transient contemporary subject and a variety of associated effects, the split subject is rich ground for further research.

1.0 Introduction

Historically, philosophical work on Artificial Intelligence has centred on topics like defining intelligence or consciousness; nowadays, it also covers topics like mind-uploading and the extended mind in the philosophy of mind, or, in the ethics of AI, the moral implications of social robotics and the ethics of autonomous weapons systems. This essay is difficult to place in any of the above fields; instead, its focus is on an aspect of Artificial Intelligence which is more hidden yet simultaneously more commonly interacted with by the average person – AI machine learning algorithms employed in social media. As such, for the purpose of clarification, I would say that the paper sits at the intersection of information ethics and AI.

Social media has only existed for the last fifteen years and has only been popular for around the last ten. As such, it is an emerging phenomenon the effects of which are not yet well understood. Therefore, my aim is not to claim some essential truth about social media, Machine Learning (ML) algorithms or Artificial Intelligence (AI); rather, the aim is to give a detailed account of imminent emerging concerns in this domain. In particular, the adage “we shape our tools and then they shape us” directs the research in this essay.

The essay will investigate the social and moral implications of AI; specifically, the way social media is outsourcing important actions and changing the manner in which users are made subjects, paying particular attention to the role that machine learning algorithms play in this transformation. The work of Sherry Turkle and Dominic Pettman is particularly useful for illuminating the potentially novel ways in which this transformation is being manufactured by social media AI.

2.0 Social media: relied on by us and reliant on machine learning

To begin, let us look at the social and moral effects of social media through Facebook’s ranking machine learning algorithm and personalised advertising algorithm. Like most services rendered at Facebook, personalised advertisements and the ranking of content are crucially leveraged by machine learning algorithms. The ranking algorithm is responsible for ordering content based on each individual user’s interaction with their newsfeed, aiming to ‘streamline’ their experience by displaying the most ‘relevant’ content. The aim of the algorithm in this instance is to increase the amount of time each user spends on their newsfeed, thereby keeping them in the virtual world longer. On the other hand, the personalised advertising algorithm is responsible for profiling users and selecting which advertisements will have the most impact, thereby increasing the intensity of the paid advertising content which users engage with. One algorithm increases time expenditure and the other increases the intensity of content through profiling, yet the arguments laid out in this paper aim to demonstrate that, ultimately, these services and many others amplify the effects of social media.

The choice to analyse Facebook over smaller platforms like Instagram, WeChat, WhatsApp, Twitter and YouTube is justifiable on grounds of both size and information availability. Facebook is the largest social media network in history and discloses important data publicly in investor reports. As of the second quarter of 2020, Facebook has 1.79 billion accounts statistically grouped as daily active users (Facebook Inc, 2020: 1). Facebook usage increases to 2.70 billion when looking at accounts statistically grouped as monthly active users (ibid.).

Considering the above, more than a third of the planet’s population interacts with Facebook monthly and almost a quarter interacts daily. Therefore, with Facebook usage existing on such a large scale, the question must necessarily become: what kind of effect does Facebook have on its users? Of course, this essay is not capable of tackling even a fraction of that question, but approaching such an inquiry by analysing the aims built into the ML algorithms at Facebook may give useful information for directing research. Thus, the question becomes: where do machine learning algorithms fit into the bigger picture on Facebook, and what do they aim to achieve?

Scholars who work in the field of computer culture or social media may not deny the aims of the algorithms, but rather argue that ML algorithms play only an insignificant role in these platforms and their effects. To address this concern, the magnitude of ML algorithm application in social media platforms must be determined. For example, McQuillan argues that

“Machine learning is a form of knowledge production native to the era of big data. It is at the core of social media platforms and everyday interactions.” (2018: 1).

Yet, to be more specific, the length and breadth of Facebook’s employment of ML algorithms is substantial. Facebook leverages ML algorithms in several of its major services, such as news feed ranking, personalised advertisements, search priority, spam detection, anomaly detection, content classification, image analysis, face detection, language translation and speech detection (Hazelwood et al, 2018: 2-3). Additionally, Facebook uses a complex array of different ML algorithms in the above domains to process information and streamline the user experience. Some examples of the different types of machine learning algorithms are Deep Neural Networks (of which there are three types), Logistic Regression, Gradient Boosted Decision Trees and Support Vector Machines (ibid.: 3). Furthermore, Facebook requires these ML algorithms to provide its services.

The purpose of the above is not to throw complex terms at the reader, but rather to make the point that AI machine learning algorithms are integral to Facebook. In other words, not only do AI machine learning algorithms exist in several major services offered by Facebook, but the company also requires a wide variety of different and expensive types of ML algorithms to provide many of its basic services. Therefore, the magnitude of application of ML algorithms in the instance of Facebook is substantial enough that we can consider them crucial to the platform.

Though, it is important to highlight that social media has only become a regular practice for a substantial number of people in the last ten years and is still a new phenomenon; what we may call an emerging phenomenon. As such, the full effects of social media platforms on their users are not yet clear. However, this is not to discourage attempts to understand the effects of social media, but rather to remain faithful to the complexity and diversity of the phenomenon and its impact on individuals and larger society.

There are two major opposing arguments I identify relating to the effects of social media on its users. The first makes a moral claim, ‘social media is bad for you’. This is too vague to be taken seriously in and of itself; however, it can instead be understood as a general header under which certain moral claims about the effects of social media can be grouped. The second argument makes a factual claim, ‘social media connects people’.

In terms of the argument that ‘social media is bad for you’, one study found that “Facebook usage and direct social contact interacted significantly to predict changes in affective well-being” (Kross et al, 2013: 4). Simply put, this study found that, in the case of Facebook, those who experience greater direct social contact, as opposed to virtual social contact, report greater happiness and mood satisfaction (ibid.). Additionally, the article points towards the outsourcing of important social actions as the potential source of the mood decrease. In other words, spending time in the physical world conversing and interacting with other people is being outsourced to the virtual world through services like the Facebook newsfeed, and this is being linked to mood declines.

Subsequently, let us consider this example in relation to the ML algorithms. The Facebook newsfeed algorithm is a ranking ML algorithm: depending on what content a user interacted with in the past, measured with Facebook metrics like views, reacts and comments, the algorithm ranks the order in which the user will encounter future content. Furthermore, the aim of the algorithm is to maximise time spent on Facebook. Therefore, the algorithm searches for patterns in the data which indicate the potential for increased time usage by its users. Additionally, the programming of such algorithms has been successful, as Facebook continued to see growth across the board as recently as the second quarter of 2020 (Facebook Inc, 2020: 1).
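
To make this mechanism concrete, the following is a minimal sketch in Python. It is not Facebook’s actual system; the feature names, the toy data and the choice of logistic regression are my own illustrative assumptions. It only shows the general shape of the technique described above: a model trained on past interaction signals (views, reacts, comments) orders new content by its predicted likelihood of keeping the user engaged, and nothing in that objective asks whether the content is good for the user.

```python
# Illustrative sketch only: not Facebook's ranking system.
# All features, labels and numbers are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: one row per past post shown to a user.
# Columns: [similar_posts_viewed, reacts_given, comments_written]
X_train = np.array([
    [12, 3, 1],
    [2, 0, 0],
    [8, 1, 0],
    [15, 5, 2],
    [1, 0, 0],
    [9, 2, 1],
])
# Label: 1 if the user kept scrolling afterwards (more time spent), 0 if not.
y_train = np.array([1, 0, 1, 1, 0, 1])

# "Training" here simply means finding whatever pattern predicts continued usage.
engagement_model = LogisticRegression().fit(X_train, y_train)

# Candidate posts for today's feed, described with the same features.
candidates = {
    "post_a": [11, 2, 1],
    "post_b": [3, 0, 0],
    "post_c": [14, 4, 2],
}

# Rank purely by the predicted probability of keeping the user on the feed.
scores = {
    post_id: engagement_model.predict_proba(np.array([features]))[0, 1]
    for post_id, features in candidates.items()
}
ranked_feed = sorted(scores, key=scores.get, reverse=True)
print(ranked_feed)  # posts with the highest predicted engagement come first
```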

For the present, however, let us not assume the negative effects of this kind of outsourcing; instead, let us simply explicate the scenario further. McCarthy (2019) suggests approaching new technology, like social media and machine learning, as a case of “we shape our tools and then they shape us”. He derives this approach from the maxim of Marshall McLuhan in his book Understanding Media: The Extensions of Man (1994) [1]. McCarthy goes on to argue that, in many cases, our tools end up shaping us substantially more than we expect; to be specific, any purpose which they are supposed to serve (Facebook connecting us) is outweighed by the influence they have on our politics, culture and behaviour.

Now, consider that the founder of Facebook, Mark Zuckerberg, has consistently reiterated that his platform connects people. This kind of statement is consistent with the second type of argument, a factual claim. For example,

“The idea of bringing the world closer together has animated and driven Zuckerberg from the beginning. His speeches, his letters to investors, his essays on Facebook, his interviews with journalists, and the quiet tour he took of the United States in early 2017 all resonate with that theme. He believes that his company can and should unite people from across the globe. He also believes that the consequences of that process of connecting people are predictable and largely beneficial” (Vaidhyanathan, 2018: 1).

Yet, considering that the original purpose of our tools may be inconsequential when compared to the unintended effects they eventually have on us, the work of McCarthy (2019) on machine learning algorithms points to a scenario which undermines Zuckerberg’s key argument for what Facebook is achieving.

To pull this thread further, let us make an abstraction about Facebook consistent with McCarthy’s analogy and consider it as a tool. The aim of this tool, as posited by its founder, is to connect people; yet, as we are beginning to see, the effects may not necessarily be consistent with the aims. This is the key contradiction McCarthy and McLuhan’s work illuminates: that the real effects of tools may be very different from their original purpose. For example, refer back to the Kross et al (2013) findings on decreased mood satisfaction. For, as has been mentioned, the creator of Facebook did not intend for his platform to negatively affect users’ moods. Instead, his explicit goal was to connect people. Facebook, as a tool, may intend to connect us, but the reality may well be in opposition to such a vision; that it disconnects us from one another by outsourcing important social actions.

Thus, is Zuckerberg deceiving us about his aims, or, more plausibly, are the effects of his tool on the culture, behaviour and politics of humans far wider and deeper than the initial purpose it was shaped for? It is my position that, if we continue to guide inquiry with the maxim “we shape our tools and then they shape us”, we will find with increasing regularity important social actions being outsourced to the virtual world by Facebook in particular and social media in general.

Consider the pivotal role Facebook played in Brexit and the election of Donald Trump as president of the United States of America: “through 2016 Facebook had hosted and promoted propaganda that influenced the referendum to move the United Kingdom out of the European Union and the election of Donald Trump in the United States” (Vaidhyanathan, 2018: 2). Basically, what happened in the case of the United States was that, through Instagram and Facebook, Cambridge Analytica, a British political consulting firm, bought very particular advertising for the elections. These adverts were targeted to reach some 126 million Americans, mostly from swing states (ibid.). We do not know what would have happened without the purchase of these advertisements, but we do know that Donald Trump won the election and that Cambridge Analytica played a role in his victory.

It was largely machine learning algorithms which enabled Cambridge Analytica’s involvement to achieve such success in the United States election process. Those involved in the interference recognized, on some level, how effective the algorithms had become at leveraging personalised advertisements; taking advantage of a publicly available tool, the potential of which its makers did not fully understand.

Social media has become a source of what is considered legitimate news and is thus involved in the spread of ideology and particular politics. In particular, Facebook is an archetypal case study in how traditional forms of debate, news and dispersion of ideology are being outsourced to social media. Furthermore, considering the role which debate, news and dispersion of ideology play in the formation of individual values, this change has significant implications for how subjects are being formed today. The adage ‘do not believe everything you see on television’ is now most applicable to the internet and social media. Furthermore, the above demonstrates how deeply problematic the assumption that ML algorithms are value free is becoming.

For a moment, though, let us take a step back and ask: what role, specifically, do ML algorithms play in advertisements on Facebook? Machine learning algorithms are the foundation on which social media advertising rests (Hazelwood et al, 2018: 2-3). These algorithms process large amounts of data on personal Facebook accounts and build profiles of individual users. These profiles are then used to determine what kind of advertisements will be most effective on which types of users. Then, once this information has been determined, it can be sold to companies in the form of personalised advertisements (ibid.). For a very rudimentary example of the power of Facebook’s profiling, consider a company advertising tampons. Such a company will seek to target female profiles, as they are more likely to buy its products. Facebook can then target profiles appropriate to the product within incredibly specific physical locations. It was these types of advertisements which Cambridge Analytica purchased. Not only did the information derived from these algorithms allow them to target accounts which might be sympathetic towards the Trump campaign, but they could target users with specific advertisements more likely to be well received, and in the areas which could swing a national election.
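
To illustrate the profiling step in an equally rudimentary way, here is a small hypothetical sketch. The profile fields, the campaign brief, the scoring rule and the threshold are all invented for this example and do not describe Facebook’s actual advertising system; the sketch only shows the general mechanism the paragraph describes: attributes inferred from platform activity are matched against an advertiser’s brief, and the advertisement is delivered only to the accounts judged most receptive, within whatever location the advertiser specifies.

```python
# Hypothetical sketch only: profile fields and scoring are invented,
# not Facebook's real ad-targeting pipeline.

user_profiles = [
    {"id": 1, "inferred_gender": "female", "region": "Ohio",
     "interests": {"running", "politics"}},
    {"id": 2, "inferred_gender": "male", "region": "Ohio",
     "interests": {"fishing", "politics"}},
    {"id": 3, "inferred_gender": "female", "region": "Oregon",
     "interests": {"music"}},
]

# An advertiser's brief: who to reach and where.
campaign = {
    "target_gender": "female",
    "target_regions": {"Ohio"},            # e.g. a narrow, specific location
    "relevant_interests": {"running", "politics"},
}

def relevance(profile, campaign):
    """Crude relevance score: demographic match plus interest overlap."""
    score = 0
    if profile["inferred_gender"] == campaign["target_gender"]:
        score += 1
    if profile["region"] in campaign["target_regions"]:
        score += 1
    score += len(profile["interests"] & campaign["relevant_interests"])
    return score

# Deliver the ad only to the profiles judged most receptive.
audience = [p["id"] for p in user_profiles if relevance(p, campaign) >= 3]
print(audience)  # -> [1]
```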

Considering the above, we begin to see how ML algorithms have a complex interplay with the effects of social media. Yet, precious little has been explained about ML algorithms. For a start,

“… machine learning is nothing like the emergent general intelligence that characterizes cultural representations of AI and is instead a set of mathematical methods that can perform amazing yet utterly thoughtless feats of classification” (McQuillan, 2018: 1).

Machine learning is, in the simplest terms, a piece of code which analyses groups of data larger than the human mind can conceive and attempts to find patterns. Then, using the patterns derived from the data, the ML algorithm can produce predictions consistent with the aim. The real process is much more complex, however, involving repeated rounds of training on data to create an algorithm which achieves the desired aim.
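
For readers who want to see this ‘pattern-finding’ step stripped to its bones, the following is a minimal, self-contained training loop. The data, the linear model and the aim (predicting minutes spent from the number of posts of a favoured topic shown) are all invented assumptions; real systems use vastly larger data sets and far more elaborate models, but the repeated adjust-and-re-check loop is the same in spirit.

```python
# Minimal from-scratch sketch: analyse data, find a pattern, then predict in
# line with a fixed aim. All numbers are invented for illustration; this is
# not any real platform's model.

# Toy data: (posts of a favoured topic shown, minutes subsequently spent).
data = [(0, 2.0), (2, 5.5), (4, 9.0), (6, 13.0), (8, 16.5)]

w, b = 0.0, 0.0          # the "pattern" the algorithm will learn
learning_rate = 0.01

# "Trained multiple times": repeatedly adjust w and b to reduce error.
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, minutes in data:
        error = (w * x + b) - minutes
        grad_w += error * x
        grad_b += error
    w -= learning_rate * grad_w / len(data)
    b -= learning_rate * grad_b / len(data)

# The learned pattern now serves the aim: predict which feed composition
# keeps the user longest, so that composition can be served.
for shown in (1, 5, 10):
    print(shown, "topic posts ->", round(w * shown + b, 1), "predicted minutes")
```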

Consider an example we have already engaged with: if Facebook aims to get people to spend more time on the newsfeed part of the platform, then the ML algorithm will look for patterns which predict increased newsfeed usage. In other words, if I keep seeing content which I am interested in, regardless of whether the interaction is positive or negative, I am more likely to remain engaged with the content and remain on the platform. Yet, the problem with allowing ML algorithms to do this calculation is that

“… we cannot understand exactly what is being weighed in the balance, it is very hard to tell under what circumstances harm may be caused or in what ways the operations might be unethical” (ibid.: 2).

In other words, how is this information being weighed? This question raises two issues: firstly, the algorithm promotes whichever arbitrary set of values is most effective at achieving its aim and, secondly, this weighing is not value free (McCarthy, 2019). For, the algorithm is by its very nature a form of arbitrary discrimination, as the aim comes above all else.

The algorithm does not ‘know’ what it is trying to achieve; it has the mathematical task of finding the most effective way to achieve its aim. An algorithm does not ‘know’ in the way that human beings ‘know’. However, the problem of not knowing what is being weighed up in the calculation remains. For, in many cases, the researcher does not necessarily know what their ML algorithm will do until after it has done it; nor do they know exactly what data was selected in achieving the goal. Additionally, if the data is biased to begin with, then the ML algorithm will merely reproduce that bias on a grand scale. Yet, that bias is now justified by its mathematical nature. Both problems point to a moral need for accountability and transparency on the side of ML research. Such algorithms need to be ‘trained’ to take human morality into account when making decisions.

However, currently in machine learning research, the ends justify the means. The algorithm chooses the most effective path to its aim and researchers take little interest in what is being weighed in the balance. It has very few, if any, social or ethical considerations which direct its actions.

For example, if interaction with over-sexualized images of women predicts greater time spent on Facebook, the newsfeed algorithm will simply rank that content higher and the user will receive more of that content. If a user’s interaction with pictures of flowers predicts greater time spent on Facebook, the newsfeed algorithm will simply rank that content higher and they will receive more content of that nature too. The point of these examples is to demonstrate that even a value free algorithm will inevitably reproduce bias if biases exist in the data. For, the data being analysed is biased because it is us.
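
The point can be made with a toy simulation. Everything below is an assumption constructed for illustration: a world with only two content types, users who in aggregate engage slightly more with one of them, and a ‘value free’ rule that simply allocates tomorrow’s impressions in proportion to yesterday’s engagement. No preference for either type is written into the rule, yet the feed’s skew grows round after round, because the biased behaviour recorded in the data is fed straight back into the ranking.

```python
# Illustrative simulation, not any platform's code. The rates, counts and
# allocation rule are invented assumptions.

TRUE_ENGAGEMENT_RATE = {"type_a": 0.12, "type_b": 0.08}  # "the data is us"
engagement_history = {"type_a": 60.0, "type_b": 40.0}    # slightly skewed start
IMPRESSIONS_PER_ROUND = 1000

for round_number in range(1, 9):
    total = sum(engagement_history.values())
    # Value-free rule: impressions proportional to past engagement.
    impressions = {
        content_type: IMPRESSIONS_PER_ROUND * count / total
        for content_type, count in engagement_history.items()
    }
    # Users respond according to their (biased) preferences, producing
    # the next round's training data.
    for content_type, shown in impressions.items():
        engagement_history[content_type] += shown * TRUE_ENGAGEMENT_RATE[content_type]

    share_a = impressions["type_a"] / IMPRESSIONS_PER_ROUND
    print(f"round {round_number}: share of feed given to type_a = {share_a:.2f}")
```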

Considering all of the above, in this chapter I am emphasizing the amplificatory nature of ML algorithms in social media. For, what machine learning algorithms excel at is taking a massive data set and finding useful information in the patterns within it, and what Facebook excels at is using that information to make users spend more time on the platform. The combination of these two elements indicates that, whatever the effects of Facebook may be, algorithms increasing the intensity of content and the time users spend on the platform will only amplify them. Some examples of the effects of social media have been given in a more general sense in chapter one, but in the next chapter these effects will be further explicated in the context of Sherry Turkle’s work. In particular, I will focus on the split between a physical and a virtual reality originating from outsourcing important social actions to new forms of technology like social media.

3.0 Split reality: anti-social social media

To analyse the effects of social media on individual users, we begin by considering some of the effects which Turkle outlines in Alone Together (2011). For, our research continues to be directed by the adage, ‘we shape our tools and then they shape us’. Turkle identifies four central effects. In the first place, she emphasizes the link between social media and stress. Secondly, Turkle argues that social media promotes individuals treating themselves as brands. Thirdly, she suggests that the culture of online interaction in spaces like social media tempts its users “into narcissistic ways of relating to the world” (Turkle, 2011: 179). Finally, Turkle argues that social media does not have a positive effect on productivity.

Considering the above, the effects of social media appear varied; however, as varied as they may be, there remains a common thread among all, the outsourcing of important social actions. The consequence of which is a split reality. Yet, I am getting ahead of myself. Before explaining the tendency towards a split reality, it is first necessary to demonstrate how social media outsources important social actions.

For example, a teenager whom Turkle interviews, Brad, finds that social media stresses him out; he much prefers interacting with his friends in person or over a phone call. He does not like the stress related to self-representation on social media and finds in-person interaction to be easier, more satisfactory and less stressful. Brad is a teenager, and before social media, self-presentation in adolescence was certainly already an ordeal; however, “what is new is living it out in public, sharing every mistake and false step” (ibid.: 186). In this case, the storm of adolescence is being outsourced to exist online. More importantly, however, the emotions and reactions characteristic of this phase are being outsourced too.

Next, consider individuals treating themselves as brands and recognizing one another by profiles and not faces. This represents a tendency to simplify one’s identity on social media in order to make the self more recognizable and more easily likeable to others (ibid.: 191). This tendency emerged because the social information a user can upload or interact with on social media is far more simplistic than in the physical real. In such virtual platforms, gone are complex body language, the reading of facial expressions, tone of voice, level of eye contact and physical interaction, which all combine to form our experiences and evaluations of other people. Instead, in most cases of social media, such complex social information must be reduced and outsourced to a picture, a video or a few lines of text. As such, it is much easier to get away with creating an idealized, simplified version of oneself. For example, in terms of oversimplification,

“… you get reduced to a list of favourite things… in a conversation, it might be interesting that on a trip to Europe with my parents, I got interested in the political mural art in Belfast. But on a Facebook page, this is too much information. It would be the kiss of death. Too much, too soon, too weird. And yet… it is part of who I am, isn’t it?” (ibid.: 185).

Or, in terms of idealisation,

“… you’re not going to advertise the bad aspects of you. You’re not going to post pictures of how you look every day. You’re going to get your makeup on, put on your cute little outfit, you’re going to take your picture and post it up as your default and that’s what people are going to expect that you are every day, when you’re really making it up for all these people…You can write anything about yourself; these people don’t know. You can create who you want to be. You can say what kind of stereotype mould you want to fit in with… [while] maybe in real life it won’t work for you, you can’t pull it off” (ibid.: 191).

Thus, to call social media simplistic by comparison is not to go far enough. Social media is, at least in terms of interpersonal information, a mere shadow of real-world social interactions, one which encourages idealisation. In other words, social media teaches its users to oversimplify themselves into an easily understandable grey blob, exploring identity only as far as it is positively received by others.

Next, consider the narcissistic ways of interacting which emerge from outsourcing interpersonal connections to social media; for, in searching for positive reception from others, we now have far stronger means of putting up boundaries which enable us to interact with people only in ways that we like or find acceptable.

“In the psychoanalytic tradition, one speaks about narcissism not to indicate people who love themselves, but a personality so fragile that it needs constant support. It cannot tolerate the complex demands of other people but tries to relate to them by distorting who they are and splitting off what it needs, what it can use” (ibid.: 177).

This is not to say that searching for support is a bad thing, for it is not. Instead, it is exclusively pursuing support at the expense of any contrary emotion, opinion or thought which can become unhealthy. Consider direct message services on social media; alone, they do not

“… cause this new way of relating to our emotions and other people. But it does make it easy. Over time, a new style of being with each other becomes socially sanctioned. In every era, certain ways of relating come to feel natural. In our time, if we can continually be in touch, needing to be continually in touch does not seem a problem or pathology” (ibid.).

Yet, on closer inspection, a need to be constantly in touch and constantly supported appears narcissistic by the very definition above. For, social media is helping us to become people so fragile that we need unrelenting reassurance. Today, we attempt only to take what we need from those on our Facebook or smartphone contact list, and if they cannot provide it, we move on. No longer are we challenged to engage with the complexity of real people and real intimacy. Social media allows us to outsource this by choosing only the bits and pieces that we are comfortable with. Thus, while our interpersonal options seem greater and the potential for connection appears more numerous, the reality is that the potential for connection has become so abundant that the true need for others has disappeared, and with it, any pressure to deal with the full complexity of other people. Today, after a fight with a friend, partner or family member, we can simply seek connection elsewhere, instead of learning to compromise with and accept the difficulty of those around us. Social media gives us a variety of options too wide for our own good. Consequently, it can outsource large quantities of our social interaction, which can lead us into narcissistic behavioural tendencies.

Fourth and finally, consider how we seemingly outsource productivity to the domains of our phones and social media. I say seemingly because outsourcing productivity to a state of always being on, in Turkle’s view (ibid.: 17 & 151), may be more compatible with being always distracted than with being always productive. Yet, for the purpose of this argument, let us concede the above point and assume that increased productivity can be achieved by always carrying a phone and being connected to work through social media. With that concern out of the way, another arises: interpersonal connections being outsourced to social media by always being on. It is

“… the now familiar paradox that too much media leads to anti-social situations, such as the proverbial friend or family member who would rather check their phone than talk to people sitting at the same table. The social here migrates away from the directly interpersonal, to a simulated version with a much narrower bottleneck for giving and receiving social cues” (Pettman, 2016).

Therefore, even if we accepted the shaky assumption that social media can make us more productive, there is an array of other problems raised by this development.

For example, consider a parent obsessing over their phone during what has previously been family time; making the excuse that it is ‘for work’ is a common phenomenon today. In this way, not just work-related communication may be outsourced, but interpersonal connections too, as the line between home and work is blurred by social media. Turkle expands on this, arguing that the very nature of being always on nowadays means that while we are always here in the physical world, we are also always somewhere else too (Turkle, 2011: 152). Consequently, our attention is divided, and we spend less effort and attention on what is immediately around us. Turkle saliently dubs this a split reality.

And, this work on a split reality may begin to link all the varied social outsourcings discussed above. For, in these cases, the effects of social media are related to outsourcing important social actions from physical reality to virtual reality. Thus, many users of social media are stuck in a contemporary limbo as they try to exist in both their physical and virtual worlds. Consequently, in terms of the adage, “we shape our tools and then they shape us”, one of the ways social media has begun to shape us is by splitting our reality.

Turkle’s first encounter with this particular kind of split was in the 1990s. Working at MIT, she witnessed some very enthusiastic early adopters of the personal computer; they called themselves ‘cyborgs’ and carried around the very cumbersome kind of personal computers available in the early 90s. The so-called ‘cyborgs’ carried keyboards, actual computers and radio transmitters. Such a large amount of effort was expended by these individuals to ensure that they could always be connected to the internet. Yet, it achieved exactly what it was meant to for these people lugging around such heavy equipment: “the cyborgs could not only search the Web but had mobile e-mail, instant messaging, and remote access to desktop computing. The multiplicity of worlds before them set them apart: they could be with you, but they were always somewhere else as well” (Turkle, 2011: 152).

Today, by this definition, anybody with a smartphone and an internet connection is a modern reincarnation of the MIT cyborgs. Furthermore, the argument could certainly be made that we today have a greater connection to our virtual world than the cyborgs did back then; where they had rudimentary forms of social media (email and instant messaging), we have incredibly sophisticated social media networks like Facebook, Instagram and Snapchat. Today, by comparison with the MIT cyborgs, we are less human than they were; yet, social media cyborgs are so ubiquitous that they are completely ordinary to us and exist relatively unchecked by society at large. It seems peculiar that something that was relatively taboo only twenty-five years ago is so socially acceptable today.

Yet, emerging phenomena are often dismissed as a passing trend or ignored on a misreading of how significant their effects are. For example, before Turkle was well known and well respected, her work ran into substantial opposition from colleagues. In the late 1970s and early 1980s, people preferred to think of computers as just one-dimensional tools. Yet, Turkle sided with McLuhan’s analysis of tools, arguing that perhaps computers were just tools, but that this does not make the implications of their existence any less serious.

Turkle uses an interaction she had with a thirteen-year-old, Deborah, in the early 1980s to illustrate computers’ multi-dimensional effects. For, after doing a year of programming, the girl remarked how, in order to program, you necessarily put a small piece of your mind into the computer’s mind, and that this makes you see yourself differently (ibid.: X). This response clearly demonstrates the relational interaction between tool and toolmaker. Yet, this example is specifically about the toolmaker, and most people who use social media today are not programmers and are not shaped by that particular relationship. But they are shaped by other relationships – their family, friends, romantic partners, colleagues, or any other human within their social circle. And, almost all these relationships today are mediated by social media.

It must be noted however, that Turkle was not always so sceptical of new technology. She started out optimistic, but by 1995 she had grown substantially more troubled by the costs of a life lived in this increasingly virtual world. By 2011, as smartphones replaced computers and became our new gateway to virtual reality, Turkle’s reservations deepened.

“… these days, insecure in our relationships and anxious about intimacy, we look to technology for ways to be in relationships and protect ourselves from them at the same time. This can happen when one is finding one’s way through a blizzard of text messages; it can happen when interacting with a robot. I feel witness for a third time to a turning point in our expectations of technology and ourselves. We bend to the inanimate with new solicitude. We fear the risks and disappointments of relationships with our fellow humans. We expect more from technology and less from each other” (ibid.: XII).

The above saliently explains the reasons for our current outsourcing and the subsequent split reality; to interact with the virtual world is far easier than the physical one. The risks are lower, the required effort less, the complexity of social interaction substantially diminished and the gratification far more instantaneous.

And the very nature of the technology which we carry around with us every day makes expecting less from people and more from technology easy. Consider the always on nature of the smartphones we use to access our social media. Our attachment to these devices is powerful enough that

“Teenagers tell me they sleep with their cell phone, and even when it isn’t on their person, when it has been banished to the school locker, for instance, they know when their phone is vibrating. The technology has become like a phantom limb, it is so much a part of them. These young people are among the first to grow up with an expectation of continuous connection: always on, and always on them” (ibid.: 16 & 17).

Now, regardless of the accuracy with which these individuals can anticipate messages, they are incredibly aware of the presence of their phones. Additionally, consider for a moment just how invasive our smartphones are – these devices flash lights at us, make noises, and their screens light up with notifications; and these invasions are just the factory settings. Consequently, the basic disposition of these devices is to be always on; an understanding that certainly invites serious enquiry into the intentions of those selling smartphones and profiting from the social media we use on them. As such, what I am getting at is that the always on disposition of our smart devices plays a significant role in the creation and maintenance of the split reality, attempting to drag us further down the rabbit hole of social media and virtual reality. Many smartphone users today may never be completely in physical reality, instead choosing to outsource large swathes of important social actions to social media.

With the effects of social media being linked to stress, narcissistic ways of relating to the world, treating the self as a brand and potentially reduced productivity, the costs of living on social media seem to outweigh the benefits. Furthermore, these effects fall under the more general effect of users living in a split reality between the physical world and the virtual world as a result of outsourcing important social actions. Thus, even if one were the hardest of sceptics towards the arguments put forward in this chapter, doubting both the scope and intensity of the effects put forward, one would still have to concede some small relation between social media and a splitting of reality towards the virtual. As such, an investigation into the implications of such a splitting for the subject is warranted in the next chapter, beginning with Dominic Pettman’s Infinite Distraction (2016).

4.0 The split subject: a story ending in lotus eaters?

I intend to employ Pettman’s analysis in this chapter to begin discussing the implications of the split reality for the subject. In Infinite Distraction (2016), Pettman implies that we have become a new kind of subject, the virtual subject; clicking and tapping our lives away in a feverish haze; distracted from the physical real by the most trivial of virtual information on social media.

Pettman begins to paint the picture of the virtual subject by looking at the reasons why social media has become so prevalent, “so addictive: the new opium of the masses” (Pettman, 2016). To put it simply, social media is so appealing because it distracts us, and in doing so dulls us and draws our attention away from that which ails our minds. In other words, social media has become so prominent because it draws us into the virtual world and thereby distracts us from our trauma. A trauma which, for Pettman, is merely symptomatic of human existence; the knowledge of our own mortality and other miserable conditions of the physical real are difficult truths to swallow. Therefore, to not think about them, to be distracted from their very existence, can help us cope with our own. It is social media that gives us this option, the option to not think about it (ibid.). As the morally ambiguous fictional scientist Rick Sanchez once said after destroying his own reality and switching to another one, “the answer is don’t think about it, Morty” (Roiland & Harmon, 2013).

Yet, the term social media is often ambiguous too, and it is important to ask, what is social media for Pettman? He considers it “not a thing or a place or a new medium. It is a constellation, a concept. It is a virtual, evolving assemblage of elements, including – and especially – older forms of media, now diagrammed in novel articulations” (Pettman, 2016). Consequently, he warns against seeing social media as a stable concept and prefers understanding the concept as a transient, unstable and changing phenomenon. Which, in part, justifies my preference for the word ‘phenomenon’, as we are not certain of what exactly social media is yet, and to use particular words (such as thing or medium) is to assume to have information which we do not have.

However, let us not see distraction as an innately bad thing in and of itself. What Pettman takes issue with is not distraction; like many other components of social media, distraction existed long before this new phenomenon and is not necessarily a bad thing. For example, sitting in a waiting room reading a magazine certainly beats staring at a wall for most people, and this is a form of distraction. Instead, what Pettman takes issue with is the origin of the movement towards distraction. He

“… points offstage to a whole industry of meshing mechanisms carefully calibrated to narrow our focus, clip our capacity for sustained attention and shepherd as many of us as possible into the sphere of reflexive consumption” (ibid.).

In other words, he points at a movement geared towards enticing us into the sphere of virtual reality. Reflexive consumption, characterised by the tapping of fingers and the clicking of electronic mice, is merely the mechanism by which we are seduced to stay in the sphere of virtual reality. ‘Just one more video,’ you say, before reflexively tapping on the next ten.

Simply put, Pettman interprets this tendency of social media companies to attempt to take up greater and greater amounts of our time through distraction as a somewhat political process. This process may be transforming users into a virtual subject by leading them to ‘the promised land’ of reflexive consumption. In his view, information has become the new commodity which consumers are encouraged to buy, and in doing so, not only are we being distracted from important events in the physical real, we are also drawn deeper into our personal virtual reality. It is like the famous Greek fable about the Lotus-eaters. Along the journey home from Troy, Odysseus’ ship is waylaid by bad winds and he encounters a tribe who eat from a local lotus plant. Of this encounter he says that

“… on the tenth we set foot on the land of the Lotus-eaters, who eat a flowery food. There we went on shore and drew water, and straightway my comrades took their meal by the swift ships. But when we had tasted food and drink, I sent forth some of my comrades to go and learn who the men were, who here ate bread upon the earth; two men I chose, sending with them a third as a herald. So they went straightway and mingled with the Lotus-eaters, and the Lotus-eaters did not plan death for my comrades, but gave them of the lotus to taste. And whosoever of them ate of the honey-sweet fruit of the lotus, had no longer any wish to bring back word or to return, but there they were fain to abide among the Lotus-eaters, feeding on the lotus, and forgetful of their homeward way. These men, therefore, I brought back perforce to the ships, weeping, and dragged them beneath the benches and bound them fast in the hollow ships: and I bade the rest of my trusty comrades to embark with speed on the swift ships, lest perchance anyone should eat of the lotus and forget his homeward way” (Homer, 1994: 309).

Today, I find this story particularly salient when read in consideration of social media and the virtual real. For, in our contemporary landscape, it is us who are the Lotus-eaters; it is we who have tasted of the delicious fruit of the virtual world and forgotten our homeward way. Thus, social media is merely the portal to the virtual world and distraction simply the mechanism by which we are seduced into remaining inside. Presumably the motivation for distracting and dulling us is justified by the furthering of company interests. For, having us docile and distracted certainly makes for both easier subjects to rule over and better consumers to sell to.

Before considering some past notions of the subject in the next paragraph, I must make an aside. Notions such as that of the subject or the notion of a self have a deep philosophical history of which I am aware; yet, I am unable to engage with these concepts as they fall outside of the scope of where my essay is directed. My aim in this essay is simply to explicate my suspicion that there is a new way of looking at the notion of a subject through Pettman and Turkle.

Around five hundred years ago, Machiavelli outlined a traditionally stereotypical understanding of the subject in The Prince (1532). In this system, the state was ruled by a monarch and individuals were mere subjects of the monarch, objects of their moods, policies and whims (129-131 & 189). Next, Kant conceptualized the transcendental subject, one which could gain freedom through a process of intellectual enlightenment and create a world according to the limits of human reason (Lang, 1997: 414). Thereafter, Marx deconstructed classist and capitalist systems, identifying a new subject under capitalism, the proletarian, who is treated as just a cog in the capitalist machine (Marx & Engels, 1848: 39). After World War Two, however, Sartre emerged as a central voice on the idea of the subject, acknowledging the fundamental influence of societal forces while still positing the idea of the free subject, existentially burdened with freedom (Sartre, 1943: 481 & 483). Thereafter comes Foucault, with a subject under the thumb of societal forces, deterministically unable to move enough under the weight of these forces to be considered free (Seitz, 2004: 96).

In the above, each change in the situation of the subject is marked by a substantial change in the world at that time. Machiavelli saw Italy fragmented by a weak monarchy, Kant and Marx saw the changes wrought by the industrial revolution, while Sartre and Foucault wrote in a post-World War Two landscape recently stripped of transcendental meaning by the deaths of millions. These authors did not change the notion of the subject, they merely identified its transition into a new situation. Now, it remains to be seen whether machine learning algorithms and social media stand to change the concept of a subject as significantly as the above examples, but I do suspect that through Pettman and Turkle we can better attempt to situate the subject within our transient contemporary landscape.

For example, Pettman already has some ideas about how this is currently happening. For one, the nature of distraction itself has changed. When Pettman talks about distraction, what he is referring to is a phenomenon far more sophisticated than simply making us look at Miss America pageants instead of the Vietnam War. This new form of distraction

“… is harder to mobilize against, for the simple reason that no one can accuse ‘the media’ of trying to cover up ‘the truth’. Rather, incessant and deliberately framed representations of events are themselves used to obscure and muffle those very same events” (Pettman, 2016).

In other words, this form of distraction drowns us in information, making every story seem just as true or false as the next. Therefore, a direct consequence for the subject has been the erosion of truth. As such, in trying to distract us, the way in which we perceive information has been fundamentally altered. People are finding it more difficult to discern reliable news sources from unreliable ones, and this is epitomized by the term ‘fake news’. This term was popularized by Trump in relation to allegedly partisan news coverage of him. Yet, the data tells another story. For, while his coverage during the general election was more negative than Clinton’s (Patterson, 2016: 3), the reality was that

“… over the full course of the election, it was Clinton, not Trump, who was more often the target of negative coverage … Overall, the coverage of her candidacy was 62 percent negative to 38 percent positive, while his coverage was 56 percent negative to 44 percent positive” (ibid.).

In this way Trump used the erosion of truth to his advantage, painting unfavourable news coverage towards him as ‘fake news’. Thus, considering that the aim of such an information overload was distraction but the effects more numerous, we find an example of how social media is consistent with the adage of ‘we shape our tools, and thereafter our tools shape us’.

Throughout the book however, Pettman expands upon a variety of other ways in which the effects of distraction are escaping the designs of the toolmakers and keeping us constantly distracted. However, it is my suspicion that we are not quite at the level of distraction which Pettman assumes to make many of his arguments possible. In his own words,

“… this is where we find ourselves, a decade and a half into the twenty-first century: suspended between bot and not, between anonymous and tagged, generic and specific. We hover between the older conceptions of what it is to be a person – a citizen, with rights, responsibilities, character, agency, identity, and so on – and new, emerging types of being – a consumer, with cravings, likes, profiles, and opinions, leaving a trail of cookie crumbs in our wake” (Pettman, 2016).

Consequently, I think that the distracted subject found in Pettman is only one half of the subject we find in contemporary society, and he himself identifies this limitation. Yet, his analysis and implicit understanding of a virtual subject may become far more relevant as children develop alongside social media, unable to distinguish the difference between the physical real and the virtual real.

For, in the case of my generation and our elders, most of us remember a time before technology became as all-pervasive as it is today, and as such, we are less at risk of being completely assimilated into the virtual world. However, newer generations, those who do not remember a time without technology invading every space, may be far easier to completely assimilate into the virtual world. Consider Turkle’s warnings about how different age groups are more and less vulnerable to interaction with robots and related AI.

“Growing up with robots in roles traditionally reserved for people is different from coming to robots as an already socialized adult. Children need to be with other people to develop mutuality and empathy; interacting with a robot cannot teach this. Adults who have already learned to deal fluidly and easily with others and who choose to ‘relax’ with less demanding forms of social ‘life’ are at less risk” (Turkle, 2011: 56).

In the same way that children growing up with robots are more vulnerable to them, children growing up with social media are more vulnerable to this phenomenon. For, they are far less capable of perceiving the range of possibilities outside of these types of interactions, as they are immanent to this domain. As such, they may struggle to imagine even a moment of life outside of the always on culture of social media and be left without a model of healthy, interpersonal interaction.

Yet, as mentioned before, I do not think we are entirely stuck in virtual reality just yet. For, the generation which is growing up immanently in the domain of social media remains a minority. Instead, we are somewhere in between virtual reality and physical reality, not existing entirely in either. Thus, to formulate a more complete conception of the contemporary subject we must necessarily refer back to Turkle and chapter two, to her idea of a split reality. For, while her notion of a split reality may refer, more generally, to technology and smartphones, the content of what is distracting us and splitting our reality is found again and again to be social media. Therefore, I will now attempt to formulate Turkle’s idea of a split reality into a conception of a new kind of subject; a subject which fits more accurately into our contemporary landscape, the split subject.

5.0 A season for compartments

Turkle’s idea of a split reality is derived from an analysis of a very particular group, the cyborgs at MIT. For, in this unique 1990s demographic, every one of us who today uses smartphones, social media or the internet can find at least a piece of ourselves. Both groups “split the real and the virtual to give the virtual the breathing space it needs” (ibid.: 196). In other words, the cyborgs reveal something about us today; specifically, that the subject is now characterized by a separation between physical and virtual worlds for the sake of the latter.

For example, I remember once confusing someone’s official surname with their Instagram handle. It was an arbitrary mistake and I meant nothing by it; yet, the look of shock which I received is something I remember to this day: I had crossed a line. Like many others, this individual had drawn a line in order to interact with her reality, a reality which was now split and compartmentalized. And she is not alone; many today have one self for virtual reality and another self for physical reality. Consequently, I tentatively wonder whether many people today feel as if they are leading double lives: one foot in the virtual real, the other in the physical real, and no synthesis between the two anywhere in sight. In Turkle’s own words,

“I once described the computer as a second self, a mirror of mind. Now the metaphor no longer goes far enough. Our new devices provide space for the emergence of a new state of the self, itself, split between the screen and the physical real, wired into existence through technology” (ibid.: 16).

Though, naturally, you may wonder why such a split is bad. For, not all things go well together; while peanut butter and jelly may be a wonderful combination, peanut butter and marmite inspires only a nauseous shudder. Thus, we may wish to keep certain things separate – like our virtual and physical realities. However, the splitting of realities within the mind is no sandwich; instead, it is characteristic of a psychological concept called compartmentalization, which refers to a tendency for an individual to process disruptive information by splitting it into different compartments within the mind. For example, childhood trauma is commonly compartmentalized, and doing so helps the individual cope either with information their brain is not yet ready to handle (often sexual) or with information that the brain deems too complex (Rozuel, 2011: 690). As such, compartmentalization is not necessarily an innately bad way of processing information; instead, it is situationally useful, especially in cases of trauma. Yet, the very idea that social media users are tending towards compartmentalization implies a certain level of trauma which these individuals are undergoing.

Furthermore, Rozuel argues that beyond the psychological effects, compartmentalization has serious moral implications. For instance, he argues that in corporate settings compartmentalization of the self through partaking in a “role-based life game” (685) erodes moral responsibility. Rozuel holds integration of all parts of the self as ideal because,

“… if roles jeopardize moral agency, moral autonomy and moral responsibility, we are left in need of a stable, core self which could provide a safe anchor for unfolding our individuality and enhancing our moral agency” (ibid.: 688).

In his view, with a stable core notion of the self it is substantially more difficult to behave in ways inconsistent with one’s fundamental values, since doing so does not contradict a mere compartment of the self; instead, it contradicts an entire, holistic self. Applied to compartmentalization in social media, the potential for eroding moral agency in this sphere is consistent with Rozuel’s argument too. Consider an older social media platform, Second Life. In this virtual world

“It is not uncommon for people who spend a lot of time on Second Life and roleplaying games to say that their online identities make them feel more like themselves than they do in the physical real” (Turkle, 2011: 158 & 159).

For example,

“Pete has created an avatar, a buff and handsome young man named Rolo. As Rolo, Pete has courted a female avatar named Jade … As Rolo, he “married” Jade in an elaborate Second Life ceremony more than a year before … Pete has never met the woman behind the avatar Jade and does not wish to … Online, Pete and Jade talk about sex and Second Life gossip, but they also talk about money, the recession, work, and matters of health. Pete is on cholesterol-lowering medication that is only partially successful. Pete says that it is hard to talk to his “real” wife Alison about his anxieties; she gets “too worried that I might die and leave her alone.” But he can talk to Jade” (ibid.: 159).

And this is indicative of a more general trend in social media to

“… use social networking to be “ourselves,” but our online performances take on lives of their own. Our online selves develop distinct personalities. Sometimes we see them as our ‘better selves’” (ibid.: 160).

Thus, considering the above, it is easy to see how both corporate and social media settings encourage us to take on different roles so as to do our ‘best’ in each. Yet, as mentioned above, the consequence of compartmentalizing may well be an erosion of moral agency and responsibility, as well as an indication of the trauma associated with social media use.

Next, the increasing number of people splitting their reality on social media may well be linked to the profitability of having users spend more time on social media. For, more time in the virtual world translates into greater profits for the shareholders of social media platform companies. As such, a split subject, or virtual subject, may not be the original aim of social media, but its emergence is certainly aligned with the interests of said social media companies.

For example, think back to the applications of ML algorithms in chapter 1: these algorithms are employed to achieve certain ends aligned with company interests, with very little consideration for what is being weighed in the balance or what the other effects are. The most fundamental aim of these companies is to increase the total time their users spend on the platform. Time usage underpins advertising revenue, provides more data for the ML algorithms and increases influence over users. In other words, time usage on social media platforms which employ ML algorithms forms a self-reinforcing cycle. The more time you spend on the platform, the more information the platform can feed into its ML algorithms. The more information the algorithms can model, the better the platform understands its users’ behavioural patterns. The result is more effective algorithms that successfully get users to spend more time on the platform.
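To make this cycle concrete, the following is a minimal, purely illustrative sketch of an engagement-optimising feedback loop. It is not any platform’s actual code: the content categories, the simulated user preferences and the simple epsilon-greedy recommendation policy are assumptions chosen for brevity. The point is only the structure: more impressions yield better engagement estimates, which in turn yield recommendations that capture more of the user’s time.

```python
# Hypothetical sketch of an engagement feedback loop (illustrative only).
# Item names, preferences and the epsilon-greedy policy are assumed, not drawn
# from any real platform.
import random

ITEMS = ["news", "memes", "friends", "outrage", "ads"]
# The simulated user's (unknown to the platform) probability of engaging with each item.
TRUE_PREFERENCE = {"news": 0.2, "memes": 0.5, "friends": 0.4, "outrage": 0.7, "ads": 0.1}

# The platform's learned estimate of how engaging each item is.
estimated_engagement = {item: 0.0 for item in ITEMS}
impressions = {item: 0 for item in ITEMS}

def recommend(epsilon: float = 0.1) -> str:
    """Mostly show the item currently believed most engaging; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda item: estimated_engagement[item])

def observe_click(item: str) -> bool:
    """Simulate whether the user engages with the recommended item."""
    return random.random() < TRUE_PREFERENCE[item]

def update(item: str, clicked: bool) -> None:
    """More impressions sharpen the estimate, which sharpens future recommendations."""
    impressions[item] += 1
    estimated_engagement[item] += (float(clicked) - estimated_engagement[item]) / impressions[item]

engaged_impressions = 0
for _ in range(10_000):                 # each iteration is one item shown to the user
    item = recommend()
    clicked = observe_click(item)
    update(item, clicked)
    engaged_impressions += int(clicked)  # engagement is the proxy the loop optimises

print(f"Engaged impressions (proxy for time spent): {engaged_impressions}")
print({item: round(score, 2) for item, score in estimated_engagement.items()})
```

Run long enough, such a loop converges on whatever content keeps this particular user engaged, which is precisely the amplificatory behaviour described above.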

As noted before, the algorithms are not evil or biased in and of themselves; it is the programmers and the data which impart the bias. Consequently, what companies such as Facebook intend is to increase our reflexive consumption on their platforms, taking up more of our time, receiving more of our information and in turn pulling us deeper into the virtual real. In this way, making us subjects, in particular split subjects, is quite literally their business. For, as it was a Prince’s business in Machiavelli’s time to make individuals good sovereign subjects, it is their business to make us split subjects; both actions maintain the power of each system. Today, however, power is maintained not through military might but through knowledge of behaviour gained by employing ML algorithms.

Yet, as mentioned in chapter one regarding the amplificatory nature of ML algorithms, we find ourselves only in the infancy of exposure to such powerful AI technology. As such, we have not yet been drawn completely into the virtual world, and the unintended effects of social media and ML algorithms remain reversible. Thus, we sit in limbo, with effects surfacing but not dominating. Though, if we continue to employ such formidable technology without the substantial oversight necessary to ensure that its use is socially and morally sound, the unintended effects of such a powerful tool will certainly shape us in unintended ways far beyond the split subject outlined here.

Thus, at present, it appears as though the consequence is a split subject. Individuals lead one life in the physical world and a different one in the virtual world. Try as one might to homogenize these worlds, the reality is that the virtual world, social media in this instance, thrives off the split reality, requiring at minimum a division between the two. Consider how demanding social media can be. In order to answer the avalanche of communications coming from our devices, one must often split and say, ‘let me just finish on my phone’. In other words, ‘right now I am finishing up in the virtual world and then I can join you in the physical one’. As such, what social media companies are selecting for, intentionally or unintentionally, is a new type of subject: a split subject, who is capable of juggling both worlds.

Yet, with the tendency of ML algorithms to amplify time usage, the split subject may only be a temporary phenomenon. One day soon, these platform companies may seek more than a portion of our time, because our time is now their business. Consequently, with the incredible power of ML algorithms and gargantuan data sets at their disposal, the ability of these algorithms to amplify the effectiveness of whatever the aim is may well transform us into a virtual subject.

Thus, we take seriously Pettman’s work, which illuminates the notion of a subject which may well exist in the near future. If we are not careful of the transient phenomenon that is social media, we may turn from the split subject into the virtual subject. Machine learning algorithms already give evidence for the argument that social media corporations seek to make us spend more time on their platforms, thereby dragging us deeper into virtual reality. Yet, currently it seems as if we do not find ourselves completely in the virtual world. Consequently, Turkle’s split subject is more appropriate to the contemporary context. However, it is only when we combine Pettman, ML algorithms and Turkle that we arrive at a potentially new perspective from which to consider the notion of the subject, with Turkle illuminating the situation of the contemporary subject and Pettman directing us towards the situation of the subject in the near future.

6.0 Conclusion

In conclusion, considering that machine learning algorithms are the most common way in which most people interact with AI, and that they are being utilised to grow social media time expenditure, questions about how this change shapes us become inevitable. For, as time expenditure increases, so must the effects. Therefore, if you are to remember anything of this matter, let it be ‘we shape our tools and then they shape us’; in our pursuit of connection, we have created something with effects far more complex and varied than we ever bargained for. Every day, empowered by data-driven psychology, machine learning stretches its users’ time thinner and outsources greater and greater amounts of important social actions to virtual reality. Yet, many people today still live a substantial portion of their lives in the physical real. The consequence of this tension between the two worlds is an emerging subject, the split subject. Though, even the effects of such a partial transformation of the subject are substantial and have implications for moral agency. Thus, in itself, the notion of the split subject is rich ground for further research, and considering its potential to evolve into an even more socially alienating conception, the virtual subject, the richness of the research material grows further.

Declaration of Interest

The attached research by Keldt Schoeman was only made possible by the contributions of a number of different institutions and people. Firstly, the research was undertaken in the Department of Philosophy at the University of Pretoria. As such, the research is affiliated with the department in particular and university in general. Secondly, the research was supervised by Professor Emma Ruttkamp-Bloem of the University of Pretoria during the course of a postgraduate honors degree year and her contribution through both editing and direction was invaluable.

Disclosures of Funding

This research was made possible by a bursary of ZAR 75,000 from the Centre for Artificial Intelligence Research (CAIR). The bursary required that the research be directed toward the field of AI.

Acknowledgements

None

Footnotes

[1] MacCarthy states that McLuhan is responsible for the original idea behind the phrase ‘we shape our tools and then they shape us’, but the origin of this maxim is contested. Some would argue that the first to say something along these lines was Max Weber; however, the purpose of this essay is not to trace the authorship of this maxim but rather to use it to better understand the phenomenon of social media.

References

[1] Facebook. (2020). Facebook Reports Second Quarter 2020 Results. Available at: https://s21.q4cdn.com/399680738/files/doc_news/Facebook-Reports-First-Quarter-2019-Results.pdf (Date accessed: 10/04/19).

[2] Facebook. (2019). Facebook Reports Second Quarter 2019 Results. Available at: https://investor.fb.com/investor-news/press-release-details/2020/Facebook-Reports-Second-Quarter-2020-Results/default.aspx (Date accessed: 31/08/20).

[3] Hazelwood, K., et al. (2018). Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective. Available at: https://research.fb.com/wp-content/uploads/2017/12/hpca-2018-facebook.pdf (Date accessed: 07/04/19).

[4] Homer. (1946). The Odyssey. Murray, A. T. (Trans.). Harvard University Press: London.

[5] Kross, E., et al. (2013). Facebook Use Predicts Declines in Subjective Well-Being in Young Adults. Available at: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0069841&type=printable (Date accessed: 27/05/19).

[6] Lang, K. (1997). The Dialectics of Decay: Rereading the Kantian Subject. Available at: www.jstor.org/stable/3046260 (Date accessed: 24/10/19).

[7] MacCarthy, M. (2019). The Ethical Character of Algorithms – and What It Means for Fairness, the Character of Decision-making, and the Future of News. Available at: https://ai.shorensteincenter.org/ideas/2019/1/14/the-ethical-character-of-algorithmsand-what-it-means-for-fairness-the-character-of-decision-making-and-the-future-of-news-yak6m (Date accessed: 04/06/19).

[8] Machiavelli, N. (1532). The Prince. Trans: Atkinson, J. B. (2008). Hackett Publishing Company: Indianapolis.

[9] Marx, K & Engels, F. (1848). Manifesto of The Communist Party. Trans: Samuel Moore (1888) (1970). Foreign Languages Press Peking: Peking.

[10] McQuillan, D. (2018). People’s Council for Ethical Machine Learning. Available at: https://journals.sagepub.com/doi/pdf/10.1177/2056305118768303 (Date accessed: 10/06/19).

[11] Patterson, E. (2016). News coverage of the 2016 election: how the press failed the voters. Harvard Kennedy School Press: Harvard. Available at: https://ssrn.com/abstract=2884837 (Date accessed: 11/10/19).

[12] Pettman, D. (2016). Infinite Distraction. Polity. (Date accessed: 10/03/19).

[13] Roiland, J. & Harmon, D. (2013). Rick and Morty. Season 1, Episode 6 – Rick Potion #9. Available at: https://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=rick-and-morty-2013&episode=s01e06 (Date accessed: 25/08/19).

[14] Rozuel, C. (2011). The Moral Threat of Compartmentalization: Self, Roles and Responsibility. Journal of Business Ethics. Available at: http://www.jstor.org.uplib.idm.oclc.org/stable/41475971 (Date accessed: 10/10/19).

[15] Sartre, J. (1943). Being and Nothingness. Trans: Barnes, H. E. (1994). Gramercy Books: New York.

[16] Seitz, B. (2004). Sartre, Foucault, and the Subject of Philosophy’s Situation. Available at: http://www.jstor.org/stable/23512878 (Date accessed: 25/10/19)

[17] Turkle, S. (2011) Alone Together: why we expect more from technology and less from each other. New York: Basic Books.