By Caroline Tagg (@carotagg) and Philip Seargeant (@philipseargeant), Open University.
Diversity, in all its forms, is central to the work carried out by the TLANG Project. Sitting at the heart of ideas about sociolinguistic diversity is the internet, which enables us to connect with others regardless of geographical distance. But it is important not to overstate the unbounded nature of online diversity, nor internet users’ freedom from social constraints and ties. The research project Creating Facebook: the management of conflict and the pursuit of online conviviality, carried out by TLANG member Caroline Tagg and her colleague Philip Seargeant, both at the Open University, draws on survey and interview data to suggest that social networks on Facebook are often structured by people’s existing social roles and relationships, with important implications for how they engage with news and political opinion. In this blog post we look at what our Facebook research shows about ‘intradiversity’ – our term for the type of diversity facilitated by the architecture and user practices of Facebook – and what this means for Facebook’s role as a forum for civic debate.
Fake news and filter bubbles
The fact that people increasingly get their news from Facebook has gained a lot of public attention, given the role the site is alleged to have played in polarising debate and creating online political ghettos during the 2016 US presidential election. The main focus of the media coverage has been on the personalisation algorithm that Facebook uses to push information onto users’ newsfeeds so as to prioritise stories which, as Mark Zuckerberg puts it, are deemed most ‘meaningful’ to individuals. One supposed outcome of this is the creation of ‘filter bubbles’, a concept put forward by Eli Pariser which describes the way Facebook users see only news or information with which they tend to agree, and are shielded from opposing viewpoints. This ghettoisation of the internet, it is argued, increases the likelihood that fake news stories spread quickly and widely, as they circulate unchallenged through communities who already agree with the sentiments they reinforce. Calls have been made for Facebook to implement technological solutions to deal with these online ghettos, for example by flagging up or removing stories – such as Pizzagate or the Pope’s supposed support for Donald Trump – that are clearly untrue.
Creating Facebook highlights, however, that behaviour on Facebook is not determined solely by technology: people themselves create the context in which they communicate. The personalisation algorithm plays a significant role, but of equal importance is what people themselves do on the site, and how they fashion their experience of it as a communicative space through the choices they make about how and what to post, and how to react to the behaviour of others. For example, our research revealed an overwhelming belief that Facebook is not particularly well suited to serious debate around political issues, and that things should be kept trivial and light-hearted. Political opinions are expressed (and often offend), but there is a reluctance to engage in controversial discussion. Instead, to avoid conflict, people respond to posts they disagree with or find offensive by simply ignoring them, blocking the poster or, in cases where it is politic to do so, unfriending the offender. In this way they contribute to the creation of the filter bubble effect through their own actions, by removing opposing viewpoints from their newsfeeds. This is not to say that people deliberately shield themselves from other perspectives, but that they inadvertently do so through their management of the complex social relations within their online social network. The implication, then, is that solutions to the problem of fake news that focus solely on altering the technology will not succeed if they don’t also take into account the social aspects of filter bubble creation.
What has this all got to do with diversity? Our findings suggest – despite arguments elsewhere that personalisation algorithms present people only with views they already agree with – that the Facebook users in our research were exposed to a great variety of different opinions and values. This is due to the particular nature of such sites, which often involve users connecting with large numbers of ‘friends’, many of whom they may only have met once or twice. But what makes the resulting diversity interesting is the very fact that a user has likely met all their online ‘friends’ offline. Sites like Facebook are often described as being ‘ego-centred’ in the sense that a user’s online social network is structured around their (offline) connections. These connections are in turn shaped by the individual’s background and life trajectory (where they have been and what they have done), so that the audience for any one of their posts can comprise old schoolfriends, family members, work colleagues, a range of acquaintances and chance encounters, and so on. This is a particular kind of diversity which we call ‘intradiversity’. As with superdiversity, intradiversity foregrounds difference rather than homogeneity; but, unlike superdiverse contexts, intradiverse networks are structured around an individual’s background, experiences and connections. Online intradiversity is similar in some respects to that which you might come across at a wedding, where the guests won’t necessarily form a homogeneous group in terms of geographical and cultural background, education or age, yet the diversity is not completely unpredictable or random, as it’s structured around one individual’s life experiences (or two, in this case!).
On Facebook, intradiversity has two implications. The first is that this type of social network is likely to bring together varied political and cultural viewpoints around particular issues within the same communicative space. The second is that the various ties between a Facebook user and members of this network (who constitute the potential audience for their posts) will constrain and shape not only what they feel comfortable writing about, but also the extent to which they are willing to challenge views they disagree with. It may be difficult, for example, to argue with a friend of your cousin’s wife in front of members of your family. The research shows that Facebook users can be acutely aware of the different people who make up their online network of ‘friends’, and that they implement various strategies to ensure that they do not inadvertently upset them. One unintended consequence of this combination of factors – as mentioned above – is that people often eschew political debate and, by blocking unfavourable views rather than engaging with them, inadvertently contribute to the narrowing rather than widening of civic discourse.
Our research has real-world implications regarding the need for critical and social digital literacies education (see our article in the Times Higher Education). But it also contributes to developing ideas about contemporary social diversity, and the need to recognise how our various social ties are reproduced across contexts and the complex ways in which they can act to constrain and shape our behaviour. Although particularly salient on social media sites like Facebook, these observations can also inform our understanding of the diverse city neighbourhoods and workspaces explored by the TLANG project.
For further discussion of the topic, please see The Conversation. Our book Taking offence on social media: conviviality and communication on Facebook, based on the research, is published by Palgrave Macmillan in early spring.