The algorithm medley: explaining facebook.tracking.exposed

Claudio Agosti is a software engineer who has always been fascinated by the social impact new technologies can have. On 8 April he set out to explain and discuss his quest for diversity in the Facebook algorithm and the facebook.tracking.exposed dataset. He first presented the concept and aims of his work, and then the methodology behind the dataset itself.

What is the objective?

Agosti was explicit that his work was not at all aimed at demonising the Facebook algorithm. On the contrary, he made it clear that “we do need algorithms [as] sequences of actions are necessary for any physical or non-physical action”, just as properly cooked pasta requires not only the right ingredients but also a sequence of steps to ‘process’ them. However, it is undeniable that algorithms do interfere with our capacity to see the world.

What can go wrong then? Through their filtering process, algorithms may amplify human psychological tendencies, such as the urge to group with like-minded people, and reinforce our confirmation biases.

Agosti used the Black Lives Matter movement in the United States as an example of a case in which, he said, the algorithm limited the diffusion of information: “If you were not aware of what was happening, and did not know what you were looking for, the algorithm would not get the information to you.”

This led Agosti to wonder “whether it [was] the role of an individual to explore news or [if] it is the algorithm of Facebook that should help doing so?” However, even if one dislikes Facebook’s algorithm, the alternatives to it remain limited.

Quitting Facebook is not an option, as it de facto puts the former user in a sort of ‘social isolation’. Another way to resolve the matter could be to “pop the information bubble,” effectively enriching personal feeds with varied information, but this too can be difficult to achieve. Removing the algorithm would also be a non-solution in Agosti’s view, as users do need to filter the flow of facts they receive.

Therefore, if the algorithm is considered as the prioritisation of somebody’s own values, a way to demystify it would be to introduce diversity into the rules of the game: borrowing somebody else’s prioritisation and looking through their eyes.

This is the idea at the core of Agosti’s work and the facebook.tracking.exposed dataset.

The dataset aims to provide researchers with usable evidence for understanding Facebook’s personalisation algorithm, to contribute to an open data feed that can be used to observe trends, to give users [of Facebook and the dataset] a less immersive experience, and to develop “the eyes of others.”

How does it work?

The dataset extracts its data from an ‘extension’ which Facebook users can install in their Google Chrome browser. The extension copies the public posts appearing in the user’s feed, without touching the ones visible to friends only, and gathers them on a server. In practice, by installing the extension, users become information contributors to the facebook.tracking.exposed dataset. This then enables them to analyse their own Facebook feed, with a data analysis tool called CSVKit, and to extract the data collected by the dataset. The project also lets users see how many times their Facebook posts have been accessed, how many times a post appears on the timelines of other users, at what position on the timeline, and so on.
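To give a concrete sense of what working with such an export might look like, here is a minimal Python sketch that tallies which sources appear most often in a personal feed export. The file name and the `source` column are assumptions made for illustration only, not the project’s actual export schema.

```python
import csv
from collections import Counter

def count_sources(path):
    """Count how often each source (page or profile) appears in a feed export.

    Assumes a CSV export with a 'source' column; this is a hypothetical
    schema used for illustration, not the project's real one.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        return Counter(row["source"] for row in reader)

if __name__ == "__main__":
    # Print the ten sources that dominate this (hypothetical) feed export.
    for source, n in count_sources("my_feed_export.csv").most_common(10):
        print(f"{n:5d}  {source}")
```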

The dataset, Agosti explained, would make it possible to conduct a number of experiments. For instance, one could compare the Facebook news feeds of two users with the same friends but different ‘likes’, and then check how much the Facebook ‘likes’ matter and how influential the algorithm is. Other questions the data might help answer are: how quickly Facebook reacts to changes in personal ‘like’ preferences (i.e. how fast the algorithm is), what the average life of a post is, who has more influence among your friends, and so on.
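As a sketch of the “two users, same friends, different likes” experiment, one could measure how much their collected timelines overlap. The file names and the `postId` column below are hypothetical, chosen only to illustrate the idea.

```python
import csv

def post_ids(path):
    """Collect the set of post identifiers in one user's feed export.

    Assumes a CSV export with a 'postId' column; this is an assumed
    schema for illustration, not the project's real one.
    """
    with open(path, newline="", encoding="utf-8") as f:
        return {row["postId"] for row in csv.DictReader(f)}

def feed_overlap(path_a, path_b):
    """Jaccard overlap of two feeds: 1.0 means identical, 0.0 means disjoint."""
    a, b = post_ids(path_a), post_ids(path_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

if __name__ == "__main__":
    print(f"feed overlap: {feed_overlap('user_a.csv', 'user_b.csv'):.2%}")
```

The higher the overlap despite different ‘likes’, the less those ‘likes’ appear to matter; a low overlap would suggest the algorithm weighs them heavily.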

He insisted, however, that the dataset provides a wealth of data and raises new questions without actually answering them: extracting evidence and findings would require scientific research to work through this data, and the data would need to be expanded further to become representative. The invitation, therefore, was to support the data accumulation by installing the extension, or even to join Agosti’s efforts to make the dataset easier to develop and expand further.

Concluding, Agosti clarified that his project is not against Facebook, but rather against the idea that one algorithm can fit everyone.