
The Spectre of Anonymity

Prologue: In preparation for the A=Anonymous: Reconfiguring Anonymity Conference this week at Kampnagel, Hamburg, I am republishing an older article of mine as a blog post. The original article was published in 2012 in a book titled “Sniff, Scrape, Crawl … {On Privacy, Surveillance, and Our Shadowy Data Double}”, edited by Renee Turner and published by Mute Books. At the time of the text’s completion, the “Arab Spring” had entered its “Fall”, facing a period of backlash from governments. The turn of events made me think about the various lists of demands that people would put up on banners rolled out across large buildings. Often the banners were anonymous, posing as an expression of the masses. As tantalising as the demands were, their messages were too easily erased or co-opted by governments marching forward with “counter-revolutions”. The piece tries to respond to the various manifestations of anonymity during the course of these events, with an eye on the Internet and social movements.
 
The piece is somewhat dated and is the writing of a younger scholar. It also includes the famous “Anonymous City” animation, which is now over 10 years old. About time!

 

“Anonymity is our first line of defence.”
Professor Xavier, X-Men: First Class

 

Anonymity is a powerful concept and strategy. It transgresses concepts like authorship, the original, and the origin, and presents itself across important elements of our lives like songs, poems, oral histories, urban legends, conspiracy theories, and chain mails.
 
For centuries anonymity has been a strategy used by communities to articulate their collective voice. This definition is related to an understanding of anonymity as it relates to individual autonomy, and yet, it shifts the focus from its individual use to its collective effect. Anonymously produced statements or artefacts have expressed the cultural practices, beliefs and norms of the past, while creating a space in which future collectives can manifest themselves.
 
In some contexts, anonymity allows the individual to melt into a body of many, to become a pluralistic one, for which communicating a message is more important than the distinction of the participating individual(s). Whether at a demonstration or a football match, the power of the anonymous collective produces a field of protection and cohesion around its participating individuals.
 
And yet, the seemingly unbreakable bond can be fragile: participation is fluid, individuals and groups enter and leave as they please, and the organisation of the anonymous collective is distributed. The anonymous collective perseveres only as long as the common line is held. This volatility is also what distinguishes spontaneously gathered anonymous groups from purposefully assembled collective anonymous bodies.
 
And in this difference we understand that anonymity is more a means than an end in itself. It can be utilised in multiple ways for a variety of purposes. For example, a centrally organised form of anonymity can be found among the uniformed soldiers of a brigade or the managers of a corporation — the latter also known as the “anonymous limited”.[2] In organised anonymity, participation is mandatory and actions are heavily controlled. The objective is still to protect, but this time to protect the organising authorities rather than the participating individuals — the latter often being consumed in the process. Control mechanisms are there to utilise the anonymous group to reinforce existing power hierarchies, e.g., the state, the nation, or the shareholders, and to render divergences from this goal impossible.
 
Anonymity, in both its more fluid and its more centrally organised forms, shows some parallels when used as a strategy in networked systems like the Internet. As in the physical world, it manifests itself in various mechanisms for a multitude of ends and hence has different potentials and limitations.
 
The Internet and Anonymity
 
The power of anonymity in Internet communication has long been recognised by computer scientists and hackers. For example, ‘anonymous communications’ technologies — of which Tor is a popular implementation — strip messages (in this case, Internet traffic) of any information that could be used to trace them back to their senders. Powerful observers can identify that Tor users are communicating (with each other or with websites), but cannot identify who is communicating with whom. In other words, individual communication partners are not distinguishable within a set (of Tor users). Communicating partners also remain anonymous towards each other. These measures are intended to protect individuals in the set against any negative repercussions resulting from inferences that can be made from whom they are communicating with, how often, or which websites they are visiting.
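The core idea, hiding who is talking to whom within a set of users, can be sketched in a few lines. The toy Python “mix” below illustrates only the anonymity-set principle, not Tor’s actual design (Tor routes layered-encrypted traffic through relay circuits rather than batching and shuffling messages); all names and fields in the sketch are invented for the example.

```python
import random

def mix_batch(messages):
    """Toy 'mix': collect a batch of messages, strip sender identities,
    and shuffle, so an observer sees that each user sent something but
    cannot link any output message to its sender. The anonymity set is
    the whole batch."""
    stripped = [{"payload": m["payload"]} for m in messages]  # drop 'sender'
    random.shuffle(stripped)  # destroy ordering information
    return stripped

batch = [
    {"sender": "alice", "payload": "msg-a"},
    {"sender": "bob", "payload": "msg-b"},
    {"sender": "carol", "payload": "msg-c"},
]
out = mix_batch(batch)
assert all("sender" not in m for m in out)  # no output carries its origin
```

The larger the batch, the larger the set within which any individual sender is indistinguishable, which is why the text above speaks of protection within a set of users.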

Anonymous City: A short animation by Thibaut D’alton and myself describing the mechanisms used to design anonymous communications. All misrepresentations of anonymous communications are our own.

 
Anonymous communications are designed to circumvent the traceability of interactions on the Internet. They work around the default architecture of the Internet that makes it possible to trace all messages, online actions, and other ‘data bodies’ to their origins and, through that, to their individual authors in physical space and time. This capability allows service providers to collect, scrutinise, dissect, reconfigure, and re-use these data bodies. By masking the origin and destination of communications, services like Tor remove the link between individuals and their data bodies.
 
Despite the diversity of the groups and communities using anonymous communications, such technologies are usually cast in a negative light in mainstream policy papers and in the media. Anonymous communication infrastructures are generally depicted as providing channels for criminal activity or enabling deviant behaviour.
 
It seems, however, what bothers authorities the most is not anonymity as such, but rather the characteristics of the user base and the distributed nature of anonymous communications. This becomes evident in the keen interest that data miners and regulators have in a centralised form of anonymity applied to large databases, a strategy that fits squarely with the interests of the growing data economy.
 
The Market, Governance and Anonymity
 
We are currently in the midst of an economic hype driven by data. The ideology behind this hype suggests that the data collected is going to make the behaviour of populations more transparent, and easier to organise, control, and predict. Data collected en masse is expected to reveal to its collectors ways of improving the efficiency of markets as well as of their systems of governance.
 
Improvement is promised through mastering the application of statistics to the gathered data sets: methods of statistical analysis are applied to large-scale, all-encompassing collections of data, and knowledge is inferred from the resulting statistics. According to behavioural advertisers and service providers, these data sets are becoming ‘placeholders’ for understanding populations, allowing organisations to provide them with refined, individualised services. In the process, elaborate statistical inferences replace ‘subjective’ discussions, reflections, or processes about societal needs and concerns. The data comes to speak for itself. Hence, in this ideology, the promise of control and efficiency lies in the data and the processing power of its beholders.
 
However, the collection and processing of such mass amounts of data about consumers or citizens is historically and popularly coupled with the ‘privacy problem’. It has been commonly understood that addressing this issue requires limiting the power these organisations can exercise when using this data. These constraints need to hold as long as the people to whom the data in a given database relate are uniquely identifiable.
 
It is in this series of reductions of the problem that service providers discover anonymity for their own ends. The database is to be manipulated in such a way that the link between any data body included in the data set and its individual ‘author’ is concealed, while the usefulness of the data set as a whole is preserved. If this is somehow guaranteed, then the dataset is declared ‘anonymised’, and it becomes fair game. Inferences can be made freely from the data set as a whole, while ideally no individual participant can be targeted.
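As an illustration of what such ‘anonymisation’ of a database can look like in practice, the sketch below applies one simple generalisation step (in the spirit of k-anonymity, a technique the text does not name) to a hypothetical table: exact ages are coarsened into ranges so that rows are harder to link to individuals while aggregate queries still work. The field names and data are invented for the example.

```python
def generalise_ages(records, bucket=10):
    """Toy anonymisation step: replace each exact age with a coarse range,
    preserving the data set's usefulness for aggregate analysis while
    concealing the link between a row and a specific individual."""
    out = []
    for r in records:
        lo = (r["age"] // bucket) * bucket  # floor the age to a bucket boundary
        out.append({"age_range": f"{lo}-{lo + bucket - 1}",
                    "purchase": r["purchase"]})
    return out

rows = [{"age": 34, "purchase": "book"}, {"age": 37, "purchase": "shoes"}]
anonymised = generalise_ages(rows)
assert all(r["age_range"] == "30-39" for r in anonymised)  # both fall in one bucket
```

As the text notes later, even far more sophisticated versions of such techniques provide no formal guarantees; rows coarsened this way can often still be re-identified using auxiliary data.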
 
Scrubbing data until it is deemed sufficiently anonymised for anyone to process as they wish is not only endorsed by service providers, but also reinforced by regulation. The European Data Protection Directive excludes anonymised data sets from its scope [1]. If the database is anonymised, then the data is set free. This free flow of data is then constrained only by the markets, in line with one of the two principal objectives of the same Directive.
 
The Surrogates to Anonymity
 
What is common to anonymity on the Internet and elsewhere is the breaking of the link between the original author(s) and the message. This is an important element of anonymity as a communication strategy. Once the message is released, it is likely to be subverted and reclaimed by others. This is one of the charms of the fluid anonymous message: any individual or group can claim it as their own. But when a group subverts the message to negate all other linkages and continuities, monopolising the interpretation of the message’s senders, destination, and content, the relationship between ‘the anonymous’ and the message can become vulnerable.
 

Trailer of “Whose Is This Song?”, a documentary on folk songs by Adela Peeva.

 
An example of this kind of dynamic at work can be seen in Adela Peeva’s film “Whose Is This Song?” [3]. In the documentary, Peeva searches across the Balkans for the origins of an anonymous folk song. In each country or region that she visits the song changes, becoming a love song, a song of piety, a song about a girl from the village behind the hills, or even a war song. However, with every variation, the question about the song unravels a chorus of claims about its authentic origins. In each claim, the song is cut anew from its travelling past. It is appropriated and burdened with carrying the truths of a national past and with shaping the future identity of the community in question through barely subtle archetypes: from the young Turks to amorous Greeks, from proud Albanians to pious Bosnians, from debauched Serbians to superstitious Gypsies, all the way to unflinching Bulgarians.
 
Peeva’s film captures a dilemma that can be associated with any anonymous action or artefact. Anonymity allows for the articulation of a collective message that can travel without the burdens of authorship and origin. This allows for some lightness that opens the way for the message to flow freely and to be reshaped creatively. However, this void is easily filled when a group, community, or organisation claims and bends the message to suit its own interpretation of the past and future. The message is then fixed, and its interpretation is monopolised. This happens because anonymity frees the message and, simultaneously, leaves it up for grabs.
 
If this is the case, the message can even be used to shape the story of the anonymous community that created it. The anonymous message may boomerang back to hit its authors, often as a collective. The hijacking of popular uprisings by a few who then establish their own power, the re-writing of folk songs into chauvinistic hymns, and the utilisation of anonymous cyber-actions to introduce draconian security measures are all examples of de-contextualised anonymous messages coming back to haunt their origins.
 
In the data economy, the anonymised data set is fashioned as a digital mirror of populations’ activities and tendencies. The organisations that hold a monopoly over these data sets get to assert their own categories of desired and undesired activities as they see fit to improve markets and forms of governance. Since the data in such data sets cannot be directly linked to individuals, privacy is claimed to be intact. And since the data sets are anonymised, the targeted populations cannot expect answers to their questions about the quality, repurposing, and use of this data for or against them.
 
Continuity, Articulation and Anonymity
 
Given its historical persistence across centuries, anonymity appears to be here to stay. It is hence not surprising that this viral strategy replicates itself on the Internet. In its most powerful, and at times even heroic, moments, it is used to counter targeted surveillance by creating collective protection around individuals. Yet we also need to recognise that the same strategy is concurrently used to create discrete, de-contextualised, and yet linked data sets, which are immanent to the data economy.
 
The current economy, based on a data fetish, leads to bizarre data collections. We now have gargantuan databases of “friends” who “rate” information to their “like”-ing, from which our interests, desires, opinions, and soft spots can be inferred. The anonymisation of these databases is not done to protect the participants in these data sets — never mind that even in their sophisticated forms these anonymisation techniques provide no formal guarantees [4]. Rather, the strategy is used to disempower their subjects from understanding, scrutinising, and questioning the ways in which these data sets are used to organise and shape their access to resources and connections in a networked world.
 
Against the backdrop of the data economy, our societies should continue to savour anonymity as a strategy to protect individuals on the Internet, and we should reject its reincarnation as an instrument for creating discontinuity between the context in which these data sets were authored and the contexts in which they get used, with the intention of managing and manipulating people’s lives. Database anonymisation may be useful for additional protection, but it should not be the basis for service providers to shed their responsibilities with respect to the collection and processing of our data bodies.

 
The discontinuity inherent to anonymisation and its dis/empowering effects need to be further theorised in political movements, where anonymity remains a powerful means to achieve political objectives and disseminate collective messages to a greater public. From the perspective of social movements it is clear that the technical instantiations of anonymous communications must remain a fundamental function available in our communication networks. However, especially in political contexts, the vulnerability that is inherent to the anonymous collective requires that multiple strategies be available to its participants. For instance, in order to create continuity with activities that are initiated anonymously, collectives with political agendas may publish statements or organise activities that are explicit, precise, situated, and that include their origins. Such a coupling of strategies would build on the power and lightness of anonymous messages while making their cooptation more difficult.

 
[1] European Union (1995). Data Protection Directive (Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data).
http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML
(accessed March 15, 2012)
[2] A recent article in The Economist states, “In dozens of jurisdictions, from the British Virgin Islands to Delaware, it is possible to register a company while hiding or disguising the ultimate beneficial owner.”
The Economist, “Corporate Anonymity: Light and Wrong”, Jan 21st 2012: http://www.economist.com/node/21543164 (accessed March 15, 2012) [name of author not given in the on-line issue]
[3] Adela Peeva (dir.), Whose Is This Song?, film, 2003.
[4] Arvind Narayanan and Vitaly Shmatikov, “Myths and Fallacies of ‘Personally Identifiable Information’”, Communications of the ACM, vol. 53, no. 6, 2010.

Fourth International Workshop on Privacy Engineering CFP is out!


Great news: the Call for Papers for the fourth iteration of the International Workshop on Privacy Engineering (IWPE) is out! This year’s program seeks to highlight challenges to privacy posed by widespread adoption of machine learning and artificial intelligence technologies. One motivation for this focus stems from goals and provisions of the European General Data Protection Regulation (GDPR), including requirements for privacy and data protection by design, providing notices and information about the logic of automated decision-making, and emphasis on privacy management and accountability structures in organizations that process personal data. Interpreting and operationalizing these requirements for systems that employ machine learning and artificial intelligence technologies is a daunting task and we hope to attract papers from researchers, civil society and industry on the topic.

This year we decided to co-locate IWPE with IEEE European S&P, which will take place in London between the 24th and 26th of April. With this, we hope to establish a tradition in the coming years of moving the workshop (for now) between the US and the EU.

Workshops are the product of all the dedicated researchers who agree to serve on our PC, as well as the hard work of the organizers of the conferences where we co-locate our workshop. We are delighted to once again have a fantastic and interdisciplinary PC. Great effort also goes into establishing a new workshop. For this, I would like to thank Jose M. del Alamo (Universidad Politécnica de Madrid), who has done the heavy lifting of putting together our workshop for the last four years. Special thanks also go out to our current program co-chairs Anupam Datta (Carnegie Mellon University), Aleksandra Korolova (University of Southern California) and Deirdre K. Mulligan (UC Berkeley); our industry chair Nina Taft (Google); our mentoring and local chair Jose Such (King’s College London); and our publicity chair Arunesh Sinha (University of Michigan). We look forward to seeing you at IWPE’18.

Symposium on Obfuscation

My first encounters with the concept of obfuscation go back to discussions that the privacy research group at COSIC/ESAT (KUL) had about TrackMeNot in 2012. Back then we were discussing the efficacy of the possible protections offered by TrackMeNot when faced with a “learning” machine. Little did I know that one day I would have close encounters with the creators of TrackMeNot: Helen Nissenbaum, Vincent Toubiana and Daniel Howe. All three will be at the Symposium on Obfuscation, which takes place next week at NYU. The line-up of speakers includes Susan Stryker, Nick Montfort, Laura Kurgan, Claudia Diaz, Günes Acar, Finn Brunton, Hanna Rose Shell and Joseph Turow, as well as Rachel Greenstadt, representing her research group that developed “Anonymouth”; Daniel Howe, the creator of “Ad Nauseam”; and Rachel Law, the maker of “Vortex”. You can find out more about the event here.

February 7, 2014. Category: news.

welcome, vous êtes ici!

I am currently a post-doctoral fellow at CITP, Princeton University. More coming soon! Until then, please check my old homepage here.

January 2, 2014. Category: meta, news.