

The Spectre of Anonymity

Prologue: In preparation for the A=Anonymous: Reconfiguring Anonymity Conference this week at Kampnagel in Hamburg, I am republishing an older article of mine as a blog post. The original article was published in 2012 in a book titled “Sniff, Scrape, Crawl… (On Privacy, Surveillance, and Our Shadowy Data Double)”, edited by Renee Turner and published by Mute Books. At the time of the text’s completion, the “Arab Spring” had entered its “Fall”, facing a period of backlash from governments. The turn of events made me think about the various lists of demands that people would put up on banners rolled out across large buildings. Often the banners were anonymous, presented as an expression of the masses. As tantalizing as the demands were, their messages were too easily erased or co-opted by governments who were marching forward with “counter-revolutions”. The piece tries to respond to the various manifestations of anonymity during the course of these events, with an eye on the Internet and social movements.
 
The piece is somewhat dated and the writing of a younger scholar. It also includes the famous “Anonymous City” animation, which is now over 10 years old. About time!

 

“Anonymity is our first line of defence.”
Professor Xavier, X-Men: First Class

 

Anonymity is a powerful concept and strategy. It transgresses concepts like authorship, the original, and the origin, and presents itself across important elements of our lives like songs, poems, oral histories, urban legends, conspiracy theories, and chain letters.
 
For centuries anonymity has been a strategy used by communities to articulate their collective voice. This understanding of anonymity is related to individual autonomy, and yet it shifts the focus from anonymity’s individual use to its collective effect. Anonymously produced statements or artefacts have expressed the cultural practices, beliefs and norms of the past, while creating a space in which future collectives can manifest themselves.
 
In some contexts, anonymity allows the individual to melt into a body of many, to become a pluralistic one, for which communicating a message is more important than the distinction of the participating individual(s). Whether at a demonstration or a football match, the power of the anonymous collective produces a field of protection and cohesion around its participating individuals.
 
And yet, the seemingly unbreakable bond can be fragile: participation is fluid, individuals and groups enter and leave as they please, and the organisation of the anonymous collective is distributed. The anonymous perseveres only as long as the common line is held. This volatility is also what distinguishes spontaneously gathered anonymous groups from purposefully assembled collective anonymous bodies.
 
And in this difference we understand that anonymity is more a means than an end in itself. It can be utilised in multiple ways for a variety of purposes. For example, a centrally organised form of anonymity can be found with the uniformed soldiers of a brigade or the managers of a corporation — the latter also known as the “anonymous limited”.[2] In organised anonymity, participation is mandatory and actions are heavily controlled. The objective is still to protect, but this time to protect the organising authorities rather than the participating individuals — the latter often being consumed in the process. Control mechanisms are there to utilise the anonymous group to reinforce existing power hierarchies, e.g., the state, the nation, or the shareholders, and to render divergences from this goal impossible.
 
When used as a strategy in networked systems like the Internet, anonymity — in both its more fluid and its more centrally organised forms — shows some parallels to its offline counterparts. As in the physical world, it manifests itself in various mechanisms for a multitude of ends and, hence, has different potentials and limitations.
 
The Internet and Anonymity
 
The power of anonymity in Internet communication has long been recognised by computer scientists and hackers. For example, ‘anonymous communications’ technologies — of which Tor is a popular implementation — strip messages (in this case, Internet traffic) of any information that could be used to trace them back to their senders. Powerful observers can identify that Tor users are communicating (with each other or with websites), but cannot identify who is communicating with whom. In other words, individual communication partners are not distinguishable within a set (of Tor users). Communicating partners also remain anonymous towards each other. These measures are intended to provide the individuals in the set with protection against any negative repercussions resulting from inferences that can be made from who and how often they are communicating with others, or from the websites they are visiting.

Anonymous City: A short animation by Thibaut D’alton and myself describing the mechanisms used to design anonymous communications. All misrepresentations of anonymous communications are our own.

 
Anonymous communications are designed to circumvent the traceability of interactions on the Internet. They work around the default architecture of the Internet that makes it possible to trace all messages, online actions, and other ‘data bodies’ to their origins and, through that, to their individual authors in physical space and time. This capability allows service providers to collect, scrutinise, dissect, reconfigure, and re-use these data bodies. By masking the origin and destination of communications, services like Tor remove the link between individuals and their data bodies.
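
For readers who want a concrete picture, here is a toy sketch of the layered (“onion”) encryption idea that underlies such designs. The three-relay path and pre-shared keys are illustrative assumptions, not how Tor actually works; real systems negotiate keys per circuit and add routing, padding, and timing defences omitted here.

```python
# Toy sketch of layered ("onion") encryption, the idea behind anonymous
# communication designs such as Tor. Hypothetical three-relay path with
# pre-shared keys; real systems negotiate keys per circuit.
from cryptography.fernet import Fernet  # pip install cryptography

path = ["entry", "middle", "exit"]
keys = {relay: Fernet.generate_key() for relay in path}

def wrap(message: bytes) -> bytes:
    """The sender encrypts once per relay, innermost layer for the exit."""
    onion = message
    for relay in reversed(path):
        onion = Fernet(keys[relay]).encrypt(onion)
    return onion

def peel(relay: str, onion: bytes) -> bytes:
    """Each relay strips exactly one layer; it learns only its neighbours,
    never both the original sender and the final destination."""
    return Fernet(keys[relay]).decrypt(onion)

onion = wrap(b"hello, world")
for relay in path:             # entry -> middle -> exit
    onion = peel(relay, onion)
print(onion)                   # b'hello, world' emerges only at the exit
```

The point of the layering is that no single relay ever holds both ends of the link between an individual and their data body: that link only exists in the fully peeled message at the exit.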
 
Despite the diversity of the groups and communities using anonymous communications, such technologies are usually cast in a negative light in mainstream policy papers and in the media. Anonymous communication infrastructures are generally depicted as providing channels for criminal activity or enabling deviant behaviour.
 
It seems, however, what bothers authorities the most is not anonymity as such, but rather the characteristics of the user base and the distributed nature of anonymous communications. This becomes evident in the keen interest that data miners and regulators have in a centralised form of anonymity applied to large databases, a strategy that fits squarely with the interests of the growing data economy.
 
The Market, Governance and Anonymity
 
We are currently in the midst of an economic hype driven by data. The ideology behind this hype suggests that the data collected is going to make the behaviour of populations more transparent and easier to organise, control, and predict. Data collected en masse is expected to reveal to its collectors ways of improving the efficiency of markets as well as their systems of governance.
 
Improvement is promised through mastering the application of statistics to the gathered data: by applying methods of statistical analysis to large-scale, all-encompassing data sets and inferring knowledge from the results, their collectors expect to find ways of improving market efficiency and systems of governance. According to behavioural advertisers and service providers, these data sets are becoming ‘placeholders’ for understanding populations and allow organisations to provide them with refined, individualised services. In the process, elaborate statistical inferences replace ‘subjective’ discussions, reflections or processes about societal needs and concerns. The data comes to speak for itself. Hence, in this ideology, the promise of control and efficiency lies in the data and the processing power of its beholders.
 
However, the collection and processing of such massive amounts of data about consumers or citizens is historically and popularly coupled with the ‘privacy problem’. It has been commonly understood that addressing this issue requires limiting the power these organisations can exercise when using this data. These constraints need to hold as long as the people to whom the data in a given database relates are uniquely identifiable.
 
It is in this series of reductions of the problem that service providers discover anonymity for their own ends. The database is to be manipulated in such a way that the link between any data body included in the data set and its individual ‘author’ is concealed, while the usefulness of the data set as a whole is preserved. If this is somehow guaranteed, then the dataset is declared ‘anonymised’, and it becomes fair game. Inferences can be made freely from the data set as a whole, while ideally no individual participant can be targeted.
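
As a concrete illustration of this kind of ‘scrubbing’, here is a minimal sketch of generalising quasi-identifiers in the style of k-anonymity. The records, field names, and threshold are fabricated for illustration; production pipelines are far more elaborate.

```python
# Minimal sketch of database "anonymisation" by generalising
# quasi-identifiers (a simplistic k-anonymity-style step).
from collections import Counter

# Fabricated records; "zip" and "age" act as quasi-identifiers.
records = [
    {"zip": "10405", "age": 34, "diagnosis": "flu"},
    {"zip": "10407", "age": 36, "diagnosis": "asthma"},
    {"zip": "10409", "age": 39, "diagnosis": "flu"},
]

def generalise(rec: dict) -> dict:
    """Coarsen identifying attributes while keeping the 'useful' payload."""
    return {"zip": rec["zip"][:3] + "**",        # 10405 -> 104**
            "age": f"{rec['age'] // 10 * 10}s",  # 34 -> 30s
            "diagnosis": rec["diagnosis"]}       # preserved for analysis

anonymised = [generalise(r) for r in records]

# Declare the set "anonymised" only if every quasi-identifier
# combination hides at least k people.
k = 3
groups = Counter((r["zip"], r["age"]) for r in anonymised)
assert all(count >= k for count in groups.values()), "not k-anonymous"
print(anonymised)
```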
 
Scrubbing data until it becomes sufficiently anonymised for anyone to process as they wish is not only endorsed by service providers, but also reinforced by regulation. The European Data Protection Directive excludes anonymised data sets from its scope [1]. If the database is anonymised, then the data is set free. This free flow of data is then constrained only by the markets, in line with one of the two principal objectives of the same Directive.
 
The Surrogates to Anonymity
 
What is common to anonymity on the Internet and elsewhere is the breaking of the link between the original author(s) and the message. This is an important element of anonymity as a communication strategy. Once the message is released, it is likely to be subverted and reclaimed by others. This is one of the charms of the fluid anonymous message: any individual or group can claim it as their own. But when a group subverts the message to negate all other linkages and continuities, monopolising the interpretation of the message’s senders, destination, and content, the relationship between ‘the anonymous’ and the message can become vulnerable.
 

Trailer of “Whose Is This Song?”, a documentary on folk songs by Adela Peeva.

 
An example of this kind of dynamic at work can be seen in Adela Peeva’s film “Whose Is This Song?” [3]. In the documentary, Peeva searches across the Balkans for the origins of an anonymous folk song. In each country or region that she visits the song changes, becoming a love song, a song of piety, a song about a girl from the village behind the hills, or even a war song. However, with every variation, the question about the song unravels a chorus of claims about its authentic origins. In each claim, the song is cut anew from its travelling past. It is appropriated and burdened with carrying the truths of a national past and with shaping the future identity of the referred community in barely subtle archetypes: from the young Turks to amorous Greeks, from proud Albanians to pious Bosnians, from debauched Serbians to superstitious Gypsies, all the way to unflinching Bulgarians.
 
Peeva’s film captures a dilemma that can be associated with any anonymous action or artefact. Anonymity allows for the articulation of a collective message that can travel without the burdens of authorship and origin. This allows for some lightness that opens the way for the message to flow freely and to be reshaped creatively. However, this void is easily filled when a group, community, or organisation claims and bends the message to suit its own interpretation of the past and future. The message is then fixed, and its interpretation is monopolised. This happens because anonymity frees the message and, simultaneously, leaves it up for grabs.
 
If this is the case, the message can even be used to shape the story of the anonymous community that created it. The anonymous message may boomerang back to hit its authors, often as a collective. The hijacking of popular uprisings by a few who then establish their own power, the re-writing of folk songs into chauvinistic hymns, and the utilisation of anonymous cyber-actions to introduce draconian security measures are all examples of de-contextualised anonymous messages coming back to haunt their origins.
 
In the data economy, the anonymised data set is fashioned as a digital mirror of populations’ activities and tendencies. The organisations that hold a monopoly over these data sets get to assert their own categories of desired and undesired activities as they see fit to improve the markets and forms of governance. Since the data in such data sets cannot be directly linked to individuals, privacy is claimed to be intact. And since the data sets are anonymised, the targeted populations cannot expect answers to their questions about the quality, repurposing, and use of this data for or against them.
 
Continuity, Articulation and Anonymity
 
Given its historical persistence across centuries, anonymity appears to be here to stay. It is hence not surprising that this viral strategy replicates itself on the Internet. In its most powerful and at times even heroic moments, it is used to counter targeted surveillance by creating collective protection around individuals. Yet we also need to recognise that the same strategy is concurrently used to create discrete, de-contextualised, and yet linked data sets, which are immanent to the data economy.
 
The current economy based on data fetishism leads to bizarre data collections. We now have gargantuan databases of “friends” who “rate” information to their “like”-ing, from which our interests, desires, opinions, and soft spots can be inferred. The anonymisation of these databases is not done to protect the participants in these data sets — never mind that even in their sophisticated forms these anonymisation techniques provide no formal guarantees [4]. Rather, the strategy is used to disempower their subjects from understanding, scrutinising, and questioning the ways in which these data sets are used to organise and shape their access to resources and connections in a networked world.
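
To see why such guarantees fail, consider a minimal sketch of a linkage attack in the spirit of Narayanan and Shmatikov’s argument [4]: the quasi-identifiers left in an ‘anonymised’ release are joined against an auxiliary, public data set. All records below are fabricated for illustration.

```python
# Sketch of a linkage (re-identification) attack: joining an
# "anonymised" release with auxiliary public data. Fabricated records.
released = [
    {"zip": "104**", "age": "30s", "rated": ["film A", "film B"]},
    {"zip": "104**", "age": "40s", "rated": ["film C"]},
]

auxiliary = [  # e.g., scraped public profiles
    {"name": "Alice", "zip": "10405", "age": 34, "mentions": ["film A", "film B"]},
    {"name": "Bob",   "zip": "10411", "age": 47, "mentions": ["film C"]},
]

def matches(anon: dict, aux: dict) -> bool:
    """Do the generalised attributes and ratings fit this person?"""
    return (aux["zip"].startswith(anon["zip"].rstrip("*"))
            and f"{aux['age'] // 10 * 10}s" == anon["age"]
            and set(anon["rated"]) <= set(aux["mentions"]))

for anon in released:
    candidates = [aux["name"] for aux in auxiliary if matches(anon, aux)]
    if len(candidates) == 1:   # a unique match re-identifies the record
        print(anon["rated"], "->", candidates[0])
```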
 
Given the backdrop of the data economy, our societies should continue to savour anonymity as a strategy to protect individuals on the Internet, and we should reject its reincarnation as an instrument for creating discontinuity between the context in which these data sets were authored and the contexts in which they are used with the intention to manage and manipulate people’s lives. Database anonymisation may be useful as additional protection, but it should not be the basis on which service providers shed their responsibilities with respect to the collection and processing of our data bodies.

 
The discontinuity inherent to anonymisation and its dis/empowering effects need to be further theorised in political movements, where anonymity remains a powerful means to achieve political objectives and disseminate collective messages to a greater public. From the perspective of social movements it is clear that the technical instantiations of anonymous communications must remain a fundamental function available in our communication networks. However, especially in political contexts, the vulnerability that is inherent to the anonymous collective requires that multiple strategies be available to its participants. For instance, in order to create continuity with activities that were initiated anonymously, collectives with political agendas may publish statements or organise activities that are explicit, precise, situated, and that include their origins. Such a coupling of strategies would build on the power and lightness of anonymous messages while making their cooptation more difficult.

 
[1] European Union (1995). Data Protection Directive (Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data). http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML (accessed March 15, 2012)
[2] The Economist, “Corporate Anonymity: Light and Wrong”, January 21, 2012: http://www.economist.com/node/21543164 (accessed March 15, 2012). The article states: “In dozens of jurisdictions, from the British Virgin Islands to Delaware, it is possible to register a company while hiding or disguising the ultimate beneficial owner.” [No author is named in the online issue.]
[3] Adela Peeva, dir., Whose Is This Song?, film, 2003.
[4] Arvind Narayanan and Vitaly Shmatikov, “Myths and Fallacies of ‘Personally Identifiable Information’”, Communications of the ACM, vol. 53, no. 6, 2010.

Fourth International Workshop on Privacy Engineering CFP is out!


Great news: the Call for Papers for the fourth iteration of the International Workshop on Privacy Engineering (IWPE) is out! This year’s program seeks to highlight challenges to privacy posed by the widespread adoption of machine learning and artificial intelligence technologies. One motivation for this focus stems from the goals and provisions of the European General Data Protection Regulation (GDPR), including requirements for privacy and data protection by design, providing notices and information about the logic of automated decision-making, and an emphasis on privacy management and accountability structures in organizations that process personal data. Interpreting and operationalizing these requirements for systems that employ machine learning and artificial intelligence technologies is a daunting task, and we hope to attract papers from researchers, civil society and industry on the topic.

This year we decided to co-locate IWPE with the IEEE European Symposium on Security and Privacy (EuroS&P), which will take place in London between the 24th and 26th of April. With this, we hope to establish a tradition of moving the workshop between the US and the EU in the coming years.

Workshops are the product of all the dedicated researchers who agree to serve on our PC, as well as of the hard work of the organizers of the conferences where we co-locate our workshop. We are delighted to once again have a fantastic and interdisciplinary PC. Great effort also goes into establishing a new workshop. For this, I would like to thank Jose M. del Alamo (Universidad Politécnica de Madrid), who has done the heavy lifting of putting together our workshop for the last four years. Special thanks also go out to our current program co-chairs Anupam Datta (Carnegie Mellon University), Aleksandra Korolova (University of Southern California) and Deirdre K. Mulligan (UC Berkeley); our industry chair Nina Taft (Google); our mentoring and local chair Jose Such (King’s College London); and our publicity chair Arunesh Sinha (University of Michigan). We look forward to seeing you at IWPE’18.

Attitudes towards “Spiny CACTOS”


It is one thing to ask people whether they want to control the appropriate flow of their disclosures (or disclosures of others about them) on Online Social Networks (OSNs); it is another to ask who they think should be responsible for ensuring the appropriate flow of this information. In the first part of a small study conducted last summer at CMU, which Ero Balsa will present next week at USec 2014, participants were asked these two questions. The objective of the study was to find out whether users feel that they should be responsible for taking extra measures to avoid the privacy problems that Cryptographic Access Control Tools for Online Social Networks (CACTOS) hope to mitigate — namely, privacy problems resulting from the disclosure of all user data (including private messages) by default to the OSN provider and from the delegation of privacy setting enforcement to OSN providers. In other words, are the privacy concerns of the developers of CACTOS aligned with those of the users, and, if so, who do the users think should be responsible for mitigating these privacy problems?


Somewhat unsurprisingly, the study participants said they want full control over determining the disclosure and appropriate flow of their public and private messages on OSNs, but that Facebook (the OSN used in the study) should share responsibility for making sure that their privacy is respected. For example, despite identifying increasingly permissive privacy settings in Facebook as a problem, the participants thought that it is their responsibility to configure their privacy settings correctly. However, they found that it is the responsibility of the OSN to make sure that privacy settings are effective. When it came to undesirable disclosures about a person by other OSN users, some participants expected that the OSN should ensure removal. And while they were aware that Facebook had provided advertisers access to their profiles and tracked their activities across the web, many participants agreed that it is the responsibility of the OSN to make sure that their disclosures do not all of a sudden pop up in the databases of third parties.

In the second part of the study, participants were offered a cryptographically powered tool called “Scramble!” that would allow its users to have “strict and highly granular controls” over who sees a disclosure on an OSN, and that prevents the OSN from accessing the content of the disclosure. The tool had the usual problems that security tools have with usability and complexity, but all in all the participants thought the tool was useful and could provide them with the desired granularity of control. However, for a variety of reasons, they found it too cumbersome to use a CACTOS like Scramble! to ensure that their information flows appropriately. Even though the participants complained about the indeterministic privacy settings and the OSN provider’s data usage practices, they thought that using a cryptographic tool to eliminate these problems was a disproportionate measure — such heavy-duty tools were found appropriate for “others” who have secrets. They also did not want to establish trust towards yet another entity, in this case the CACTOS provider. They shied away from sharing their data with the CACTOS provider, although Scramble! would not be “seeing” their disclosures in clear text. Others even suggested that they would trust the tool if Facebook certified it. Many participants also agreed that if a disclosure were jeopardizing, they could send it as a private message (which they assumed would be kept confidential by the OSN), or, most strikingly, they could just remain silent.
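
For readers unfamiliar with how a CACTOS works, here is a minimal sketch of the generic idea: encrypt a post once under a fresh content key, then wrap that key for each authorised reader, so that the OSN only ever stores ciphertext. This is not Scramble!’s actual design; the recipient names and key handling below are illustrative assumptions.

```python
# Minimal sketch of the generic CACTOS idea: hybrid encryption so that
# only chosen recipients, and not the OSN, can read a disclosure.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Hypothetical friends; in practice public keys would come from a keyring.
friends = {name: rsa.generate_private_key(public_exponent=65537, key_size=2048)
           for name in ["alice", "bob"]}

def encrypt_post(post: bytes, readers: dict) -> dict:
    """Encrypt once with a fresh content key; wrap the key per reader."""
    content_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    return {
        "nonce": nonce,
        "ciphertext": AESGCM(content_key).encrypt(nonce, post, None),
        # One wrapped copy of the content key per authorised reader.
        "keys": {name: key.public_key().encrypt(content_key, OAEP)
                 for name, key in readers.items()},
    }

def decrypt_post(name: str, private_key, blob: dict) -> bytes:
    """A reader unwraps their copy of the key, then decrypts the post."""
    content_key = private_key.decrypt(blob["keys"][name], OAEP)
    return AESGCM(content_key).decrypt(blob["nonce"], blob["ciphertext"], None)

blob = encrypt_post(b"only for close friends", friends)  # what the OSN stores
print(decrypt_post("alice", friends["alice"], blob))
```

The design choice worth noting is that the OSN stores only the blob: the granularity of control that participants asked for lives entirely in which readers receive a wrapped key.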


At this point it is reasonable to ask why shift the focus of a study from usability to responsibility? The idea of the study was developed within the SPION project where responsibilization of users with respect to protecting their privacy is one of the main themes. The argument is that information systems that mediate communications in a way that also collects massive amounts of personal information may be prone to externalizing some of the risks associated with these systems onto the users. This can easily happen under the label “privacy” which can be leveraged to put the individual at the center of responsibility. Hence, privacy protection can become a way of burdening the users with the risks externalized by those systems and an apparatus for disciplining them. The objective of the SPION project is hence to critically assess the ways in which privacy technologies may intensify the responsibilization of OSN users, or explore whether they can be designed to shift back responsibilities to those providing the OSN services.

This idea of “responsibilization” is borrowed from David Garland’s article titled “The Limits of the Sovereign State: Strategies of Crime Control in Contemporary Society”, and was applied in the domain of “identity management systems” by David Barnard-Wills and Debi Ashenden in their article titled “Public Sector Engagement with Online Identity Management”. Responsibilization (a terrible word to pronounce) is a complex concept in studies of governmentality to which I can do no justice here, so I will stick to the basic definition above.

The researchers who write about this topic note that people may have various reactions to responsibilization, including rejecting it and pointing back to the institutions that caused the problem in the first place. For example, OSN privacy policies will often say that it is the responsibility of the users to avoid undesirable information flows, e.g., by watching over themselves and their privacy settings. Since, as I mentioned earlier, OSNs will often change the semantics of privacy settings, this is a very slippery responsibility to put on the users’ shoulders. Nevertheless, the participants of our study seemed to have internalized that message. They mainly thought that they should be responsible for what they post and how they use functionality. Yet, although our study was small and limited, it seemed that the participants also pushed back on this configuration of responsibilities. Scramble! functioned here as an artefact through which they could imagine a different way of controlling information flows and express their needs: for most participants it was too cumbersome to make up for the unreliability of the OSN by enforcing appropriate information flows through Scramble!. Instead, following from the first part of the study, it should be the responsibility of the OSN to get privacy settings right and not to share their information with third parties.

There are many limitations to this small-scale study. It is small, and it is about one OSN and one CACTOS. Further, if for many people “technology” is a scary thing, then “encryption” is likely to give them nightmares. Surely, mentioning that Scramble! was based on “encryption” primed the users in a certain way and influenced their responses. “Responsibility” itself is just as loaded a concept as privacy, control or encryption. Shanto Iyengar has shown in his paper titled “Framing Responsibility for Political Issues: The Case of Poverty” that framing has an important impact on whom people will see as responsible for political issues. Exactly how this may also apply to the framing of privacy and responsibilization is a great question for future research.

Finally, it seems that many participants of our study preferred to censor their speech or control their actions over deploying tools to protect their privacy. This is a troubling matter. Most computer science research on privacy looks to provide techniques and tools that are expected to support users in their everyday negotiation of privacy, e.g., CACTOS, anonymous communication tools, adblockers, identity management systems or privacy nudges. As computer scientists, we may have become too comfortable with a world-view in which “privacy protecting machines” can protect users or aid them in protecting themselves from “privacy intrusive machines”. In doing so, we may have overestimated the part “the users” may actually want to play in this challenging game between machines. I hope that, by providing some insight into the attitudes of users towards responsibilization in OSNs as well as towards CACTOS, this paper helps us think about where users may or may not want to enter this game.

Ero Balsa (KU Leuven), Laura Brandimarte (Carnegie Mellon University), Alessandro Acquisti (Carnegie Mellon University), Claudia Diaz (KU Leuven), Seda Gürses (New York University), “Spiny CACTOS: OSN users’ attitudes and perceptions towards cryptographic access control tools”, USec ’14, San Diego.