The Spectre of Anonymity

Prologue: In preparation for the A=Anonymous: Reconfiguring Anonymity Conference this week at Kampnagel in Hamburg, I am republishing an older article of mine as a blog post. The original article was published in 2012 in a book titled “Sniff, Scrape, Crawl… (On Privacy, Surveillance, and Our Shadowy Data Double)”, edited by Renee Turner and published by Mute Books. At the time of the text’s completion, the “Arab Spring” had entered its “Fall”, facing a period of backlash from governments. The turn of events made me think about the various lists of demands that people would put up on banners rolled out across large buildings. Oftentimes the banners were anonymous, posing as an expression of the masses. As tantalizing as the demands were, their messages were too easily erased or co-opted by governments marching forward with “counter-revolutions”. The piece tries to respond to the various manifestations of anonymity during the course of these events, with an eye on the Internet and social movements.
 
The piece is somewhat dated and the writing of a younger scholar. It also includes the famous “Anonymous City” animation, which is now over 10 years old. About time!

 

“Anonymity is our first line of defence.”
Professor Xavier, X-Men: First Class

 

Anonymity is a powerful concept and strategy. It transgresses concepts like authorship, the original, and the origin, and presents itself across important elements of our lives like songs, poems, oral histories, urban legends, conspiracy theories, and chain letters.
 
For centuries, anonymity has been a strategy used by communities to articulate their collective voice. This understanding is related to anonymity as a condition of individual autonomy, and yet it shifts the focus from anonymity’s individual use to its collective effect. Anonymously produced statements or artefacts have expressed the cultural practices, beliefs and norms of the past, while creating a space in which future collectives can manifest themselves.
 
In some contexts, anonymity allows the individual to melt into a body of many, to become a pluralistic one, for which communicating a message is more important than the distinction of the participating individual(s). Whether at a demonstration or a football match, the power of the anonymous collective produces a field of protection and cohesion around its participating individuals.
 
And yet, the seemingly unbreakable bond can be fragile: participation is fluid, individuals and groups enter and leave as they please, and the organisation of the anonymous collective is distributed. The anonymous perseveres only as long as the common line is held. This volatility is also what distinguishes spontaneously gathered anonymous groups from purposefully assembled collective anonymous bodies.
 
And, in this difference we understand that anonymity is more a means than an end in itself. It can be utilised in multiple ways for a variety of purposes. For example, a centrally organised form of anonymity can be found with the uniformed soldiers of a brigade or managers of a corporation — the latter also known as the “anonymous limited”.[2] In organised anonymity, participation is mandatory and actions are heavily controlled. The objective is still to protect, but this time to protect the organizing authorities rather than the participating individuals — the latter often being consumed in the process. Control mechanisms are there to utilise the anonymous group to reinforce existing power hierarchies, e.g., the state, the nation, or the shareholders, and to render divergences from this goal impossible.
 
Anonymity, in both its more fluid and its more centrally organised forms, presents some parallels when used as a strategy in networked systems like the Internet. As in the physical world, it manifests itself in various mechanisms for a multitude of ends, and hence has different potentials and limitations.
 
The Internet and Anonymity
 
The power of anonymity in Internet communication has long been recognised by computer scientists and hackers. For example, ‘anonymous communications’ technologies — of which Tor is a popular implementation — strip messages (in this case, Internet traffic) of any information that could be used to trace them back to their senders. Powerful observers can identify that Tor users are communicating (with each other or with websites), but cannot identify who is communicating with whom. In other words, individual communication partners are not distinguishable within a set (of Tor users). Communicating partners also remain anonymous towards each other. These measures are intended to provide the individuals in the set with protection against any negative repercussions resulting from inferences that could be made from whom they communicate with, how often, or which websites they visit.

Anonymous City: A short animation by Thibaut D’alton and myself describing the mechanisms used to design anonymous communications. All misrepresentations of anonymous communications are our own.

 
Anonymous communications are designed to circumvent the traceability of interactions on the Internet. They work around the default architecture of the Internet that makes it possible to trace all messages, online actions, and other ‘data bodies’ to their origins and, through that, to their individual authors in physical space and time. This capability allows service providers to collect, scrutinise, dissect, reconfigure, and re-use these data bodies. By masking the origin and destination of communications, services like Tor remove the link between individuals and their data bodies.
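To make the mechanism slightly more concrete, here is a minimal sketch of the layered (“onion”) encryption at the heart of such designs. It is illustrative only: the relay names and pre-shared keys are assumptions of the sketch, and real systems like Tor negotiate keys through circuit-building protocols rather than sharing them in advance.

```python
# A minimal sketch of layered ("onion") encryption, in the spirit of
# systems like Tor. Illustrative only: not Tor's actual protocol.
from cryptography.fernet import Fernet

# Hypothetical relays with pre-shared keys (an assumption of this sketch).
relays = ["entry", "middle", "exit"]
keys = {name: Fernet.generate_key() for name in relays}

def wrap(message: bytes) -> bytes:
    """Encrypt the message in layers; the innermost layer is for the exit relay."""
    for name in reversed(relays):
        message = Fernet(keys[name]).encrypt(message)
    return message

def unwrap(onion: bytes) -> bytes:
    """Each relay peels exactly one layer; no single relay sees both
    who sent the message and what it says or where it finally goes."""
    for name in relays:
        onion = Fernet(keys[name]).decrypt(onion)
    return onion

packet = wrap(b"hello, anonymously")
assert unwrap(packet) == b"hello, anonymously"
```

The point of the layering is precisely the severing of links described above: the entry relay knows the sender but sees only ciphertext, while the exit relay sees the message but not its origin.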
 
Despite the diversity of the groups and communities using anonymous communications, such technologies are usually cast in a negative light in mainstream policy papers and in the media. Anonymous communication infrastructures are generally depicted as providing channels for criminal activity or enabling deviant behaviour.
 
It seems, however, what bothers authorities the most is not anonymity as such, but rather the characteristics of the user base and the distributed nature of anonymous communications. This becomes evident in the keen interest that data miners and regulators have in a centralised form of anonymity applied to large databases, a strategy that fits squarely with the interests of the growing data economy.
 
The Market, Governance and Anonymity
 
We are currently in the midst of an economic hype driven by data. The ideology behind this hype suggests that the data collected is going to make the behaviour of populations more transparent, easier to organise, control, and predict. Data collected en masse is expected to reveal to its collectors ways of improving the efficiency of markets as well as their systems of governance.
 
Improvement is promised through mastering the application of statistics to the gathered data sets: large-scale collection of all-encompassing data sets is expected to reveal ways of improving market efficiency and systems of governance through statistical analysis and the knowledge inferred from it. According to behavioural advertisers and service providers, these data sets are becoming ‘placeholders’ for understanding populations, allowing organisations to provide them with refined, individualised services. In the process, elaborate statistical inferences replace ‘subjective’ discussions, reflections or processes about societal needs and concerns. The data comes to speak for itself. Hence, in this ideology, the promise of control and efficiency lies in data and in the processing power of its beholders.
 
However, the collection and processing of such massive amounts of data about consumers or citizens is historically and popularly coupled with the ‘privacy problem’. It has been commonly understood that addressing this issue requires limiting the power these organisations can exercise when using this data. These constraints need to hold as long as the people to whom the data in a given database relate are uniquely identifiable.
 
It is in this series of reductions of the problem that service providers discover anonymity for their own ends. The database is to be manipulated in such a way that the link between any data body included in the data set and its individual ‘author’ is concealed, while the usefulness of the data set as a whole is preserved. If this is somehow guaranteed, then the dataset is declared ‘anonymised’, and it becomes fair game. Inferences can be made freely from the data set as a whole, while ideally no individual participant can be targeted.
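One common family of techniques behind such claims is generalisation towards k-anonymity: quasi-identifying attributes are coarsened until every record is indistinguishable from at least k−1 others. The sketch below is a toy illustration of the idea, not a production anonymiser; the records and generalisation rules are invented.

```python
# Toy k-anonymity-style generalisation: coarsen quasi-identifiers
# (age, ZIP code) until each combination occurs at least k times.
from collections import Counter

records = [  # invented example data
    {"age": 34, "zip": "10115", "diagnosis": "flu"},
    {"age": 36, "zip": "10117", "diagnosis": "asthma"},
    {"age": 35, "zip": "10119", "diagnosis": "flu"},
    {"age": 52, "zip": "20095", "diagnosis": "diabetes"},
    {"age": 57, "zip": "20097", "diagnosis": "flu"},
]

def generalise(record: dict) -> dict:
    """Replace exact values with coarser buckets; keep the 'useful' field."""
    decade = record["age"] // 10 * 10
    return {
        "age": f"{decade}-{decade + 9}",
        "zip": record["zip"][:2] + "***",   # keep only the region prefix
        "diagnosis": record["diagnosis"],   # the data the analyst wants
    }

anonymised = [generalise(r) for r in records]
groups = Counter((r["age"], r["zip"]) for r in anonymised)
k = min(groups.values())
print(f"dataset is {k}-anonymous on (age, zip)")  # here: 2-anonymous
```

The link to any individual ‘author’ is blurred, while aggregate patterns (how many flu diagnoses per region, say) remain available for inference.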
 
Scrubbing data until it becomes sufficiently anonymised for anyone to process as they wish is not only endorsed by service providers, but also reinforced by regulation. The European Data Protection Directive excludes anonymised data sets from its scope [1]. If the database is anonymised, then the data is set free. This free flow of data is then constrained only by the markets, in line with one of the two principal objectives of the same Directive.
 
The Surrogates to Anonymity
 
What is common to anonymity on the Internet and elsewhere is the breaking of the link between the original author(s) and the message. This is an important element of anonymity as a communication strategy. Once the message is released, it is likely to be subverted and reclaimed by others. This is one of the charms of the fluid anonymous message: any individual or group can claim it as their own. But when a group subverts the message to negate all other linkages and continuities, monopolising the interpretation of the message’s senders, destination, and content, the relationship between ‘the anonymous’ and the message can become vulnerable.
 

Trailer of “Whose Is This Song?”, a documentary on folk songs by Adela Peeva.

 
An example of this kind of dynamic at work can be seen in Adela Peeva’s film “Whose Is This Song?” [3]. In the documentary, Peeva searches across the Balkans for the origins of an anonymous folk song. In each country or region that she visits the song changes, becoming a love song, a song of piety, a song about a girl from the village behind the hills, or even a war song. However, with every variation, the question about the song unravels a chorus of claims about its authentic origins. In each claim, the song is cut anew from its traveling past. It is distorted and burdened with carrying the truths of a national past and with shaping the future identity of the referred community in barely subtle archetypes: from young Turks to amorous Greeks, from proud Albanians to pious Bosnians, from debauched Serbians to superstitious Gypsies, all the way to unflinching Bulgarians.
 
Peeva’s film captures a dilemma that can be associated with any anonymous action or artefact. Anonymity allows for the articulation of a collective message that can travel without the burdens of authorship and origin. This allows for some lightness that opens the way for the message to flow freely and to be reshaped creatively. However, this void is easily filled when a group, community, or organisation claims and bends the message to suit its own interpretation of the past and future. The message is then fixed, and its interpretation is monopolised. This happens because anonymity frees the message and, simultaneously, leaves it up for grabs.
 
If this is the case, the message can even be used to shape the story of the anonymous community that created it. The anonymous message may boomerang back to hit its authors, often as a collective. The hijacking of popular uprisings by a few who then establish their own power, the rewriting of folk songs into chauvinistic hymns, and the utilisation of anonymous cyber-actions to introduce draconian security measures are all examples of de-contextualised anonymous messages coming back to haunt their origins.
 
In the data economy, the anonymised data set is fashioned as a digital mirror of populations’ activities and tendencies. The organisations that hold a monopoly over these data sets get to assert their own categories of desired and undesired activities as they see fit to improve markets and forms of governance. Since the data in such data sets cannot be directly linked to individuals, privacy is claimed to be intact. Since the data sets are anonymised, the targeted populations cannot expect answers to their questions about the quality, repurposing, and use of this data for or against them.
 
Continuity, Articulation and Anonymity
 
Given its historical persistence across centuries, anonymity appears to be here to stay. It is hence not surprising that this viral strategy replicates itself on the Internet. In its most powerful and at times even heroic moments, it is used to counter targeted surveillance by creating collective protection around individuals. Yet we also need to recognise that the same strategy is concurrently used to create discrete, de-contextualised, and yet linked data sets, which are immanent to the data economy.
 
The current economy based on a data fetish leads to bizarre data collections. We now have gargantuan databases of “friends” who “rate” information to their “like”-ing, from which our interests, desires, opinions, and soft spots can be inferred. The anonymisation of these databases is not done to protect the participants of these data sets — never mind that even in their sophisticated forms these anonymisation techniques provide no formal guarantees [4]. Rather, the strategy is used to keep their subjects from understanding, scrutinising, and questioning the ways in which these data sets are used to organise and shape their access to resources and connections in a networked world.
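To see why the guarantees are weak, consider the classic linkage attack described in [4]: quasi-identifiers that survive in a scrubbed release can be joined against public data. The sketch below is deliberately simplified, and all records in it are invented.

```python
# Toy illustration of why scrubbed data sets offer no formal guarantee:
# quasi-identifiers left in an "anonymised" release can be joined
# against publicly available records. All data here is invented.
anonymised_release = [
    {"zip": "10115", "birth_year": 1980, "rating_history": "..."},
    {"zip": "20095", "birth_year": 1965, "rating_history": "..."},
]
public_register = [
    {"name": "A. Example", "zip": "10115", "birth_year": 1980},
    {"name": "B. Example", "zip": "20095", "birth_year": 1965},
]

for row in anonymised_release:
    matches = [p for p in public_register
               if (p["zip"], p["birth_year"]) == (row["zip"], row["birth_year"])]
    if len(matches) == 1:  # a unique match re-identifies the record
        print(matches[0]["name"], "->", row["rating_history"])
```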
 
Given the backdrop of the data economy, our societies should continue to savour anonymity as a strategy to protect individuals on the Internet, and we should reject its reincarnation as an instrument for creating discontinuity between the context in which these data sets were authored and the contexts in which they get used, with the intention to manage and manipulate people’s lives. Database anonymisation may be useful for additional protection, but it should not be the basis for service providers to shed their responsibilities with respect to the collection and processing of our data bodies.

 
The discontinuity inherent to anonymisation and its dis/empowering effects needs to be further theorised in political movements, where anonymity remains a powerful means to achieve political objectives and disseminate collective messages to a greater public. From the perspective of social movements it is clear that the technical instantiations of anonymous communications must remain a fundamental function available in our communication networks. However, especially in political contexts, the vulnerability inherent to the anonymous collective requires that multiple strategies be available to its participants. For instance, in order to create continuity with activities that are initiated anonymously, collectives with political agendas may publish statements or organise activities that are explicit, precise, situated and that include their origins. Such a coupling of strategies would build on the power and lightness of anonymous messages while making their cooptation more difficult.

 
[1] European Union (1995). Data Protection Directive (Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data).
http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:en:HTML
(accessed March 15, 2012)
[2] A recent article in The Economist states: “In dozens of jurisdictions, from the British Virgin Islands to Delaware, it is possible to register a company while hiding or disguising the ultimate beneficial owner.”
The Economist, “Corporate Anonymity: Light and Wrong”, January 21, 2012: http://www.economist.com/node/21543164 (accessed March 15, 2012) [no author given in the online issue]
[3] Adela Peeva, dir., Whose Is This Song?, film, 2003.
[4] Arvind Narayanan and Vitaly Shmatikov, “Myths and Fallacies of ‘Personally Identifiable Information’”, Communications of the ACM, vol. 53, no. 6, 2010.

Fourth International Workshop on Privacy Engineering CFP is out!


Great news: the Call for Papers for the fourth iteration of the International Workshop on Privacy Engineering (IWPE) is out! This year’s program seeks to highlight challenges to privacy posed by the widespread adoption of machine learning and artificial intelligence technologies. One motivation for this focus stems from the goals and provisions of the European General Data Protection Regulation (GDPR), including requirements for privacy and data protection by design, providing notices and information about the logic of automated decision-making, and an emphasis on privacy management and accountability structures in organizations that process personal data. Interpreting and operationalizing these requirements for systems that employ machine learning and artificial intelligence technologies is a daunting task, and we hope to attract papers from researchers, civil society and industry on the topic.

This year we decided to co-locate IWPE with the IEEE European Symposium on Security and Privacy (EuroS&P), which will take place in London between the 24th and 26th of April. With this, we hope in the coming years to establish a tradition of moving the workshop (for now) between the US and the EU.

Workshops are the product of all the dedicated researchers who agree to serve on our PC, as well as the hard work of the organizers of the conferences where we co-locate our workshop. We are delighted to once again have a fantastic and interdisciplinary PC. Great effort also goes into establishing a new workshop. For this, I would like to thank Jose M. del Alamo (Universidad Politécnica de Madrid), who has done the heavy lifting of putting together our workshop for the last four years. Special thanks also go out to our current program co-chairs Anupam Datta (Carnegie Mellon University), Aleksandra Korolova (University of Southern California) and Deirdre K. Mulligan (UC Berkeley); our industry chair Nina Taft (Google); our mentoring and local chair Jose Such (King’s College London); and our publicity chair Arunesh Sinha (University of Michigan). We look forward to seeing you at IWPE’18.

Keeping the ball rolling: In memory of Özgür Uçkan and Caspar Bowden

It is hard to keep floating when two people who have inspired you in life pass away within days of each other. I owe it to these two troublemakers to thank them for their great work and for the paths they have opened to many of us.

Today the news came that we lost Özgür Uçkan. Özgür was a digital rights activist, as well as a professor, philosopher, artist, economist, and one of the founding members of Alternatif Bilisim, an association based in Turkey working on digital rights and freedoms. I have had the fortune of meeting a number of polymaths in my life, but few of them sustained an equal passion for working with people and for their intellectual endeavors the way Özgür did. The picture below, from an anti-censorship protest in Istanbul, which Ismail Hakki Polat used in his eulogy, says it all.

[Image: Özgür Uçkan at an anti-censorship protest]

Özgür, in the brown t-shirt, is standing tall and proud, most probably having some good fun at the front line. Most importantly, he is surrounded by peers and some of the many young people he inspired, many of whom continue to be part of the struggle for digital rights and freedoms in Turkey. Within a year of that picture being taken, the same networks would organize large protests that would come to attract 60,000 people in over 30 cities within and outside of Turkey. People have argued that this series of actions was one of the stepping stones that led to the Gezi Park protests. After all, ruptures like Gezi are often the product of widely felt frustration as well as the accumulation of years of organizing. From where I stand, Özgür Uçkan belonged to the group of people who understand what it takes to create a collective vision, and then to organize and mobilize people around it. He worked relentlessly to capture the spirit of our times, to resist infringements upon our fundamental freedoms, and to do so in a way that inspired action and change.

There is another detail in that same picture which brings me to Caspar Bowden, the other person who passed away this week. Next to Özgür Uçkan stands Yaman Akdeniz, yet another important academic, activist, and free-speech advocate. Caspar Bowden was the first person to mention Yaman’s name and work to me. Yaman Akdeniz and Caspar Bowden went way back: here is a chapter titled “Cryptography and Democracy: Dilemmas for Freedom” that the two wrote together for a book in 1999. The piece was written during Caspar’s time at the Foundation for Information Policy Research. While Yaman Akdeniz moved on to fighting government censorship as his prime area of activity, Caspar Bowden moved to Microsoft, where he would later become the Chief Privacy Adviser. I met him during this time and was surprised by his commitment to promoting Privacy Enhancing Technologies given the title he was holding. Throughout the years, I witnessed how he leveraged all the powers and connections he had to push forward technical architectures and designs that would serve to protect privacy. He would encourage those of us working on such systems to continue our line of work, while also pulling us into rooms with policy makers and parliamentarians so that we could demonstrate the powers of encryption and distributed computation in the service of protecting privacy. When he parted paths with Microsoft and returned to his advocacy work, I saw him at first struggle with the legacy of his association with the company. But this being Caspar, he simply held his ground and pushed every channel possible to make known to the public what Edward Snowden’s revelations about the NSA and GCHQ surveillance programs would eventually confirm.

Today, the loss of Özgür Uçkan and Caspar Bowden feels like two hard punches. Tomorrow, I can imagine gaining courage from the many inspiring memories we have of them and dreaming of futures informed by the principles they held true. As one wise community activist from NYC once said, “they rolled the ball over to us, it is now our turn to keep it rolling”.

For a collection of videos of interventions by and about Özgür Uçkan see Erkan Saka’s compilation.

For a sweet farewell to Caspar Bowden, see Malavika Jayaram’s post.

And, here is a video of Caspar’s talk at 31C3 which will allow you to enjoy his talk _and_ his infamous slides.

Touching Correlations: A hands-on Workshop on Machine Learning

The Women and Surveillance Initiative just announced a workshop on Machine Learning. Please consider joining us!

Touching correlations: A hands-on Workshop on Machine Learning
Organized by the Women and Surveillance Initiative, NYC
18–19 July 2015
Location: Data and Society Offices

Are you interested in how computers use algorithms to learn from data? Curious what kinds of things machine learning can be used for? Want to understand and discuss the culture of machine learning? Then join us for a participatory workshop!

Networked machines amassing large databases and running powerful machine learning algorithms touch all aspects of our lives, yet they mainly remain a black box. These systems are increasingly used for face recognition, targeted advertising, predicting consumer behavior, medical predictions, social network analysis, and financial predictions, and yet sometimes even the experts cannot explain why they work, what it means to say that they “work”, or comprehend the work they do in social settings. At this workshop, we will try to open the black box of “machine learning” and discuss what actually goes into making these kinds of predictions. After a primer on the basic concepts and procedures, we’ll do some hands-on experiments with real-world datasets and collectively discuss the different elements that make up machine learning.

Our objective is to explore machine learning from the perspective, experience and expertise of the participants. No prior knowledge of mathematics or algorithms is required, nor should having such expertise hold you back from participating. We are in the process of preparing a short reading/video list that can be used before or after the workshop for further exploration. We also recommend installing Weka, an open-source machine learning package [0], on a device that you bring along. We hope that throughout the workshop we can experiment with and make sense of the practice of machine learning based on our everyday experiences.
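For those who would like a small taste beforehand, below is a minimal sketch of the kind of hands-on experiment we have in mind: training a simple classifier and inspecting the rules it learned. The workshop itself uses Weka; this sketch uses Python with scikit-learn, and the choice of data set, classifier and parameters is purely illustrative.

```python
# A minimal taste of the planned hands-on experiments: train a small
# decision tree and look at the rules it learned (scikit-learn version;
# the workshop itself will use Weka).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# The learned rules are human-readable: one way to peek into the black box.
print(export_text(model, feature_names=list(data.feature_names)))
```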

If you have a background in machine learning, and would like to help us make this workshop happen, please get in touch with us before 13th of July.

The workshop will take place on the 18th and 19th of July from 10am-4pm at the premises of Data and Society [1]. Those interested in participating should register by the 13th of July by sending an email to machinelearningworkshop@vous-etes-ici.net.

Participation in the workshop is free of charge. We will provide some drinks and snacks and would appreciate a donation of up to $10 from participants. Participation is limited to 20 people.

Touching Correlations is organized by the Women and Surveillance Initiative based in New York City. The workshop is open to all past, present and future women or anyone who feels like they have a place in a women’s community.

[0] Weka Data Mining Software: http://www.cs.waikato.ac.nz/ml/weka/
[1] Data and Society http://www.datasociety.net

Panel at PETS 2014: Privacy Enhancing Technologies Post-Snowden

Our plan for a panel on the implications of the disclosed NSA and GCHQ surveillance programs for PETs researchers is materializing. The panel will take place on the 17th of July in Amsterdam at the PETS Symposium. We expect to have a lively discussion with Susan Landau, Wendy Seltzer, Stephanie Hankey, Nadia Heninger and George Danezis. In fact, thanks to his blog post on “The Dawn of Cyber-Colonialism”, it is perhaps better to say that George has already kicked off the discussion.

Great thanks go out to the program committee, who have supported the idea from the first minute, and to the general chair Hinde ten Berge, Jaap Henk Hoepman from the PI.lab, and NWO for their material support.

PETs Post-Snowden: Implications of the revelations of the NSA and GCHQ Surveillance Programs for the PETs community

Despite the entertainment value of program names like “egotistical giraffe”, “onion breath” and “moth monster”, the revelations about the NSA and GCHQ surveillance programs are more than troubling. Specifically, BullRun (attacks on crypto) and the “egotistical” series (attacks on Tor) pose challenges to the PETs community and the solutions they work on. This panel focuses on some of these challenges, discusses their implications for PETs researchers and practitioners, and explores ways forward.

According to some, the revelations show that law and policy have failed to protect citizens around the globe from surveillance. It falls, among others, upon the shoulders of the PETs community to build technical solutions that are resilient to “mass surveillance” practices. But while Edward Snowden announced that “crypto still works”, intelligence agencies will continue to find ways to work around it. So others have argued that technology is far from a complete answer and that working with policy and law is more necessary than ever. If so, the challenges here range from finding ways to convince policy makers that weakening the Internet for surveillance is not acceptable to actually regulating “good” security and “bad” surveillance practices.

Both positions are troubled by motions to prevent companies from applying secure designs that may be seen as obstructing law enforcement agencies from conducting investigations. Further, governments around the globe are likely to consider implementing “back doors”, as well as utilizing zero-day exploits, as ways to guarantee law enforcement and intelligence access. These aggressive policies raise questions about where PETs can and should live, and how to guarantee that their design remains robust, e.g., by keeping the implementation open to scrutiny.

Simultaneously with the revelations, cybersecurity for critical infrastructures has gathered force. Governments around the globe now bring intelligence agencies, standards bodies, contractors as well as academic researchers around tables in order to align technical security issues with national security interests. Cybersecurity funding abounds, affecting research trajectories as well as what gets done. How are PETs researchers and practitioners to manage these increasingly politicized demands along national lines?

Finally, people in their everyday lives navigate the implications of the revelations about the surveillance programs as much as engineers and researchers do. Prominent security engineers have favored prioritizing the development of measures against mass surveillance over measures against targeted surveillance. How “targeted” end users may be impacted by the prioritization of protections against “mass surveillance” is unclear. And indeed, the distinction itself may not be as clear-cut as some of its proponents suggest. In other words, the issues raised here pose the question of how we can ensure that user interests remain a continuous part of the PETs community’s priorities.

summer 2014: “thinking together” in times of mediated politics and politicized technologies


How can we live together? This is the very simple and fun (if challenging) question that the participants of the Osthang Architecture Summer School will be asking this summer in Darmstadt, Germany. The program will come to a close with a nine-day(!) public forum titled “thinking together”, curated by Berno Odo Polzer. As Berno writes:

“«Thinking Together» is focused on rethinking future modes of living together in a pluricentric world, so it is a transdisciplinary platform for political imagination: ‘political’ because it is concerned with the way in which we organize the spaces, practices and lives that we share, locally as well as globally – ‘imagination’ because it is aimed at forming new ideas and imaginaries about how to do so.”

As part of “thinking together”, Femke Snelting, Miriyam Aouragh and I will be organizing an afternoon with the title “Let’s First Get Things Done!”. In other words: how can we resist the divisions of labor between “activists” and “techies” that occur in those sneaky moments of moving forward?

Our experience is that the politics, values and practices of those activists heavily using networked technology for their struggles, and of techno-activists who struggle for progressive and alternative technologies do not always concur. Loyal to the utopia of a globally functioning interwebs, techno-activists usually organize around universal values: information must be “free”, secure, “privacy-preserving”, accessible, etc. In comparison, those who bring their political struggles to the interwebs may express political differences across a broader spectrum, situated in local and/or global contexts. Furthermore, “pragmatic decisions” due to time pressure and lack of resources often mean that these struggles integrate themselves into proprietary and conservative technical infrastructures. In the process, many organizational matters are delegated to techies or to technological platforms. Imagining our futures together, we may want to radically reconfigure these divisions of labor. But how? Where do we start? Well, it seems we will start in Darmstadt.

But we are not the only ones asking these questions. This June, members of some of the most successful alternative projects met at Backbone 409 in Calafou with the objective “to build infrastructures for a free Internet from an anti-capitalist point of view: autonomous servers, open networks, online services, platforms, open hardware, free software, etc.”. I am looking forward to hearing back from that meeting.

In August, Interference will take place in Amsterdam and also raise similar questions with respect to the politics of technology and infrastructures of politics. The organizers write:

“Interference is not a hacker conference. From a threat to the so-called national security, hacking has become an instrument for reinforcing the status quo. Fed up with yet another recuperation, the aim is to re/contextualize hacking as a conflictual praxis and release it from its technofetishist boundaries. Bypassing the cultural filters, Interference wants to take the technical expertise of the hacking scene out of its isolation to place it within the broader perspective of the societal structures it shapes and is part of.”

And surely these discussions will show up at the TransHackFeminist Camp, organized in collaboration with Calafou and the eclectic tech carnival people, and also at HOPE. The topic has also found interest among academics: see the call for papers for the next issue of the FibreCulture Journal, titled “Entanglements: activism and technology”.

Thanks to all these events, this will be a summer of collaboration and labor. I cannot wait to see what thoughts and actions we return with for the Autumn.

Let’s first get things done: on division of labor and practices of delegation in times of mediated politics and politicized technologies

4th of August, 2014
Osthang, Darmstadt, Germany

Be it in getting out the call for the next demonstration on some “cloud service”, or in developing a progressive tech project in the name of an imagined user community, scarcity of resources and distribution of expertise make shortcuts inevitable. But do they really?

The current distance between those who organise their activism to develop “technical infrastructures” and those who bring their struggles to these infrastructures is remarkable. The paradoxical consequences can be baffling: (radical) activists organize and sustain themselves using “free” technical services provided by Fortune 500 companies. At the same time, “alternative tech practices”, like the Free Software community, are sustained by a select (visionary and male) few, proposing crypto with nine lives as the minimum infrastructure for any political undertaking.

The naturalization of this division of labor may be recognized in statements about activists having better things to do than tinker with code or hardware, or in technological projects that locate their politics solely in technology and infrastructures, as if these were outside of the social and political domain. What may seem like a pragmatic solution actually reiterates fault lines of race, gender, age and class. Through the convenient delegation of “tech matters” to the techies or to commercial services, collectives may experience a shift in their priorities and a reframing of their activist culture through technological decisions. The latter, however, are typically not open to broader political discussion and contestation. Such separation also gets in the way of actively considering the ways in which changes in our political realities are entangled with shifts in technological infrastructures.

We want to use this day to resist the reflex of “first getting things done” in order to start a long-term collaboration that brings together those of us with backgrounds in the politics of society and the politics of technology.

Attitudes towards “Spiny CACTOS”


It is one thing to ask people whether they want to control the appropriate flow of their disclosures (or of disclosures others make about them) on Online Social Networks (OSNs); it is another to ask who they think should be responsible for ensuring the appropriate flow of this information. In the first part of a small study conducted last summer at CMU, which Ero Balsa will present next week at USec 2014, participants were asked these two questions. The objective of the study was to find out whether users feel that they should be responsible for taking extra measures to avoid the privacy problems that Cryptographic Access Control Tools for Online Social Networks (CACTOS) hope to mitigate: namely, privacy problems resulting from the disclosure of all user data (including private messages) by default to the OSN provider, and from the delegation of privacy-setting enforcement to OSN providers. In other words, are the privacy concerns of the developers of CACTOS aligned with those of the users, and, if so, who do the users think should be responsible for mitigating these privacy problems?


Somewhat unsurprisingly, the study participants said they want full control over determining the disclosure and appropriate flow of their public and private messages on OSNs, but that Facebook (the OSN used in the study) should share responsibility for making sure that their privacy is respected. For example, despite identifying increasingly permissive privacy settings in Facebook as a problem, the participants thought that it is their responsibility to configure their privacy settings correctly; however, they found that it is the responsibility of the OSN to make sure that privacy settings are effective. When it came to undesirable disclosures about a person by other OSN users, some participants expected the OSN to ensure removal. And while they were aware that Facebook had provided advertisers access to their profiles and tracked their activities across the web, many participants agreed that it is the responsibility of the OSN to make sure that their disclosures do not all of a sudden pop up in the databases of third parties.

In the second part of the study, participants were offered a cryptographically powered tool called “Scramble!” that would allow its users to exercise “strict and highly granular controls” over who sees a disclosure on the OSN, and that prevents the OSN from accessing the content of the disclosure. The tool had the usual usability and complexity problems that security tools have, but all in all the participants thought the tool was useful and could provide them with a desired granularity of control. However, for a variety of reasons, they found it too cumbersome to use a CACTOS like Scramble! to ensure that their information flows appropriately. Even though the participants complained about the indeterministic privacy settings and the OSN provider’s data usage practices, they thought that using a cryptographic tool to eliminate these problems was a disproportionate measure — such heavy-duty tools were found appropriate for “others” who have secrets. They also did not want to establish trust towards yet another entity, in this case the CACTOS provider, and shied away from sharing their data with it, although Scramble! would not be “seeing” their disclosures in clear text. Some even suggested that they would trust the tool if Facebook certified it. Many participants also agreed that if a disclosure were jeopardizing, they could send it as a private message (which they assumed would be kept confidential by the OSN) or, most strikingly, they could just remain silent.
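For readers wondering what a CACTOS roughly does under the hood, here is a generic sketch (not Scramble!’s actual design) of the usual hybrid pattern: the post is encrypted once with a content key, that key is wrapped separately for each chosen recipient, and the OSN only ever stores ciphertext. Recipient keys are assumed to be pre-shared for simplicity.

```python
# Generic CACTOS-style sketch (not Scramble!'s actual design): encrypt a
# post once, wrap the content key per recipient, and hand only ciphertext
# to the OSN. Recipient keys are assumed pre-shared for simplicity.
from cryptography.fernet import Fernet

recipient_keys = {"alice": Fernet.generate_key(), "bob": Fernet.generate_key()}

def share(post: bytes, recipients: list[str]):
    content_key = Fernet.generate_key()
    ciphertext = Fernet(content_key).encrypt(post)   # what the OSN stores
    wrapped = {r: Fernet(recipient_keys[r]).encrypt(content_key)
               for r in recipients}                  # per-recipient access
    return ciphertext, wrapped

def read(ciphertext: bytes, wrapped: dict, me: str) -> bytes:
    content_key = Fernet(recipient_keys[me]).decrypt(wrapped[me])
    return Fernet(content_key).decrypt(ciphertext)

ct, wrapped_keys = share(b"only for close friends", ["alice"])
print(read(ct, wrapped_keys, "alice"))  # b'only for close friends'
# "bob" has no wrapped key, and the OSN sees only the ciphertext `ct`.
```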

[Image: Scramble! logo, KU Leuven]

At this point it is reasonable to ask: why shift the focus of a study from usability to responsibility? The idea of the study was developed within the SPION project, where the responsibilization of users with respect to protecting their privacy is one of the main themes. The argument is that information systems that mediate communications in a way that also collects massive amounts of personal information may be prone to externalizing some of the risks associated with these systems onto the users. This can easily happen under the label of “privacy”, which can be leveraged to put the individual at the center of responsibility. Hence, privacy protection can become a way of burdening users with the risks externalized by those systems, and an apparatus for disciplining them. The objective of the SPION project is hence to critically assess the ways in which privacy technologies may intensify the responsibilization of OSN users, and to explore whether they can be designed to shift responsibilities back to those providing the OSN services.

This idea of “responsibilization” is borrowed from David Garland’s article titled “The Limits of the Sovereign State: Strategies of Crime Control in Contemporary Society”, and was applied in the domain of “identity management systems” by David Barnard-Wills and Debi Ashenden in their article titled “Public Sector Engagement with Online Identity Management”. Responsibilization (a terrible word to pronounce) is a complex concept in studies of governmentality to which I can do no justice here, so I will stick to the basic definition above.

The researchers who write about the topic note that people may have various reactions to responsibilization, including rejecting it and pointing back to the institutions that cause the problem in the first place. For example, OSN privacy policies will often say that it is the responsibility of the users to avoid undesirable information flows, e.g., by watching over themselves and their privacy settings. As I mentioned earlier, OSNs will often change the semantics of privacy settings, so this is a very slippery responsibility to put on the users’ shoulders. Nevertheless, the participants of our study seemed to have internalized that message: they mainly thought that they should be responsible for what they post and how they use functionality. Yet, although our study was small and limited, it seemed that the participants also pushed back on this configuration of responsibilities. Scramble! here functioned as an artefact through which they could imagine a different way of controlling information flows and express their needs: for most participants it was too cumbersome to make up for the unreliability of the OSN by enforcing appropriate information flows through Scramble!. Instead, following from the first part of the study, it should be the responsibility of the OSN to get privacy settings right and not share their information with third parties.

There are many limitations to this small-scale study. It is small, and it is about one OSN and one CACTOS. Further, if for many people “technology” is a scary thing, then “encryption” is likely to give them nightmares. Surely, mentioning that Scramble! was based on “encryption” primed the users in a certain way and influenced their responses. “Responsibility” itself is just as loaded a concept as privacy, control or encryption. Shanto Iyengar has shown, in his paper titled “Framing Responsibility for Political Issues: The Case of Poverty”, that framing has an important impact on whom people will see as responsible for political issues. Exactly how this may also apply to the framing of privacy and responsibilization is a great question for future research.

Finally, it seems that many participants of our study preferred to censor their speech or control their actions over deploying tools to protect their privacy. This is a troubling matter. Most computer science research on privacy seeks to provide techniques and tools that are expected to support users in their everyday negotiation of privacy, e.g., CACTOS, anonymous communication tools, adblockers, identity management systems or privacy nudges. As computer scientists, we may have become too comfortable with a world-view in which “privacy protecting machines” can protect users, or aid them in protecting themselves, from “privacy intrusive machines”. In doing so, we may have overestimated the part “the users” may actually want to play in this challenging game between machines. I hope that by providing some insight into the attitudes of users towards responsibilization in OSNs, as well as towards CACTOS, this paper serves as a starting point for thinking about where users may or may not want to enter this game.

Ero Balsa (KU Leuven), Laura Brandimarte (Carnegie Mellon University), Alessandro Acquisti (Carnegie Mellon University), Claudia Diaz (KU Leuven), Seda Gürses (New York University), “Spiny CACTOS: OSN users attitudes and perceptions towards cryptographic access control tools”, USec ’14, San Diego.

Data Shadows: Anonymity and Digital Networks at apexart

Next week I will be participating in an evening event titled “Data Shadows: Anonymity and Digital Networks”, put together by Alexander Benenson as part of the public program in conjunction with the exhibition Private Matters, organized by Ceren Erdem, Jamie Schwartz and Lisa Hayes Williams. I plan to present the next episode of the series “A Failed Coup Attempt with Folk Songs”, of which I have found traces of part II, part III, and part V. Now the task is to figure out which part number the presentation at apexart will be, where I will have the pleasure of sharing the room with, among others, Finn Brunton and John Menick. Most of my talk will be a reflection on some of the thoughts in the article titled “The Spectre of Anonymity”, against the backdrop of the revelations about the NSA surveillance programs.

The event will take place on the 27th of February, 2014 at 7pm at apexart.

Symposium on Obfuscation

My first encounters with the concept of obfuscation go back to discussions that the privacy research group at COSIC/ESAT (KUL) had about TrackMeNot in 2012. Back then we were discussing the efficacy of the possible protections offered by TrackMeNot when faced with a “learning” machine. Little did I know that one day I would have close encounters with the creators of TrackMeNot: Helen Nissenbaum, Vincent Toubiana and Daniel Howe. All three will be at the Symposium on Obfuscation, which takes place next week at NYU. The line-up of speakers includes Susan Stryker, Nick Montfort, Laura Kurgan, Claudia Diaz, Günes Acar, Finn Brunton, Hanna Rose Shell and Joseph Turow, as well as Rachel Greenstadt, representing her research group that developed “Anonymouth”; Daniel Howe, the creator of “Ad Nauseam”; and Rachel Law, the maker of “Vortex”. You can find out more about the event here.

February 7, 2014. Category: news.

welcome, vous etes ici!

I am currently a post-doctoral fellow at CITP, Princeton University. More coming soon! Until then, please check my old homepage here.

January 2, 2014. Category: meta, news.