Open video is the movement to promote free expression and innovation in online video. OVC is a two-day summit to explore the future of video on the web. Learn more...

Presented by the Open Video Alliance and New York Law School.

With support from Mozilla, Google, Cloud Video Encoding, Kaltura, Yale ISP, Bocoup (The JavaScript Company), Flumotion, and the Pace University Seidenberg School.

Learn more about supporting OVC



The Open Video Alliance is a coalition of individuals and organizations dedicated to creating and promoting open technologies, policies, and practices. Members include Mozilla, Miro, Kaltura, and the Information Society Project.

Visual Privacy? Visual Anonymity?

With cameras now so widespread and image-sharing so routine, it is surprising how little public discussion there is about visual privacy and anonymity. Everyone is discussing and designing for the privacy of personal data, but almost no one is considering the right to control personal images in a socially networked age, or the right to remain anonymous in a video-mediated world.

Imagine a landscape where companies are able to commercially harvest, identify and trade images of a person’s face as easily as they share emails and phone numbers. While it is technically illegal in some jurisdictions (such as the EU) to hold databases of such images and data, it is highly likely that without proactive policymaking, legislative loopholes will be exploited where they exist.  So far, policy discussions around visual privacy have largely centred on public concerns about surveillance cameras and individual liberties. And now, with automatic face-detection and recognition software being incorporated into consumer cameras, mobile apps and social media platforms, the potential for identification of activists and others – including victims, witnesses and survivors of human rights abuses – is growing.

Furthermore, services increasingly store users’ personal and other data in the digital cloud. Cloud data is processed and handled across multiple jurisdictions, creating potential inconsistencies and conflicts in how users and their data are protected. More worryingly, cloud storage leaves data vulnerable to attack and theft by malicious hackers. Hostile governments, in particular, can use photo and video data – particularly when linked with social networking data – to identify, track and target activists within their countries.

Beyond that, in an increasingly video-mediated world, the right to visual anonymity – understood in light of how freedom of speech is often facilitated by the ability to be anonymous – has not yet been fully articulated or developed.

Professor Helen Nissenbaum

This session, led by Helen Nissenbaum, Professor in the NYU Department of Media, Culture, and Communication, and Sam Gregory, Program Director at WITNESS, will work towards a greater understanding of what we might mean by visual privacy and visual anonymity, and how these relate to other conceptions of privacy and anonymity. It will address the opportunities identified by commercial providers who are beginning to use facial recognition and identification within social network products, and create a dialogue around the key issues. It will also consider what tools and technologies could be incorporated into core web and mobile functionality to enable both human rights activists and general users to exercise greater control over visual privacy and anonymity. This includes building “visual privacy” checks (including masking data encoded into images, such as location, time, type of camera, etc.), as well as standard privacy checks, into product design, development and marketing workflows, drawing on risk scenarios outlined through human rights impact assessments.
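As one concrete illustration of the “visual privacy” checks described above, the sketch below removes EXIF metadata (GPS location, timestamp, camera model) from a JPEG by dropping its APP1 and comment segments at the byte level. This is a minimal sketch using only the Python standard library; the function name is hypothetical, and a production tool would also handle XMP in other APPn segments, embedded thumbnails, and non-JPEG formats.

```python
# Minimal sketch of one "visual privacy" check: stripping EXIF metadata
# (GPS location, timestamp, camera model) from a JPEG. EXIF lives in APP1
# segments; free-text comments live in COM segments. Hypothetical helper,
# not a production tool.

def strip_jpeg_metadata(data: bytes) -> bytes:
    """Return JPEG bytes with APP1 (EXIF/XMP) and COM segments removed."""
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt JPEG segment stream")
        marker = data[i + 1]
        if marker == 0xDA:           # SOS: entropy-coded image data follows;
            out += data[i:]          # copy the rest verbatim and stop
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")  # includes itself
        segment = data[i:i + 2 + length]
        if marker not in (0xE1, 0xFE):   # drop APP1 (EXIF/XMP) and COM
            out += segment
        i += 2 + length
    return bytes(out)
```

The image pixels are untouched; only the metadata segments before the scan data are filtered out, which is why the output still opens in any viewer.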
