Program

TUESDAY      

25 OCTOBER 2022

12:00

WELCOME & JOINT LUNCH

13:45─14:00

OPENING ADDRESS
Gilles Barthe & Christof Paar, Max Planck Institute for Security and Privacy (MPI-SP)

14:00─14:45

The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law

Sandra Wachter, University of Oxford (remotely)

14:45─15:30

The Future of Privacy Research: Lessons from Artificial Intelligence and Machine Learning

Heng Xu, American University

15:30─16:00

COFFEE BREAK

16:00─16:45

Democracy and the Pursuit of Randomness
Ariel Procaccia, Harvard University (remotely)

16:45─17:30

Communicating with Anecdotes
Nicole Immorlica, Microsoft Research (remotely)

17:30─18:30

DINNER

18:30─19:15

Bias by Design: How Digital Technology Can Fail its Diverse Users
Katharina Reinecke, University of Washington (remotely)

19:15─20:00

Improving Privacy Transparency for Targeted Advertising
Blase Ur, University of Chicago

 

WEDNESDAY

26 OCTOBER 2022

10:00─10:45

Integrative Responsible Computing
Asia Biega, MPI-SP

10:45─11:30

Human Factors in Hardware Reverse Engineering
Steffen Becker, MPI-SP

11:30─12:15

Consumer Reactions to Data Breaches, Deficiencies of Breach Notifications, and Opportunities for Intervention
Yixin Zou, MPI-SP

12:15

JOINT LUNCH

14:00─14:45

Social and Algorithmic Curation of Online Information
Sandra González-Bailón, University of Pennsylvania

14:45─15:30

AI for Social Impact: Poverty and Disaster Mapping from the Sky
Meeyoung Cha, IBS Data Science Group

15:30─16:00

COFFEE BREAK

16:00─16:45

Data Privacy is Important, but it’s not Enough
Katrina Ligett, The Hebrew University of Jerusalem (remotely)

16:45─17:30

Countering Misinformation on Platforms: What Actually Works?
Jonathan Mayer, Princeton University (remotely)

17:30─18:15

Theorizing about the Complexity of Privacy Phenomena: A Configurational Approach
Nan Zhang, University of Florida

 

 

Abstracts

Tuesday - 25 October 2022

14:00─14:45

The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law

Sandra Wachter

Artificial intelligence is increasingly used to make life-changing decisions, including about who is successful with their job application and who gets into university. To do this, AI often creates groups that haven’t previously been used by humans. Many of these groups are not covered by non-discrimination law (e.g., ‘dog owners’ or ‘sad teens’), and some of them are even incomprehensible to humans (e.g., people classified by how fast they scroll through a page or by which browser they use). This is important because decisions based on algorithmic groups can be harmful. If a loan applicant scrolls through the page quickly or uses only lowercase letters when filling out the form, their application is more likely to be rejected. If a job applicant uses browsers such as Internet Explorer or Safari instead of Chrome or Firefox, they are less likely to be successful. Non-discrimination law aims to protect against similar types of harms, for example by guaranteeing equal access to employment, goods, and services, but it has never protected “fast scrollers” or “Safari users”. Granting these algorithmic groups protection will be challenging because the European Court of Justice has historically been reluctant to extend the law to cover new groups. This paper argues that algorithmic groups should be protected by non-discrimination law and shows how this could be achieved.

14:45─15:30

The Future of Privacy Research: Lessons from Artificial Intelligence and Machine Learning

Heng Xu

There is a growing concern that what privacy scholars study in research – theoretically, empirically, and technically – does not resonate well with consumers or businesses in practice. The aim of my talk is to review the obstacles facing behavioral researchers today and to offer alternative paths forward. I will focus most of the talk on the important role of context in privacy research, and on how a recent breakthrough in Machine Learning can help transition a decades-old behavioral theory into a practical method for assessing context effects. At the end of my talk, I will discuss other barriers facing privacy research. Specifically, by drawing an analogy to the development of Artificial Intelligence over the past 70 years, I contend that many of these barriers may trace their roots to a set of misplaced ontological and epistemological priorities that were taken for granted in the field. To explore alternative paths, I will conclude by discussing research strategies that could lower these barriers by generating scholarly knowledge from the same continuum that people draw from when engaging in everyday activities related to their information privacy.

 

16:00─16:45

Democracy and the Pursuit of Randomness

Ariel Procaccia

Sortition is a storied paradigm of democracy built on the idea of choosing representatives through lotteries instead of elections. In recent years this idea has found renewed popularity in the form of citizens’ assemblies, which bring together randomly selected people from all walks of life to discuss key questions and deliver policy recommendations. A principled approach to sortition, however, must resolve the tension between two competing requirements: that the demographic composition of citizens’ assemblies reflect the general population and that every person be given a fair chance (literally) to participate. I will describe our work on designing, analyzing, and implementing randomized participant selection algorithms that balance these two requirements. I will also discuss practical challenges in sortition, based on our experience with the adoption and deployment of our open-source system, Panelot.

 

16:45─17:30

Communicating with Anecdotes

Nicole Immorlica

We study a communication game between a sender and receiver where the sender has access to a set of informative signals about a state of the world. The sender chooses one of her signals, called an “anecdote”, and communicates it to the receiver. The receiver takes an action, yielding a utility for both players. Sender and receiver both care about the state of the world but are also influenced by a personal preference so that their ideal actions differ. We characterize perfect Bayesian equilibria when the sender cannot commit to a particular communication scheme. In this setting the sender faces “persuasion temptation”: she is tempted to select a more biased anecdote to influence the receiver's action. Anecdotes are still informative to the receiver but persuasion comes at the cost of precision. This gives rise to “informational homophily” where the receiver prefers to listen to like-minded senders because they provide higher-precision signals. In particular, we show that a sender with access to many anecdotes will essentially send the minimum or maximum anecdote even though with high probability she has access to an anecdote close to the state of the world that would almost perfectly reveal it to the receiver. In contrast to the classic Crawford-Sobel model, full revelation is a knife-edge equilibrium and even small differences in personal preferences will induce highly polarized communication and a loss in utility for any equilibrium. We show that for fat-tailed anecdote distributions the receiver might even prefer to talk to poorly informed senders with aligned preferences rather than a knowledgeable expert whose preferences may differ from her own. We also show that under commitment differences in personal preferences no longer affect communication and the sender will generally report the most representative anecdote closest to the posterior mean for common distributions.

 

18:30─19:15

Bias by Design: How Digital Technology Can Fail its Diverse Users

Katharina Reinecke

From social media to conversational AI, digital technology has become a mainstay in the lives of many people around the world. Many of these inventions have been made in large technology centers, like Silicon Valley, that are inherently biased towards the views and experiences of product designers and developers who do not reflect a broad demographic in terms of age, education levels, culture, race, and physical abilities. In this talk, I will show how unconscious bias in the design of digital technology can systematically disadvantage specific groups of people. Specifically, I will present my lab’s prior work on diverse users’ experiences with digital technologies—ranging from websites and online communities to security & privacy interfaces and conversational AI—outlining how a lack of knowledge about diverse people can result in technology that is useful for some, but not all users. I will also present two approaches we have developed for recognizing biases in digital technology design: (i) Studying how diverse populations interact with technology using our volunteer-based Lab in the Wild platform and (ii) anticipating biases based on historical data on the unintended consequences of technology using our SpecTechle platform.

 

19:15─20:00

Improving Privacy Transparency for Targeted Advertising

Blase Ur

Advertising networks collect extensive data about consumers, making inferences about their interests, demographics, and more to enable highly targeted advertising. This talk will discuss a series of recent projects aimed at improving consumers' understanding of these practices and their consequences. I will first discuss Tracking Transparency, a browser extension prototype we have developed to give consumers a more comprehensive view of the extent and implications of data collection for advertising purposes. I will next present a deception-based user study we conducted to understand consumers' reactions to data being recontextualized to provide highly personalized advertising. While participants found this practice creepy, it surprisingly did not impact their subsequent information-disclosure decisions. The remainder of the talk will focus on subject data access rights, or consumers' ability to download a copy of the personal data a company holds about them. I will highlight how having 231 consumers download their own Twitter data and share it with us provided insight into the current ecosystem of targeted ads and existing transparency mechanisms. I will conclude with our ongoing work aiming to make data-access rights more useful and informative for consumers.

 

Wednesday - 26 October 2022

10:00─10:45

Integrative Responsible Computing

Asia Biega

Approaches to responsible computing often focus on certain angles: algorithmic interventions, data investigations, or policy recommendations. Using various findings from our work on operationalizing responsibility concepts, I will discuss the need for and the challenges of integrative approaches to responsible computing. The talk will explore some of the complex interrelations between algorithms, data, human factors, and policy. Finally, I will conclude with a reflection on how we can facilitate progress in this space, grounded in our recent work on interpreting generic epistemology, a framework for 'a change of paradigm without crisis'.

 

10:45─11:30

Human Factors in Hardware Reverse Engineering

Steffen Becker

Hardware components form the root of trust in virtually any computing system. Hardware Reverse Engineering (HRE) is employed to analyze such components, i.e., unknown hardware circuits, for example, to infringe intellectual property or to extract secret keys that are deeply embedded in microchips. As fully automated reverse engineering is currently inconceivable, success depends largely on the skills and cognitive factors of the analysts. Taking these human factors into account may lead to novel countermeasures against HRE.
In this talk, we will shed first light on the strategies and cognitive processes of human analysts in HRE. As no experts were initially available for empirical research, we developed a comprehensive HRE training program and then conducted a study with participants of this training program. Our results show that working memory may have an impact on time efficiency. Furthermore, we postulate a three-phase model of sensemaking in HRE and derive a comprehensive taxonomy for modeling HRE. To address the methodological problem that experts are unavailable, we introduce ReverSim, a prior-knowledge-free, game-based simulation that mirrors sub-processes of HRE. We evaluate ReverSim in two studies and show that it is a suitable tool to quantify human factors in this domain, relying exclusively on non-expert samples. We conclude this talk with an outlook on further interdisciplinary research activities on HRE.

 

11:30─12:15

Consumer Reactions to Data Breaches, Deficiencies of Breach Notifications, and Opportunities for Intervention

Yixin Zou

Data breaches put affected consumers at risk of identity theft, account compromise, and other types of cybercrime. My collaborators and I conducted a series of studies to understand consumer reactions to data breaches, identify issues with breach notifications sent by companies, and evaluate mechanisms that motivate consumers to take action to protect themselves. Through interviews and an online survey, we found that our participants were rarely aware of breaches that leaked their personal information. Possible reasons behind consumers’ inaction include optimism bias, a tendency to delay action until harm has occurred, and misconceptions about available protective measures. Analyzing breach notifications provides complementary insights into possible barriers to taking action. Many breach notifications we analyzed were vague in stating the consequences of being affected by a breach; recommended measures were described in lengthy paragraphs with little guidance on prioritization. Based on insights into these known barriers, we conducted a longitudinal experiment to evaluate nudges that encourage consumers to change breached passwords. Our findings suggest the importance of highlighting the risks of compromised passwords and providing actionable instructions for changing passwords. I will conclude the talk by discussing opportunities for better supporting consumers in reacting to breaches and improving data breach notifications through policy and industry guidance.

 

14:00─14:45

Social and Algorithmic Curation of Online Information

Sandra González-Bailón

The quality of our democracies relies on the quality of the information that citizens consume. Social media have challenged the mechanisms through which traditional news media enforce political accountability, blurring the boundaries between legitimate and illegitimate information and between credible and unreliable sources. On the other hand, online networks are also the conduit for information that would not be able to circulate otherwise, enabling the articulation of collective action efforts and the formation of alternative publics contesting the mainstream. Online networks, in other words, act as tools for organization and awareness but also for disinformation and conflict. One complication in understanding how these information dynamics unfold is that networks and social choices are in constant interaction with algorithmic curation: algorithms curate content based on predictions of relevance that are based on past behavior and that determine future behavior in a never-ending loop. In this talk, I will discuss recent research that aims to unpack the interaction of social and algorithmic curation, with a focus on the information asymmetries and biases that arise from that interaction.

 

14:45─15:30

AI for Social Impact: Poverty and Disaster Mapping from the Sky

Meeyoung Cha

Artificial intelligence (AI) is reshaping business and science. One area where it is having an impact is the achievement of the Sustainable Development Goals (SDGs). This talk will review some of the latest research advances on poverty mapping (goal #1) and climate action (goal #13). I will discuss the problem of inferring economic development in the developing world with few official statistics. One emerging technology is to use deep image learning on high-resolution daytime satellite imagery. The same technology can be used to detect disaster damage at the building level, assisting in rapid response. I'll conclude the talk by discussing other exciting opportunities for using data science and AI for social impact.

 

16:00─16:45

Data Privacy is Important, but it’s not Enough

Katrina Ligett

Our current data ecosystem leaves individuals, groups, and society vulnerable to a wide range of harms, ranging from privacy violations to subversion of autonomy to discrimination to erosion of trust in institutions. We argue that legal and technical tools aimed at controlling data and addressing privacy concerns are inherently insufficient for addressing the full range of these harms, and suggest directions for making progress with respect to these challenges. Joint work with Ayelet Gordon-Tapiero and Alexandra Wood.

 

16:45─17:30

Countering Misinformation on Platforms: What Actually Works?

Jonathan Mayer

Misinformation continues to proliferate across the internet. How can online platforms effectively respond? In a line of ongoing research, we are conducting experimental task and field studies to understand how platform interventions affect individual perceptions and behaviors. These studies are grounded in usable security scholarship, which has previously examined responses to trust and safety risks and identified best practices for user interface design. Our results call into question the efficacy of common platform responses to misinformation and highlight the limitations of prevailing research methods.

 

17:30─18:15

Theorizing about the Complexity of Privacy Phenomena: A Configurational Approach

Nan Zhang

Privacy scholars study phenomena that are marked by complexity: Different individuals may ascribe distinct meanings to the very term "privacy". They also tend to engage in privacy-seeking behavior (or choose not to) for idiosyncratic and, at times, contradictory reasons. The complexity of privacy phenomena is reflected in many longstanding and influential debates in the field, from what privacy means to the nature of the privacy paradox to whether privacy self-management can be effective in today’s digital world. Theorizing about a complex privacy phenomenon is challenging for at least two reasons. First, the prevalence of idiosyncrasy confronts the limits of variable-centric theorizing, an influential approach for developing the extant privacy theories. Second, causal asymmetry is a common occurrence in privacy, meaning that the factors that consistently predict the presence of an outcome need not be the mirror opposite of factors that predict its absence. In this talk, I will attempt to address these challenges by engaging with configurational theorizing, which explores how and why multiple variables (or concepts) combine in distinct configurations (i.e., value combinations) to explain a phenomenon of interest. I will show that, as a “people-centric”, rather than variable-centric, approach, configurational theorizing allows antecedents and outcomes to diverge across individuals with different configurations. Further, it naturally captures causal asymmetry because it allows us to separately contemplate (and distinguish between) the conditions that are sufficient and/or necessary for an outcome. Through an empirical demonstration, I will highlight how configurational theorizing may unveil unique insights by sensitizing scholars to how different people engage in qualitatively distinct mechanisms when making sense of privacy. I will conclude by discussing the limits of configurational theorizing and providing actionable recommendations for future research.
