Program
| Monday | July 07, 2025 |
|---|---|
| 10:00 – 10:20 | Opening Remarks<br>Peter Schwabe, Managing Director at the MPI for Security and Privacy; Krishna Gummadi, Scientific Director at the MPI for Software Systems |
| 10:20 – 11:05 | AI Governance and the Global South: A Sociotechnical Perspective<br>Virgilio Almeida, Federal University of Minas Gerais and Harvard University |
| 11:05 – 11:50 | How to empower humans in human-technology interaction: A psychological perspective on trust, understanding and privacy<br>Nicole Krämer, University of Duisburg-Essen |
| 11:50 – 12:10 | Lightning Talks |
| 12:10 – 12:30 | Poster Session |
| 12:30 – 13:30 | LUNCH |
| 13:30 – 14:15 | Large-Scale Private Analytics in Practice<br>Hamed Haddadi, Imperial College London |
| 14:15 – 15:15 | Mentoring Session<br>Wagner Meira, Professor of Computer Science, Universidade Federal de Minas Gerais; Daniel Zappala, Professor of Computer Science, Brigham Young University; Junho Lee, Associate Expert at the United Nations; and all of the speakers and panellists |
| 15:15 – 15:45 | COFFEE BREAK |
| 15:45 – 16:30 | Privacy in the age of AI: What's changed and what should we do about it?<br>Sauvik Das, Carnegie Mellon University |
| 16:30 – 16:55 | Student Talk<br>Lin Kyi, MPI for Security and Privacy |
| 16:55 – 17:20 | Student Talk<br>Niklas Risse, MPI for Security and Privacy |
| 17:20 – 17:30 | Wrap-up<br>Mia Cha, Deputy Managing Director at the MPI for Security and Privacy |
| 18:00 | DINNER & SCIENTIFIC EXCHANGE |
| Tuesday | July 08, 2025 |
|---|---|
| 10:00 – 10:20 | WELCOME REMARKS |
| 10:20 – 11:05 | Can We Constrain AI Misuse without Controlling People?<br>Michael Veale, University College London |
| 11:05 – 11:50 | Cultivating IT-assisted Democracy in the 21st Century<br>Sue Moon, Korea Advanced Institute of Science & Technology |
| 11:50 – 12:30 | Poster Session |
| 12:30 – 13:30 | LUNCH |
| 13:30 – 14:15 | (C)lawing back your human rights and having societal impact<br>Kris Shrishak, Enforce, Irish Council for Civil Liberties |
| 14:15 – 15:15 | Round Table Panel: “Deep dive into technical, behavioral, and governance perspectives”<br>Christoph Engel, Scientific Director at the MPI for Research on Collective Goods; Yixin Zou, Faculty Member at the MPI for Security and Privacy; Asja Fischer, Professor at the Faculty of Computer Science, Ruhr University Bochum; Emilio Zagheni, Scientific Director at the MPI for Demographic Research; Jean-Louis van Gelder, Scientific Director at the MPI for the Study of Crime, Security and Law |
| 15:15 – 15:45 | COFFEE BREAK |
| 15:45 – 16:30 | Privacy Myths and Mistakes: Paradoxes, tradeoffs, and the omnipotent consumer<br>Kirsten Martin, University of Notre Dame |
| 16:30 – 16:55 | Student Talk<br>Gabriel Lima, MPI for Security and Privacy |
| 16:55 – 17:20 | Student Talk<br>Julius Hermelink, MPI for Security and Privacy |
| 17:20 – 17:30 | Closing Remarks<br>Carmela Troncoso, Scientific Director at the MPI for Security and Privacy |
Abstracts
Monday – July 7, 2025
10:20 – 11:05
AI Governance and the Global South: A Sociotechnical Perspective
Virgilio Almeida
Artificial Intelligence (AI) is transforming societies, economies, and systems of governance worldwide. Yet, discussions around AI governance are still largely shaped by perspectives from the Global North, often neglecting the distinct challenges and opportunities in the Global South. A key pillar of effective AI governance is the development of robust, public-interest-oriented academic research. This keynote will examine AI governance through a sociotechnical lens, highlighting the urgent need for new institutions and inclusive, context-sensitive policies that balance innovation with the public good.
11:05 – 11:50
How to empower humans in human-technology interaction: A psychological perspective on trust, understanding and privacy
Nicole Krämer
Increasingly, intelligent systems will help humans make decisions in their private as well as occupational lives, both in the form of artificial entities (LLM chatbots, AI decision-support systems, autonomous driving) and via social media algorithms. Explainability and understandability are hailed as important prerequisites for the acceptance of such systems, in the sense that users should be able to understand a system’s otherwise opaque functioning. This transparency is also seen as helpful with regard to privacy, allegedly enabling users to better protect themselves. However, early studies show that users either do not want to “understand” too much, or that a system’s functioning is difficult to grasp because most users lack knowledge of computational processes to build on. Drawing on literature from the field of science communication, it is therefore suggested that an alternative route to acceptance is to instill “epistemic trust” and, eventually, “calibrated trust”.
The contribution also discusses where psychological paths towards trust, understanding, and privacy end, and where further disciplines such as ethics and law must step in. At the same time, it reflects on how psychological insights can provide the ground on which ethics and law can propose norms and regulations.
13:30 – 14:15
Large-Scale Private Analytics in Practice
Hamed Haddadi
Collecting user feedback and interactions with services and products is essential for improving models and product designs. However, product telemetry often directly impacts user privacy. In this talk, I will present the challenges and opportunities in performing large-scale, private analytics (i.e., telemetry) in industrial settings. I will bring together successful examples from industry of adopting the latest advances in privacy and security to provide useful product analytics, without getting caught in the tangled, excessively complicated, and impractical designs often seen in papers.
15:45 – 16:30
Privacy in the age of AI: What's changed and what should we do about it?
Sauvik Das
Many people are understandably apprehensive about how modern developments in AI will affect privacy. But what, exactly, does AI change about privacy? This talk will be a tale of two complementary perspectives on answering that question. In the first half of this talk, I will present work we have done codifying how the unique capabilities and requirements of AI technologies create new privacy risks (e.g., deepfakes, physiognomic classifiers) and exacerbate known ones (e.g., surveillance, aggregation). I will also discuss how today's AI practitioners face significant awareness, motivation, and ability barriers in identifying and mitigating these risks. In the second half of this talk, I will present work we have done exploring how modern AI technologies, and generative AI in particular, unlock new interaction paradigms that can be used to help address longstanding human-centered privacy challenges.
Tuesday – July 8, 2025
10:20 – 11:05
Can We Constrain AI Misuse without Controlling People?
Michael Veale
AI tools can be used for many forms of intentional misuse, such as deception (e.g. romance scams), artificial child abuse imagery, or social engineering for cybersecurity violations. Some of these forms of misuse are illegal at the point of generation or even intent to generate (e.g. child abuse imagery), while others become illegal at the point of use (e.g. deception) or dissemination (e.g. incitement to violence). AI systems give high capabilities for misuse to actors who typically have not been able to use computers in such an advanced way. Such misuse does not depend on powerful AI systems: errors and failures that would be unacceptable in business or government are not a big problem for many criminals, who externalise the cost of these errors onto their would-be victims. Many of the tools we see are developed, fine-tuned, and deployed on local hardware, based on open-weight models.
We should expect increasing calls to stop such misuse, either by technically preventing it in the first place or by leaving ways for easier detection and enforcement after the fact. The main target of such efforts will not be model developers, who (according to current research) seem unable to bake robust, sticky safeguards into model weights, but various computational intermediaries, such as operating systems, cloud providers, network security providers, model marketplaces, and even hardware manufacturers. In this talk, I present reflections from an ongoing book project with Robert Gorwa, Full Stack AI Governance, looking at what kinds of governance are available when we look across the technology stack, and what rights and freedoms are at stake if we get this kind of governance wrong.
11:05 – 11:50
Cultivating IT-assisted Democracy in the 21st Century
Sue Moon
For the past 30 years, we have witnessed the rise of democracy in emerging economies. Modern IT services and infrastructures have accelerated the spread of information, but at the same time aggravated social chasms. In this talk, drawing on historical and anecdotal events, we review how modern digital infrastructures offered an initial illusion of improved connectivity but instead fostered consolidated echo chambers. We wrap up with a review of today’s online forums and the challenges that remain.
13:30 – 14:15
(C)lawing back your human rights and having societal impact
Kris Shrishak
This two-part talk will show you (1) examples of how you can go beyond your research papers to have societal impact when laws are written and enforced, and (2) how international human rights law can be an important lens for understanding the privacy that privacy-enhancing technologies should offer.
15:45 – 16:30
Privacy Myths and Mistakes: Paradoxes, tradeoffs, and the omnipotent consumer
Kirsten Martin
The goal of this presentation is to dispel myths permeating privacy research, practice, and policy. These myths about privacy in the market – including that there is a tradeoff between functionality and privacy, that people don’t care about privacy, and that people behave according to the privacy paradox – provide a distraction from holding firms accountable for the many ways they can (and do) violate privacy. For research, such myths limit the generalizability of information science studies concerning privacy and data governance: a focus on disclosure decisions and on consumers as omnipotent privacy negotiators narrows the scope of privacy research. I suggest future research directions that broaden the scope of privacy research to the appropriateness of data flows after disclosure, and that focus more on the actions of firms and the structure of markets that preclude consumers’ privacy needs from being offered or met.