9 papers accepted at CHI'26
The CHI Conference on Human Factors in Computing Systems takes place next week in Barcelona. Scientists from MPI-SP will present results from nine studies. One publication received a Best Paper Award and another a Best Paper Honorable Mention Award.
What is Safety? Corporate Discourse, Power, and the Politics of Generative AI Safety.
Authors: Ankolika De, Gabriel Lima, and Yixin Zou.
Abstract: This work examines how leading generative artificial intelligence companies construct and communicate the concept of "safety" through public-facing documents. Drawing on critical discourse analysis, we analyze a corpus of corporate safety-related statements to explicate how authority, responsibility, and legitimacy are discursively established. These discursive strategies consolidate legitimacy for corporate actors, normalize safety as an experimental and anticipatory practice, and push a perceived participatory agenda toward safe technologies. We argue that uncritical uptake of these discourses risks reproducing corporate priorities and constraining alternative approaches to governance and design. The contribution of this work is twofold: first, to situate safety as a sociotechnical discourse that warrants critical examination; second, to caution human-computer interaction scholars against legitimizing corporate framings, instead foregrounding accountability, equity, and justice. By interrogating safety discourses as artifacts of power, this paper advances a critical agenda for human-computer interaction scholarship on artificial intelligence.
From Harm to Healing: Understanding Individual Resilience after Cybercrimes.
Authors: Xiaowei Chen, Mindy Tran, Yue Deng, Bhupendra Acharya, and Yixin Zou.
Abstract: How do individuals recover from cybercrimes? Victims experience various types of harm after cybercrimes, including monetary loss, data breaches, negative emotions, and even psychological trauma. The aspects that support their recovery process and contribute to individual cyber resilience remain underinvestigated. To address this gap, we interviewed 18 cybercrime victims from Western Europe using a trauma-informed approach. We identified four common stages following victimization: recognition, coping, processing, and recovery. Participants adopted various strategies to mitigate the impact of cybercrime and used different indicators to describe recovery. While they mostly relied on social support and self-regulation for emotional coping, service providers largely determined whether victims were able to recover their money. Internal factors, external support, and context sensitivity collectively contribute to individuals' cyber resilience. We recommend trauma-informed support for cybercrime victims. Extending our conceptualization of individual cyber resilience, we propose collaborative and context-sensitive strategies to address the harmful impacts of cybercrime.
It Shouldn’t Be This Difficult: Researcher Perspectives on Diversity and Inclusion in Usable Privacy and Security Research.
Authors: Priyasha Chatterjee, Smirity Kaushik, Karola Marky, and Yixin Zou.
Abstract: While recent usable privacy and security (UPS) research has made progress in moving beyond “the average user,” a systematic account of how UPS researchers navigate diversity and inclusion in their work remains lacking. Through 20 in-depth semi-structured interviews with experienced researchers, we examine how and why they recruit diverse, underserved populations in their work, as well as the challenges they face in doing so, including conceptual difficulties in defining who is underserved, limited access to target populations, and inflexible peer review and publishing norms. Participants also reflected on their own positionality when planning and conducting studies, often expressing uncertainty about how to account for and articulate their positionality. We identify strategies researchers use to overcome challenges and highlight areas where collective action from the research community and institutions is needed to foster greater inclusion in UPS research practices.
Experiencer, Helper, or Observer: Online Fraud Intervention for Older Adults Through Role-based Simulation.
Best Paper Honorable Mention (recognizes the top 5% of all publications at CHI’26)
Authors: Yue Deng, Xiaowei Chen, Junxiang Liao, Bo Li, and Yixin Zou.
Abstract: Online fraud is a critical global threat that disproportionately targets older adults. Prior anti-fraud education for older adults has largely relied on static, traditional instruction that limits engagement and real-world transfer, whereas role-based simulation offers realistic yet low-risk opportunities for practice. Moreover, most interventions situate learners as victims, overlooking that fraud encounters often involve multiple roles, such as bystanders who witness scams and helpers who support victims. To address this gap, we developed ROLESafe, an anti-fraud educational intervention in which older adults learn through different learning roles, including Experiencer (experiencing fraud), Helper (assisting a victim), and Observer (witnessing fraud). In a between-subjects study with 144 older adults in China, we found that the Experiencer and Helper roles significantly improved participants' ability to identify online fraud. These findings highlight the promise of role-based, multi-perspective simulations for enhancing fraud awareness among older adults and provide design implications for future anti-fraud education.
When Feasibility of Fairness Audits Relies on Willingness to Share Data: Examining User Acceptance of Multi-Party Computation Protocols for Fairness Monitoring.
Authors: Changyang He, Parnian Jahangirirad, Lin Kyi, and Asia Biega.
Abstract: Fairness monitoring is critical for detecting algorithmic bias, as mandated by the EU AI Act. Since such monitoring requires sensitive user data (e.g., ethnicity), the AI Act permits its processing only with strict privacy measures, such as multi-party computation (MPC), in compliance with the GDPR. However, the effectiveness of such secure monitoring protocols ultimately depends on people's willingness to share their data. Little is known about how different MPC protocol designs shape user acceptance. To address this, we conducted an online survey with 833 participants in Europe, examining user acceptance of various MPC protocol designs for fairness monitoring. Findings suggest that users prioritized risk-related attributes (e.g., privacy protection mechanism) in direct evaluation but benefit-related attributes (e.g., fairness objective) in simulated choices, with acceptance shaped by their fairness and privacy orientations. We derive implications for deploying and communicating privacy-preserving protocols in ways that foster informed consent and align with user expectations.
"What If My Face Gets Scanned Without Consent": Understanding Older Adults' Experiences with Biometric Payment.
Authors: Yue Deng, Changyang He, Bo Li, and Yixin Zou.
Abstract: Biometric payment, i.e., biometric authentication implemented in digital payment systems, can reduce memory demands and streamline payment for older adults. However, older adults' perceptions and practices regarding biometric payment remain underexplored. We conducted semi-structured interviews with 22 Chinese older adults, including both users and non-users. Participants were motivated to use biometric payment due to convenience and perceived security. However, they also worried about loss of control due to its password-free nature and expressed concerns about biometric data security. Participants also identified desired features for biometric payment, such as lightweight and context-aware cognitive confirmation mechanisms to enhance user control. Based on these findings, we outline recommendations for more controllable and informative digital financial services that better support older adults.
Do Citizens Agree with the EU AI Act? Public Perspectives on Risk and Regulation of AI Systems.
Authors: Gabriel Lima, Gustavo Gil Gasiola, Frederike Zufall, and Yixin Zou.
Abstract: The European Union (EU) has spearheaded the regulation of artificial intelligence (AI) with the AI Act, which regulates AI systems based on the risks they pose to fundamental rights and other protected values. AI systems that pose unacceptable risks are prohibited, high-risk AI systems must comply with mandatory requirements, and minimal risk AI systems are encouraged, but not required, to adopt voluntary standards. Motivated by concerns that the AI Act may not reflect the public's opinions, we investigate how laypeople (N=1,421) assess 48 different AI systems concerning their risk and regulation. We find that people believe all 48 AI systems pose moderate levels of risk and should be regulated (albeit without outright prohibitions). Our findings challenge the AI Act's tiered approach, showing that people might support horizontal regulation requiring minimal standards for AI systems, and provide implications for developers seeking to develop AI aligned with public expectations.
Characterizing Scam-Driven Human Trafficking Across Chinese Borders and Online Community Responses on RedNote.
Best Paper Award (recognizes the top 1% of all publications at CHI'26)
Authors: Jiamin Zheng, Yue Deng, Jessica Chen, Shujun Li, Yixin Zou, and Jingjie Li.
Abstract: A new form of human trafficking has emerged across Chinese borders, where individuals are lured to Southeast Asia with fraudulent job offers and then coerced into operating online scams. Despite its massive economic and human toll, this scam-driven trafficking remains underexplored in academic research. Through qualitative analysis of 158 RedNote posts, we examined how Chinese online communities respond to this threat. Our findings reveal that perpetrators exploit cultural ties to recruit victims for cybercriminal roles within self-sustaining compounds, using sophisticated manipulation tactics. Survivors face serious reintegration barriers, including family rejection, as the cultural values that enable trafficking also hinder their recovery. While communities present protective strategies, efforts are complicated by doubts about the reliability of support and cross-border coordination. We discuss key implications for prevention, platform governance, and international cooperation against scam-driven trafficking. Warning: This paper contains descriptions of physical, psychological, and sexual abuse.
From Clicks to Consensus: Collective Consent Assemblies for Data Governance.
Authors: Lin Kyi, Paul Gölz, Robin Berjon, and Asia Biega.
Abstract: Obtaining meaningful and informed consent from users is essential for ensuring autonomy and control over one's data. Notice and consent, the standard for collecting consent, has been criticized. While other individualized solutions have been proposed, this paper argues that a collective approach to consent is worth exploring. First, individual consent is not always feasible to collect for all data collection scenarios. Second, harms resulting from data processing are often communal in nature, given the interconnected nature of some data. Finally, ensuring truly informed consent for every individual has proven impractical. We propose collective consent, operationalized through consent assemblies, as one alternative framework. We establish collective consent's theoretical foundations and use speculative design to envision consent assemblies leveraging deliberative mini-publics. We present two vignettes: i) replacing notice and consent, and ii) collecting consent for GenAI model training. Our paper employs future backcasting to identify the requirements for realizing collective consent and explores its potential applications in contexts where individual consent is infeasible.