Steffen Becker is a postdoctoral researcher in the CASA Cluster of Excellence at Ruhr University Bochum and at the Max Planck Institute for Security and Privacy (MPI-SP). His research interests lie at the intersection of hardware security and usable security, as well as within each of these areas. For example, his research aims to make hardware more secure against reverse-engineering attacks by considering the human factors involved. In other research projects, Steffen analyzes hardware Trojans and their detection, or examines how different end-user populations perceive security and privacy.

Asia J. Biega is a tenure-track faculty member at the Max Planck Institute for Security and Privacy (MPI-SP) leading the Responsible Computing group. Her research centers around developing, examining and computationally operationalizing principles of responsible computing, data governance & ethics, and digital well-being. Before joining MPI-SP, Asia worked at Microsoft Research Montréal in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) Group. She completed her PhD in Computer Science at the MPI for Informatics and the MPI for Software Systems, winning the DBIS Dissertation Award of the German Informatics Society. In her work, Asia engages in interdisciplinary collaborations while drawing from her traditional CS education and her industry experience including stints at Microsoft and Google.

Meeyoung Cha is an associate professor in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST) and an adjunct professor in the Department of Brain and Cognitive Sciences and Graduate School of Culture Technology. She previously worked as a post-doctoral researcher at the Max Planck Institute for Software Systems (MPI-SWS) in Saarbrücken, Germany. Meeyoung's interests include data science and information science, with a focus on modeling socially relevant information propagation processes. Her research on misinformation, poverty mapping, fraud detection, and long-tail content has received over 18,000 citations and best paper awards at a number of conferences. She has received the Korean Young Information Scientist Award 2019, AAAI ICWSM Test of Time Award 2020, and the Minister's Award of Science and ICT of Korea 2022. Meeyoung has worked as a visiting professor at Facebook's Data Science Team in Menlo Park, California, and has been named the World Customs Organization (WCO)'s BACUDA science collaborator. She is a member of the Seoul Forum for International Affairs (SFIA) and a commissioner for the Korea Copyright Commission, the Korea Customs Service, the National Tax Service, the Open Data Mediation Committee (ODMC), and the Presidential Council on Intellectual Property. As a Chief Investigator, she also leads the Data Science Research Group at the Institute for Basic Science (IBS) in Korea.

Sandra González-Bailón (PhD Sociology, Oxford) is an Associate Professor at the Annenberg School for Communication, and affiliated faculty at the Warren Center for Network and Data Sciences. Her research lies at the intersection of network science, computational tools, and political communication. She is the author of Decoding the Social World (MIT Press, 2017) and co-editor of The Oxford Handbook of Networked Communication (OUP, 2020).

Nicole Immorlica’s research lies broadly within the field of economics and computation. Using tools and modeling concepts from both theoretical computer science and economics, Nicole hopes to explain, predict, and shape behavioral patterns in various online and offline systems, markets, and games. Her areas of specialty include social networks and mechanism design. Nicole received her Ph.D. from MIT in Cambridge, MA in 2005 and then completed three years of postdocs at both Microsoft Research in Redmond, WA and CWI in Amsterdam, Netherlands before accepting a job as an assistant professor at Northwestern University in Chicago, IL in 2008. She joined the Microsoft Research New England Lab in 2012.

Katrina Ligett, currently on sabbatical as the Microsoft Visiting Professor at the Center for Information Technology Policy (CITP) at Princeton University, is a professor in the School of Computer Science and Engineering at the Hebrew University, where she is also the head of the MATAR program on the Interfaces of Technology, Society, and Networks (formerly known as Internet & Society). She is also an elected member of the Federmann Center for the Study of Rationality and an affiliate of the Federmann Cyber Security Research Center. Before joining the Hebrew University, she was faculty in computer science and economics at the California Institute of Technology (Caltech). Ligett's primary research interests are in data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory. She received her Ph.D. in computer science from Carnegie Mellon University in 2009 and did her postdoctoral training at Cornell University. She is a recipient of the NSF CAREER award and a Microsoft Faculty Fellowship. Ligett was the co-chair of the 2021 International Conference on Algorithmic Learning Theory (ALT) and the chair of the 2021 Symposium on Foundations of Responsible Computing (FORC). She currently serves as an Advisory Board Member of the Harvard University OpenDP Project and as an associate editor at the journals TheoretiCS and Transactions on Economics and Computation (TEAC). She is also an executive board member of the Association for Computing Machinery (ACM) Special Interest Group on Economics and Computation (SIGecom) and a principal investigator in the Simons Foundation Collaboration on the Theory of Algorithmic Fairness.

Jonathan Mayer is an Assistant Professor at Princeton University, where he holds appointments in the Department of Computer Science and the School of Public and International Affairs. Before joining the Princeton faculty, he served as the technology law and policy advisor to United States Senator Kamala Harris and as the Chief Technologist of the Federal Communications Commission Enforcement Bureau. Professor Mayer holds a Ph.D. in computer science from Stanford University and a J.D. from Stanford Law School.

Ariel Procaccia is Gordon McKay Professor of Computer Science at Harvard University. He works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. His distinctions include the Social Choice and Welfare Prize (2020), a Guggenheim Fellowship (2018), the IJCAI Computers and Thought Award (2015), and a Sloan Research Fellowship (2015). To make his research accessible to the public, he has co-founded several not-for-profit websites including Spliddit.org and Panelot.org, and he regularly contributes opinion pieces.

Katharina Reinecke is an Associate Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she researches and teaches human-computer interaction and computing ethics. Her research explores how people's interaction with digital technology varies depending on their cultural, geographic, or demographic background, and how technology can be biased against people who are unlike the small groups of people that created it. Her lab has developed a number of approaches and systems that make technology better suited to diverse user groups and that can help designers and developers anticipate unintended consequences of technology. Katharina is a co-founder of Lab in the Wild, a virtual lab for conducting large-scale behavioral studies with diverse participants, and of Augury Design Inc., a startup that predicts the success of website designs based on Lab in the Wild data. Katharina received a PhD in Computer Science from the University of Zurich and was a postdoctoral fellow at Harvard University. Prior to coming to the University of Washington, she was an Assistant Professor in the School of Information at the University of Michigan. Her lab is currently supported by the NSF, Google, Microsoft, Adobe, the Wikimedia Foundation, and Meta/Facebook.

Blase Ur is Neubauer Family Assistant Professor of Computer Science at the University of Chicago, where he researches security, privacy, human-computer interaction, and ethical AI. He is part of the UChicago SUPERgroup, which uses data-driven methods to help users make better security and privacy decisions, as well as to improve the usability of complex computer systems. He has received an NSF CAREER Award (2021), three best paper awards, five honorable mention paper awards, and UChicago's Quantrell Award for undergraduate teaching (2021). He holds degrees from Carnegie Mellon University (PhD and MS) and Harvard University (AB). He also enjoys bicycles, cacti/succulents, and punk rock.

Sandra Wachter is an Associate Professor and Senior Research Fellow at the Oxford Internet Institute (OII) at the University of Oxford, focusing on the law and ethics of AI, Big Data, and robotics, as well as Internet and platform regulation. Her current research focuses on profiling, inferential analytics, explainable AI, algorithmic bias, diversity, and fairness, as well as governmental surveillance, predictive policing, human rights online, and health tech and medical law. At the OII, Professor Wachter leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates legal, ethical, and technical aspects of AI, machine learning, and other emerging technologies. Professor Wachter is also an affiliate and member of numerous institutions, such as the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum's Global Futures Council on Values, Ethics and Innovation, the European Commission's Expert Group on Autonomous Cars, the Law Committee of the IEEE, the World Bank's Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Bonavero Institute of Human Rights at Oxford's Law Faculty, and the Oxford Martin School. Professor Wachter also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies. Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and the Law Faculty at the University of Vienna. She has also worked at the Royal Academy of Engineering and the Austrian Ministry of Health. Professor Wachter has been the subject of numerous media profiles, including by the Financial Times, Wired, and Business Insider.
Her work has been prominently featured in several documentaries, including pieces by Wired and the BBC, and has been extensively covered by The New York Times, Reuters, Forbes, Harvard Business Review, The Guardian, BBC, The Telegraph, CNBC, CBC, Huffington Post, Science, Nature, New Scientist, FAZ, Die Zeit, Le Monde, HBO, Engadget, El Mundo, The Sunday Times, The Verge, Vice Magazine, Sueddeutsche Zeitung, and SRF. Professor Wachter has received numerous awards, including the O2RB Excellence in Impact Award (2021 and 2018), the Computer Weekly Women in UK Tech Award (2021), the Privacy Law Scholars Conference (PLSC) Award (2019) for her paper "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI," and the CognitionX AI Superhero Award (2017) for her contributions to AI governance. Her British Academy project "AI and the Right to Reasonable Algorithmic Inferences" aims to find mechanisms that provide greater protection for the right to privacy and identity, collective and group privacy rights, and safeguards against the harms of inferential analytics and profiling. Professor Wachter further works on the governance and ethical design of algorithms, including the development of standards to open the 'AI Blackbox' and to increase accountability, transparency, and explainability. Her explainability tool, Counterfactual Explanations, has been implemented by major tech companies such as Google, Accenture, IBM, and Vodafone. Professor Wachter also works on ethical auditing methods for AI to combat bias and discrimination and to ensure fairness and diversity, with a focus on non-discrimination law. Her recent work has shown that the majority (13/20) of bias tests and tools do not live up to the standards of EU non-discrimination law. In response, she developed a bias test, Conditional Demographic Disparity (CDD), that meets EU and UK standards. Amazon has adopted her work and implemented it in its cloud services.
Professor Wachter is also interested in legal and ethical aspects of robotics (e.g., surgical, domestic and social robots) and autonomous systems (e.g., autonomous and connected cars), including liability, accountability, and privacy issues as well as international policies and regulatory responses to the social and ethical consequences of automation (e.g., future of the workforce, worker rights). Internet policy and platform regulation as well as cyber-security issues are also at the heart of her research, where she addresses areas such as “fake news,” deepfakes, misinformation, censorship, online surveillance, intellectual property law, and human rights online. Her previous work also looked at (bio)medical law and bioethics in areas such as interventions in the genome and genetic testing under the Convention on Human Rights and Biomedicine.

Heng Xu is a Professor of Information Technology and Analytics in the Kogod School of Business at American University, where she also serves as the Director of the Kogod Cyber Governance Center. Before joining Kogod in 2018, she served as a faculty member at the Pennsylvania State University for 12 years, as well as a program director at the National Science Foundation for 3 years. Her recent research focuses on data analytics, privacy protection, data ethics, and algorithmic fairness. Her scholarly work has been published in premier outlets across various fields such as Psychology, Business, and Computer Science, including Psychological Methods, Management Science, Management Information Systems Quarterly (MIS Quarterly), Information Systems Research, the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), and many others. Her interdisciplinary research has been sponsored by multiple U.S. funding agencies, such as the Defense Advanced Research Projects Agency, the National Institutes of Health, and the National Science Foundation, as well as companies such as Amazon and Facebook. Her work has received many awards, including the MISQ Impact Award (2021), the Woman of Achievement Award in IEEE Big Data Security (2021), the IEEE ITSS Leadership Award in Intelligence and Security Informatics (2020), the Operational Research Society's Stafford Beer Medal (2018), the National Science Foundation's CAREER Award (2010), and many best paper awards and nominations at various conferences.

Nan Zhang is a Professor of Management at the Warrington College of Business, University of Florida. Before joining UF in 2022, he was a Professor of Information Technology and Analytics at American University, a Professor of Information Sciences and Technology at Pennsylvania State University, and a Professor of Computer Science at George Washington University. He also served as a Program Director at the US National Science Foundation (NSF). His research interests include the use and governance of AI in organizational settings, information security and privacy, and the use of machine learning in social and behavioral science research. His research has been sponsored by US federal agencies such as the NSF, the Defense Advanced Research Projects Agency, and the Army Research Office, as well as by companies such as Amazon and Meta.

Yixin Zou is a postdoctoral research fellow at the University of Michigan and an incoming tenure-track faculty member at the Max Planck Institute for Security and Privacy (MPI-SP). Her research interests span human-computer interaction, privacy, and security, focusing on improving consumers’ adoption of protective behaviors and supporting the digital safety of at-risk populations. Her research has been recognized with the 2022 John Karat Usable Privacy and Security Student Research Award and best paper awards/honorable mentions at the Symposium on Usable Privacy and Security (SOUPS) and the ACM Conference on Human Factors in Computing Systems (CHI). In addition, her research has generated broader impacts on industry practice (e.g., Mozilla and NortonLifeLock) and public policy, including the rulemaking process for the California Consumer Privacy Act. She holds a Ph.D. in Information from the University of Michigan.
