The FAT* 2018 Conference on Fairness, Accountability, and Transparency is a first-of-its-kind international and interdisciplinary peer-reviewed conference that seeks to publish and present work examining the fairness, accountability, and transparency of algorithmic systems. This article covers the research papers presented in Session 1, on online discrimination and privacy.
FAT* hosted the presentation of research work from a wide variety of disciplines, including computer science, statistics, the social sciences, and law. It took place on February 23 and 24, 2018, at the New York University Law School, in cooperation with its Technology Law and Policy Clinic. The conference brought together over 450 attendees, including academic researchers, policymakers, and machine learning practitioners, and featured 17 research papers, 6 tutorials, and 2 keynote presentations from leading experts in the field.
Session 1 explored ways in which online discrimination can happen and privacy can be compromised, and the papers presented propose novel and practical solutions to some of the problems identified. We introduce our readers to the papers presented at FAT* 2018 in this area, summarising the key challenges and questions explored by leading minds on the topic and the answers they propose.
Session Chair: Joshua Kroll (University of California, Berkeley)
Paper 1: Potential for Discrimination in Online Targeted Advertising
Problems identified in the paper:
Much recent work has focused on detecting instances of discrimination in online services, ranging from discriminatory pricing on e-commerce and travel sites like Staples (Mikians et al., 2012) and Hotels.com (Hannák et al., 2014) to discriminatory prioritization of service requests and offerings from certain users over others in crowdsourcing and social networking sites like TaskRabbit (Hannák et al., 2017). In this paper, the authors focus on the potential for discrimination in online advertising, which underpins much of the Internet’s economy. Specifically, they focus on targeted advertising, where ads are shown only to a subset of users that have attributes (features) selected by the advertiser.
Key Takeaways:
- A malicious advertiser can create highly discriminatory ads without using sensitive attributes such as gender or race. The current methods used to counter the problem are insufficient.
- The potential for discrimination in targeted advertising arises from the ability of an advertiser to use the extensive personal (demographic, behavioral, and interests) data that ad platforms gather about their users to target their ads.
- Different targeting methods offered by Facebook: attribute-based targeting, PII-based (custom audience) targeting, and look-alike audience targeting.
- Three basic approaches to quantifying discrimination and their tradeoffs (a sketch of the outcome-based approach follows this list):
- Based on advertiser’s intent
- Based on ad targeting process
- Based on the targeted audience (outcomes)
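To make the outcome-based (targeted audience) view concrete, here is a minimal sketch of one way an ad's delivered audience could be checked for skew. The disparity metric, group labels, and audience counts are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative only: a simplified outcome-based disparity check for a targeted ad
# audience. The metric and the numbers are hypothetical, not the paper's measure.

def representation_ratio(targeted: dict, relevant: dict, group: str) -> float:
    """Compare a group's share of the targeted audience with its share of the
    relevant (eligible) population. A ratio near 1.0 means proportional reach;
    values far from 1.0 suggest the targeting skews toward or away from the group."""
    share_targeted = targeted[group] / sum(targeted.values())
    share_relevant = relevant[group] / sum(relevant.values())
    return share_targeted / share_relevant

# Hypothetical audience counts by gender.
relevant_population = {"men": 50_000, "women": 50_000}
targeted_audience   = {"men": 9_000,  "women": 1_000}

for g in relevant_population:
    print(g, round(representation_ratio(targeted_audience, relevant_population, g), 2))
# men 1.8, women 0.2 -> the ad reaches women at one fifth of their population share.
```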
Paper 2: Discrimination in Online Personalization: A Multidisciplinary Inquiry
The authors explore ways in which discrimination may arise in the targeting of job-related advertising, noting the potential for multiple parties to contribute to its occurrence. They then examine the statutes and case law interpreting the prohibition on advertisements that indicate a preference based on protected class and consider its application to online advertising.
This paper provides a legal analysis of a real case, which found that simulated users selecting a gender in Google’s Ad Settings received employment-related advertisements at rates that differed along gender lines despite identical web browsing patterns.
Key Takeaways:
- The authors’ analysis of existing case law concludes that Section 230 may not immunize advertising platforms from liability under the FHA for algorithmic targeting of advertisements that indicate a preference for or against a protected class.
- Possible explanations for the observed ad targeting:
- Targeting was fully a product of the advertiser selecting gender segmentation.
- Targeting was fully a product of machine learning—Google alone selects gender.
- Targeting was fully a product of the advertiser selecting keywords.
- Targeting was fully the product of the advertiser being outbid for women.
- Given the limited scope of Title VII, the authors conclude that Google is unlikely to face liability on the facts presented by Datta et al. Thus, the advertising prohibition of Title VII, like the prohibitions on discriminatory employment practices, is ill-equipped to advance the aims of equal treatment in a world where algorithms play an increasing role in decision making.
Paper 3: Privacy for All: Ensuring Fair and Equitable Privacy Protections
In this position paper, the authors argue for applying recent research on ensuring that sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so too may privacy regimes disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections.
Key Takeaways:
- Research questions posed:
- Are technical or non-technical privacy protection schemes fair?
- When and how do privacy protection technologies or policies improve or impede the fairness of systems they affect?
- When and how do fairness-enhancing technologies or policies enhance or reduce the privacy protections of the people involved?
- Data linking can lead to deanonymization, and live recommenders can also be attacked to leak information (see the linkage sketch after this list).
- The authors propose a new definition for a fair privacy scheme: a privacy scheme is (group-)fair if the probability of failure and the expected risk are statistically independent of the subject’s membership in a protected class (an illustrative check of this criterion follows the linkage sketch below).
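As a toy illustration of the data-linking risk noted above, the sketch below joins a nominally anonymized record set to a public auxiliary dataset on shared quasi-identifiers. All columns, names, and values are made up for the example.

```python
# Toy illustration of deanonymization by linking: an "anonymized" record set is
# joined to a public auxiliary dataset on quasi-identifiers. All data is made up.
import pandas as pd

# Released without names, so nominally "anonymous".
anonymized = pd.DataFrame({
    "zip": ["10012", "10013"],
    "birth_year": [1985, 1990],
    "gender": ["F", "M"],
    "diagnosis": ["diabetes", "asthma"],
})

# Publicly available data (e.g., a voter roll) that includes names.
public = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "zip": ["10012", "10013"],
    "birth_year": [1985, 1990],
    "gender": ["F", "M"],
})

# Joining on the shared quasi-identifiers re-attaches identities to the records.
linked = anonymized.merge(public, on=["zip", "birth_year", "gender"])
print(linked[["name", "diagnosis"]])
```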
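And here is a minimal sketch of how the proposed (group-)fairness criterion could be checked empirically: estimate the privacy scheme's failure rate separately for each protected group and compare. The groups, records, and failure flags are hypothetical.

```python
# Minimal sketch of the proposed (group-)fairness criterion for a privacy scheme:
# the probability that the protection fails should not depend on membership in a
# protected class. Records, groups, and failure flags below are hypothetical.
from collections import defaultdict

# Each record: (protected_group, protection_failed?) from a simulated attack.
observations = [
    ("group_a", False), ("group_a", False), ("group_a", True),  ("group_a", False),
    ("group_b", True),  ("group_b", True),  ("group_b", False), ("group_b", True),
]

totals, failures = defaultdict(int), defaultdict(int)
for group, failed in observations:
    totals[group] += 1
    failures[group] += failed

rates = {g: failures[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.25, 'group_b': 0.75}

# Under the paper's definition, a fair scheme would make these per-group failure
# rates (and the corresponding expected risks) statistically indistinguishable;
# a large gap like this one indicates disparate protection.
```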
If you have missed Session 2, Session 3, Session 4 and Session 5 of the FAT* 2018 Conference, we have got you covered.