Welcome to the third edition of the Law, Policy, and AI Briefing! Briefings will go out intermittently, because I also need to do research. This one is very, very late because… well… research. So some of this is likely a bit of old news to some of you.

What is this? The goal of this letter is to round up some interesting bits of information and events somewhere at the intersection of Law, Policy, and AI. Sometimes I will weigh in with Thoughts or more in-depth summaries. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice.


tldr; “It would prohibit most covered entities from using covered data in a way that discriminates on the basis of protected characteristics (such as race, gender, or sexual orientation). It would also require large data holders to conduct algorithm impact assessments. These assessments would need to describe the entity’s steps to mitigate potential harms resulting from its algorithms, among other requirements. Large data holders would be required to submit these assessments to the FTC and make them available to Congress on request.”

Thoughts: There were some concerns that there might be pre-emption of California state law, but that is being worked out. Here is a breakdown of each component and whether it pre-empts state law. And here’s another breakdown from Brookings. Notably, Senator Cantwell argues that the bill “does not adequately protect women’s reproductive information because constraints on private lawsuits will make it harder for women to sue for violations.”

tldr; The Act seeks to ensure that “high-impact AI systems are developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias; establish[es] an AI and Data Commissioner to support the Minister of Innovation, Science and Industry in fulfilling ministerial responsibilities under the Act, including by monitoring company compliance, ordering third-party audits, and sharing information with other regulators and enforcers as appropriate; and outlin[es] clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development or where the reckless deployment of AI poses serious harm and where there is fraudulent intent to cause substantial economic loss through its deployment.”

Thoughts: I largely agree: there are a lot of problems with using AI for these purposes, and the recommendations seem reasonable. The aim of the report seems to be to throw cold water on calls for legislation requiring the use of AI in these cases, which would be problematic to say the least.

tldr; Under the deal, Meta must stop allowing advertisers to use the “Lookalike Audience” tool, which can enable discrimination based on characteristics protected under the Fair Housing Act. Meta must develop a new system by December 2022 that addresses disparities in housing ads. A third-party reviewer will investigate and verify the new system to make sure it abides by the settlement terms. Meta must also pay the United States a civil penalty of $115,054, the maximum penalty available under the Fair Housing Act.

Thoughts: There has been some criticism of this settlement. I do think it is a good thing if there is real monitoring, and it will test algorithmic fairness at scale. However, the maximum penalty under the FHA seems pretty low to have a significant enforcement effect.

Shameless self-promotion alert: We wrote about the challenges of enforcement prioritization, and discussed food standards agencies, in our recent work Beyond Ads: Sequential Decision-Making Algorithms in Law and Public Policy. Health safety rating systems can encode biases that lead to feedback loops, which may be worth exploring more deeply in fundamental ML theory work.

Thoughts: I’m keeping an eye on how the antitrust+algorithms interaction plays out here.

Thoughts: If you’re creating a pricing algorithm, it might be worth checking with some attorneys whether you’re increasing your liability…

Thoughts: Though I’m not surprised this was dismissed, I’m keeping an eye on this space to see how companies selling AI respond to external audits.


Thoughts: Feel free to hire me as faculty!!