
Developing Unbiased AI Is Not Just An Aspiration, It Is A Necessity

As generative AI gains a foothold in our technology, the first wave of litigation has rippled through the industry, mostly focused on copyright issues. We expect plaintiffs to file more lawsuits, if only because litigation is one way our society seeks change. One issue we foresee is that companies developing or using AI that produces biased outcomes against populations based on protected characteristics (e.g., race, gender, or religion) will likely be pulled into court.

Those lawsuits will presumably find a foothold in laws that prohibit unfair or deceptive acts or practices in commerce. The Federal Trade Commission, the most significant United States regulator of whether an act or practice is unfair or deceptive, cautioned in 2016 that companies building technologies on big data sets are expected to consider the following issues (a sketch of how such checks might look in code follows the list):

  1. How representative is the data set?
  2. Does the data model account for biases?
  3. How accurate are your predictions based on big data?
  4. Does your reliance on big data raise ethical or fairness considerations?
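
These questions lend themselves to concrete checks before a model is built. The sketch below is illustrative only, with hypothetical group labels and made-up reference shares; it compares the demographic composition of a training set against a baseline population and flags material gaps:

```python
from collections import Counter

# Hypothetical training records; real ones would come from the
# company's own data pipeline.
training_records = [
    {"group": "A"}, {"group": "A"}, {"group": "A"},
    {"group": "B"}, {"group": "C"},
]

# Assumed reference shares per group (e.g., census or market data).
reference_shares = {"A": 0.40, "B": 0.35, "C": 0.25}

def representation_gaps(records, reference, tolerance=0.05):
    """Flag groups whose share of the data set deviates from the
    reference population by more than `tolerance`."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {
        group: (counts.get(group, 0) / total, expected)
        for group, expected in reference.items()
        if abs(counts.get(group, 0) / total - expected) > tolerance
    }

for group, (observed, expected) in representation_gaps(
        training_records, reference_shares).items():
    print(f"{group}: {observed:.0%} of data vs. {expected:.0%} of population")
```

A check like this answers the FTC's first question directly; the remaining questions call for analogous audits of model outputs rather than inputs.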

In a subsequent 2022 report to Congress, the FTC similarly recommended that relevant actors consider, among other things, how well a tool works; its real-world impacts; who has authority to answer those questions; and who at the company is accountable for unfair, biased, or discriminatory outcomes. The report also noted that some AI tools can exacerbate bias when inputs are crafted in a language other than English, because English is the prevalent language used to train AI models. Whether your company is building a model or vetting an AI vendor, you should know the answers to these questions so you can determine whether the technology is appropriate for the use case.
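
On the non-English point specifically, one way to surface that kind of gap is to break evaluation metrics out by input language. A minimal sketch, assuming a hypothetical evaluation log in which each record carries the input language and a correctness judgment:

```python
from collections import defaultdict

# Hypothetical evaluation log: input language plus whether the
# model's output was judged correct for that input.
eval_log = [
    {"language": "en", "correct": True},
    {"language": "en", "correct": True},
    {"language": "en", "correct": False},
    {"language": "es", "correct": False},
    {"language": "es", "correct": True},
]

def accuracy_by_language(log):
    """Compute accuracy separately for each input language so that
    gaps against English-language performance are visible."""
    totals, hits = defaultdict(int), defaultdict(int)
    for record in log:
        totals[record["language"]] += 1
        hits[record["language"]] += record["correct"]
    return {lang: hits[lang] / totals[lang] for lang in totals}

print(accuracy_by_language(eval_log))  # e.g. {'en': 0.67, 'es': 0.5}
```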

One example of the consequences of a poor fit between a technology and its use comes from the FTC's recent enforcement action against Rite Aid. There, Rite Aid used facial recognition technology to generate a “match alert” whenever the technology matched a live in-store image against a database of persons of interest believed to pose a security risk to its stores. The FTC found that, among other things, the technology produced thousands of false-positive matches, which disproportionately harmed customers based on their race, gender, and other demographic characteristics. Rite Aid ultimately agreed to an order and injunction that will limit its future business operations, including the kind of security it implements in its stores.
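
The disparity the FTC described can be made measurable. As an illustrative sketch (with hypothetical alert logs and group labels, not Rite Aid's actual data), one can compute the share of match alerts per demographic group that were never confirmed:

```python
from collections import defaultdict

# Hypothetical match-alert log: each alert records the customer's
# demographic group and whether the match was later confirmed.
alerts = [
    {"group": "X", "confirmed": False},
    {"group": "X", "confirmed": False},
    {"group": "X", "confirmed": True},
    {"group": "Y", "confirmed": False},
    {"group": "Y", "confirmed": True},
    {"group": "Y", "confirmed": True},
]

def false_positive_share(log):
    """Per-group share of alerts that turned out to be false positives;
    a large spread across groups signals disparate impact."""
    totals, false_pos = defaultdict(int), defaultdict(int)
    for alert in log:
        totals[alert["group"]] += 1
        false_pos[alert["group"]] += not alert["confirmed"]
    return {g: false_pos[g] / totals[g] for g in totals}

for group, rate in false_positive_share(alerts).items():
    print(f"group {group}: {rate:.0%} of alerts were false positives")
```

Running an audit like this on an ongoing basis, rather than after an enforcement action, is precisely the kind of diligence the FTC's questions contemplate.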

The FTC's reports and the Rite Aid enforcement telegraph the risk of developing or using an AI model that biases or unfairly disadvantages a population. At the same time, the FTC has given companies a roadmap for mitigating that risk.

Tags

litigation, ai & machine learning