The VGER Blog

EU vs US Regulations for AI

Written by Patrick O'Leary | Apr 17, 2023 4:54:46 PM
By now everyone has used ChatGPT: https://chat.openai.com/
Undoubtedly it will be abused; bad actors are inevitable. Mass abuse by governments and corporations, however, is something that can be reduced: not eliminated, but at least made accountable.
Governments are starting to react by forming committees to provide guidelines and regulations.
 
There are three perspectives to take on AI regulation:
  • The end user
  • The corporate view
  • The researcher view
Let's examine how the EU and the US are planning to handle them.
 
First, the EU AI Act: https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf
 
The EU approach is interesting: it classifies activities by level of risk, and bans or regulates the use of AI in those areas accordingly.
The four levels of risk are:
  • Unacceptable
  • High
  • Limited (transparency obligations)
  • Minimal
 
e.g.,
 
Unacceptable Risk (Banned)
  • e.g., social scoring by public authorities, and systems using subliminal manipulation techniques
 
High Risk (Permitted, subject to compliance)
  • Recruitment - specifically, screening out candidates based on AI-inferred or assumed traits.
  • Medical devices - specifically, traceable, testable, and predictable rules for intervention devices.
    • Heart defibrillators, respirators, etc.
    • Subject to third-party assessments
  • Law enforcement
  • Biometrics in public areas (with exceptions)
    • e.g., checking ALL faces in a public area against a database of suspects.
      • exceptions for victims of crime - human trafficking, kidnapped children, etc.
      • imminent threat to life, terrorism
      • EU-wide / Interpol arrest warrant
    • Companies like ClearView would run afoul of this very quickly
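The exception list above amounts to a default-deny rule with a few enumerated carve-outs. A minimal sketch of that structure (the function and purpose names are my own illustration, not from the Act):

```python
# Hypothetical encoding of the public-biometrics rule: mass face matching
# is banned by default unless the purpose falls under an enumerated exception.
ALLOWED_PURPOSES = {
    "locate_crime_victim",      # human trafficking, kidnapped children, etc.
    "imminent_threat_to_life",  # includes terrorism
    "eu_arrest_warrant",        # EU-wide / Interpol arrest warrant
}

def biometric_scan_permitted(purpose: str) -> bool:
    """Default-deny: returns True only for the enumerated exceptions."""
    return purpose in ALLOWED_PURPOSES

print(biometric_scan_permitted("ad_targeting"))       # False
print(biometric_scan_permitted("eu_arrest_warrant"))  # True
```

The default-deny shape is the point: a company like ClearView, whose core use case is none of the exceptions, fails the check immediately.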
 
Transparency Obligations (Permitted, with obligations)
  • Automated instructional systems
  • An example given was chatbots providing investment recommendations
 
Minimal or no risk (Permitted)
 
The open question around minimal risk is how "minimal" is evaluated. The EU is establishing working teams to keep a constant measure of this topic, which will probably be a source of business disruption in the future.
 
The EU AI Act goes a long way in combination with the EU Digital Services Act, under which companies reaching more than 10% of EU citizens must provide transparency on recommendation systems such as news, ads, timelines, and posts.
The EU is looking into an application process for AI, certifying the area and level of risk for a go-to-market system and ensuring the appropriate measures are in place.
This also dovetails with the EU approach to data privacy, where users are considered the owners of data as it relates to them, and as such can request that data, request corrections, and even request its deletion. The term "relates" is interpreted very broadly to mean any datum, including UUIDs and referential data.
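To see why the broad reading of "relates" matters in practice, here is a minimal sketch (the store layout and function names are my own, purely illustrative): honoring a deletion request means removing not just the profile, but every record referentially linked to the person through a pseudonymous UUID.

```python
import uuid

# Hypothetical data store: even pseudonymous UUIDs count as personal data
# under the broad reading, because they referentially link back to a person.
users = {}   # user_id -> profile dict
events = []  # list of (event_id, user_id, payload) referential records

def add_user(name):
    user_id = str(uuid.uuid4())
    users[user_id] = {"name": name}
    return user_id

def log_event(user_id, payload):
    events.append((str(uuid.uuid4()), user_id, payload))

def delete_user(user_id):
    """Honor a deletion request: remove the profile AND every
    referential record keyed by the user's UUID."""
    users.pop(user_id, None)
    events[:] = [e for e in events if e[1] != user_id]

alice = add_user("Alice")
log_event(alice, "clicked ad")
delete_user(alice)
# Both the profile and the UUID-linked event rows are now gone.
```

Deleting only the profile row would not satisfy the requirement; the cascade through referential data is the expanded part.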
 
The US has a different approach: a blueprint for an AI Bill of Rights
https://www.whitehouse.gov/ostp/ai-bill-of-rights - the language of this document is incredibly vague and in its infancy.
 
These are the headings of the AI Bill of Rights:
  • Safe and Effective Systems
    • A recommendation to providers to do testing before deployment
  • Algorithmic Discrimination Protections
    • AI must be designed in an equitable way
      • Actively avoid discrimination based on race, sexual orientation, medical status, disability, genetics, etc.
    • "impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections. "
      • A very vague and open ended recommendation
    • Income and racial bias currently occurs in the insurance, lending, and education sectors.
      • My confidence in this becoming law without carve-outs is low
  • Data Privacy
    • You should be protected with
      • "Data collection that conforms to reasonable expectations"
  • Notice and Explanation
    • "You should know that an automated system is being used" and understand "how it impacts you"
      • Arguably this places the onus on the end user, similar to camouflaged ads
      • The intention is that providers ensure users know, but again it's minimal and vague.
  • Human Alternatives, Consideration, and Fallback
    • "Where appropriate" have access to a person who can quickly consider and remedy problems you encounter
    • That's not going to be a hard requirement, just a recommendation, similar to pressing 0 for an operator
    • However, the consideration step behind the human is probably the same as the AI implementation.
 
Overall, the AI Bill of Rights is a long way from being a useful framework that can be codified; in this format it may take years for something reasonable to come together.
 
Without a strong federal data privacy act, the AI Bill of Rights will have little to no teeth when it comes to implementation.