AI Now Institute Executive Director Amba Kak discussed how the existing notice-and-consent mechanisms deployed by Big Tech companies online are insufficient to protect U.S. consumers’ personal information, both today and as increasingly sophisticated AI models are deployed in the future.
“We’re seeing new privacy threats emerge from AI systems; we talked about the future threats and how we don’t know where AI is going to go, but we absolutely do know what harms they’re already causing,” Kak said. “Unless we have rules of the road, we’re going to end up in a situation where this kind of state of play against consumers is entrenched.”
Kak added that the problem is three-fold: privacy risks, competition risks brought by “unchecked commercial surveillance,” and national security risks, since the sheer volume of unnecessary personal data collected by companies creates a “honeypot” for cybercriminals. She said the personal data protections offered in the ADPPA, such as data minimization requirements, together with existing regulatory powers, would serve as adequate baseline protections against AI-related harms without necessarily needing to pass stand-alone AI legislation.