On Wednesday, Federal Trade Commission (FTC) Chair Lina Khan pledged to use existing laws to regulate AI in a New York Times op-ed, "We Must Regulate A.I. Here's How." In the piece, she warns of AI risks such as market dominance by large tech firms, collusion, and the potential for increased fraud and privacy violations.
In the op-ed, Khan cites the rise of the "Web 2.0" era in the mid-2000s as a cautionary tale for AI's development, saying that the growth of tech companies led to invasive surveillance and loss of privacy. Khan argues that public officials must now ensure history doesn't repeat itself with AI, but without unduly restricting innovation.
"As these technologies evolve," she wrote, "we are committed to doing our part to uphold America's longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation's economic success, without tolerating business models or practices involving the mass exploitation of their users."
Khan's op-ed comes as rising hype and anxiety about generative AI like ChatGPT has begun to dominate the tech world. Increasing use of the nebulous term "AI" in commerce has led the FTC to post clarifying statements on its website about how it plans to handle these new technologies (and potentially misleading claims about them).
In line with those earlier statements, Khan made a point of noting that AI is nothing special in the eyes of the law. "Although these tools are novel, they are not exempt from existing rules," she wrote, "and the FTC will vigorously enforce the laws we are charged with administering, even in this new market."
Moreover, Khan's plans for AI go beyond popular generative AI chatbots, extending to other forms of automation and algorithmic decision-making. She mentions at least four key areas of concern:
- Ensuring fair competition: Preventing large tech firms from exploiting their market dominance and using collusion to stifle innovation and smaller rivals in the AI landscape.
- Strengthening consumer protection: Safeguarding consumers from deceptive and fraudulent practices enabled by AI, such as phishing scams, deepfake videos, and voice cloning.
- Promoting data privacy: Monitoring AI systems to ensure they adhere to data protection laws and prevent exploitative data collection or use, protecting consumers' personal information.
- Combating discriminatory practices: Ensuring AI systems do not perpetuate or amplify bias and discrimination, which could lead to unfair treatment in areas like employment, housing, or access to essential services.
Some of these elements were previously laid out in the Biden administration's "AI Bill of Rights" guidelines published in October. Those guidelines do not explicitly carry the force of law, but the FTC has the latitude to interpret existing laws as applying to AI. "The FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector," Khan said.
Looking ahead, Khan asks whether the US can continue to foster world-leading technology without accepting "race-to-the-bottom business models" and "monopolistic control" that lock out higher-quality products. Her answer? "Yes, if we make the right policy choices."