Gary Gensler Advocates for Increased Adoption of AI in Regulatory Operations

In a compelling speech delivered at the National Press Club on July 17, Gary Gensler, the Chair of the United States Securities and Exchange Commission (SEC), emphasized the potential benefits of incorporating artificial intelligence (AI) into the agency’s operations. Gensler expressed his belief that leveraging AI could significantly assist the SEC in its crucial role as a securities watchdog.

During his address, Gensler broke his silence on the recent Ripple court ruling, but he also focused on the prospect of employing AI to revolutionize the regulator’s approach to overseeing the financial markets. The SEC has been particularly active in pursuing enforcement actions against cryptocurrency firms, initiating cases against no fewer than 54 such companies between 2018 and the first half of 2023. That regulatory scrutiny escalated following the November collapse of FTX, which led to a marked surge in enforcement actions.

Although Gary Gensler refrained from providing specific details on how AI would be deployed within the agency, he spoke highly of the technology’s transformative potential across various sectors. Citing its strength in pattern recognition and its scalability, the SEC Chair highlighted the efficiency gains that AI could bring to the economy.

He further emphasized his profound belief in the groundbreaking nature of AI, likening its significance to that of the internet and the mass production of the automobile. This bold proclamation underscores AI’s possible role in shaping the future of regulatory operations within the SEC.

Gary Gensler Raises Concerns Over AI Bias and Privacy Infringement

While reiterating his positive sentiment toward AI technology, Gary Gensler also shed light on the critical issues of bias, privacy infringement, and conflicts of interest that plague many AI systems today.

Highlighting the potential pitfalls of predictive AI models, Gensler emphasized that these systems often reflect historical biases, leading to decreased accuracy and, in some instances, producing entirely false predictions. Such biases can have far-reaching consequences in various industries, impacting decision-making processes and exacerbating societal inequalities.

Moreover, Gensler underscored that AI systems frequently run afoul of privacy rights, raising concerns over the unauthorized use of personal data and the potential for data breaches. The reckless handling of sensitive information by AI models has become a pressing concern, leading regulators to call for enhanced privacy protections and transparency in AI development and deployment.