Regulating AI: The Urgent Need for Responsible Oversight and Collaboration

Baker Nanduru
Published in Product Coalition
3 min read · May 21, 2023


Last week, Sam Altman, co-founder and CEO of OpenAI, made a compelling call to the US Senate to regulate AI, highlighting its unique aspects. His testimony stood out because he proactively raised concerns about the potential dark side of the technology he helped create. Altman emphasized the need for regulations covering licensing and testing requirements for AI models that surpass a certain capability threshold. He also expressed a desire to collaborate with the government on stringent regulations along the lines of the EU AI Act. This timely appeal underscores the stakes for humanity.

The United States and China currently lead the AI research market by a wide margin. Between 2016 and 2021, the US was granted an impressive 58,000 AI-related patents. Venture capitalists have invested billions of dollars in the field over the past two decades, and pioneering companies such as Meta, Google, IBM, Amazon, and Apple have played a crucial role. China is also a formidable force in AI research, with exceptional contributions from institutions like Tsinghua University and major corporations including Baidu, Alibaba, Qihoo 360, and ByteDance. Notably, China has already made significant strides in AI regulation.

Regulating AI in the United States presents unique challenges for several reasons. AI is a strategic technology that the US aims to leverage for geopolitical advantage, and the desire to reap the benefits of further advances before imposing rules motivates the US to tread cautiously. The rapidly changing landscape of AI makes it difficult to regulate something that is not yet fully understood. Market leaders, driven by profit maximization, have been the primary catalysts of AI innovation and may resist regulation. Moreover, many consumers remain excited about AI's benefits while largely unaware of its potential dark side, including job displacement, privacy erosion, and security risks. With little public awareness, legislators feel little pressure to act quickly.

The process of developing a major regulation typically spans four to five years or more. For instance, the EU's General Data Protection Regulation (GDPR) was proposed in 2012, adopted in 2016, and became enforceable in 2018. The US Health Insurance Portability and Accountability Act (HIPAA) was enacted in 1996 after several years of drafting. Similarly, the EU AI Act, first proposed in 2021, has gone through multiple stages of drafting, amendment, and finalization, with the regulation expected to take effect around 2025.

US leadership in artificial intelligence depends on creating an environment that fosters innovation while safeguarding the nation's and humanity's long-term interests. The alarming risks associated with AI demand immediate action rather than inaction. An iterative approach to drafting regulations, borrowing best practices from the EU, and actively seeking input from responsible AI creators like OpenAI can accelerate the regulatory process. It is crucial to act swiftly so that the US does not lag behind, allowing AI to fall into the wrong hands, including greedy corporations, and spiral out of control, endangering humanity.


Transforming lives through technology. Check out my product leadership blogs on Medium and my video series at youtube.com/@bakernanduru