By Masha Borak
Governments, civil society and businesses are all attempting to steer the development of artificial intelligence. The Ada Lovelace Institute is calling for revisions to current UK AI laws and has specific recommendations for how to make them; Singapore may serve as ASEAN's regulatory role model; and U.S. tech giants have pledged to support the White House in mitigating AI risks.
UK research group says country has gaps in regulating AI
Research and advocacy organization the Ada Lovelace Institute says that the UK's AI regulatory network is facing "significant gaps." In a report published last week, it issued 18 recommendations that it believes could jump-start the island nation into becoming a global AI governance pioneer.
One of the recommendations is a call to review the UK General Data Protection Regulation (GDPR) and the Equality Act 2010 to include new rights and protections for people affected by AI. Others include establishing a dedicated AI ombudsman, running AI deployment pilots, and introducing mandatory reporting requirements for developers of foundation models. The Institute also suggests developing a biometrics governance system.
The UK does not have a holistic body of law governing AI in the same way that the European Union is trying to achieve with its upcoming AI Act. The country should work on establishing both "horizontal" frameworks, covering human rights, equality and data protection, and "vertical" or domain-specific regulation in areas such as medical devices, according to the Institute.
“The UK has an opportunity to position itself as a leader in global AI governance, pioneering a context-based, institutionally focused model for regulating AI that could serve as a template for other global jurisdictions,” the report concludes.
Southeast Asia needs an AI governance framework and Singapore may have the answer
Southeast Asian leaders should consider adopting Singapore's Model AI Governance Framework at a regional level, ensuring that member countries have a common legal basis for governing the use of AI, similar to the EU's AI Act. This would allow the region to strengthen both competitiveness and digital rights, a new opinion piece published by East Asia Forum argues.
Singapore’s Model AI Governance Framework, launched in 2020 as part of its National AI Strategy, is one of several AI-focused initiatives in the region. Indonesia, Thailand, Malaysia and Vietnam have also published national strategies and roadmaps for developing the technology, while the 10-member Association of Southeast Asian Nations (ASEAN) is currently drawing up the ASEAN Guide on AI Governance and Ethics.
The main advantage of Singapore's approach is its focus on AI risks. However, the document still lacks detail on risk categories and levels, including for applications such as facial recognition, according to Albert J. Rapha, a postgraduate student in Public Sector Innovation and E-governance at Katholieke Universiteit Leuven. In January, Singapore introduced an AI governance testing framework and tools through its AI Verify initiative.
A survey conducted by consulting firm Kearney in 2020 showed that although the vast majority (80 percent) of the region considers AI adoption to be at a nascent stage, Southeast Asia could see a 10 to 18 percent GDP uplift from the technology by 2030, equivalent to nearly $1 trillion.
US tech companies pledge to work with White House on AI risks
Leading U.S. tech companies, including Amazon, Google, Meta, and Microsoft, have committed to collaborating with the Biden administration to address potential risks associated with AI. Their commitments include security testing of AI systems before release, sharing risk information with various organizations, watermarking AI-generated content, and addressing harmful bias and public trust issues.
OpenAI, the developer of ChatGPT, as well as its competitor Anthropic, which was founded by former OpenAI staff, and Inflection, the startup behind the chatbot Pi and led by DeepMind co-founder Mustafa Suleyman, have also made voluntary commitments.
In his remarks to reporters before meeting with AI company leaders, President Joe Biden commended these efforts, emphasizing the importance of safety, security, and trust in AI development. He stated, “The group here will be critical in shepherding that innovation with responsibility and safety-by-design to earn the trust of Americans. We must be clear-eyed and vigilant about the threats from emerging technologies that can pose — don’t have to — but can pose to our democracy and our values.”
Despite these commitments, some observers argue that voluntary pledges from just seven AI companies are not enough. They believe other AI developers will only build and deploy AI tools under proper oversight if the government puts regulations in place.
The White House has promised to work with allies to establish a global framework for governing AI. It is developing an executive order and bipartisan legislation to address AI-related issues such as algorithmic bias and transparency. Senator Todd Young, R-Ind., has stated that the Senate is working to adapt laws and regulations to address the impact of AI on society. New AI legislation is anticipated to be introduced within the next six months.
Source: Biometric Update
Masha Borak is a technology journalist. Her work has appeared in Wired, Business Insider, Rest of World, and other media outlets. Previously she reported for the South China Morning Post in Hong Kong. Reach out to her on LinkedIn.
Image: Pixabay