Singapore’s AI Verify is now available to the global open-source community to support the development of responsible artificial intelligence (AI) through the new AI Verify Foundation.
AI Verify is an AI governance testing framework and software toolkit first developed by the Infocomm Media Development Authority (IMDA) in consultation with companies of different sizes across a range of sectors. It is consistent with internationally recognised AI governance principles, including those from the European Union and the Organisation for Economic Co-operation and Development (OECD).
Launched as a minimum viable product for an international pilot last year, it attracted the interest of over 50 local and multinational companies, including IBM, Dell, Hitachi and UBS.
Through the new AI Verify Foundation, the global open-source community can access the AI Verify toolkit to generate testing reports that cover different governance principles for an AI system. Organisations can then be more transparent about their AI by sharing these reports with their stakeholders.
The seven pioneering premier members that will set the strategic direction and guide the development of the AI Verify roadmap are IMDA, Aicadium (Temasek's AI Centre of Excellence), IBM, Microsoft, Google, Red Hat and Salesforce.
The foundation also includes more than 60 general members, such as Adobe, DBS, Huawei, Meta, SenseTime, and Singapore Airlines.
“[Singapore] believes in using AI in a responsible way and deploying it for good, but we will also strive to shield society from the most serious AI risks. To make progress on all of those, the government cannot do it alone. The private sector and the research ecosystem have rich expertise… [so] they can and must be encouraged to participate meaningfully to advance AI for the public good,” says Josephine Teo, Singapore’s Minister for Communications and Information, during the launch of the AI Verify Foundation at the ATxAI conference (part of the Asia Tech x Singapore event) this morning.
She adds that the launch of the AI Verify Foundation will support Singapore’s goal of becoming “a vibrant node within a global network where efforts are directed towards trusted AI system and responsible use and where AI for the public good will truly come alive”.
Ashley Fernandez, Huawei's chief data & AI officer, comments: “The scale and pace of AI Innovation in this new modern technology era requires, at the very core, foundational AI governance frameworks to be made mainstream in ensuring the appropriate guardrails are considered while implementing responsible AI algorithmic systems into applications. AI Verify Foundation serves this core mission and, as we progress as an advancing tech society, substantiates the need to advocate for the deployment of greater trustworthy AI capabilities.”
Sparking discussions on responsible generative AI among policymakers
IMDA and Aicadium have also published a discussion paper to further spur discussions among policymakers on building an ecosystem for the trusted and responsible adoption of generative AI without limiting its innovative capabilities.
The paper identifies six key risks that have emerged from generative AI, as well as a systems approach to enable a trusted and vibrant ecosystem. The approach provides an initial framework for policymakers to strengthen the foundation of AI governance laid by existing frameworks, address the unique characteristics of generative AI, respond to immediate concerns and invest in longer-term governance outcomes.
The specific ideas – such as a shared responsibility framework and disclosure standards – also seek to foster global interoperability, regardless of whether they are adopted as hard or soft law.