Singapore seeks views on governance framework for generative AI

Michelle Zhu
Published Tue, Jan 16, 2024 · 06:00 PM

THE Singapore government on Tuesday (Jan 16) launched a public consultation on a proposed framework to govern generative artificial intelligence (AI).

The Infocomm Media Development Authority (IMDA) said the consultation on the Model AI Governance Framework for Generative AI was jointly released with the AI Verify Foundation to seek views from the international community. It will close on Mar 15, 2024.

The public consultation follows a discussion paper on generative AI which was jointly published in June 2023 by IMDA, software company Aicadium and the AI Verify Foundation. Discussions and feedback on this have been “instructive”, said the authority.

IMDA said its proposed “comprehensive and systematic framework to build a trusted ecosystem for generative AI” pulled together different aspects of governance concerns discussed internationally, such as accountability and content provenance.

Generative AI refers to models capable of generating text, images or other media. These models learn the patterns and structure of their training data and generate new data with similar characteristics.

In IMDA’s view, existing governance frameworks for these AI models need to be reviewed to foster a broader, trusted ecosystem.


“A careful balance needs to be struck, between protecting users, while driving innovation. There have also been various international discussions pulling in the related and pertinent topics of, for example, accountability, copyright, misinformation, etc. These issues are interconnected and need to be put together in a practical and holistic manner,” said IMDA.

The authority added that “no single intervention will be a silver bullet”.

IMDA’s proposed governance framework sets out nine dimensions to be looked at in totality, to enable and foster such an ecosystem.

These dimensions are: accountability; data; trusted development and deployment; incident reporting; testing and assurance; security; establishing content provenance; safety and alignment research and development; and AI for the public good.

For instance, IMDA said it views accountability as key to incentivising players along the AI development lifecycle to be responsible to end-users.

“In doing so, we recognise that generative AI, like most software development, involves multiple layers in the technology stack, and hence allocation of responsibility may not be immediately clear.”

Potential solutions to address the issue of accountability include shared responsibility frameworks, similar to those used in the cloud industry, which would allocate obligations to stakeholders based on their level of control, as well as liability frameworks such as copyright indemnities.
