STRATEGY SPOTLIGHT

Understanding the ethics of artificial intelligence

With the growing use of AI in business, ethical considerations can no longer be ignored

Francis Kan
Published Wed, Oct 4, 2023 · 05:00 AM

THE digital revolution has transformed many facets of human life, none more strikingly than the advance of artificial intelligence (AI).

While AI promises to deliver innovations across various sectors, concerns over the technology’s ethical use must be addressed. In an increasingly interconnected world, the ethical dimensions of AI span a range of issues, from compliance to privacy, and from strategic alignment to social responsibility.

One pressing ethical dilemma is the potential for AI systems to generate false, biased or discriminatory results. This can happen when the systems are “fed” datasets containing inaccurate, unverified or false information, or when the prompts given to them are flawed or inherently biased.

“This is especially ethically problematic when AI systems are used to make decisions directly affecting individuals, such as AI-driven recruitment or hiring systems, or when the use of the systems results in the misleading of customers,” says Jeremy Tan, managing partner at Bird & Bird ATMD.
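Such bias in decision-making systems can be tested for mechanically. As a rough illustration, not drawn from the article, the sketch below applies the “four-fifths rule” used in employment-discrimination analysis to a hypothetical hiring model’s shortlisting decisions; the groups, data and function names are all invented.

```python
# A minimal sketch, not from the article: auditing a hypothetical AI-driven
# hiring screen for disparate impact with the "four-fifths rule".
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, shortlisted) pairs -> rate per group."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        picked[group] += int(shortlisted)
    return {g: picked[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag the screen if any group's selection rate falls below
    `threshold` times the best-treated group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Invented example data: (applicant group, shortlisted?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

print(selection_rates(decisions))   # A: ~0.67, B: ~0.33
print(passes_four_fifths(decisions))  # False -> the screen needs review
```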

Other concerns include the unauthorised collection and use of sensitive information by AI systems, as well as issues surrounding intellectual property and ownership when it comes to content generated by AI systems.

Building trust and governance

To address these concerns, organisations must adopt a comprehensive approach to ethical management, aligning AI systems with industry best practices, compliance requirements, and societal expectations.

Ritin Mathur, a consulting partner at Ernst & Young Advisory, points out that the absence of robust governance and ethical guidelines can cause AI technologies to malfunction or be corrupted, deliberately or otherwise. “These failures can have profound ramifications for security, decision-making and credibility, and may impact reputation, profitability and regulatory scrutiny,” he says.

To address such issues, PwC helps its clients build what the firm calls “trust by design”, an approach that integrates risk considerations into AI development processes from the outset. “In the design and build phase, key risks are considered upfront, and controls commensurate with the risk are designed and implemented from the outset,” explains Greg Unsworth, digital business and risk services leader at PwC Singapore.

He adds that governance and risk functions need to adopt an “AI in mind” risk-management culture that prioritises continuous scanning for potential ethical challenges, tracking them to ensure compliance with ethical AI considerations, and implementing additional or enhanced controls where necessary.

Some of these guardrails for AI usage could include automated and manual controls at all stages – design, build, deployment, and operation. While regulations provide a framework for responsible usage, organisational policies should also focus on cybersecurity, privacy, and fairness. Furthermore, the pace of AI evolution demands that these frameworks be continuously updated.
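To make the combination of automated and manual controls concrete, here is a minimal sketch of a pre-deployment gate; it is not the approach of any firm quoted here, and the pipeline, thresholds and field names are assumptions for illustration only.

```python
# A minimal sketch of an automated pre-deployment guardrail; the pipeline,
# thresholds and field names below are assumptions, not any firm's method.
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    name: str
    fairness_ratio: float      # e.g. a four-fifths-rule ratio from testing
    pii_fields: list = field(default_factory=list)  # sensitive fields found
    human_signoff: bool = False  # the manual control: reviewer approval

def blocking_issues(release: ModelRelease) -> list:
    """Return blocking issues; an empty list means the release may proceed."""
    issues = []
    if release.fairness_ratio < 0.8:
        issues.append("fairness ratio below the 0.8 threshold")
    if release.pii_fields:
        issues.append(f"unreviewed sensitive fields: {release.pii_fields}")
    if not release.human_signoff:
        issues.append("missing human sign-off")
    return issues

release = ModelRelease("resume-screener-v2", fairness_ratio=0.85,
                       pii_fields=["date_of_birth"])
for issue in blocking_issues(release):
    print("BLOCKED:", issue)
```

The same checks would run again in operation, since a model that passes at deployment can still drift as its input data changes.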

Human judgement in the age of AI

Even as more organisations adopt AI in their operations, traditional human judgement will remain relevant, experts say. Indeed, there needs to be a healthy dose of cynicism when working with AI in a business setting, says Bird & Bird’s Tan. “This will create the necessary presence of mind to determine when AI findings need to be challenged, and to be able to challenge those findings.”

He adds that decisions regarding the initial deployment of AI, the inputs used when programming an AI system, and whether to accept its outputs at face value, will all require human judgement.
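One common way to operationalise that judgement, sketched below under assumed names and thresholds, is confidence-based routing: the system accepts only high-confidence outputs automatically and sends everything else to a human reviewer rather than taking it at face value.

```python
# A minimal sketch of the human-in-the-loop pattern, assuming a hypothetical
# model that returns a label with a confidence score; names are invented.
def route_prediction(label: str, confidence: float, threshold: float = 0.9):
    """Auto-accept only high-confidence outputs; queue the rest for a
    human reviewer instead of accepting them at face value."""
    if confidence >= threshold:
        return ("auto_accept", label)
    return ("human_review", label)

print(route_prediction("shortlist", 0.97))  # ('auto_accept', 'shortlist')
print(route_prediction("reject", 0.62))     # ('human_review', 'reject')
```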

EY’s Mathur notes that even as powerful generative AI tools such as ChatGPT transform the business landscape, human judgement is critical in ensuring that these tools are sound, safe and relevant, and not affected by the biases of the data used. In other words, as AI becomes increasingly integral to organisational structures, human oversight must adapt but cannot be replaced.

When an error arises from an AI system, the organisation deploying the technology, rather than its developer, must ultimately take responsibility.

“While there may be some recourse to external AI developers or employees, it is still likely that the organisation itself will assume the ultimate responsibility for AI models deployed by them,” says Unsworth, who is also Singapore divisional deputy president of CPA Australia as well as its digital committee chair.

Leading the way

Singapore’s vibrant ecosystem, comprising technology companies, startups, educational institutions and regulatory bodies, positions it as an emerging leader in the ethical AI space. Experts recognise Singapore’s potential to foster innovation, while maintaining a trusted environment for AI development and adoption.

To this end, the government has taken a unique ecosystem approach towards AI governance and ethics, involving consultations with providers and users.

For instance, Singapore’s Model AI Governance Framework was released in 2019 for consultation, adoption and feedback. Since then, the Infocomm Media Development Authority has developed an AI governance testing framework and software toolkit, based on 11 AI ethics principles consistent with global frameworks such as those from the European Union and the Organisation for Economic Co-operation and Development.

Says Unsworth: “Singapore should continue to encourage innovation and experimentation at the same time as developing a trusted environment for AI adoption. I expect we will see a number of leading applications of AI developed in Singapore and deployed to the region and globally in future.”

As AI technologies continue to evolve, ethical considerations must keep pace to ensure responsible and equitable deployment. While the path is fraught with challenges, robust governance frameworks, human oversight, and multi-tiered strategies for accountability and transparency can serve as guideposts for organisations navigating the complex ethical terrain of AI.
