The messy reality of AI safety
Should AI accelerate? Decelerate? The answer is both
THE near-implosion of OpenAI, a world leader in the burgeoning field of artificial intelligence, exposed a conflict within the organisation, and the broader AI community, over how fast the technology should advance and whether slowing it down would make it safer.
As a professor of both AI and AI ethics, I think this framing of the problem omits the critical question of what kind of AI we accelerate or decelerate.
In my 40 years of AI research in natural language processing and computational creativity, I pioneered a series of machine learning advances that let me build the world’s first large-scale online language translator, which quickly spawned the likes of Google Translate and Microsoft’s Bing Translator. You’d be hard-pressed to find any arguments against developing translation AIs. Reducing misunderstanding between cultures is probably one of the most important things humanity can do to survive escalating geopolitical polarisation.