

Reflections on AI governance from AI for Good Summit: Beyond the hype, towards a Just AI future

Published 7 hours ago · 2 minute read

Drawing on deliberations at the recent AI for Good Summit in Geneva, Switzerland, Executive Director Pria Chetty reflects on unresolved tensions in AI governance and offers recommendations for forging a path toward a Just AI future.

As the glittering halls of the Palexpo buzz with the promises of agentic AI and innovations reshaping our physical world, a crucial question hangs in the air, one that resonates deeply from the vibrant, yet vulnerable, digital landscape of Africa.

The optimism surrounding AI’s transformative potential is undeniable, yet beneath the surface lies a growing unease. We’re witnessing a proliferation of AI innovations at the company level with no clear accountability beyond voluntary trust and safety pledges. This is a precarious foundation. 

As experts at the Summit cautioned: ‘You can’t truly test for bugs you haven’t seen before; our current safety verification methods mitigate for identifiable risk.’ This leaves us in a disquieting state: complex AI systems seemingly wait for risk to materialise, rather than being guaranteed safe from inception.

For safe and Just AI, we must test for specific and concrete attributes of safety and trustworthiness. Without standardised attributes, safety verification at the AI innovation level remains inconsistent and wholly unreliable. Unless the library of attributes evolves adaptively, systems can pass stringent technical tests in controlled environments only to fail spectacularly in the real world when confronted with unpredictable user interactions.

Preparedness reviews and responsible scaling policies from leading AI developers like OpenAI are a start, but they remain voluntary, reflecting internal characterisations of threats rather than standardised industry-wide protocols. 

The challenge is magnified at the geopolitical level, where diverse priorities mean that concepts of AI risk or AI issues differ across continents. Myriad governance instruments amplify conceptual tensions and frustrate the implementation of governance processes to decisively address AI safety.

These unresolved tensions in AI governance pose fundamental questions for AI futures. The prospect, in particular, of superintelligence raises existential questions about maintaining human control and understanding how AI might evade it. 

Origin: Research ICT Africa
