On the RegulatingAI Podcast, Sanjay Puri speaks with Frederic Werner about AI’s rapid evolution, global equity, skills gaps, and responsible innovation.
WASHINGTON, DC, UNITED STATES, February 23, 2026 /EINPresswire.com/ — The pace at which artificial intelligence is developing is simply breathtaking. What started as traditional machine learning has quickly branched out into generative AI, autonomous agents, robotics, and even embodied and space-based AI systems. In a very engaging conversation on the RegulatingAI Podcast, the host, Sanjay Puri, interviewed Frederic Werner, the Chief of Strategic Engagement at the International Telecommunication Union (ITU), to discuss what this rapid development means for the world, particularly the Global South.
Werner’s message was clear and striking: “AI is too important to leave to the experts.”
A Moving Target That Demands Collaboration
AI, as Werner said, is a moving target. In a few short years, the emphasis has moved from predictive machine-learning models to generative AI, and now to AI agents that can make autonomous decisions. Throw in robotics, brain-computer interfaces, and new uses across industries, and it’s a complex picture.
This is not a path that can be guided by any one government, corporation, or research community. ITU, through its AI for Good platform, has brought together governments, corporations, research institutions, UN agencies, and youth leaders. The aim is not innovation for its own sake, but responsible innovation.
Because when AI develops at this pace, fragmentation is a danger. Collaboration is a necessity.
The Global South Must Shape AI’s Future
One of the key takeaways from the conversation between Frederic Werner and Sanjay Puri on the RegulatingAI Podcast is the importance of the Global South. The AI debate is all too often reduced to a U.S.-China-Europe equation. Werner stressed, however, that any effective approach to AI regulation and development must involve emerging economies.
He cited the African mobile payments revolution as an example of how the Global South can leapfrog existing infrastructure with its own innovation. AI could follow suit if countries are enabled to be creators, not just consumers.
However, access is not the same as empowerment. Even if millions of people have AI-enabled devices in their pockets, the question is: Are they using them to build businesses, address community needs, and create value?
Without skills, standards, and support, the promise of sovereign AI cannot be fully realized.
AI, Jobs, and the “Two Brains” Approach
In terms of AI and the workforce, Werner promotes what he calls a two-brain approach.
The positive brain looks at opportunity: AI opens up new industries and allows people to create things that have never existed before. Rather than using AI to do more of the same thing faster, he encourages people to use it to create new possibilities.
The practical brain, on the other hand, looks at disruption. Early labor-market data indicates that new graduates face a tighter job market, and reports suggest that women’s roles in certain industries may be adversely affected, especially in developing countries.
The Most Urgent Priority: Closing the AI Skills Gap
If there is one thing that Werner would like to tell world leaders, it is this: “Close the AI skills gap.”
With the help of initiatives such as the AI Skills Coalition, there are now hundreds of courses being made available in various languages to ensure that AI knowledge is accessible to everyone. However, this initiative should not be limited to educational institutions alone. AI literacy needs to be a lifelong process—from grade school to government institutions.
Why? Because without AI literacy, governance becomes brittle and innovation becomes unequal.
Open Source, Standards, and Shared Responsibility
Finally, the discussion turned to open source AI. Werner believes it is critical to democratization and the goal of sovereign AI—but not without danger. Openness and accessibility must be balanced with security and proper use.
Ultimately, the takeaway message from Frederic Werner to Sanjay Puri on the RegulatingAI Podcast was both optimistic and realistic: AI for good is possible, but not inevitable. It will depend on who shows up, who is ready, and whether we choose to cooperate rather than compete.
The future of AI is not just a technological issue. It is a collective one.
Upasana Das
Knowledge Networks
email us here
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.