Standards are key in the fragmented landscape of AI regulations

Artificial intelligence holds immense promise for solving some of our most pressing and complex problems. Because of its potential power, many believe it should be developed with regulatory oversight and guardrails in place. Yet fragmented approaches to AI controls around the world are pulling in different directions, and one potential consequence is reduced interoperability: the ability of systems to interact seamlessly with one another. We can draw an analogy with the electrical industry, which developed in isolation in each country, creating the world of disparate plugs and voltages we face today.
If AI is indeed the electricity of the 21st century, does it risk a similar fate?
Unlike the electrical industry, the telecommunications industry addressed global standards from the start. The International Telecommunication Union (ITU), established in 1865, aimed to solve the problem of interoperability between national telegraph systems and to foster international cooperation on standards. This collaboration has been pivotal in shaping our globalized communications systems ever since.
As we celebrate World Standards Day on October 14, can we take inspiration from the telecom industry and come together, once again through standards, towards a better-connected, AI-driven future?
Divergent regulatory approaches to AI
Governments around the world have adopted different stances on regulation. These are shaped by competing values, such as human centricity, the drive for speed to market, and concerns for risk mitigation. For example, the EU has adopted a unified, EU-wide regulation, the EU AI Act, while in the US regulation varies from state to state, with Colorado, Texas, California and New York all taking different approaches. Brazil intends to regulate AI through Bill No. 2,338/2023, and China recently announced the Action Plan for Global Governance of Artificial Intelligence.
Companies with global operations face a difficult challenge: reconciling this complex and evolving regulatory landscape and meeting their compliance obligations coherently across potentially conflicting requirements. To reduce the burden on organizations, the global community has already started bringing these disparate values and approaches together into one self-consistent body of frameworks and standards, the missing link for innovative and responsible adoption of AI. The global standardization bodies ISO and IEC (the International Organization for Standardization and the International Electrotechnical Commission) and the global policy forum OECD (the Organisation for Economic Co-operation and Development) already provide effective environments for driving shared values and finding common ground among divergent regional perspectives.
National AI governance strategies strive to create a competitive marketplace by navigating the balance between the push for innovation speed and the foundational commitment to long-term human-centric values.
Standards provide a baseline
Ideally, the countries of the world would agree on a multilateral, consensus-based approach to AI governance, such as the Global Dialogue on Artificial Intelligence Governance, launched on 25 September 2025 under the UN Independent International Scientific Panel on Artificial Intelligence. The drive for global standards in AI is laying the foundation that will make these conversations more productive and help us converge on a common language for AI worldwide.
AI governance standards differ from the technical standards set by industry bodies like 3GPP or the IETF: they provide guidance for trustworthy AI development, focusing on non-application-specific elements of AI such as terminology and management frameworks. ISO/IEC plays an important role in creating global AI standards. ISO standards such as ISO 9001, ISO 14001 and ISO 27001 are, for instance, foundational management standards for many industries and organizations.
Anticipating global regulation, the ISO/IEC JTC1/SC42 AI committee, which first met in 2018 and now has 50 participating and 25 observing members, recognized the need to lay down common terms and frameworks in response to the rapid growth and associated risks of AI. It develops international standards, technical reports and specifications that provide an impartial platform, open to all participants, as well as the structures and processes needed to facilitate consensus. There are five main working groups addressing AI: foundational standards, data, trustworthiness, use cases and applications, and computational approaches and characteristics of AI systems. They also work in close collaboration with other ISO/IEC committees covering adjacent technology areas such as security and sustainability. Since 2018, SC42 has already published 39 standards, with many more under development.
Equally, CEN/CENELEC (the European Committee for Standardization and the European Committee for Electrotechnical Standardization) JTC21 was established in 2021 to help organizations operationalize a selection of the legal requirements of the EU AI Act. The committee currently includes more than 20 participating member countries. The work is organized into five key working groups addressing various provisions of the EU AI Act, from operational to engineering aspects. JTC21 works in close collaboration with ISO/IEC JTC1/SC42 to ensure that European standardization efforts are aligned, wherever possible, with international AI standards, while adapting them to meet specific European regulatory mandates. This dual effort aims to provide both global consistency and local regulatory compliance. The first JTC21 standards for the EU AI Act are expected in the first half of 2026.
Nokia is a valued contributor to both the CEN/CENELEC and ISO/IEC AI committees, particularly in groups focused on environmental impact and on risk and quality management, including critical digital infrastructure, which is especially important for the telecom sector.
Join us in driving responsible and rapid AI innovation
AI will continue to evolve, and regulatory divides are likely to remain. This presents a challenge for global interoperability. Standards developed by bodies like ISO are closing the gaps left by regulatory divergence. They address all aspects of AI governance, such as risk management, quality control, trustworthiness, ethics, data governance, transparency, and bias, providing much-needed guidance for organizations that develop or use AI systems.
Their work, however, is not specific to a particular industry. In the case of telecoms, tailoring these higher-level requirements and guidelines to networks will fall to bodies like 3GPP, which operate on longer timelines to produce the detailed technical specifications that underlie the communication systems we have today.
The timely implementation of industry-standard AI governance practices will pave the way to faster integration of AI into the core of network technologies and save organizations from having to unify disparate requirements on their own. As we look forward to next-generation standards like 6G, we therefore encourage the industry to look beyond telecommunications standards and also contribute to global, multi-stakeholder bodies like ISO and CEN/CENELEC.
Furthermore, these AI standards inform policymakers around the globe, further aiding alignment and fostering a trusted, interoperable AI ecosystem that benefits the entire digital economy.