The Kremlin has quietly formalized a new presidential commission to shape Russia’s approach to artificial intelligence. The decree creates a central coordinating body meant to align federal ministries, regional authorities, the central bank and major private firms on everything from standards and procurement to emergency responses.
What this commission looks like matters. Its membership mixes defense and security chiefs with economic and digital ministers, plus senior figures from big banks and tech companies. That blend tells you two things at once: Moscow wants to accelerate AI development as an industrial priority, and it wants tight oversight where strategic or security-sensitive applications are concerned. Which of those impulses dominates will depend on who holds sway inside the room.
Who’s running it
The order focuses on institutional roles rather than naming every individual, but several key appointments are public. The commission will be co-chaired by Deputy Prime Minister Dmitry Grigorenko and the president's deputy chief of staff, Maxim Oreshkin, positions that combine executive muscle with direct presidential influence. Sitting on the panel are ministers and security leaders such as Andrei Belousov (defense), Alexander Bortnikov (FSB), Maxim Reshetnikov (economic development), Anton Siluanov (finance) and Maksut Shadayev (digital development). Industry voices include Sberbank CEO German Gref and former Yandex executive Tigran Khudaverdyan. Also notable is Alexey Dyumin, a presidential aide and former bodyguard, whose inclusion signals a clear presence of trusted security personnel.
Read this lineup one way and you see a pragmatic effort to link state and business in building domestic AI capacity. Read it another way and you see a mechanism likely to prioritize control and risk containment, especially where national security is invoked. The commission's tilt will become clearer once its full membership and voting rules are published.
Mandate and practical powers
The commission is not a mere advisory panel. Its tasks are sweeping: coordinate agencies involved in AI development; map and assess risks posed by advanced systems; and design measures to mitigate or “neutralize” those risks. That could mean setting technical standards, shaping procurement rules for critical sectors, and guiding emergency or regulatory responses when AI systems malfunction or are misused.
Because the roster includes senior banking and tech figures, the commission will almost certainly address data governance, market structure and industrial strategy—areas that touch on commercial confidentiality, cross‑border data flows and systemic financial risk. Those are sensitive policy choices: stricter data rules and targeted funding can protect national interests, but they can also entrench market advantages and limit external scrutiny.
Regional rollout and deadlines
The federal decree obliges regional leaders to establish interdepartmental AI commissions by June 1, 2026, creating a synchronized timetable across the country. Regional bodies are expected to adapt federal guidance to local capacity, map priority systems, and report compliance back to Moscow. Implementation will likely involve capacity building, shared technical toolkits and joint drills to prepare for AI-related incidents in critical infrastructure.
Transparency, oversight and accountability
A central worry is how transparent the commission’s work will be. Without clear public reporting, independent review mechanisms and civil-society access, technical standards and procurement decisions risk becoming opaque levers of state advantage. The commission will need to specify how classified national-security assessments intersect with public-interest obligations, and how independent audits can be secured for systems labeled high-risk.
Concrete early outputs to watch
The first operational framework and the list of systems designated as high-risk will be revealing. Expect the commission’s initial priorities to include cataloguing high-risk applications, publishing baseline security requirements, advising procurement bodies on pre-deployment certification, and setting timelines for standards adoption and supervised deployments in essential services. How enforcement powers are defined—whether guidance is binding or merely advisory—will determine whether the commission shapes practice or produces paperwork.
What to monitor next
– Which systems are classified as high risk, and on what basis.
– Rules on cross-border data sharing and data governance.
– Mechanisms for independent audits and civilian oversight.
– The balance between protectionist industrial measures and support for commercial innovation.
– How regional commissions implement federal guidance in practice.

The commission's configuration can speed a coherent national AI strategy and shore up resilience where failures would be catastrophic. It can also channel policy toward tighter state control and limit independent scrutiny. The critical question is no longer whether Russia has created a coordinating body; it is how transparent, accountable and balanced that body's first actions will be. Watch the membership details, the first operational documents and the list of high-risk systems to understand which direction it will take.
