2026-02-18

Governments today are facing a massive digital dilemma. Citizens now expect their interactions with the state, whether renewing a license or filing taxes, to be as seamless and instant as using a mobile banking app. The pressure to modernize is intense, and the economic incentives are substantial. In fact, studies suggest that widespread AI adoption in the public sector could increase GDP by up to 4% and reduce government fiscal deficits by as much as 22% [1].
However, the path to these benefits is blocked by significant hurdles. While 80% of public sector leaders believe AI will positively impact their work, many agencies remain stuck in a cycle of stalled experimentation, with reports indicating that only 20% to 25% of AI proofs of concept (PoCs) ever scale to wider implementation [2].
This stalling often stems from applying modern tech to outdated workflows. At Rootcode Connect AI in e-governance, our annual flagship conference that brings together global tech leaders, Luukas Ilves, former CIO of Estonia, addressed this friction. Introducing the "Agentic State," he warned that real progress isn't about "slapping technology on top of government" but requires "a very thorough, deep re-engineering of the processes of how government works." To achieve this re-engineering safely, governments must build Sovereign, Domain-Specific AI that runs on local infrastructure, ensuring innovation doesn't compromise data security.
The problem runs deeper than just budget constraints. Governments simply cannot use public tools like ChatGPT for sensitive work. National security laws, strict data sovereignty requirements, and privacy regulations (like GDPR) mean that sensitive citizen data cannot leave national borders or be fed into a public cloud model [3]. Furthermore, public sector organizations often struggle with legacy infrastructure that "won't cooperate" with modern AI, alongside a severe shortage of technical talent.
So, how do governments innovate without compromising security or public trust? The solution is not to sideline AI due to security risks, but to architect it differently by building Sovereign, Domain-Specific AI. This involves creating AI systems that live entirely inside government-controlled infrastructure, learning from local data without ever exposing it to the outside world.
Here are the critical strategies for building safe, effective AI for the public sector, illustrated by real-world projects.
1. The Sovereign Approach (Keep Data In-House)
The first commandment of government AI is that sensitive data must remain sovereign. Governments cannot rely on generic models hosted on servers in foreign jurisdictions. Instead of sending data out to a model, you must bring the model to the data. This involves building and training AI on-premise, within secure government data centers.
We saw this challenge firsthand when partnering with the Government of Estonia, a global leader in digital governance. They needed a way to let various ministries benefit from AI without creating a massive, vulnerable central database. Rootcode worked with the Estonian government to build a distributed AI model training platform that allows different government departments, from the Police and Border Guard Board to the Ministry of Education, to upload their own domain-specific data and train classification models locally. Read the case study about how we built the infrastructure for the government of Estonia.
This architecture ensures that departments can build tools to automatically route emails or classify documents based on their specific needs, yet the data never leaves the secure government network. This fulfills the strict requirements of national digital governance frameworks while still enabling innovation.
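To make the pattern concrete, here is a minimal sketch of a department training a document classifier entirely on in-house data, so raw text never leaves its own network. The classifier, labels, and training examples are all hypothetical simplifications; a production platform would use far richer models and data.

```python
# Minimal sketch: a department trains a text classifier locally on its own
# data. All labels and example documents below are hypothetical.
import math
from collections import Counter, defaultdict


def tokenize(text):
    return text.lower().split()


class LocalNaiveBayes:
    """Multinomial Naive Bayes trained entirely on in-house data."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            toks = tokenize(doc)
            self.word_counts[label].update(toks)
            self.vocab.update(toks)
        return self

    def predict(self, doc):
        scores = {}
        total = sum(self.label_counts.values())
        vocab_size = len(self.vocab)
        for label, count in self.label_counts.items():
            score = math.log(count / total)  # class prior
            denom = sum(self.word_counts[label].values()) + vocab_size
            for tok in tokenize(doc):
                # Laplace-smoothed log-likelihood of each token
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)


# Hypothetical training data a ministry might hold on-premise
docs = ["passport renewal request", "school enrollment question",
        "border crossing permit", "teacher certification form"]
labels = ["police_border", "education", "police_border", "education"]

clf = LocalNaiveBayes().fit(docs, labels)
print(clf.predict("question about border permit"))  # → police_border
```

Because training and inference both run on the department's own machines, the sovereignty guarantee comes from the architecture itself rather than from a vendor's contractual promise.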
2. Privacy-First Design: Automating Anonymization
Privacy cannot be an afterthought; it must be integrated into the design of the AI. Recent research highlights that privacy and security concerns are cited by over 60% of government leaders as a primary barrier to AI implementation [4]. To overcome this, AI must act as a shield. Before a human civil servant reviews a document or an image submitted by a citizen, the AI should automatically strip away Personally Identifiable Information (PII).
This "privacy-first" engineering was the driving force behind Urbanora, a platform Rootcode developed for the City of Prague. The city wanted to streamline how citizens reported issues like potholes or broken streetlights, but photos uploaded by residents often posed GDPR risks by accidentally capturing bystanders' faces or car license plates.
To solve this, we implemented multimodal Large Language Models (LLMs) with a dedicated privacy layer. Now, before any image reaches a city administrator, the AI automatically detects and blurs faces, license plates, and other sensitive details. It then classifies the issue and routes it to the correct department, proving that instant compliance and operational efficiency can coexist. Read the case study about how we built Urbanora for the city of Prague.
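The same "redact before a human sees it" principle applies to text submissions. Here is a minimal text-side sketch using simple regular expressions; the patterns are illustrative only (real PII detection needs locale-aware models, and image redaction needs vision models as described above).

```python
# Sketch of a text privacy layer: redact common PII patterns before a
# document reaches a human reviewer. Patterns are illustrative, not
# exhaustive, and would be locale-specific in practice.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+\d[\d\s().-]{7,}\d"),
    "PLATE": re.compile(r"\b[A-Z]{1,3}[- ]?\d{2,4}\b"),  # simplified plate format
}


def redact(text):
    """Replace every matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


report = "Pothole on Main St, reported by jana@example.com, car ABC 123 nearby."
print(redact(report))
# → Pothole on Main St, reported by [EMAIL REDACTED], car [PLATE REDACTED] nearby.
```

Running redaction as a mandatory pipeline stage, rather than an optional tool, is what turns privacy from a policy into an engineering guarantee.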
3. Accessible to Non-Tech Staff
A major barrier to AI adoption is "effort expectancy": if a system is too difficult to use, people will simply ignore it. Policy experts, case officers, and citizens are rarely data scientists. For AI to scale beyond a pilot project, it must be accessible via simple, intuitive interfaces that abstract away the complex code.
We applied this principle of accessibility when partnering with the City of Porto to support local agriculture. The city wanted to help small-scale farmers and community gardeners adapt to climate change, but raw environmental data is difficult for non-technical users to understand and act upon.
Rootcode developed Clamigo, an AI-powered mobile application paired with custom IoT sensors. Instead of forcing farmers to analyze complex graphs of soil moisture or temperature trends, the AI does the heavy lifting in the background. It analyzes the data and generates simple, personalized recommendations such as exactly when to water crops or how to manage pests using organic methods. By hiding the technical complexity behind a user-friendly interface, the solution allows citizens with limited technical backgrounds to utilize precision agriculture, ensuring sustainable food security for the community. Read the case study about how we built Clamigo for the city of Porto.
4. Human-in-the-Loop & Explainability
In the public sector, "the computer said so" is not a valid justification for a decision. If an algorithm denies a welfare benefit or flags a tax return for audit, the government must be able to explain why. Trust is the bedrock of public service. AI should act as a "Co-pilot," not an "Autopilot." It sorts, retrieves, and suggests, but the human validates the final decision.
This necessity for accuracy and oversight was paramount in our work with Truentity Health. In the healthcare sector, errors can be fatal, so we couldn't risk the conversational AI "hallucinating" (making up facts) about patient medication. Read the case study about how we built the conversational AI for Truentity Health.
Rootcode implemented custom guardrails and a multi-agent architecture to solve this. When a doctor queries patient metrics (e.g., "Show me blood pressure trends vs. medication intake"), our AI platform uses algorithmic tools to perform the calculation and retrieval but presents the source data for verification. The AI handles the drudgery of data sorting, but the physician remains in the loop to interpret the results, ensuring safety and compliance with regulations like HIPAA.
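The co-pilot pattern described above can be sketched in a few lines: a deterministic tool computes the answer, and the raw rows are always returned alongside it so the clinician can verify. The record fields, values, and function name below are hypothetical illustrations, not the actual Truentity Health schema.

```python
# Sketch of a co-pilot pattern: the AI answers with deterministic tools
# (not free-form generation) and always surfaces the source rows for human
# verification. All field names and values are hypothetical.
from statistics import mean

records = [
    {"date": "2026-01-05", "systolic": 148, "med_taken": False},
    {"date": "2026-01-12", "systolic": 139, "med_taken": True},
    {"date": "2026-01-19", "systolic": 128, "med_taken": True},
]


def blood_pressure_vs_medication(rows):
    """Compute the comparison with plain arithmetic, never with an LLM."""
    on = [r["systolic"] for r in rows if r["med_taken"]]
    off = [r["systolic"] for r in rows if not r["med_taken"]]
    return {
        "answer": f"Avg systolic: {mean(on):.0f} on medication vs {mean(off):.0f} off.",
        "sources": rows,  # surfaced so the human validates the final decision
    }


result = blood_pressure_vs_medication(records)
print(result["answer"])
```

Because the numbers come from arithmetic over retrieved rows rather than model generation, there is nothing for the system to hallucinate, and the attached sources keep the physician in the loop.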
5. Tailor Solutions to Local Contexts
AI is not a "one-size-fits-all" solution that can simply be copy-pasted from one nation to another. Recent research on public sector AI emphasizes that adoption strategies must be differentiated based on the specific environment; a solution designed for a central ministry may fail in a rural county with different infrastructure and needs. To succeed, governments must align their AI use cases with their specific "readiness archetype," ensuring that the technology matches their current levels of digital maturity and infrastructure.
We applied this context-aware approach when building Bleep Med, a telehealth platform for the UAE. Through deep cultural analysis, Rootcode discovered that in this region, care decisions are often shared among family members rather than handled individually. To support this social structure, we designed features allowing users to easily book appointments for extended family members. Additionally, understanding that local users found uncertainty about arrival times unprofessional, we implemented real-time location tracking for doctors. By tailoring AI solutions to fit local cultural norms and operational realities, governments can ensure that digital services are not just functional, but truly embraced by the citizens they serve. Read the case study about how we built the telehealth platform for Bleep Med.
Conclusion
The future of government AI starts with solutions designed for government realities: systems that respect data sovereignty, protect citizen privacy, and operate within real regulatory constraints. When AI is designed to run securely inside government infrastructure and support human decision making, it can deliver real value without compromising public trust.
At Rootcode, we have been building AI solutions for decades. Our work supports government institutions across Europe and leading enterprises around the world. We focus on engineering domain-specific AI that is secure, practical, and ready for production. To learn more about what we build, explore our portfolio. If you are planning to develop AI solutions for the public sector or highly regulated environments, get in touch with us to start the conversation.
