Geneva’s Blueprint for Responsible Public-Sector AI

Guest post by Sana Yaakoubi

This article represents the author’s views and not an official position of the State of Geneva.

Only 62% of Swiss citizens now say they have high or moderately high trust in their federal government (OECD, Survey on Drivers of Trust in Public Institutions, 2024).

When people interact with public institutions, they expect fairness, transparency, and respect for their rights. Introducing Artificial Intelligence into that relationship raises a crucial question: How can citizens trust that algorithmic decisions won’t be misused or misunderstood?

This article, the first in our AI Governance in Practice series, draws on an interview with Rafael Tiedra, Data Scientist on the Data Science and Artificial Intelligence team of the State of Geneva’s OCSIN, to explore how the canton is translating principles into day-to-day practice. The core argument is simple: governance isn’t optional; it’s the foundation of public trust and digital sovereignty.

AI Adoption and Use Cases in the Canton of Geneva

Long before the world discovered ChatGPT, the Canton of Geneva was already laying the groundwork for responsible AI. Since 2018, a small in-house data-science team has been developing machine-learning models for practical, well-scoped public-sector challenges, from forecasting and structured-data modelling to natural-language classification.

When generative AI arrived, Geneva didn’t rush to external cloud providers. Instead, it invested in computational infrastructure (GPU cards), keeping critical computation within Swiss or EU jurisdiction. This wasn’t just a technical choice – it was a sovereignty decision, ensuring that sensitive data and model training remained under cantonal control.

By 2025, Geneva’s administration was tracking around 15 active AI projects, several already in production. The focus is pragmatic: use AI to improve efficiency, sustainability, and citizen service, without compromising public trust.

Two examples illustrate how these projects deliver tangible public value.


Use Case 1: Sensor data collection and analysis

Environmental offices, the Hautes Écoles, and Geneva’s AI and data science team are working together to collect and analyze sensor data from buildings, pollution monitoring stations, river flows, and other sources.

In the future, AI models will help officials anticipate peak loads, optimize resource use, and plan efficiency upgrades in public buildings. The AI informs policy, but humans always make the final decisions.

Benefit: automated forecasting reduces energy waste, improves building performance, supports climate targets, and frees experts from repetitive data analysis.
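To give a feel for what such forecasting can start from, here is a minimal sketch of a seasonal-naive baseline: predict each time slot of the next day from its average over past days. The readings and the two-slot "day" are invented to keep the example short; this is not Geneva’s actual model, which would be far richer.

```python
"""Sketch of a peak-load forecast from sensor readings: a seasonal-naive
baseline that predicts each slot of tomorrow from the same slot averaged
over past days. All readings are invented for illustration."""

def seasonal_naive_forecast(readings, period=24):
    """Average each slot across all complete past periods."""
    n_periods = len(readings) // period
    forecast = []
    for slot in range(period):
        past = [readings[p * period + slot] for p in range(n_periods)]
        forecast.append(sum(past) / len(past))
    return forecast

# Two days of (fake) building power readings in kW, reduced to two slots
# per day ([night, day]) to keep the example compact.
readings = [10, 30, 12, 34]  # day 1: 10/30, day 2: 12/34
print(seasonal_naive_forecast(readings, period=2))  # → [11.0, 32.0]
```

Even this trivial baseline supports the human-in-the-loop point above: the forecast informs a decision about, say, scheduling an efficiency upgrade, but an official still makes that call.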


Use Case 2: Document & Correspondence Triage

Each year, Geneva’s public offices process vast volumes of paperwork and citizen correspondence.

To streamline this, AI models will help classify and prioritize incoming documents. They could identify the topic of an incoming item within an office and help staff search and summarize internal knowledge bases in natural language.

Benefit: automation accelerates routing and reduces manual sorting, improving response times, accuracy, and overall citizen satisfaction.
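To make the triage idea concrete, the sketch below implements a tiny bag-of-words Naive Bayes router in plain Python. The office queues ("permits", "tax") and the training snippets are invented for illustration; Geneva’s actual models are not described at this level of detail.

```python
"""Minimal sketch of document triage: a tiny bag-of-words Naive Bayes
classifier that routes citizen correspondence to an office queue.
All categories and example texts are invented for illustration."""
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns per-label word counts and priors."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("request to extend my building permit", "permits"),
    ("construction permit application form", "permits"),
    ("question about my property tax assessment", "tax"),
    ("deadline for the annual tax declaration", "tax"),
]
wc, lc = train(training)
print(classify("how do I renew a building permit", wc, lc))  # → permits
```

The point of the sketch is the routing pattern, not the algorithm: a classifier proposes a queue, and staff keep the ability to re-route, which is where the response-time gains come from.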


From Principles to Practice: Governance Lessons from Geneva

The Canton of Geneva offers a blueprint for how AI governance can move from principle to practice. For public administrations elsewhere, the real value lies not only in Geneva’s internal processes but in the governance actions that any public sector organization can adapt to strengthen accountability, sovereignty, and trust.

The recommendations below show how any government can turn responsible-AI principles into working governance practice:

1. Establish a business-led intake and central AI governance program

Start where Geneva started: make AI projects demand-driven, not technology-driven.

  • Encourage departments to raise specific operational challenges (energy use, citizen requests, document routing) and route them through central IT governance

2. Build legal and policy alignment before any citizen-facing deployment

Before exposing citizens to AI services, establish legal and ethical guardrails that integrate:

  • Compliance with national data protection acts and, where relevant, GDPR

  • Reference to the Swiss Federal Council’s Guidelines for the Use of AI in Public Administration (2020), or equivalent national guidance elsewhere

  • Alignment with European AI Act principles, while maintaining autonomy over national implementation

  • Dialogue with civil society and academia to ensure transparency and legitimacy

3. Design for sovereignty and supply-chain control

Governments can learn from Geneva’s investment in local GPU capacity and contractual oversight over data location and access rights:

  • Prioritize hosting in trusted jurisdictions (national or EU/EEA where applicable)

  • Define clear clauses on data use, retraining, and IP ownership

  • Evaluate cloud providers for compliance with national or EU data-protection norms

4. Control model learning and ensure lifecycle oversight

Public AI should be predictable and auditable. Geneva freezes models at deployment for citizen-facing use cases and periodically reviews them for drift:

  • Implement controlled learning policies: define when models can retrain, under what approval, and how outputs are monitored

  • Plan for lifecycle governance – from design to retirement – with documentation at each stage
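One way to make the "review for drift" step concrete is the Population Stability Index (PSI), a widely used drift metric that compares the distribution of recent prediction scores against the distribution logged at deployment. The sketch below is an illustrative assumption, not Geneva’s actual procedure; the 0.25 threshold is a common rule of thumb, not a standard.

```python
"""Sketch of a periodic drift check for a frozen model: compare recent
prediction scores against the score distribution recorded at deployment,
using the Population Stability Index (PSI). Data and the threshold are
illustrative assumptions only."""
import math

def psi(baseline, recent, bins=10):
    """Population Stability Index between two score samples in [0, 1]."""
    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]
    p, q = proportions(baseline), proportions(recent)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Scores logged at deployment vs. scores from the latest review window.
deploy_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
recent_scores = [0.6, 0.7, 0.7, 0.8, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95]

score = psi(deploy_scores, recent_scores)
# A PSI above ~0.25 is a common rule of thumb for significant drift.
if score > 0.25:
    print(f"drift detected (PSI={score:.2f}): flag model for review")
```

A check like this fits naturally into the lifecycle documentation mentioned above: the deployment-time distribution becomes part of the model’s record, and each periodic review appends a measured PSI and a decision.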

5. Build once, reuse broadly

To avoid fragmentation, Geneva promotes transversal reuse: shared AI components (like document retrieval or classification) developed once and reused across offices:

  • Create central AI assets (e.g., language models, APIs, or RAG systems) that can serve multiple agencies

  • Maintain a public-sector AI catalogue and common governance framework to simplify assurance
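A build-once, reuse-everywhere component can be as simple as one retrieval index behind one interface, fed by several offices and queried by any of them. The sketch below uses naive term-overlap scoring and invented documents purely to illustrate the reuse pattern; a production system would use embeddings or a full RAG stack.

```python
"""Sketch of a shared retrieval component serving multiple offices:
one index built once, queried by any agency through the same interface.
Documents and the term-overlap scoring are illustrative assumptions."""
from collections import Counter

class SharedRetriever:
    """A build-once, reuse-everywhere document index."""
    def __init__(self):
        self.docs = []  # (office, text) pairs

    def add(self, office, text):
        self.docs.append((office, text))

    def search(self, query, top_k=1):
        q_terms = Counter(query.lower().split())
        scored = []
        for office, text in self.docs:
            d_terms = Counter(text.lower().split())
            overlap = sum((q_terms & d_terms).values())  # shared term count
            scored.append((overlap, office, text))
        scored.sort(reverse=True)
        return [(office, text) for _, office, text in scored[:top_k]]

# One central index, fed by several offices...
index = SharedRetriever()
index.add("environment", "building energy consumption report 2024")
index.add("tax", "property tax assessment guidelines")
index.add("permits", "building permit application procedure")

# ...and queried by any of them through the same interface.
print(index.search("how to apply for a building permit"))
```

The governance payoff is in the structure, not the scoring: a single component means a single assurance review, a single catalogue entry, and no per-office forks to audit.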

6. Maintain a living governance ecosystem

Governance frameworks must evolve as fast as technology:

  • Update ethical charters and risk policies regularly

  • Follow international developments (e.g., the European AI Act, NIST AI RMF, and OECD AI Principles) and adapt them to local realities

  • Train policymakers and legal officers on emerging risks like system drift, explainability, and generative model bias

Balancing Trust, Sovereignty, and Openness: Policy Trade-Offs in the European Context

Geneva’s governance approach reflects a broader European challenge: how to advance responsible AI within a strong legal and ethical framework, while remaining agile enough to collaborate with global innovation ecosystems.

Aligning closely with EU principles ensures legal coherence and high standards of citizen protection – but it can also introduce friction when adopting AI technologies developed outside the EU or Switzerland, particularly from the United States or Asia. Differences in data-protection regimes, model-training laws, or auditability requirements may complicate access to frontier models and affect competitiveness.

This is not a shortcoming of Geneva’s governance; rather, it’s a strategic trade-off faced by many governments: maintaining sovereignty and trust versus accelerating innovation and global collaboration. The lesson for public administrations is to recognize and actively manage these tensions – through diplomacy, technical investment, and legal adaptation – rather than treating them as binary choices.

Conclusion: Trust by Design

Geneva shows what responsible public-sector AI can look like: local compute investment, carefully assessed use cases, data sovereignty, legal oversight, and ethical guardrails baked in from the start. Even as Switzerland’s national AI framework continues to evolve, Geneva has become a model of how citizens, services, and governance can benefit together.

If other public administrations want to learn from Geneva’s pathway, the lesson is clear: AI must be governed from design, not after deployment. Trust and sovereignty are not add-ons; they are the foundation.

Sana Yaakoubi
