

EU AI Act & NATO AI Strategy vs SAVANT-AI & GENESIS-AI

  • Writer: Andrzej Albera
  • 3 days ago
  • 4 min read

The EU AI Act and NATO's AI strategies describe exactly what strategically important institutions and companies need: factories for lawful AI systems and a cognitive-decision layer that ensures explainability, auditability, and risk control.

GENESIS-AI and SAVANT-AI were designed precisely as such a pair of products: a technical "engine" and a governance superstructure, anchored directly in EU and NATO regulatory requirements.

 


Three pillars: EU AI Act, Council documents, and NATO strategies

The foundation for public administration rests on three pillars: the EU AI Act itself (Reg. 2024/1689), the preparatory documents of the European Parliament and the Council, and NATO's AI strategies. Together, they form a coherent picture: AI is to be a responsible technology, supervised by humans, tested, certified, and interoperable, with full respect for fundamental rights and humanitarian law.

The EU AI Act introduces stringent requirements for high-risk systems: continuous risk management, data governance, technical documentation, logging, transparency, human oversight, resilience to disruptions, errors, and changing conditions (robustness), cybersecurity, a quality management system, and post-market monitoring. The regulation imposes obligations not only on providers but also on deployers, including public authorities, which for certain applications must carry out a mandatory fundamental rights impact assessment (FRIA). General-purpose AI models (GPAI) carry separate obligations: documentation, stress tests, risk management, a copyright policy, a training-data summary, and participation in codes of practice.

NATO's strategies emphasize responsible use of AI, accountability, human-in-the-loop/over-the-loop oversight, life-cycle management, testing, evaluation, verification, and validation (TEVV), interoperability, and cybersecurity. They explicitly define six Principles of Responsible Use (PRU): lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation.

 

GENESIS-AI: a factory for high-risk AI systems

GENESIS-AI is a complete DevSecOps/MLOps environment in which every project is created "in line" with the EU AI Act by default. In practice, this means that the key articles of the regulation are implemented as elements of the platform, rather than as point-by-point "checklists" to be ticked off at the end of the project.

 

Key mechanisms:

  • Risk management system (Article 9): each AI project runs in a pipeline with a built-in risk register, use and abuse scenarios, and CI/CD gates that block deployment if the risk criteria are not met (a minimal gate sketch appears below this list).

  • Data governance (Article 10): the data factory provides data quality profiles, representativeness and bias-mitigation tests, anonymization, and synthetic data generation for sandboxes and TEVV (see the representativeness sketch below).

  • Documentation and logging (Articles 11–12): GENESIS automatically generates an "AI conformity package" (model documentation, versioning, pipeline logs) ready for audit, certification, or registration in the EU database.

  • Transparency and instructions for use (Article 13): the platform creates instructions, model cards, and message templates for deployers in a unified format, in multiple languages.

  • Human oversight and governability (Article 14 + NATO PRUs): human-in-the-loop/over-the-loop patterns, integration with approval systems, and a full audit trail of overrides are enforced in the project architecture.

  • Robustness and cybersecurity (Article 15): automated resilience tests (adversarial, data/model poisoning, regression) are part of the standard pipeline; the metrics feed reports and management dashboards (see the robustness check below).

  • Quality management system (QMS, Article 17) and post-market monitoring (Article 72): a configurable QMS for AI (policies, workflows, checklists, report templates) combined with telemetry, drift detection, and incident detection (see the drift-monitoring sketch below).

  • FRIA and DPIA support (Article 27 AI Act, Article 35 GDPR): FRIA/DPIA processes are built into the project cycle and linked to specific system versions and the data used in training.

  • GPAI and systemic-risk models: the "model factory" records compute/FLOPs, conducts adversarial testing, generates training-data summaries, and supports copyright policy (TDM, codes of practice).

GENESIS-AI is therefore not so much a "toolkit" as an engineering mapping of the EU AI Act and NATO PRUs onto the processes of creating and maintaining AI systems.
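
As referenced in the list above, here is a minimal sketch of what such a CI/CD risk gate (Article 9) could look like. It assumes a hypothetical risk_register.yaml with per-risk mitigations and residual scores; the file name, schema, and threshold are illustrative assumptions, not the actual GENESIS-AI interface.

```python
# Hypothetical CI/CD risk gate: fails the pipeline when the project's risk
# register (Article 9) still contains unmitigated or over-threshold risks.
# File name, schema and threshold below are illustrative assumptions.
import sys
import yaml  # PyYAML

MAX_RESIDUAL_RISK = 3  # e.g. on a 1-5 scale; a project-specific assumption

def main(path: str = "risk_register.yaml") -> int:
    with open(path, encoding="utf-8") as f:
        register = yaml.safe_load(f) or {}

    blocking = []
    for risk in register.get("risks", []):
        unmitigated = not risk.get("mitigations")
        too_high = risk.get("residual_score", 5) > MAX_RESIDUAL_RISK
        if unmitigated or too_high:
            blocking.append(risk.get("id", "unknown"))

    if blocking:
        print(f"Risk gate FAILED, blocking deployment: {blocking}")
        return 1
    print("Risk gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In a pipeline, the non-zero exit code is what actually blocks the deployment stage.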
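
The Article 10 data-governance step can be pictured as a set of automated checks of this kind: a representativeness test that compares subgroup shares in the training data against a reference population. The column name, reference shares, and tolerance below are assumptions made for the example.

```python
# Illustrative representativeness check (Article 10): compares the share of
# each subgroup in the training set with a reference population and flags
# deviations above a tolerance. All names and values are example assumptions.
import pandas as pd

def representativeness_report(df: pd.DataFrame,
                              column: str,
                              reference_shares: dict[str, float],
                              tolerance: float = 0.05) -> pd.DataFrame:
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "expected_share": expected,
            "observed_share": share,
            "flagged": abs(share - expected) > tolerance,
        })
    return pd.DataFrame(rows)

# Example: a training set that over-represents one region
data = pd.DataFrame({"region": ["north"] * 70 + ["south"] * 30})
print(representativeness_report(data, "region", {"north": 0.5, "south": 0.5}))
```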
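
A robustness test in the spirit of Article 15 can be reduced, in its simplest form, to a regression check: accuracy under perturbed inputs must not fall more than an agreed margin below clean accuracy. The model, synthetic data, and thresholds below are placeholders, not GENESIS-AI internals.

```python
# Minimal robustness gate sketch (Article 15): fail if accuracy under input
# noise drops more than an allowed margin. Data, model and margin are
# placeholder assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

clean_acc = accuracy_score(y, model.predict(X))
noisy_acc = accuracy_score(y, model.predict(X + rng.normal(scale=0.3, size=X.shape)))

MAX_DROP = 0.10  # allowed degradation under perturbation
if clean_acc - noisy_acc > MAX_DROP:
    raise SystemExit(f"Robustness gate failed: clean={clean_acc:.3f}, noisy={noisy_acc:.3f}")
print(f"Robustness gate passed: clean={clean_acc:.3f}, noisy={noisy_acc:.3f}")
```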
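
Post-market monitoring, in turn, needs a quantitative trigger for drift incidents. One common choice, used here purely as an illustration, is the Population Stability Index (PSI) between the training baseline and live telemetry; the 0.2 alert threshold is a rule of thumb, not a GENESIS-AI default.

```python
# Illustrative drift check for post-market monitoring: Population Stability
# Index (PSI) between the training baseline and production telemetry for one
# feature. The 0.2 alert threshold is a common heuristic, assumed here.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_counts, _ = np.histogram(baseline, bins=edges)
    l_counts, _ = np.histogram(live, bins=edges)
    b_share = np.clip(b_counts / b_counts.sum(), 1e-6, None)
    l_share = np.clip(l_counts / l_counts.sum(), 1e-6, None)
    return float(np.sum((l_share - b_share) * np.log(l_share / b_share)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 10_000)  # feature distribution seen in training
live = rng.normal(0.4, 1.2, 10_000)      # distribution observed in production

score = psi(baseline, live)
print(f"PSI = {score:.3f}", "-> raise a drift incident" if score > 0.2 else "-> OK")
```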

 

SAVANT-AI: cognitive decision-making and governance layer

SAVANT-AI is a cognitive system: a layer of oversight, explainability, audit, and regulatory knowledge that "understands" both the AI systems created in GENESIS and the legal context in which they operate. Its role is to translate the requirements of the EU AI Act and NATO from the technical level to the decision-making level: department directors, inspectors, doctors, controllers, and officers.

 

Key features:

  • XAI and explainability: SAVANT provides explanations for decisions (reason codes, feature impact, what-if scenarios) in accordance with NATO's PRU requirements for explainability and traceability (a minimal sketch appears below this list).

  • Traceability and full audit trail: a central layer of knowledge about AI systems makes it possible to reconstruct "who, what, when, and on what basis" guided a specific decision (see the record sketch below).

  • Human oversight in practice: dedicated "decision console" interfaces present AI recommendations with context, quality indicators, warnings, and accept/reject options, protecting against automation bias.

  • High-risk AI systems registry: SAVANT maintains a catalog of systems within the department, mapped to EU AI Act categories, compliance status, TEVV results, and related FRIAs.

  • FRIA assistant: a module that guides officials step by step through the fundamental rights impact assessment, identifying vulnerable groups, risks, and possible mitigation measures.

  • NATO PRU compliance view: a view of each application's compliance with the six NATO principles, with identified gaps, recommended actions, and a repository of good practices.

  • AI literacy and training: personalized educational paths for different profiles (lawyer, procurement, IT architect, analyst) based on EU and NATO documents and real-world use cases from the ministry.

SAVANT-AI makes what the EU AI Act and NATO refer to as "governance" part of the daily work of decision-makers, rather than a theoretical document on a shelf.
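
As referenced in the list above, reason codes and what-if scenarios can be illustrated with a deliberately simple attribution: per-feature contributions of a linear model. The feature names, model, and figures below are assumptions made for the example, not the SAVANT-AI API.

```python
# Sketch of "reason codes" (signed per-feature impact) and a what-if scenario
# for a single scored case, using a linear model's coefficients. All names,
# data and the model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -1.5, 0.8]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

case = np.array([0.2, 1.4, -0.3])      # the case under review
contributions = model.coef_[0] * case  # signed per-feature impact
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.2f}")

# What-if: how does the recommendation change if debt_ratio improves?
what_if = case.copy()
what_if[1] -= 0.5
print("p(approve) now    :", round(model.predict_proba([case])[0, 1], 3))
print("p(approve) what-if:", round(model.predict_proba([what_if])[0, 1], 3))
```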
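
The registry and the audit trail are, at their core, well-defined records. A minimal sketch of the two record types, with field names chosen to mirror the article rather than the actual SAVANT-AI schema, could look like this:

```python
# Illustrative records for a high-risk system registry entry and a decision
# audit trail ("who, what, when, and on what basis"). Field names and values
# are assumptions, not the SAVANT-AI data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegistryEntry:
    system_id: str
    name: str
    ai_act_category: str        # e.g. "high-risk, Annex III"
    compliance_status: str      # e.g. "conformity assessment in progress"
    tevv_results: dict = field(default_factory=dict)
    fria_reference: str | None = None

@dataclass
class DecisionAuditRecord:
    system_id: str
    decision_id: str
    model_version: str          # model that produced the recommendation
    decided_by: str             # human who accepted or overrode it
    overridden: bool
    legal_basis: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = RegistryEntry("sys-042", "Benefit eligibility scoring",
                      "high-risk, Annex III", "registered",
                      {"robustness": "passed"}, "FRIA-2025-007")
record = DecisionAuditRecord("sys-042", "dec-98121", "model-3.2.1",
                             "officer-0042", overridden=True,
                             legal_basis="human oversight override (Art. 14)")
print(entry, record, sep="\n")
```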

 

 
 
 
