Ensure Your AI Application is Compliant and Safe with Validas.
Partner with Experts to Build a Qualified Tool and Library Basis to Bring Your AI Innovations to Market.
Vision: Compliant and Safe AI Systems
Artificial Intelligence (AI) is used in more and more applications.
But can we trust it not to cause damage by hallucinating or exhibiting other faults that put people at risk?
Functional safety standards address and reduce these risks by requiring compliant processes. Safe AI means the absence of unreasonable risks: the system performs its intended functions without causing harm. Compliance does not guarantee safety, but it is a prerequisite for bringing safe products to market. New functional safety standards, like ISO 8800, ISO 21448 and VDE-AR 2842-61, define requirements for AI systems. Our approach to making AI systems compliant and safe is based on three pillars:
- Compliance with safety standards
- Verification of the system
- Tool confidence in the AI and data tool chain
The approach is explained by OscAIr in this explanation video; OscAIr is the first AI avatar explaining how to make AI systems compliant and safe, see the announcement video. Tool qualification thus enables building compliant and safe high-risk AI systems. Since Validas focuses on tool confidence, we work with partners to offer a complete solution.
Pillar 1: Compliance with Safety Standards
The first pillar of making AI safe is compliance with the newly developed functional safety standards for AI, such as ISO 8800, ISO 21448 and VDE-AR 2842-61. Functional safety standards achieve safety by first analyzing the risks of the system and then prescribing how to reduce them to a reasonable degree by following a process (documented in a “safety plan”) that is compliant with the requirements of the standard. The so-called “safety case” is the evidence that the process was followed and hence that the developed AI system is compliant and safe. At Validas we use PMT (see here), a model-based and very rigorous approach to achieving compliance and safety. By using PMT we achieve “safer safety” and are able to guarantee success, i.e. no delays or extra costs caused by any safety assessment. Validas processes are certified to be compliant with safety standards, see here. In practice this means that we maintain project-specific verification and validation (V&V) checklists that cover all requirements, and we use dedicated V&V tools that check the completeness of the V&V activities and generate the V&V report, a vital part of the safety case.
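As an illustration only, the following minimal Python sketch shows the idea behind such a completeness check; the data model, requirement IDs and report format are hypothetical and do not represent the actual Validas V&V tools:

```python
# Hypothetical sketch of a V&V completeness check that renders a report.
# The checklist structure and requirement IDs are made up for illustration.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    requirement_id: str   # requirement derived from the safety standard
    verified: bool        # has V&V evidence been recorded for it?
    evidence: str = ""    # reference to the evidence artifact

def vv_report(items: list[ChecklistItem]) -> str:
    """Check completeness of V&V and render a simple textual report."""
    open_items = [i for i in items if not i.verified]
    lines = [f"V&V coverage: {len(items) - len(open_items)}/{len(items)} requirements"]
    for item in open_items:
        lines.append(f"  OPEN: {item.requirement_id} (no evidence)")
    lines.append("RESULT: " + ("COMPLETE" if not open_items else "INCOMPLETE"))
    return "\n".join(lines)

checklist = [
    ChecklistItem("REQ-042", True, "test_report_17.pdf"),
    ChecklistItem("REQ-043", False),
]
print(vv_report(checklist))
```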
Pillar 2: Verification Methods & Tools
The second pillar of making AI safe is the verification and validation of the AI for its given purpose, as required by the chosen safety standards, in order to eliminate unreasonable risks. For classical software, requirements-based testing with code coverage is the main functional verification method; for AI software the situation is different, and new verification methods and tools are needed to reduce the risks of AI systems. Standards like VDE-AR 2842-61 list many such methods. In contrast to classical software standards, they do not prescribe the methods for the different risk levels, but require selecting them case by case to address the main risk classes of AI systems: product risks (unsafe behavior), development risks (e.g. incorrect training data) and technology risks (e.g. hallucination).
Examples of AI verification methods and metrics include accuracy, neuron coverage and confusion matrices. We recommend methods like saliency maps to understand AI system behavior, especially in failure cases. To select the right verification methods, the risks of the AI system need to be analyzed. Additionally, new AI verification tools supporting the new AI verification methods and metrics are required. Since Validas is neither a system risk analysis company nor a tool provider, we cooperate with partners that specialize in these areas, see the section on partners below.
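To make two of these metrics concrete, here is a minimal, self-contained Python sketch that computes accuracy and a confusion matrix for a made-up set of labels; saliency maps, by contrast, require gradient-based attribution on the model itself and are not shown here:

```python
# Illustrative computation of two of the mentioned metrics, accuracy and a
# confusion matrix, using plain NumPy. Labels and predictions are made up.
import numpy as np

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])  # ground-truth classes
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 1, 2, 0])  # classifier output

accuracy = np.mean(y_true == y_pred)  # fraction of correct predictions

n_classes = 3
confusion = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    confusion[t, p] += 1   # rows: true class, columns: predicted class

print(f"accuracy = {accuracy:.2f}")                      # 0.70
print("confusion matrix (rows=true, cols=predicted):")
print(confusion)  # off-diagonal entries reveal systematic misclassifications
```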
Pillar 3: Tool Confidence
The third pillar of AI safety is tool safety and tool confidence. Since AI software is mainly produced by data and tools rather than written by classical software developers, the tools, including the data handling tools, have a much higher impact on safety for AI. ISO 26262 has a chapter 8-11, “Confidence in the use of software tools”, which is referenced by ISO 21448, ISO 8800 and VDE-AR 2842-61, since this chapter prescribes a detailed approach to assessing tool risks via the so-called tool confidence level (TCL), as well as a tool risk reduction, called “tool qualification”, for the case that the tool cannot be used in a way that allows all potential tool errors to be detected with high probability (a small sketch of the TCL determination follows the data risk list below). ISO 26262 8-11 also requires ensuring that the tools are used during development and production of the system exactly as they have been classified and qualified; this is documented in a so-called tool safety manual. For AI systems, the data handling tools have to be considered in addition to the training and verification tools. Validas has developed a generic classification of these tools that can be adapted to the tools used in a specific AI system development. This classification addresses all data risks identified in ISO 8800:
- Accuracy
- Completeness
- Correctness
- Independence
- Integrity
- Representativeness
- Temporality
- Traceability
- Verifiability
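For illustration, the following Python sketch encodes the TCL determination scheme of ISO 26262 8-11: tool impact class TI1 always yields TCL1, while for TI2 the tool confidence level follows the tool error detection class (TD1 to TD3). The example tool in the comment is hypothetical:

```python
# Sketch of the TCL determination from ISO 26262 8-11: the tool impact (TI)
# and tool error detection (TD) classes yield the tool confidence level.
# TCL1 requires no qualification; TCL2 and TCL3 require tool qualification.
def tool_confidence_level(ti: int, td: int) -> int:
    """ti: 1 if a tool malfunction cannot introduce or mask an error in the
    product, else 2. td: 1 (high), 2 (medium) or 3 (low) probability of
    detecting or preventing such a tool error."""
    if ti == 1:
        return 1                    # no possible impact: lowest TCL
    return {1: 1, 2: 2, 3: 3}[td]   # TI2: TCL follows the detection class

# Hypothetical example: a data labeling tool whose faulty labels would reach
# the training set (TI2) and are only caught with medium probability (TD2):
print(tool_confidence_level(ti=2, td=2))  # -> 2, i.e. qualification needed
```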
At Validas AG we use a certified process and a framework with modeling tools and templates to qualify AI development and verification tools efficiently. Through tool confidence and tool qualification we enable the creation of compliant and safe AI systems.
Offering & Partners
Validas offers you the following services to make your AI system compliant and safe:
- Classification of your AI tool chain to make it compliant and safe
- Qualification of AI verification and data handling tools
- Classification and qualification of non-AI tools for the non-AI parts of the system
- A generic, pre-classified tool chain to develop your AI applications in compliance with ISO 8800 and to qualify your AI tools that require tool qualification
Together with our partners cogitron (verification method selection) and VALIDAITOR (AI verification tooling), we can guide you in bringing your AI system to market, covering all three pillars:
- Compliance with AI safety standards
- Verification of the AI system, including
  - Selection of the verification methods in compliance with safety standards like ISO 8800, VDE-AR 2842-61 and ISO 21448 (done by cogitron)
  - Compliant verification and validation of your AI system (using the VALIDAITOR platform)
- Tool confidence in the verification, data management and other tools
Next Steps
If you are interested in bringing your safety-relevant AI application to market, talk to us. Ideally, follow these steps:
- Connect with us: Start by connecting with our AI expert, Oscar Slotosch.
- Sign an NDA: Securely exchange details by signing a Non-Disclosure Agreement with Validas.
- Prepare Your Presentation: Showcase your AI application and how it can be made compliant and safe.
- Meet with Oscar: Discuss your project in detail and receive personalized guidance (link).
- Receive an Offer: Get a tailored offer for making your AI application compliant and safe.
- Start the Project: Once you accept the offer, we handle all sub-contracts with Cogitron and VALIDAITOR for you.
Market Readiness: We help you bring your AI innovations to market with confidence and a success guarantee.