Motivation
The accelerated developments in the field of Artificial Intelligence (AI) point to the need to treat "Trust" as a design principle rather than an option. Moreover, the design of AI-based critical systems, such as in avionics, mobility, defense, healthcare, finance, and critical infrastructures, requires proving their trustworthiness. AI-based critical systems must therefore be assessed across many dimensions by different parties (regulators, developers, customers, reinsurance companies, end-users) for different reasons. Whether we call it AI validation, monitoring, assessment, or auditing, the fundamental goal in all cases is to make sure the AI performs well within its operational design domain. Such assessment begins in the early stages of development, including the definition of the specification requirements for the system, the analysis, the design, etc. Trust and trustworthiness assessment have to be considered at every phase of the system lifecycle, including sale and deployment, updates, and maintenance. Full trustworthiness in AI systems can be established only if the technical measures are complemented by specifications for the governance and processes of the organizations that develop and use AI. A key issue is the application of Social Sciences and Humanities (SSH) methods and principles to handle human-AI interaction and to aid in the operationalisation of (ethical) values in design and assessment, providing important information on their actual impact on trust and trustworthiness. Thus, AI researchers and engineers are confronted with different levels of safety and security, different horizontal and vertical regulations, different (ethical) standards (including fairness and privacy), different homologation/certification processes, and different degrees of liability, which force them to examine a multitude of trade-offs and alternative solutions.
In addition, they struggle with values that need to be translated into concrete standards usable in assessment. Collaboration with SSH researchers to specify these standards is a central challenge in making sure that assessments also cover the normative/ethical aspects of trustworthiness. Judging AI-based systems merely by an accuracy percentage is highly misleading. Moreover, conventional methods for testing and validating software fall short, and it is difficult even to measure test coverage in principle. Due to the multi-dimensional nature of trust and trustworthiness, one of the main issues we face is to establish objective attributes such as accountability, accuracy, controllability, correctness, data quality, reliability, resilience, robustness, safety, security, transparency, explainability, fairness, privacy, etc., to map them onto AI processes and the system lifecycle, and to provide methods and tools to assess them. This shines a light on quality requirements ("-ilities", or non-functional requirements), which appear particularly challenging in an AI system, although many of them arise in any critical system. Beyond quality requirements, this can also encompass risk and process considerations. The expected attributes, and the expected values for these attributes, depend on contextual elements such as the level of criticality of the application, the application domain of the AI-based system, the expected use, and the nature of the stakeholders involved. This means that in some contexts certain attributes will prevail, and other attributes may be added to the list. Clear specifications of the non-functional requirements will help clarify these conflicts and can also spur innovation that resolves some of them, allowing us to fulfill more of them at the same time.
The goal of this symposium is to establish and grow a community of researchers and practitioners in AI trustworthiness assessment, leveraging AI sciences, system and software engineering, metrology, and SSH. The symposium aims to explore innovative approaches, metrics, and methods proposed by academia or industry to assess the trust and trustworthiness of AI-based critical systems, with a particular focus on (but not limited to) the following questions: