The development of artificial intelligence (AI) systems is progressing rapidly, and AI is now present in many industries, products, and services. There is no doubt that AI will affect our societies and our lives. Given this influence and the growing development and adoption of AI systems, questions of trust, behavior, and social impact must be considered. AI systems must be trustworthy and transparent, a topic that remains under active debate. This article helps the reader become familiar with the subject by summarizing existing and emerging standards, reports, regulations, audit and test proposals, and certification guidelines.
Trustworthiness of artificial intelligence
In recent years, many organizations across government, industry, and academia have discussed the trustworthiness of AI systems, and many articles, proposals, and standards have been published. This shows that the trustworthiness of AI systems is taken seriously around the world. The European Union's Artificial Intelligence Act (EU AI Act) is the first comprehensive regulatory proposal addressing the risks of developing and using AI. However, it is not the only relevant document on trust in AI: countless publications in recent years have discussed this issue from different angles.
In its draft version, the AI Act provides a definition of an AI system:
An artificial intelligence system is software developed with one or more specified techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions.
The next step is to define what a trustworthy AI system is. For this, we refer to the definitions provided by the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the European Commission (EC) in June 2018.
The group published the Ethics Guidelines for Trustworthy AI, which state that trustworthy AI should have the following components:
- Be lawful, complying with all applicable laws and regulations
- Human agency and oversight
- Technical robustness and safety, including resilience to attack and security
- Privacy and data governance, including data quality and integrity
- Transparency, including traceability, explainability, and communication
- Diversity, non-discrimination, and fairness, including avoidance of unfair bias, accessibility, and universal design
- Societal and environmental well-being
- Accountability
AI must be robust from both a technical and social perspective because, even with good intentions, AI systems can cause unintended harm.
In other words, trust in AI can be built on human-centered values and fairness that adequately address these requirements and characteristics. However, there is still no single, clear definition of the key requirements.
By adopting these principles, developers and organizations can ensure that AI systems are designed and deployed in a responsible, fair, and reliable manner. This helps build trust and confidence in AI systems, ensures they deliver the benefits they are designed for, and minimizes the risks and negative impacts they may have on individuals and society.
The EU Artificial Intelligence Act
The most prominent legislative proposal is the AI Act proposed by the EC. The proposed regulatory framework for artificial intelligence aims to:
- Ensure that AI systems are safe and comply with existing EU legislation
- Facilitate investment and innovation in artificial intelligence
- Strengthen governance and the effective enforcement of existing law as applied to AI systems
The proposal also singles out certain applications, such as the biometric identification and categorization of natural persons, for particular scrutiny.
How can we trust artificial intelligence systems?
Ensuring that artificial intelligence (AI) systems are trustworthy requires a multifaceted approach involving various stakeholders, including AI developers, users, policymakers, and regulators. Key strategies that can help build trust in AI systems include:
Ethical and responsible design:
Artificial intelligence systems must be designed and developed to act ethically and responsibly, ensuring that risks are minimized and benefits are maximized.
Explainability and transparency:
AI systems must be designed to be explainable and transparent, so that users can understand how the system works and how it makes decisions.
Accountability and monitoring:
There should be clear lines of accountability for the development, deployment, and operation of AI systems. This includes mechanisms to monitor and audit AI systems, and processes to identify and address any errors or biases that may occur.
Robustness and reliability:
AI systems must be designed to be robust and reliable. This includes ensuring the system is resilient against cyber-attacks, data breaches, and similar threats, and having contingency plans in place to deal with disruptions or failures.
Education and awareness:
Building trust in artificial intelligence systems requires education and awareness among users and developers.
In short, building trust in artificial intelligence systems requires a comprehensive approach that includes ethical and responsible design, explainability and transparency, accountability and monitoring, robustness and reliability, and education and awareness. By adopting these strategies, developers and organizations can help ensure that AI systems operate in a responsible and trustworthy manner and deliver the benefits they are designed to provide while minimizing the risks and negative effects they may have on individuals and society.
In general, how to test or verify the trustworthiness of AI systems is an emerging issue, and research and legislation in this area are still evolving. In this article, we have tried to provide a brief overview of this emerging field. ISQI, a leading provider of testing, inspection, and certification (TIC) services, is working to keep pace with these evolving needs.