Six questions to assess AI trustworthiness
Key principles for determining the trustworthiness of AI
We’ve been captivated by the concept of artificial intelligence (AI) since computers were first invented. We imagined all sorts of ways it could make our lives better, from friendly robots like The Jetsons’ maid Rosie to the benevolent and nearly omniscient starship computers of Star Trek. But in the real world, AI still has a long way to go before it becomes trustworthy enough to take on roles like those. A recent AI-generated article recommended a local food bank as a tourist destination, while another presented a supposedly chronological list of Star Wars movies and shows that was out of order. If we can’t trust AI to get these things right, how can we trust it to make decisions that really matter, like those related to health, finances or mission-critical operations?
Even the experts don’t yet agree on how much we should trust AI. In a recent Munk Debate on the subject, concerns ranged from deepfakes and propaganda to who has control over AI — and what might happen if we lose control over it entirely. All this makes people nervous and raises the question: Can AI be trusted? The answer can be “yes” if governments, enterprises, and individuals regulate, develop, and use AI responsibly. But there are trade-offs, and assessments must be made on a case-by-case basis to determine what’s required for trustworthiness in each situation.
What questions do we need to ask?
While some applications demand a higher level of trustworthiness than others, at a base level, all AI needs to be trustworthy. However, the definition of “trustworthy” will likely depend on individual priorities and on the requirements of a given situation — which can sometimes be at odds with trust factors.
Determining trustworthiness often comes down to a balancing act between priorities. The following considerations can each be viewed as a spectrum, with trade-offs made between one end and the other depending on the needs of a given use case.
- Transparency vs. impact risk: How did the AI make its decision?
Understanding why an AI made a given decision is a key part of trustworthiness, but it’s more important in some cases than others. If you need a loan to keep your business from going under, you want to be confident your bank’s AI made its decision to approve or deny your application soundly and logically. If you have concerns about the decision, you want it to be fully explainable, so you understand exactly what factors the AI considered and how it came to its final decision. On the other hand, if your favorite store uses AI to recommend products based on your online shopping habits, the stakes are much lower. If you don’t like its suggestions, you can simply ignore them, and you’re unlikely to spend much time wondering why it recommended what it did.
- Timeliness vs. human involvement: How fast does the decision need to be made?
Decisions that need to be made quickly are prime candidates for AI-based automation. On the network, for example, myriad small anomalies and issues occur all the time. Most of these don’t require a response, but those that do need to be addressed very quickly to avoid the kinds of performance issues and network downtime that can cost companies millions of dollars. This need for speed means it’s sometimes necessary to leave humans out of the loop. However, full logs should always be kept for review after the fact and to provide insight to continually improve the AI (see the sketch that follows this list).
- Data quality vs. investment: What data was used to train the AI?
Good decisions depend on good data, but creating and maintaining a high-quality data set requires time and investment. If you work in the medical field and rely on AI to make diagnoses, a decision based on faulty information, biased analysis or a misinterpretation of data could literally be a matter of life and death. In business, it could have severe reputational and financial implications. In these cases, it is vital to invest in intensive and ongoing data quality management.
- Data privacy vs. completeness: Does the AI need access to private data that must be protected?
While AI has a lot of potential to enhance customer services, data privacy is a significant concern. Full access to your service history (e.g. all of your call records and location history), combined with anonymized data from other customers, could help an AI offer the best service for your particular situation. But you might also be concerned about releasing that information, about how it might be integrated into the AI’s data set, and about how it could be misused for commercial purposes such as targeted advertising.
Although privacy controls generally protect this type of data, those same controls may make it harder to access the personal data needed to support important decisions (e.g. for root cause analysis in case of service outages). For these applications, there is a need to find an appropriate balance between protecting privacy and enabling access to key data that could improve business outcomes.
- Intellectual property/data protection vs. usefulness: Does the AI need access to intellectual property or other protected data?
With AI models dependent on access to massive data sets, some tools’ terms of use give the provider the right to incorporate any data users input into the tool’s base data set. If you want to use AI tools to analyze or innovate based on your company’s own proprietary materials, this may not be an option, as it would effectively make valuable intellectual property publicly available to the broader AI user community.
- Data security vs. impact risk: What are the consequences of a security breach?
For low-stakes applications, the impact of a data breach is likely minimal. For more critical applications, AI needs to be protected to avoid adverse outcomes. For example, someone could hack a self-driving car you’re travelling in and trick it into replacing a stop command with an accelerate command. The consequences of a breach like that could be severe, so it’s vital to ensure the AI is protected by top-notch security.
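To make the timeliness vs. human involvement trade-off concrete, here is a minimal sketch of an automated remediation loop that acts on high-severity network anomalies without a human in the loop while keeping a full audit log for after-the-fact review. All function names, thresholds and data in the sketch are illustrative assumptions, not any vendor’s actual implementation.

```python
# Sketch of the "timeliness vs. human involvement" trade-off: high-severity
# anomalies are remediated automatically (no human in the loop), everything
# else is flagged for review, and every decision is logged for later audit.
# All names, thresholds and data are illustrative assumptions.
import json
import logging
import time
from dataclasses import dataclass, asdict

logging.basicConfig(filename="remediation_audit.log", level=logging.INFO)

@dataclass
class Anomaly:
    node: str
    metric: str
    severity: float  # 0.0 (background noise) .. 1.0 (outage-level)

AUTO_REMEDIATE_THRESHOLD = 0.7  # act immediately above this severity

def detect_anomalies() -> list:
    # Placeholder for a real telemetry/ML detection pipeline.
    return [Anomaly("router-12", "packet_loss", 0.82),
            Anomaly("switch-03", "latency_jitter", 0.35)]

def apply_remediation(anomaly: Anomaly) -> str:
    # Placeholder for a real corrective action (reroute traffic, restart a card, ...).
    return f"rerouted traffic away from {anomaly.node}"

for anomaly in detect_anomalies():
    decision = {"timestamp": time.time(), "anomaly": asdict(anomaly)}
    if anomaly.severity >= AUTO_REMEDIATE_THRESHOLD:
        decision["mode"] = "automatic"      # no human in the loop
        decision["action"] = apply_remediation(anomaly)
    else:
        decision["mode"] = "human review"   # low urgency, keep a person involved
        decision["action"] = "flagged for operator"
    # The full decision record is kept so operators can audit outcomes
    # and feed the findings back into improving the AI.
    logging.info(json.dumps(decision))
```

The point of the sketch is that speed and oversight are not mutually exclusive: the human is removed from the decision path only where urgency demands it, and never from the review path.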
AI trustworthiness checklist
AI has massive potential for good, but it also brings the risk of harm, and it has already been known to invent “facts” that simply aren’t true. Here are some key factors to consider when assessing AI trustworthiness:
- Transparency: What information does the AI use and how does it use that information to deliver its output?
- Degree of autonomy: Does the AI present recommendations for a human to act on or does it take action based on its own calculations?
- Data quality: Where did the AI’s training data come from, how was it validated for accuracy, what steps were taken to avoid bias, and is the data kept up to date?
- Data privacy: How is private data accessed and used, who else has access to it, and how is it kept separate from public data?
- Data protection: How does the AI use intellectual property or other protected data, and should data incorporated into the base data set be public or protected?
- Data security: How is the AI protected from hacking and other security breaches?
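One way to apply these questions consistently is to record an answer for each factor every time a new AI use case is assessed. The sketch below shows one possible structure for such a record; the class, factor identifiers and example answers are illustrative assumptions rather than a prescribed standard.

```python
# Illustrative structure for recording a per-use-case trustworthiness assessment.
# The factor identifiers mirror the checklist above; everything else is assumed.
from dataclasses import dataclass, field

TRUST_FACTORS = [
    "transparency",        # what information is used and how it drives the output
    "degree_of_autonomy",  # recommendations only, or autonomous action
    "data_quality",        # provenance, validation, bias mitigation, freshness
    "data_privacy",        # access, use, and separation of private from public data
    "data_protection",     # handling of intellectual property and protected data
    "data_security",       # protection from hacking and other breaches
]

@dataclass
class TrustAssessment:
    use_case: str
    answers: dict = field(default_factory=dict)

    def record(self, factor: str, answer: str) -> None:
        if factor not in TRUST_FACTORS:
            raise ValueError(f"unknown trust factor: {factor}")
        self.answers[factor] = answer

    def open_questions(self) -> list:
        # Factors still unanswered; the assessment is incomplete until this is empty.
        return [f for f in TRUST_FACTORS if f not in self.answers]

# Example usage (hypothetical use case and answers):
assessment = TrustAssessment("loan application recommendations")
assessment.record("transparency", "decision factors disclosed to applicants on request")
assessment.record("degree_of_autonomy", "recommendation only; a human makes the final call")
print(assessment.open_questions())  # the factors that still need an answer
```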
Who’s responsible for AI?
To keep harmful consequences to a minimum, it is vital that stakeholders at all levels commit to principles of responsible AI development.
Governments and regulators must establish rules governing how AI can be used and set controls on who can develop and use it under what conditions — while balancing the need for a degree of freedom to support innovation. Enterprises must support the development of such rules and commit to building and deploying AI according to all applicable regulations. Individuals must refrain from knowingly using AI for harmful purposes.
For example, in a situation balancing transparency with impact risk, a government would set rules on what information companies need to provide about their decisions, and in which circumstances. The companies would build their tools accordingly and implement appropriate policies for the disclosure of decision data. Individuals would be responsible for understanding those rules and recognizing that not all decisions will offer the same level of transparency.
As an enterprise, Nokia is committed to supporting the responsible development of AI. We have implemented a framework based on the pillars of fairness; reliability, safety and security; privacy; transparency; sustainability; and accountability. We believe these principles should be applied the moment any new AI solution is conceived and then enforced throughout its development, implementation and operation stages.
No one-size-fits-all standard for trustworthiness
For AI to fulfill its promise of improvements and value for humanity, people need to trust it enough to use it. But determining the threshold for trust is not as simple as setting a single standard. Use cases across industries will have different thresholds, and these can vary even further among individual people.
But AI is here to stay, and it’s already becoming an integral part of new technology, including 6G networking. This will have a global impact, from the individual all the way up to entire industries. To ensure AI is developed and used responsibly, we all need to engage in a discussion about key trust factors, and how to balance them with functionality requirements.