
IEEE-USA’s New Guide Helps Companies Navigate AI Risks

Organizations that develop or deploy artificial intelligence systems know that the use of AI entails a diverse array of risks, including legal and regulatory consequences, potential reputational damage, and ethical issues such as bias and lack of transparency. They also know that with good governance, they can mitigate those risks and ensure that AI systems are developed and used responsibly. The objectives include ensuring that the systems are fair, transparent, accountable, and beneficial to society.

Even organizations that are striving for responsible AI struggle to evaluate whether they are meeting their goals. That’s why the IEEE-USA AI Policy Committee published “A Flexible Maturity Model for AI Governance Based on the NIST AI Risk Management Framework,” which helps organizations assess and track their progress. The maturity model is based on guidance laid out in the U.S. National Institute of Standards and Technology’s AI Risk Management Framework (RMF) and other NIST documents.

Building on NIST’s work

NIST’s RMF, a well-respected document on AI governance, describes best practices for AI risk management. But the framework does not provide specific guidance on how organizations might evolve toward the best practices it outlines, nor does it suggest how organizations can evaluate the extent to which they’re following the guidelines. Organizations therefore can struggle with questions about how to implement the framework. What’s more, external stakeholders including investors and consumers can find it challenging to use the document to assess the practices of an AI provider.

The new IEEE-USA maturity model complements the RMF, enabling organizations to determine their stage along their responsible AI governance journey, track their progress, and create a road map for improvement. Maturity models are tools for measuring an organization’s degree of engagement or compliance with a technical standard and its ability to continuously improve in a particular discipline. Organizations have used the models since the 1980s to help them assess and develop complex capabilities.

The maturity model’s activities are built around the RMF’s four pillars, which enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. The pillars are:

  • Map: The context is recognized, and risks relating to the context are identified.
  • Measure: Identified risks are assessed, analyzed, or tracked.
  • Manage: Risks are prioritized and acted upon based on their projected impact.
  • Govern: A culture of risk management is cultivated and present.

A flexible questionnaire

The foundation of the IEEE-USA maturity model is a flexible questionnaire based on the RMF. The questionnaire has a list of statements, each of which covers one or more of the recommended RMF activities. For example, one statement is: “We evaluate and document bias and fairness issues caused by our AI systems.” The statements focus on concrete, verifiable actions that companies can perform while avoiding general and abstract statements such as “Our AI systems are fair.”

The statements are organized into topics that align with the RMF’s pillars. Topics, in turn, are organized into the stages of the AI development life cycle, as described in the RMF: planning and design, data collection and model building, and deployment. An evaluator who’s assessing an AI system at a particular stage can easily examine only the relevant topics.
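To make that structure concrete, here is a minimal sketch, in Python, of how such a questionnaire might be represented. The statement text is the example quoted above; the field names, topic label, and filtering helper are illustrative assumptions, not details taken from the IEEE-USA model.

    from dataclasses import dataclass

    # Illustrative sketch only: field names and the topic label are
    # assumptions, not taken from the IEEE-USA questionnaire itself.
    @dataclass
    class Statement:
        text: str    # a concrete, verifiable action
        pillar: str  # "map", "measure", "manage", or "govern"
        topic: str   # topic grouping aligned with an RMF pillar
        stage: str   # life-cycle stage from the RMF

    questionnaire = [
        Statement(
            text="We evaluate and document bias and fairness issues "
                 "caused by our AI systems.",
            pillar="measure",
            topic="bias and fairness",
            stage="data collection and model building",
        ),
        # ...further statements covering the other RMF activities
    ]

    # An evaluator assessing a system at one life-cycle stage can
    # filter down to just the relevant statements.
    def statements_for_stage(stage: str) -> list[Statement]:
        return [s for s in questionnaire if s.stage == stage]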

Scoring guidelines

The maturity model includes scoring guidelines along three dimensions, which reflect the ideals set out in the RMF (a rough sketch of how the dimensions might combine follows the list):

  • Robustness, ranging from ad hoc to systematic implementation of the activities.
  • Coverage, ranging from engaging in none of the activities to engaging in all of them.
  • Input diversity, ranging from having activities informed by input from a single team to diverse input from internal and external stakeholders.
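As a rough illustration of how the three dimensions could feed a per-statement score, consider the sketch below. The 1-to-5 scale and the equal-weight average are assumptions made for the example, not the maturity model’s actual formula.

    from dataclasses import dataclass

    # Assumed 1-5 scale and equal weighting; the maturity model's own
    # scoring rules may differ.
    @dataclass
    class StatementScore:
        robustness: int       # ad hoc (low) to systematic (high)
        coverage: int         # none of the activities (low) to all (high)
        input_diversity: int  # single team (low) to broad stakeholders (high)
        evidence: list[str]   # documentary evidence backing the score

        def overall(self) -> float:
            return (self.robustness + self.coverage + self.input_diversity) / 3

    score = StatementScore(
        robustness=4,
        coverage=3,
        input_diversity=2,
        evidence=["internal procedure manual", "2024 annual report"],
    )
    print(score.overall())  # 3.0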

Evaluators can choose to assess individual statements or larger topics, thus controlling the level of granularity of the assessment. In addition, the evaluators are meant to provide documentary evidence to explain their assigned scores. The evidence can include internal company documents such as procedure manuals, as well as annual reports, news articles, and other external material.

After scoring individual statements or topics, evaluators aggregate the results to get an overall score. The maturity model allows for flexibility, depending on the evaluator’s interests. For example, scores can be aggregated by the NIST pillars, producing scores for the “map,” “measure,” “manage,” and “govern” functions.
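Continuing the sketch, pillar-level aggregation can be as simple as grouping per-statement scores by their pillar tag and averaging. The helper below is illustrative; the example scores are invented, on the same assumed 1-to-5 scale.

    from collections import defaultdict

    # Invented example data: (pillar, per-statement score) pairs.
    scored = [
        ("map", 4.0), ("map", 3.5),
        ("measure", 3.0),
        ("manage", 2.5), ("manage", 3.0),
        ("govern", 4.5),
    ]

    def aggregate_by(key_scores: list[tuple[str, float]]) -> dict[str, float]:
        buckets: dict[str, list[float]] = defaultdict(list)
        for key, score in key_scores:
            buckets[key].append(score)
        # Average the scores within each group.
        return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

    print(aggregate_by(scored))
    # {'map': 3.75, 'measure': 3.0, 'manage': 2.75, 'govern': 4.5}

Because the grouping key is just a tag on each scored statement, the same helper works for topics or life-cycle stages without rescoring anything.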

The aggregation can expose systematic weaknesses in an organization’s approach to AI responsibility. If a company’s score is high for “govern” activities but low for the other pillars, for example, it might be creating sound policies that aren’t being implemented.

Another option for scoring is to aggregate the numbers by some of the dimensions of AI responsibility highlighted in the RMF: performance, fairness, privacy, ecology, transparency, security, explainability, safety, and third-party (intellectual property and copyright). This aggregation method can help determine if organizations are ignoring certain issues. Some organizations, for example, might boast about their AI responsibility based on their activity in a handful of risk areas while ignoring other categories.
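The same tagging approach can surface those blind spots directly: a dimension with few or no scored statements stands out immediately. A minimal check, with invented tags, might look like this.

    # RMF responsibility dimensions, as listed above.
    DIMENSIONS = [
        "performance", "fairness", "privacy", "ecology", "transparency",
        "security", "explainability", "safety", "third-party",
    ]

    # Invented example: dimensions touched by an organization's scored statements.
    scored_dimensions = ["performance", "fairness", "performance", "security"]

    # Dimensions with no scored activity are potential blind spots.
    neglected = [d for d in DIMENSIONS if d not in scored_dimensions]
    print(neglected)
    # ['privacy', 'ecology', 'transparency', 'explainability', 'safety', 'third-party']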

A road toward better decision-making

When used internally, the maturity model can help organizations determine where they stand on responsible AI and can identify steps to improve their governance. The model enables companies to set goals and track their progress through repeated evaluations. Investors, buyers, consumers, and other external stakeholders can employ the model to inform decisions about the company and its products.

When used by internal or external stakeholders, the new IEEE-USA maturity model can complement the NIST AI RMF and help track an organization’s progress along the path of responsible governance.
