WASHINGTON – In comments submitted today to the National Institute of Standards and Technology (NIST) on draft NISTIR 8312: Four Principles of Explainable Artificial Intelligence, global tech trade association ITI expressed support for the development of tools to help achieve meaningfully explainable artificial intelligence (AI) systems and encouraged NIST to continue its stakeholder engagement to clarify the limitations of explainability, among other areas, in its final document.

“We support the development of meaningfully explainable AI systems and increasing the availability of tools to achieve this goal,” ITI wrote. “To the extent NIST is endeavoring to create a common language or lexicon to better equip stakeholders to engage in more informed and meaningful discussions around these topics, that is a laudable goal worth pursuing. We appreciate NIST’s desire to begin a conversation about this important subject and encourage NIST to continue to engage with stakeholders as it seeks to refine this document.”

In the comments, ITI shares its view that while explainability plays an important role in AI, not every AI system needs to be explainable; in some cases, requiring explainability may be technologically infeasible and may also hamstring continued industry innovation in AI applications. The association specifically asks NIST to further examine the limitations of explainability and the potential pitfalls of requiring it in all cases, particularly since other approaches, such as system reliability or engineering controls, may in some instances better address goals like ensuring the safety and trustworthiness of an AI system.

ITI also encourages NIST to incorporate additional recommendations as it refines the document, including:

  • Recognize that garnering acceptance of AI systems is a shared responsibility;
  • Reconsider the use of the term “principles,” while also adding more rigor to other key terms used in the document;
  • Consider the limitations of explainability principles;
  • Consider that when the risks are low, it may not be necessary for every principle laid out in the paper to be met;
  • Provide further clarity around the scope of AI as addressed in the paper; and
  • Clarify the goal of the paper as well as the intended audience.

Read ITI’s full comments here.
