Friday, September 17, 2021

How Can We Ensure That A.I. Is Responsible?

Third-party private testing of A.I. and its source data may be essential to fair evaluation

By Wilson Miles, published by RealClear Science

September 17, 2021 -- C-level executives often have access to overwhelming amounts of data, yet struggle to analyze it effectively and pull actionable insights from it. Artificial intelligence (AI) technologies seek to address this problem by enabling computer systems—trained on large data sets—to model human problem-solving, including finding patterns, performing object recognition, and making predictions.

However, there are risks associated with AI: from inaccurate facial recognition causing false arrests to an AI chatbot—designed by the American company OpenAI—refusing to talk about topics deemed sensitive by the Chinese Communist Party. Perhaps the defining characteristic of AI-enabled systems is that their decisions reflect all biases—known and unknown—in the data with which they are trained.
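To make that concrete, here is a minimal sketch of the kind of bias check an auditor might run against a model's decisions: comparing how often each demographic group receives a favorable outcome. The decision lists and group split below are hypothetical illustrations, not data from any system mentioned in this article.

```python
# Minimal sketch of a demographic-parity check on a model's decisions.
# The decisions and group split below are hypothetical illustrations.

def selection_rate(decisions):
    """Share of cases that received a positive decision (e.g., loan approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs (1 = approve, 0 = deny) split by demographic group.
group_a_decisions = [1, 1, 1, 0, 1, 1, 0, 1]
group_b_decisions = [1, 0, 0, 0, 1, 0, 0, 1]

rate_a = selection_rate(group_a_decisions)
rate_b = selection_rate(group_b_decisions)

# A large gap suggests the model has absorbed a skew from its training data
# and warrants review before deployment.
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```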

This makes greater validation and governance of automated systems, or ‘responsible AI’, critical to avoiding AI-related accidents. Responsible AI includes striving for the maximum possible degree of explainability and accountability. The term also refers to a disciplined AI governance structure, in which there is active supervision of the training and deployment of AI systems.
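Explainability in particular can be approached with concrete techniques. Below is a minimal sketch of one such technique, permutation importance, which estimates how much each input feature drives a model's decisions by shuffling that feature and measuring the drop in accuracy. The toy model and data are hypothetical stand-ins for a production model and its held-out test set.

```python
# Minimal sketch of one explainability technique: permutation importance.
# The toy model and data are hypothetical stand-ins.
import random

random.seed(0)

# Hypothetical held-out examples: (features, true label).
data = [([0.9, 0.1, 0.4], 1), ([0.2, 0.8, 0.5], 0), ([0.7, 0.3, 0.9], 1),
        ([0.1, 0.9, 0.2], 0), ([0.8, 0.2, 0.6], 1), ([0.3, 0.7, 0.1], 0)]

def model(features):
    """Stand-in trained model: a fixed linear rule over three features."""
    score = 2.0 * features[0] - 1.5 * features[1] + 0.1 * features[2]
    return 1 if score > 0 else 0

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

baseline = accuracy(data)
for i in range(3):
    # Shuffle one feature column; the accuracy drop approximates its importance.
    column = [x[i] for x, _ in data]
    random.shuffle(column)
    shuffled = [(x[:i] + [column[j]] + x[i + 1:], y) for j, (x, y) in enumerate(data)]
    print(f"Feature {i}: importance ~ {baseline - accuracy(shuffled):.2f}")
```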

Responsible AI, now at the forefront of many AI-related discussions, needs to move away from theory into practice. To facilitate responsible AI, adopters of AI ought to pursue a certification from an independent third party to ensure—through outside review—that the systems they are fielding have been audited for bias that may run counter to their operational goals.

AI systems are becoming integrated into the real world at an accelerating rate despite a lack of universal guidelines on implementation and validation. As our economy, health, and security become more dependent on these systems, their limitations create risk for companies and customers alike. Ultimately, AI technology can put real lives at stake.

Many business leaders don’t have a clear view into what their organization is doing to govern AI, or what new government regulations might lie ahead. While the European Union is working to create guidelines for companies’ use of AI, the US is lagging behind in establishing a legal and regulatory framework to guide AI’s use, amplifying the potential for accidental AI disasters.

As states, international organizations, and private companies attempt to come to a consensus on regulating AI, companies are left to navigate multiple competing viewpoints and to regulate themselves. The question then becomes: can we trust organizations to create and operationalize guidelines that are interpretable, fair, safe and respectful of a user’s privacy? Even if there is a clear international understanding of ethical AI standards, is there a mechanism for holding companies accountable to those standards?

The best way to ensure adopters of AI use their software responsibly is for an independent third party—from the private sector—to certify their clients’ AI systems. The certification process entails providing best practices, which can include details on which data can be collected and used, how models should be evaluated, and how best to deploy and monitor models. This self-accrediting framework can also define who is accountable for any negative outcomes of AI.
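As one illustration of what the monitoring portion of such best practices might look like in code, the sketch below flags an input feature whose production values have drifted away from what the model saw during training. The feature values and the three-standard-deviation threshold are hypothetical choices, not requirements from any certification body.

```python
# Minimal sketch of one post-deployment monitoring check a certification
# framework might require: flagging drift in an input feature. The training
# and production values below are hypothetical.

def mean(values):
    return sum(values) / len(values)

def std(values):
    m = mean(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

# Hypothetical feature values seen at training time vs. in production.
training_values = [34, 36, 35, 33, 37, 36, 34, 35]
production_values = [44, 46, 45, 47, 43, 45, 46, 44]

# Flag the feature if production values drift more than 3 training
# standard deviations from the training mean (an illustrative threshold).
shift = abs(mean(production_values) - mean(training_values))
threshold = 3 * std(training_values)
print("Drift detected" if shift > threshold else "Within expected range")
```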

If a company’s AI fails, the failure can likely be attributed to one of three sources: 1) a person or people, 2) a process, or 3) the AI technology itself. An AI certification provides validation of a company’s risk profile, inclusive of process, technology, data, people, and culture.

A certification does pose unique challenges. A recent GAO report stated that independent audits are complicated because an AI system can be a “black box” in which an organization’s software is difficult to understand, or because “vendors [will] not reveal them for proprietary reasons.”

An AI certification should be designed as a self-accrediting process so that it does not become a barrier to innovation, yet it should still provide concrete, actionable steps—similar to the Failure Mode and Effects Analysis (FMEA) approach—to identify possible issues, like unintentional bias, and mitigate them before they cause harm to the organization.
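For readers unfamiliar with FMEA, the sketch below shows its core scoring step adapted to AI risks: each candidate failure mode is rated for severity, likelihood of occurrence, and difficulty of detection, and the product of the three (the Risk Priority Number) determines what gets mitigated first. The failure modes and ratings are hypothetical examples, not items from any published certification standard.

```python
# Minimal sketch of an FMEA-style scoring step an AI certification could
# borrow: rank failure modes by Risk Priority Number (severity x occurrence
# x detectability, each rated 1-10). The entries below are hypothetical.

failure_modes = [
    # (description, severity, occurrence, detectability)
    ("Training data under-represents a demographic group", 8, 6, 5),
    ("Model accuracy degrades as input data drifts", 6, 7, 4),
    ("No documented owner for model decisions", 7, 4, 8),
]

# Higher RPN = address first; mitigation should happen before deployment.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for description, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:3d}: {description}")
```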

The AI future runs on data. Both the private and public sectors need confidence in the future of AI technology if it is to succeed en masse. Until all parties understand the benefits of AI, as well as the ethical and national security risks posed by insufficiently validated machine learning models, an AI certification is the only way to provide that confidence.

Wilson Miles is a master’s candidate in U.S. Foreign Policy and National Security at American University.

https://www.realclearscience.com/articles/2021/09/17/how_can_we_ensure_that_ai_is_responsible_794856.html

