AI/ML Best Practices: Not Just for Business Anymore
As Artificial Intelligence and Machine Learning (AI/ML) solutions proliferate, it is a foregone conclusion that they have become an integral component of national security and intelligence operations. No longer confined to the worlds of finance, healthcare, and online commerce, digital decision making is now making a more public appearance in intelligence problem-solving. It has no doubt existed there for years, and has even led the way in some implementations, but the intelligence sector is now openly acknowledging and codifying challenges with which businesses have long been grappling. Technology is being used to augment more traditional processes and to identify threats, trends, and anomalies. Still, this sector has a heightened need for AI controls, operating in a world where the protection of privacy, individual rights, and civil liberties is paramount.
Data Analytics Critical to ODNI Core Mission
As a broad overview, the Office of the Director of National Intelligence (ODNI) leads the Intelligence Community (IC) toward the following mission, paraphrased here for brevity: the core mission of the ODNI is to lead the IC in intelligence integration, synchronizing collection, analysis, and counterintelligence functions so they operate as one team. The ultimate vision is a nation made more secure by a fully integrated, agile, resilient, and innovative intelligence community.
The ODNI coordinates the efforts of four specialized areas:
- National Counterterrorism Center, providing counterterrorism analysis and sharing information with partners
- National Counterproliferation Center, to prevent the proliferation of weapons of mass destruction
- National Counterintelligence and Security Center, to counteract threats to information and critical security assets
- Cyber Threat Intelligence Integration Center, to integrate intelligence on cyber incidents and mitigate adversarial electronic threats
A Coming Out Party for AI Guidelines in Intelligence
To fulfill its core mission, the IC has consistently led the way in using AI and automated decision making to augment its security efforts. This technological reality, though sometimes more private than public, underscores that the effective use of AI is permeating a broader swath of applications, moving beyond commercial uses to include other critical areas, such as national security. The technology is being employed to solve intelligence problems, identify patterns, and uncover threatening activities or scenarios.
Protecting Privacy, Civil Liberties
Still, the use of AI in all its implementations needs to ensure that its outcomes and applications are ethical and without bias, trustworthy and transparent, and, especially important in the case of the IC, in line with American values. The IC's goal is to execute successful intelligence missions and obtain solutions while upholding the highest standards of privacy and the protection of civil liberties.
Again, not unlike in the business world, individual organizations using AI and the technology providers who deliver AI services strive for consistent and reliable results. To that end, the IC recently published AI principles for the intelligence community. These similarly include transparency, explainability, confidentiality, and the defensibility of unbiased system conclusions as the touchstones for AI compliance. The full press release and articulation of these Principles and Framework can be found here.
A Collaborative Development Effort
IC data scientists, a cadre of privacy and civil liberties officers, and other key stakeholders were all central to the development of the AI Ethics Framework and its incorporation into both the design and use of AI technology. The Framework provides public insight into the factors the IC considers when deciding whether and how to use this technology to counter national security threats, including lawfulness, equitability, security, and human intervention.
To quote Ben Huebner, ODNI Civil Liberties Protection Officer, “We must ensure that our intelligence activities produce objective intelligence while protecting privacy and civil liberties. The use of AI provides new opportunities, but we must decide how to best use it to advance our mission, and the Principles and Framework will provide a consistent approach.” The full list of framework criteria and associated guidelines are available for review here.
CognitiveScale’s Cortex Certifai Trust as a Service
And while guidelines help steer ethical practices and the accuracy of results, they cannot by themselves codify and rate the validity of input data, AI learning models, and the overall viability of solution results. CognitiveScale, the trusted AI company, has been leading the way toward this endgame with its Cortex Certifai software, which allows AI developers and aspiring system users to detect issues in and score AI models before deployment, effectively building trustworthy intelligent data systems. Fully aligned with the framework established by the IC and other leading AI experts, Certifai provides a practical and insightful way to ensure that explainable and honest digital systems are the ultimate product, whether in the public or commercial sector. For more information, visit our webpage.