AI is Giving Enterprise Risk Management a Boost
UNLIMITED DATA | BY JAMES KULICH | 6 MIN READ
In the early days of COVID-19, an amazing story about the power of artificial intelligence emerged from South Korea.
As reported in a post on CNN.com, Korean biotech firm Seegene developed a fully functioning test for the novel coronavirus in just three days, without having a single sample of the actual virus available. Scientists at Seegene used artificial intelligence and a genetic blueprint of the virus to create their clinical test, which went into production shortly thereafter.
A key step in this work, indeed in any application of artificial intelligence, is making the right prediction. I know little about biotechnology, but the Seegene scientists were undoubtedly able to use features from the genetic blueprint to predict whether a clinical sample was likely to contain the novel coronavirus rather than some other pathogen. The immediate result, in this case, was the development of a medical test. The ultimate hoped-for result is a significant reduction of a major human health risk.
Industry Examples of Enterprise Risk Management
Artificial intelligence is at work in a number of industries to identify risk and reduce its impact. Machine learning algorithms that flag fraudulent credit card transactions are in widespread use in the financial services sector and have led to the development of AI systems that readily enable both consumers and vendors to act when a potential issue arises.
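To make the idea concrete, here is a minimal sketch of transaction flagging. Real fraud systems use rich behavioral features and sophisticated models; this toy version, with hypothetical purchase data, simply scores how far a new charge deviates from an account's typical spending:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the account's
    typical spending. A z-score on amount alone is a toy stand-in for the
    far richer features and models banks actually use."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return False
    return abs(amount - mu) / sigma > threshold

# A small history of routine card purchases (hypothetical data).
history = [23.50, 41.00, 18.75, 36.20, 29.99, 31.40, 27.10]

print(is_suspicious(history, 4500.00))  # large outlier -> True
print(is_suspicious(history, 35.00))    # routine amount -> False
```

The key design point carries over to production systems: the model produces a score, a threshold turns the score into an alert, and the alert is routed so that consumers and vendors can act quickly.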
As described in a post by Archer Software, banking giant JP Morgan Chase recently introduced its Contract Intelligence (COiN) platform. This AI-based tool can analyze thousands of commercial agreements in seconds, extracting important clauses and other data. Bank of America deployed its Intelligent Virtual Assistant, which uses AI, to provide guidance to its millions of customers. In each case, quality of service improves and risks are reduced.
Examples abound in many other industries. In a recent article in the journal Safety Science, authors Nicola Paltrinieri, Louise Comfort, and Genserik Reniers show how deep learning, a cutting-edge branch of data science, can be used to assess risk in offshore oil and gas drilling operations.
At a recent conference sponsored by BayesiaLab, speakers described how they were using machine learning and artificial intelligence to address risks ranging from customer churn in a competitive market to the spread of antibiotic-resistant infections in a commercial farming setting to state-level hacking of major Web platforms. The pattern is similar in each case: identify a key indicator of risk, use machine learning to predict when that indicator will reach unacceptable values, and use artificial intelligence to automate the deployment of these predictions in ways humans can use.
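The pattern above can be sketched in a few lines. This is a deliberately simple illustration, assuming a hypothetical churn-rate indicator and a plain linear trend; real systems would use richer models, but the shape is the same: track the indicator, predict when it will reach an unacceptable value, and automate the alert.

```python
def fit_trend(values):
    """Ordinary least-squares slope and intercept over time steps 0..n-1."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    slope = sxy / sxx
    return slope, y_mean - slope * x_mean

def breach_expected(values, limit, horizon):
    """Extrapolate the trend and report whether the indicator is predicted
    to cross `limit` within the next `horizon` periods."""
    slope, intercept = fit_trend(values)
    return any(intercept + slope * (len(values) + h) > limit
               for h in range(horizon))

# Hypothetical monthly churn rates trending upward.
churn = [0.020, 0.022, 0.025, 0.027, 0.030]
print(breach_expected(churn, limit=0.04, horizon=6))  # True: trend crosses 4%
print(breach_expected(churn, limit=0.06, horizon=6))  # False within 6 months
```

In practice the prediction step would be a trained machine learning model rather than a straight line, and the final `True` would trigger an automated workflow a human can act on.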
As my colleague John Aaron described in our recent post on intelligent automation, AI can be used to identify risks in business processes and guide actions to reduce them. Rich information is available in business systems, project logs and external data sources, often in unstructured forms such as text. John’s current work focuses on tapping the potential of these data sources to create systems that reduce business risk.
Unintended Consequences of AI
AI can be a powerful tool for risk reduction, but it has a dark side. Like humans, AI can be affected by bias. An algorithm that is deployed without appropriate human oversight can make use of predictions that are discriminatory, violating legal and ethical standards. Sometimes, legitimate efforts to boost model performance by engineering input variables can inadvertently stack the deck, causing a model to favor certain outcomes that do not stand the test of time. Appropriate risk and benefit tradeoffs need to be included in model designs as safeguards against predictions that could be harmful in the long run.
In a recent McKinsey article, "Derisking machine learning and artificial intelligence," the authors identify ways to augment traditional technical approaches for validating machine learning models to include a greater focus on reducing the overall risks associated with AI. These include:
- attention to model interpretability, the ability of domain experts to understand not only what a model is predicting but why it makes its choices;
- the use of “challenger models” to reduce the risk of model bias by bringing independently built models, with different perspectives, to bear on the same problem;
- human review of parameter settings used by models to make their predictions to ensure that bias is not being encoded in a model’s design; and
- explicit attention to criteria, beyond technical considerations, that will determine if a model is ready for deployment.
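The challenger-model idea can be illustrated with a small sketch. The case IDs and predictions below are hypothetical; the point is the mechanism: run an independently built challenger alongside the production ("champion") model, queue disagreements for human review, and track the disagreement rate as a warning sign.

```python
def challenger_review(case_ids, champion_preds, challenger_preds):
    """Compare a champion model's predictions against an independently
    built challenger. Cases where the two disagree are flagged for human
    review; the disagreement rate is tracked as a model-risk signal."""
    disagreements = [cid for cid, a, b in
                     zip(case_ids, champion_preds, challenger_preds)
                     if a != b]
    rate = len(disagreements) / len(case_ids)
    return disagreements, rate

# Hypothetical approve/deny decisions (1 = approve) on four applications.
cases = ["app-01", "app-02", "app-03", "app-04"]
champion = [1, 0, 1, 0]
challenger = [1, 1, 1, 0]

flagged, rate = challenger_review(cases, champion, challenger)
print(flagged, rate)  # ['app-02'] 0.25
```

A rising disagreement rate, or disagreements concentrated in a particular group of cases, is exactly the kind of signal that should send a model back for the human review the McKinsey authors call for.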
The approach we have used from the start in Elmhurst University's Master's in Data Science program addresses the concerns raised in the McKinsey article. Our ultimate goal is to use data science to create value in a responsible and enduring way. We dive deeply into all of the technical elements of this work, but always in a way that focuses on larger goals.
Lean Six Sigma and Project Management
One specific way we do so is by establishing connections between data science and other areas of practice. Techniques like Lean Six Sigma, long used in manufacturing settings to reduce error and eliminate waste, are excellent complements to the approaches we use in data science.
The discipline of project management, in its many forms, provides the structures needed to address risks arising from our work and to create value in the larger contexts in which we operate. Good project management requires you to think about why you are making project choices, not just about the mechanics.
By guiding our students to explicitly incorporate these practices in their data science efforts, we prepare them to meet the needs of the organizations they serve in ways that endure.
I welcome your thoughts and comments.