
How Can You Create Ethical and Explainable AI?

16 MIN READ

03/18/2021
Ethical AI/ML

Strategies and Principles for Making AI Work in the Public Sector  

Decision making is an art and a science. While Machine Learning (ML) models are gaining traction and scale in the public sector, it is important not to lose sight of why a decision was made. This understanding retains the human connection to the model and meets ethical and regulatory considerations. 

MetroStar’s approach to Artificial Intelligence (AI) is based on applying the principles of Human-Centered Design (HCD) to ML. We start by developing the use-case for how ML predictions impact end users, defining any adverse actions and regulatory requirements associated with the ML system. While our ML Engineers are trained in deep learning techniques, we place emphasis on an explainable class of ML algorithms (e.g., decision trees, linear regression) as well as crafting humanly understandable features. Our interpretability approach further enables fairness, privacy, and safety evaluation criteria for our ML projects.  

While our HCD approach drives the ML requirements and the explainability criteria, we embed data science best practices such as experiment tracking, data lineage, and reproducibility as prerequisites for operationalizing a model. Once an ML model is deployed, we leverage continuous monitoring and automation to determine the health of the model at the time of prediction.  

The MetroStar Approach 

Human-Centered Design 

Our HCD approach starts with identifying how ML predictions affect end users and understanding the adverse actions that could occur along their journey maps. We incorporate feedback early in the process to inform the explainability criteria as a business outcome that satisfies the use-case requirements. If a human-in-the-loop process is required, we incorporate design elements that make it easy for people to understand and act on the ML outputs.  

Explainability is the degree to which a person can understand the cause of a decision. For example, in a credit card risk model, a borrower may want to know the reasons behind their declined application. In this case, the ML system needs to be able to generate the reasons why an application was declined (e.g., the debt-to-income ratio is higher than 50 percent).  
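As a minimal illustration of this kind of reason generation, the sketch below maps a hypothetical applicant record to human-readable decline reasons; the feature names, thresholds, and messages are illustrative assumptions, not a production rule set.

```python
# A minimal sketch of reason-code generation for a declined application.
# Feature names, thresholds, and messages below are hypothetical.

def decline_reasons(applicant: dict) -> list:
    """Return human-readable reasons an application was declined."""
    reasons = []
    if applicant["debt_to_income"] > 0.50:
        reasons.append("Debt-to-income ratio is higher than 50 percent")
    if applicant["credit_history_months"] < 24:
        reasons.append("Credit history is shorter than 24 months")
    if applicant["recent_delinquencies"] > 0:
        reasons.append("Delinquencies reported in the last 12 months")
    return reasons

print(decline_reasons({
    "debt_to_income": 0.62,
    "credit_history_months": 48,
    "recent_delinquencies": 0,
}))  # ['Debt-to-income ratio is higher than 50 percent']
```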

Explainability 

We define explainability as the degree to which a human can understand the cause of a decision. MetroStar’s experts start with intrinsically explainable ML techniques (e.g., decision trees, logistic regression), then build post-hoc model understanding with model-agnostic methodologies (e.g., partial dependence plots, SHAP, or LIME), and generate example-based explanations for developing counterfactual ML explanations. Our approach revolves around asking three key questions: 

  • Are the features being used understandable by humans? 

  • Are the algorithm and training process transparent to the model consumer? 

  • Is the ML technique used explainable? 

Even as these questions are answered, model-agnostic or post-hoc explanation tools can be exploited, and explanations can be misunderstood. For use-cases with high explainability needs, we recommend intrinsically explainable modeling techniques that can be read as if-then statements and can generate explainable predictions. 
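As an example of what an if-then readable model looks like, the sketch below trains a shallow decision tree on a public dataset and prints its learned rules; the dataset and tree depth are illustrative choices, not a recommendation for any specific use-case.

```python
# A minimal sketch of an intrinsically explainable model: a shallow decision
# tree whose learned logic can be read directly as if-then statements.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the decision rules as nested if-then statements.
print(export_text(tree, feature_names=list(data.feature_names)))
```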

Example Approaches to Making ML Models Explainable 

  • Explainable ML – Leverages feature importance metrics as well as model introspection techniques to understand how the model works. This class of models is generally monotonic in its feature-target relationships and can be interpreted at a modular level. These methods also work well when a more explainable technique is needed to distill a black-box model into an explainable one. 

  • Model-Agnostic Techniques – Treating the model as a black box, we leverage aggregate and game-theoretic methods, such as feature importance, partial dependence plots, and SHAP values, to understand how features drive predictions (a minimal sketch follows this list).  

  • Example-Based Explanations – Example-based explanations help humans construct mental models of the ML model and the data it was trained on. They are especially helpful when the input data is itself understandable by humans (e.g., images). This technique is also useful for generating adversarial examples. 
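The sketch below shows two common model-agnostic views on a tree ensemble using scikit-learn utilities, permutation feature importance and partial dependence; the dataset and feature indices are illustrative, and the optional SHAP lines assume the open-source shap package is installed.

```python
# A minimal model-agnostic sketch: treat the fitted model as a black box and
# probe it with permutation importance and partial dependence.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt the score?
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean[:5])

# Partial dependence: the average predicted response as one feature varies.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])

# Optionally, SHAP values give game-theoretic per-prediction attributions:
# import shap
# shap_values = shap.TreeExplainer(model).shap_values(X[:100])
```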

Fairness 

Our explainable ML approach complements the evaluation of ML models against the fairness criteria created during use-case definition. Our approach ensures that sensitive variables (age, sex, race, etc.) are identified for privacy and, in most cases, excluded from training. To verify that the variables included in the model do not correlate with a protected class, we leverage an approach known as group fairness, which uses the protected variables to quantify fairness.  
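A minimal sketch of the group-fairness idea, assuming hypothetical column names and toy predictions: compare the positive-prediction (selection) rate across values of a protected attribute and report the gap.

```python
# A minimal group-fairness sketch: demographic parity across a protected group.
# Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 0, 1, 0],                  # 1 = favorable outcome
    "protected":  ["A", "A", "A", "B", "B", "B", "B", "A"],  # protected group label
})

# Selection rate (share of favorable outcomes) per group.
rates = df.groupby("protected")["prediction"].mean()
print(rates)

# Demographic parity difference: gap between the highest and lowest group rates.
print("parity gap:", rates.max() - rates.min())
```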

Experiment Tracking & Data Catalog 

Data is the key input from which ML algorithms learn and generate future predictions. Therefore, it is of utmost importance to ensure that data sources are trusted and cataloged for easy discoverability. MetroStar helps organizations uncover value from their data assets by first cataloging the available data, capturing data lineage and provenance, and labeling information that is useful for training downstream ML models. As teams of data scientists create new models and experiment with the data, MetroStar’s Experiment Tracking system natively integrates open-source tools and platforms to log experiment results along with the best hyper-parameters for further study.  
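MetroStar’s internal tracking system is not shown here; as a stand-in, the sketch below logs hyper-parameters and a cross-validation result with the open-source MLflow tracker, one of the tools such a system can integrate. The run name, parameter values, and dataset are illustrative assumptions.

```python
# A minimal experiment-tracking sketch using the open-source MLflow tracker.
# Run name, parameters, and dataset are illustrative only.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
params = {"C": 1.0, "max_iter": 500}

with mlflow.start_run(run_name="baseline-logreg"):
    mlflow.log_params(params)                       # hyper-parameters under study
    model = LogisticRegression(**params)
    score = cross_val_score(model, X, y, cv=5).mean()
    mlflow.log_metric("cv_accuracy", score)         # experiment result for comparison
```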

Reproducibility  

Reproducibility is the ability to generate the same results as the original ML experiment using the same data, developer environments, and algorithm parameters. At MetroStar, we start with immutable and well-catalogued data so that ML pipelines can be re-run from the source code repository to reproduce the results. We approach reproducibility for ML from the following perspectives: 

  • Data Science Environments & Upgradability – We leverage conda and Docker to specify and build the environment container images required to train an ML model. 

  • Immutable Data Store and Catalog with Labeled Data – We ensure that all training and labeled data is stored in an immutable format so that we can easily fetch the precise training and testing datasets required to reproduce an analysis. 

  • ML Pipeline – We build repeatable ML pipelines that include data pre-processing, feature engineering, and the required model evaluation steps. The pipeline is version controlled and integrates easily with a GitOps approach. To reproduce a pipeline run, all configurations and experiment parameters are stored along with the appropriate seeding criteria so the exact results can be regenerated (see the sketch below). 
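A minimal sketch of such a pipeline, assuming a public dataset and an illustrative configuration dictionary standing in for a version-controlled config file: the same code, data, and seed regenerate the same result.

```python
# A minimal reproducible-pipeline sketch: pre-processing and model in one
# version-controllable object, driven by a stored configuration and seed.
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

CONFIG = {"seed": 42, "test_size": 0.2, "n_estimators": 200}  # kept under version control

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=CONFIG["test_size"], random_state=CONFIG["seed"]
)

pipeline = Pipeline([
    ("scale", StandardScaler()),                     # pre-processing step
    ("model", RandomForestClassifier(
        n_estimators=CONFIG["n_estimators"], random_state=CONFIG["seed"])),
])
pipeline.fit(X_train, y_train)

# Re-running with the same data, config, and seed reproduces this score exactly.
print("held-out accuracy:", pipeline.score(X_test, y_test))
```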

Continuous Model Monitoring & Reporting 

ML models can behave differently in production environments because of performance degradation or changes in the distribution of the features. MetroStar’s approach to model monitoring and reporting meets regulatory reporting requirements and provides the insights required to build trust in ML systems. We recommend the following types of monitoring activities:  

  • Model Stability – Are the model predictions and scores stable over time? We observe the actual predictions over time, ensure that their distribution stays within the bounds established during validation, and flag any issues in the model deployment process. 

  • Variable Level Monitoring – Are the variables and features used in the model changing? We observe all features being input to the model to track feature quality over time and detect any changes in the upstream data sources or input variables that fall outside the expected ranges (a minimal drift-check sketch follows this list). 

  • Model Performance Evaluation & Testing – Is the model performance in production similar to our validation benchmark? We evaluate model drift from a predetermined, validated benchmark to flag any exogenous changes in the environment and trigger model refit activities. We also run challenger-champion models head-to-head in production to ensure we properly measure the counterfactual and create better training datasets.
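
As one example of variable-level monitoring, the sketch below computes a Population Stability Index (PSI) for a single feature against its validation baseline; the data is simulated and the 0.2 alert threshold is a common rule of thumb, not a MetroStar standard.

```python
# A minimal variable-level monitoring sketch: Population Stability Index (PSI)
# comparing a feature's production distribution to its validation baseline.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI of the live sample `actual` relative to the `expected` baseline."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # interior cut points
    e_pct = np.bincount(np.digitize(expected, cuts), minlength=bins) / len(expected)
    a_pct = np.bincount(np.digitize(actual, cuts), minlength=bins) / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)   # feature values seen during validation
live = rng.normal(0.3, 1.0, 5_000)       # shifted values observed in production
score = psi(baseline, live)
print(f"PSI = {score:.3f}", "-> investigate drift" if score > 0.2 else "-> stable")
```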