Explainable and Privacy-Preserving AI

Machine learning and artificial intelligence are becoming an important part of our lives.

They power virtual assistants that help us find relevant information, and they recommend books, films and other products based on our previous purchases and interests.

They are also being increasingly used in two important fields of our lives – health and finance. Artificial intelligence (AI) software is helping medical professionals diagnose diseases, and the next time you visit the bank, the decision on your housing loan may have been made or influenced by an AI program.

Artificial intelligence is thus not only increasingly influencing the decisions we are making, e.g. which Netflix movie to watch, but also decisions that others make about us, e.g. whether our bank loan will be approved.

Until the rise of algorithmic decision making, we took it for granted that we could obtain an understandable explanation of decisions affecting us.

With most AI algorithms behaving as black boxes, however, the interpretability of decisions is no longer a given. The ability to explain decisions is nevertheless an essential element of trust, and in the context of machine learning this is increasingly being recognized by regulators.

The “right to explanation” has become one of the key elements of the GDPR. Similar requirements are also included in other acts, including the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA).

Regulators are taking a closer look at machine learning applications not only with respect to the interpretability of their decisions but also regarding the data privacy concerns arising from their use. To gain the trust of the wider public, AI models will increasingly have to be both explainable and, where possible, privacy-preserving.

Privacy-Preserving Machine Learning

Machine learning (ML) affects data privacy in two ways. First, ML models may be trained on sensitive personal data, and since model accuracy generally rises with the amount of training data, there is a strong incentive to collect as much of it as possible. Second, ML models may affect privacy when they take part in making decisions about people.

Data privacy violations when using machine learning models may occur in a variety of ways:

  • personal data from training datasets can be present in model weights
  • personal data can be extracted from model outputs by repeated usage of the model
  • data of certain groups of persons, e.g. outliers, may be at higher risk of identification
  • with repeated usage of an otherwise black-box model, one can reconstruct a person’s image from their name alone (model inversion), or detect a person’s presence in a sensitive training dataset (membership inference)
  • ML outputs can be combined with side information (e.g. from public records) to reconstruct personal information

In recent years, many methods have been developed to address the issue of data privacy in machine learning models. One of the most prominent ones is differential privacy, championed by companies such as Apple and Google.

Differential Privacy

The main idea behind differential privacy is obfuscation: the introduction of random noise into the training data. If the training set consists of face images, differential privacy would, for example, add random pixel noise to make it harder to recognize individual faces.

The parameter that controls how much random noise is applied during the process is called epsilon, and it involves a natural trade-off: lowering epsilon results in stricter privacy but leads to lower accuracy of the ML model trained on the obfuscated data.
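
To make this trade-off concrete, here is a minimal sketch of the Laplace mechanism, a standard way of adding epsilon-calibrated noise; the function name and the numbers are illustrative assumptions, not a production implementation:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon):
    """Return a noisy, differentially private version of a single value."""
    scale = sensitivity / epsilon          # lower epsilon -> larger noise scale
    return value + np.random.laplace(loc=0.0, scale=scale)

# The same quantity released under different privacy levels (illustrative values)
true_answer = 42.0
for eps in (0.1, 1.0, 10.0):
    print(eps, laplace_mechanism(true_answer, sensitivity=1.0, epsilon=eps))
```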

The differential privacy approach has an important limitation. Although we apply random noise to the training data, it may still be possible to reconstruct the original data by repeatedly querying the model and averaging out the noise in the results.

One can say that “privacy loss” increases with each new query of the machine learning model and the “privacy guarantee” of our ML model decreases by a similar amount.

Owners of machine learning models can protect their models by enforcing a maximum privacy loss that will be tolerated, giving each user of the ML model a privacy budget that they can expend by querying the model.

This privacy budget, proportional to epsilon, is set by the data and model owners at a sufficiently low level that it does not allow the data to be reverse engineered through repeated querying of the model.
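
A hypothetical sketch of such budget accounting might look like the following; the class and the numbers are illustrative and not taken from any specific system:

```python
class PrivacyBudget:
    """Track the cumulative privacy loss a single user is allowed to incur."""

    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def charge(self, epsilon):
        """Return True if the query may proceed, False once the budget is spent."""
        if epsilon > self.remaining:
            return False
        self.remaining -= epsilon
        return True

budget = PrivacyBudget(total_epsilon=1.0)
for query_id in range(15):
    if not budget.charge(0.1):
        print(f"Query {query_id} rejected: privacy budget exhausted")
        break
```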

Figure 1: Privacy-preserving AI methods and examples of privacy infractions

Homomorphic Encryption and Federated Learning

Homomorphic encryption allows us to perform calculations on encrypted data. An important breakthrough in this field was achieved in 2009 by Craig Gentry who introduced the first fully homomorphic encryption system.

The initial problem with homomorphic encryption systems was that they were very slow: it took about 100 trillion times longer to perform computations on encrypted data than on unencrypted data. Recent advances by Microsoft and IBM have considerably increased the speed of such computations.
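
As a small illustration of the underlying idea, the sketch below uses the python-paillier (phe) package, which implements an additively homomorphic scheme rather than a fully homomorphic one; the package and the salary figures are assumptions made for this example:

```python
from phe import paillier

# Only the key holder can decrypt; everyone else sees only ciphertexts.
public_key, private_key = paillier.generate_paillier_keypair()

salary_a = public_key.encrypt(52_000)   # encrypted by client A
salary_b = public_key.encrypt(61_000)   # encrypted by client B

# An untrusted server can add the encrypted values without decrypting them.
encrypted_total = salary_a + salary_b
print(private_key.decrypt(encrypted_total))   # 113000
```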

Another methodology which limits full disclosure of data is Federated Learning. In this approach, a machine learning model is trained jointly across decentralized devices, each of which holds only local data samples that are never shared with other devices.

Google first applied federated learning on mobile devices. Another field where federated learning is highly interesting is healthcare.

Medical diagnosis demands extremely high accuracy from machine learning models, and thus large amounts of training data. This data, however, is often siloed across many individual healthcare institutions, which are either reluctant or not allowed to share it.

With federated learning, institutions can iteratively train a shared model on each other's data without having to centralize the training data. It is another example of privacy-preserving training and use of machine learning models.
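
A minimal sketch of this idea, federated averaging over synthetic data with NumPy (the model, data and all numbers below are illustrative assumptions), could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three institutions, each holding data that never leaves its premises
local_datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    local_datasets.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few gradient-descent steps on purely local data (linear regression)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):                       # communication rounds
    local_weights = [local_update(w_global, X, y) for X, y in local_datasets]
    w_global = np.mean(local_weights, axis=0)   # only model weights are shared

print("Federated estimate of the weights:", w_global)
```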

In general, federated learning reduces risks related to data privacy, data security and data access rights.

We will now turn again to the other key aspect that artificial intelligence should strive for in order to gain wider trust of society – the ability to explain AI decisions in understandable terms.

Explainable Artificial Intelligence

In their paper Towards A Rigorous Science of Interpretable Machine Learning, Doshi-Velez and Kim built on the Merriam-Webster definition of the verb “interpret” to define interpretable or explainable artificial intelligence (XAI) as:

“the ability to explain or to present in understandable terms to a human.”

As XAI concerns itself with explanation to humans, the fields involved in XAI design should include not just machine learning but philosophy, psychology and cognitive sciences as well.

As noted by Miller et al., we otherwise run the risk of AI researchers building XAI applications for themselves rather than for the end users, repeating a “mistake” of the early 2000s, when traditional software was often designed by programmers rather than UI designers, leading to poorly designed user interfaces.

Insights on what constitutes a “good explanation” as understood by humans can be drawn from extensive research work done in the fields of psychology and cognitive sciences. T. Miller lists several properties of explanations which should be prominent in XAI applications.

An important one is the contrastive nature of explanations – when seeking an explanation, we do not merely ask why some event P happened, but also want to know why event P happened instead of some other event Q. (For event Q, we can often use averages as baseline.)

This contrastive nature of explanations is an inherent part of the SHAP method, as we will discuss below.

Interpretable Models

Not all machine learning algorithms are black boxes, impenetrable to attempts at explanation. Several of them, such as linear regression, logistic regression and decision trees, are more easily interpreted than, say, deep neural networks.

In linear regression models, it is easy to separate the effects of linear features. The importance of a feature can be estimated by how much the model output changes if we change the feature value by a unit (in the case of numerical features) or when we switch discrete values of features (in the case of categorical features).

In the case of linear regression, the absolute value of the t-statistic is commonly used to measure feature importance. The higher the estimated coefficient and the lower its variance, the more important the corresponding feature.
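
A short sketch with statsmodels (an assumed tool choice; the data here is synthetic) shows how these t-statistics can be read off a fitted model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)   # third feature is irrelevant

model = sm.OLS(y, sm.add_constant(X)).fit()
importance = np.abs(model.tvalues[1:])        # drop the intercept's t-statistic
print(dict(zip(["x1", "x2", "x3"], importance)))
```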

Another type of ML model that lends itself to easier interpretation is the decision tree. In this case, one can interpret a prediction by following the decisions through the tree and noting the contributions at each node.
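
For instance, with scikit-learn one can print the learned decision rules and trace any prediction from the root to a leaf; this brief sketch uses the built-in iris dataset as an illustrative choice:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Human-readable rules of the fitted tree
print(export_text(tree, feature_names=list(iris.feature_names)))
```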

Simple, more interpretable models are however often less accurate. There is generally a trade-off between interpretability and accuracy of machine learning models, as shown schematically in Figure 2.

Figure 2: Accuracy-interpretability trade-off of machine learning models

In many applications, our primary goal is high accuracy and we therefore do not want to limit ourselves to specific, often simpler models just because they are highly interpretable.

We also often want to test several different machine learning models and compare their performance.

The solution is to use methods that provide interpretability of machine learning models without depending on the specific implementation of individual ML models.

Model Agnostic Methods

These model-agnostic interpretability methods are highly flexible, and they allow machine learning practitioners to interpret and compare various ML models in terms of explanations they provide.

Model agnostic methods include the Partial Dependence Plot (PDP), Permutation Feature Importance, the Individual Conditional Expectation (ICE) Plot, the Accumulated Local Effects (ALE) Plot, LIME (Local Interpretable Model-agnostic Explanations) and the use of Shapley values (SHAP, SHapley Additive exPlanations).

Permutation Feature Importance

Permutation feature importance was introduced by the co-creator of random forests, L. Breiman. The main idea is to estimate the importance of a feature by calculating the change in the model’s error after randomly permuting the feature values, effectively converting the feature to “noise”. If the random permutation leads to an increase in error, the feature is important for the model’s predictions.
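
A compact sketch using scikit-learn's model-agnostic implementation (the dataset and model are chosen here only for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the validation set and record the drop in score
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
print(result.importances_mean)
```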

One disadvantage of the permutation feature importance approach is that it can give misleading results when features are highly correlated.

Shapley Values and the SHAP Method

Shapley values were introduced by Shapley in 1953 in the context of coalitional game theory. It is a method of assigning “payouts” to “players” based on how much each contributes to the total payout of the “game.” Players cooperate in coalitions and receive a certain profit for this cooperation.

In the context of machine learning models, the feature values are the “players,” cooperating in the prediction (the “game”), with the “payout” equal to the prediction for a given data instance minus the average prediction over all data instances (the baseline score).

We obtain the Shapley value of a single feature by averaging over all possible coalitions of features. For example, if we have two features in our model, there are four possible coalitions: both features together, each feature on its own, and no features at all.

The Shapley value of a feature value for a given instance is thus the contribution of that feature value to the difference between the prediction and the mean (baseline) prediction of the model.
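
As a toy worked example of the two-feature case above (all feature names and numbers are hypothetical), the Shapley value of each feature is its marginal contribution averaged over the orderings in which it can join the coalition:

```python
# Expected prediction for each coalition of "known" features (hypothetical values)
v = {
    frozenset(): 50.0,                        # baseline: average prediction
    frozenset({"age"}): 60.0,
    frozenset({"income"}): 55.0,
    frozenset({"age", "income"}): 70.0,
}

# With two features there are two orderings, so each marginal contribution has weight 1/2
phi_age = 0.5 * (v[frozenset({"age"})] - v[frozenset()]) \
        + 0.5 * (v[frozenset({"age", "income"})] - v[frozenset({"income"})])
phi_income = 0.5 * (v[frozenset({"income"})] - v[frozenset()]) \
           + 0.5 * (v[frozenset({"age", "income"})] - v[frozenset({"age"})])

print(phi_age, phi_income)        # 12.5 and 7.5
print(phi_age + phi_income)       # 20.0 = prediction (70) minus baseline (50)
```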

An excellent library implementing the Shapley values approach to XAI is SHAP (SHapley Additive exPlanations) by Lundberg and Lee.
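
A short usage sketch, assuming the shap package is installed (the model and dataset are illustrative choices):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one Shapley value per feature per instance
shap.summary_plot(shap_values, X)        # global overview of feature contributions
```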

SHAP has two important properties: local accuracy and consistency. SHAP is also contrastive, another property that is important for XAI methods, as we noted above. We can thus compare and explain the predictions for individual instances against the predictions for other instances or groups.

One disadvantage of the Shapley values approach is that its computation may take a long time, as the number of possible coalitions grows rapidly with the number of features. Care should also be taken when interpreting SHAP values for correlated features.

LIME (Local Interpretable Model-agnostic Explanations)

In the LIME approach, we learn about the importance of features for an individual prediction by slightly perturbing the input and observing the change in the model's output.

The explanation is thus obtained by approximating our model with an interpretable (e.g. linear) surrogate model that is learned locally around the prediction.
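
A minimal sketch with the lime package (the dataset and model are illustrative choices, not prescribed by the method):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())     # top local feature contributions for this instance
```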

LIME can be considerably faster than the SHAP method, but it also has a disadvantage. Alvarez-Melis et al. investigated the robustness of different XAI methods, defined as the stability of a prediction's explanation with respect to variations in the input. Their results show that explanations can vary considerably with the LIME approach, while the SHAP approach delivered much more stable results.

Explainable Artificial Intelligence Software

Explainable machine learning (XAI) has been implemented in many open source and commercial software packages.

Well-known packages for XAI practitioners include:

  • SHAP
  • LIME
  • ELI5
  • Skater
  • PDPBox
  • InterpretML
