Interpretable Machine Learning by Composing Queries

Case ID:
C17551
Disclosure Date:
9/1/2022

Unmet Need

Artificial intelligence (AI), with its ability to carry out many complex tasks quickly and computationally, has the potential to revolutionize many fields. However, its real-world use is currently limited by a lack of interpretability: it is difficult to understand how AI models generate their outputs. This is especially important in risk-averse fields such as medicine and autonomous driving, where an error can have substantial negative consequences. Moreover, the inability to understand a model makes errors difficult to identify and correct, and current models tend to achieve high performance but low interpretability, or vice versa. A model that performs well on both dimensions is therefore needed; such a solution would facilitate the application of AI to many fields, especially risk-averse ones.

Technology Overview

Johns Hopkins researchers have developed a machine learning (ML) algorithm, applicable to a diverse range of tasks, that solves problems much like a game of "20 questions": a series of queries is asked about an input, and the answers are composed to generate a prediction. The set of queries depends on the task and is user-defined to be meaningful for the task at hand. For example, in an image classification problem, the queries could be portions of the image or aspects of an object, such as its color, while in a medical diagnosis task the queries might be the results of various medical tests or the presence or absence of symptoms. In experiments, this model outperformed other interpretable models on every task tested, with accuracy nearly comparable to that of state-of-the-art non-interpretable models. Thus, the model demonstrates both the performance and the transparency needed for real-world use.
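The "20 questions" idea above can be sketched with a toy greedy procedure: at each step, ask the user-defined query whose answer is expected to reduce uncertainty about the label the most, given the answers already observed, then predict from whatever remains consistent. This is a minimal illustration under assumed toy data and queries, not the authors' published algorithm (which learns the query strategy rather than enumerating examples).

```python
# Toy sketch of sequential query selection by information gain.
# All data, query names, and labels here are illustrative assumptions.
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy (in bits) of a multiset of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_pursuit(x, dataset, queries, max_queries=5):
    """Greedily query `x` and return (history, predicted_label).

    dataset -- list of (example, label) pairs
    queries -- dict mapping a human-readable name to a boolean
               function of an example (the interpretable queries)
    """
    candidates = list(dataset)   # training examples consistent with answers so far
    remaining = dict(queries)
    history = []
    for _ in range(max_queries):
        base = entropy([y for _, y in candidates])
        best, best_gain = None, 0.0
        for name, q in remaining.items():
            # Expected entropy of the label after observing this query's answer.
            split = {True: [], False: []}
            for ex, y in candidates:
                split[q(ex)].append(y)
            cond = sum(len(s) / len(candidates) * entropy(s)
                       for s in split.values() if s)
            if base - cond > best_gain:
                best, best_gain = name, base - cond
        if best is None:         # no query reduces uncertainty further
            break
        answer = remaining[best](x)
        candidates = [(ex, y) for ex, y in candidates
                      if remaining[best](ex) == answer]
        history.append((best, answer))
        del remaining[best]
    prediction = Counter(y for _, y in candidates).most_common(1)[0][0]
    return history, prediction

# Illustrative task: classify an animal from symptom-like attribute queries.
data = [
    ({"fur": True,  "fly": False, "swims": False}, "mammal"),
    ({"fur": False, "fly": True,  "swims": False}, "bird"),
    ({"fur": False, "fly": False, "swims": True},  "fish"),
]
queries = {
    "has fur?": lambda e: e["fur"],
    "can fly?": lambda e: e["fly"],
    "swims?":   lambda e: e["swims"],
}
x = {"fur": False, "fly": True, "swims": False}
history, prediction = information_pursuit(x, data, queries)
print(history, prediction)  # the asked queries and answers explain the prediction
```

The returned `history` is the explanation: each prediction comes with the exact chain of interpretable questions and answers that produced it, which is the transparency property the overview describes.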

Stage of Development

The core concepts of the approach have been developed along with an algorithm for efficient computation. The inventors are currently extending the approach to other application domains.

Publications:

Chattopadhyay, A., Slocum, S., Haeffele, B. D., Vidal, R., & Geman, D. (2022). Interpretable by design: Learning predictors by composing interpretable queries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6), 7430-7443.


Chattopadhyay, A., Chan, K. H. R., Haeffele, B. D., Geman, D., & Vidal, R. (2023). Variational information pursuit for interpretable predictions. International Conference on Learning Representations (ICLR).

For Information, Contact:
Heather Curran
hpretty2@jhu.edu
410-614-0300