As healthcare continues to move toward AI, explainability is critical

There is huge interest in AI in healthcare, and many hospitals and health systems have already adopted the technology – mostly on the administrative side – with great success.

But progress with AI in healthcare – especially on the clinical side – cannot happen without addressing the growing challenges of model transparency and explainability.

In an industry where decisions can mean life or death, understanding and trusting AI decisions is not just a technical necessity; it is an ethical obligation.

Neeraj Mainkar is Vice President of Software Engineering and Advanced Technology at Proprio, which develops immersive tools for surgeons. He has considerable expertise in the application of algorithms in healthcare. Healthcare IT News spoke with him to discuss explainability and the need for patient safety and trust, error detection, regulatory compliance and ethical standards for AI.

Q. What does explainability mean in the field of artificial intelligence?

A. Explainability refers to the ability to understand and clearly articulate how an AI model arrives at a particular decision. For simpler AI models, such as decision trees, this is relatively straightforward, as the decision paths can be directly observed and interpreted.
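
For a small tree, that observability can be shown directly in code. Here is a minimal sketch using scikit-learn, with a public diagnostic dataset standing in for real clinical data (the model and data are illustrative, not from any production system):

    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Public diagnostic dataset, standing in for real clinical data
    data = load_breast_cancer()
    X, y = data.data, data.target

    # Keep the tree shallow so every decision path stays readable
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X, y)

    # Print the model's full decision logic as nested if/else rules
    print(export_text(clf, feature_names=list(data.feature_names)))

The printed rules read like a checklist that can be audited line by line, which is exactly the property deeper networks lose.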

However, when we move into the realm of deep learning models that consist of many layers and intricate neural networks, understanding the decision-making process becomes significantly more difficult.

Deep learning models work with a large number of parameters and complex architectures, making direct observation of their decision paths nearly impossible. Reverse engineering these models or investigating specific problems in the code is extremely difficult.

When a prediction does not meet expectations, determining the exact reason for this discrepancy is difficult due to the complexity of the model. The lack of transparency means that even the creators of these models cannot fully explain their behavior or outputs.

The opacity of complex AI systems presents significant challenges, especially in areas such as healthcare, where understanding why a decision was made is crucial. As AI continues to integrate into our lives, the demand for explainable AI will grow. Explainable AI aims to make AI models more interpretable and transparent, ensuring that their decision-making processes are understandable and reliable.
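
In practice, making a complex model more interpretable usually means applying post-hoc, model-agnostic techniques rather than reading its internals. As one hedged illustration (not a method attributed to Proprio), scikit-learn's permutation importance estimates which inputs a black-box model actually relies on by shuffling each feature and measuring the resulting drop in accuracy:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble standing in for a deep network
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Shuffle each feature in turn; a large accuracy drop marks a
    # feature the model genuinely depends on
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")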

Q. What are the technical and ethical implications of AI explainability?

A. Both technical and ethical implications must be considered in the pursuit of explainability. On the technical side, simplifying models to improve explainability can reduce performance, but explainability also helps AI engineers debug and improve algorithms by giving them a clear view of where a model's outputs come from.

Ethically, explainability helps identify biases in AI models and promotes fairness in treatment by eliminating discrimination against smaller, underrepresented groups. Explainable AI also ensures end users understand how decisions are made while protecting sensitive information in compliance with HIPAA.

Q. Discuss error detection as it relates to explainability.

A. Explainability is an important component of effective error detection and correction in AI systems. The ability to understand and interpret how an AI model reaches its decisions or outputs is required to detect and correct errors effectively.

By following the decision paths, we can identify where the model may have gone wrong, allowing us to understand the “why” behind the incorrect prediction. This understanding is critical to making the necessary adjustments to improve the model.
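
To make that concrete, here is a small sketch (again using a shallow scikit-learn tree, purely for illustration) that traces the exact sequence of threshold tests a misclassified sample passed through:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    # Pick a sample the model gets wrong
    wrong = np.where(clf.predict(X) != y)[0]
    sample = wrong[0]

    # decision_path returns the exact nodes this sample visited,
    # so each threshold test along the way can be inspected
    node_indicator = clf.decision_path(X[sample:sample + 1])
    for node_id in node_indicator.indices:
        feat = clf.tree_.feature[node_id]
        if feat >= 0:  # internal node; leaves have feature == -2
            print(f"node {node_id}: feature {feat} = {X[sample, feat]:.2f} "
                  f"vs threshold {clf.tree_.threshold[node_id]:.2f}")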

Continuous improvement of AI models depends largely on understanding their failures. In healthcare, where patient safety is paramount, fast and accurate model debugging and improvement is critical.

Q. Please discuss regulatory compliance as it relates to explainability.

A. Healthcare is a highly regulated industry with strict standards and guidelines that AI systems must meet to ensure safety, effectiveness and ethical use. Explainability is important for compliance because it supports a number of key requirements, including:

  • Transparency. Explainability ensures that every decision made by AI can be observed and understood. Such transparency is needed to maintain trust and ensure AI systems operate within ethical and legal boundaries.
  • Validation. Explainable AI makes it easier to demonstrate that models have been thoroughly tested and validated to perform reliably under different scenarios.
  • Bias mitigation. Explainability allows biased decision-making patterns to be identified and mitigated, ensuring that models do not unfairly disadvantage any particular group (see the sketch after this list).

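One elementary form of that bias check, sketched below with purely illustrative data (the group labels and predictions are hypothetical), is to compare a model's accuracy across demographic groups and flag large gaps:

    import numpy as np

    # Hypothetical predictions, ground truth and demographic group
    # label for each patient (illustrative values only)
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
    group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

    # Per-group accuracy: a large gap between groups flags a model
    # that may unfairly disadvantage one of them
    for g in np.unique(group):
        mask = group == g
        acc = (y_pred[mask] == y_true[mask]).mean()
        print(f"group {g}: accuracy {acc:.2f} over {mask.sum()} samples")
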
As AI continues to develop, an emphasis on explainability will continue to be an important aspect of regulatory frameworks, ensuring the responsible and effective use of these advanced technologies in healthcare.

Q. And where do ethical standards come in with respect to explainability?

A. Ethical standards play a crucial role in the development and deployment of responsible AI systems, especially in sensitive and high-stakes areas such as healthcare. Explainability is intrinsically linked to these ethical standards, ensuring that AI systems operate in a transparent, fair and accountable manner, following the core ethical principles of healthcare.

Responsible AI means operating within ethical boundaries. The push for improved explainability in AI will increase trust and credibility, ensuring that AI decisions are transparent, justified, and ultimately beneficial to patient care. Ethical standards guide the responsible disclosure of information, protection of user privacy, compliance with regulatory requirements such as HIPAA, and public trust in AI systems.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a publication of HIMSS Media.
