Canadian Journal of Cardiology

Artificial Intelligence in Cardiovascular Imaging: “Unexplainable” Legal and Ethical Challenges?

Published: November 01, 2021. DOI: https://doi.org/10.1016/j.cjca.2021.10.009

      Abstract

      Nowhere is the influence of artificial intelligence (AI) likely to be more profoundly felt than in health care, from patient triage and diagnosis to surgery and follow-up. Over the medium term, these effects will be most acute in the cardiovascular imaging context, in which AI models are already performing at approximately human levels of accuracy and efficiency in certain applications. Yet the adoption of unexplainable AI systems for cardiovascular imaging still raises significant legal and ethical challenges. We focus in particular on the challenges posed by the unexplainable character of deep learning and other forms of sophisticated AI modelling used in cardiovascular imaging: we briefly outline the systems being developed in this space, describe how they work, and consider how they might generate outputs that are not reviewable by physicians or system programmers. We suggest that this unexplainability presents 2 specific ethico-legal concerns: (1) difficulty for health regulators; and (2) confusion about the assignment of liability for error or fault in the use of AI systems. We argue that addressing these concerns is critical to ensuring AI's successful implementation in cardiovascular imaging.

