Wednesday, July 21, 2021

g-f(2)386 The Big Picture of Business Artificial Intelligence (7/21/2021), STAT, Explaining medical AI is easier said than done.




ULTRA-condensed knowledge


Alert, Key problems with so-called explainable artificial intelligence (XAI)
  • As we argue in Science magazine, together with our colleagues I. Glenn Cohen and Theodoros Evgeniou, this approach may not help and, in some instances, can hurt.
  • Requiring explainability for health care artificial intelligence and machine learning may also limit innovation: restricting developers to algorithms that can be explained sufficiently well can undermine accuracy.
  Lesson learned, Explainable artificial intelligence (XAI)
  • The growing use of artificial intelligence in medicine is paralleled by growing concern among many policymakers, patients, and physicians about the use of black-box algorithms.
  • In a nutshell, the concern is this: we don’t know what these algorithms are doing or how they are doing it, and since we aren’t in a position to understand them, they can’t be trusted and shouldn’t be relied upon.
  • A new field of research, dubbed explainable artificial intelligence (XAI), aims to address these concerns.
  Opportunity, A potential solution
  • Instead of focusing on explainability, the FDA and other regulators should closely scrutinize the aspects of AI/ML that affect patients, such as safety and effectiveness, and consider subjecting more health-related products based on artificial intelligence and machine learning to clinical trials.
  • Human factors play an important role in the safe use of technology, and regulators, as well as product developers and researchers, need to consider them carefully when designing AI/ML systems that can be trusted.

Genioux knowledge fact condensed as an image


Category 2: The Big Picture of the Digital Age

[genioux fact deduced or extracted from STAT]

This is a “genioux fact fast solution.”

Tag: Opportunities, those traveling at high speed on GKPath

Type of essential knowledge of this “genioux fact”: Essential Analyzed Knowledge (EAK).

Type of validity of the “genioux fact”:

• Inherited from sources + Supported by the knowledge of one or more experts.


Authors of the genioux fact

Fernando Machuca


References

Boris Babic and Sara Gerke, “Explaining medical AI is easier said than done,” STAT, July 21, 2021.


ABOUT THE AUTHORS

Boris Babic is an assistant professor of philosophy and of statistics at the University of Toronto. Sara Gerke is an assistant professor of law at Penn State Dickinson Law. This essay was adapted from a longer article in Science magazine by Boris Babic, Sara Gerke, Theodoros Evgeniou, and I. Glenn Cohen.

STAT

Stat (stylized STAT, sometimes also called Stat News) is an American health-oriented news website launched on November 4, 2015, by John W. Henry, the owner of The Boston Globe. It is produced by Boston Globe Media and is headquartered in the Globe's own building in Boston. Its executive editor is Rick Berke, who formerly worked at both The New York Times and Politico. According to Kelsey Sutton of Politico, the website is Henry's "biggest and most ambitious standalone site yet". The site's name comes from the term "stat", short for statim, or "immediately"—a term that has long been used in medical contexts.

As of February 2016, it had 45 staff members.

