Friday, July 21, 2023

g-f(2)1222 The Lighthouse of AI: Navigating the Trade-off Between Accuracy and Explainability





ULTRA-condensed knowledge

Lighthouse of the Big Picture of the Digital Age

The “Positive Disruption: Transformation Revolution” has accelerated

The "Positive Disruption: AI Revolution" has accelerated


genioux Facts:

The article "Must AI Accuracy Come at the Cost of Comprehensibility?" is now available in an adapted version on INSEAD Knowledge. Previously published in the prestigious Harvard Business Review, it offers valuable insights into the trade-off between accuracy and explainability in AI models and is essential reading for anyone interested in the present and future of AI.

Recent research by François Candelon (Boston Consulting Group), Theodoros Evgeniou (INSEAD) and David Martens (University of Antwerp) has shown that simple, interpretable AI models can perform just as well as black box alternatives in most cases. Companies should therefore try white box models first, before turning to more complex solutions. To make informed, conscious choices, managers also need a sound understanding of the data, users, context and legal jurisdiction of their use case.

To supplement this lighthouse, I have asked my two brilliant AI assistants, Bard and Bing Chatbot, to complete specific tasks.

This lighthouse shares with you a selection of 10 key insights from the exceptional golden knowledge container "Must AI Accuracy Come at the Cost of Comprehensibility?", extracted using only human intelligence.
  1. Does the trade-off between accuracy and explainability really exist? To understand the dilemma, it is important to distinguish between so-called black box and white box AI models: White box models typically include a few simple rules, possibly in the form of a decision tree or a simple linear model with limited parameters. The small number of rules or parameters makes the processes behind these algorithms more easily understood by humans.
  2. On the other hand, black box models use hundreds or even thousands of decision trees (known as “random forests”), with potentially billions of parameters (as deep learning models do). But humans can only comprehend models with up to about seven rules or nodes, according to cognitive load theory, making it practically impossible for observers to explain the decisions made by black box systems.
  3. Contrary to the common belief that less explainable black box models tend to be more accurate, our study shows that there is often no trade-off between accuracy and explainability.
  4. In a study with Sofie Goethals from the University of Antwerp, we conducted a rigorous, large-scale analysis of how black box and white box models performed on nearly 100 representative datasets, known as benchmark classification datasets. For almost 70 percent of the datasets, across domains such as pricing, medical diagnosis, bankruptcy prediction and purchasing behaviour, we found that a more explainable white box model could be used without sacrificing accuracy. This is consistent with other emerging research exploring the potential of explainable AI models.
  5. While there are some cases in which black box models are ideal, our research suggests that companies should first consider simpler options. White box solutions can serve as benchmarks to assess whether black box ones in fact perform better. If the difference is insignificant, the white box option should be used (the first sketch after this list illustrates this benchmarking idea). However, certain conditions will either influence or limit the choice.
  6. One of the selection considerations is the nature and quality of the data. When data is noisy (containing erroneous or meaningless information), relatively simple white box methods tend to be effective. Analysts at Morgan Stanley found that simple trading rules worked well on highly noisy financial datasets. These rules could be as simple as "buy stock if company is undervalued, underperformed recently, and is not too large" (a toy rendering appears in the second sketch after this list).
  7. The type of data is another important consideration. Black box models may be superior in applications that involve multimedia data replete with images, audio and video, such as image-based air cargo security risk prediction. In other complex applications such as face detection for cameras, vision systems in autonomous vehicles, facial recognition, image-based medical diagnostics, illegal/toxic content detection and, most recently, generative AI tools like ChatGPT and DALL-E, a black box approach may sometimes be the only feasible option.
  8. Transparency is an important ingredient to build and maintain trust, especially when fairness in decision-making, or when some form of procedural justice is important. In certain jurisdictions where organisations are required by law to be able to explain the decisions made by their AI models, white box models are the only option.
  9. In organisations that are less digitally developed, employees tend to have less understanding, and correspondingly, less trust in AI. Therefore, it would be advisable to ease employees into using AI tools by starting with simpler and explainable white box models and progressing to more complex ones only when teams become accustomed to these tools.
  10. Even if an organisation chooses to implement an opaque AI model, it can mitigate the trust and safety risks that stem from the lack of explainability. One way is to develop an explainable white box proxy that explains, in approximate terms, how the black box model arrives at a decision (see the third sketch after this list).
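
To make the "white box first" advice of insights 4 and 5 concrete, here is a minimal Python sketch that benchmarks a shallow decision tree against a random forest on a public scikit-learn dataset. The dataset, depth limit and model settings are illustrative assumptions on my part, not the setup used in the INSEAD/BCG/Antwerp study.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative benchmark dataset (an assumption, not the study's data).
data = load_breast_cancer()
X, y = data.data, data.target

# White box: a shallow tree a human can read end to end.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0)
# Black box: hundreds of trees, far beyond the ~7 rules people can follow.
black_box = RandomForestClassifier(n_estimators=500, random_state=0)

white_acc = cross_val_score(white_box, X, y, cv=5).mean()
black_acc = cross_val_score(black_box, X, y, cv=5).mean()
print(f"white box accuracy: {white_acc:.3f}")
print(f"black box accuracy: {black_acc:.3f}")

# If the gap is insignificant, prefer the explainable model; its
# entire rule set can be printed and audited.
white_box.fit(X, y)
print(export_text(white_box, feature_names=list(data.feature_names)))
```

On many tabular benchmark datasets the accuracy gap between the two turns out to be small, which is the pattern the study reports for roughly 70 percent of cases.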
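Next, a toy rendering of the Morgan Stanley-style rule from insight 6 as a white box classifier: three human-readable conditions and nothing learned from data. The thresholds and field names here are invented for illustration only.

```python
def buy_signal(price_to_book: float, six_month_return: float,
               market_cap_bn: float) -> bool:
    """Buy if the stock looks undervalued, has underperformed recently,
    and the company is not too large. All thresholds are hypothetical."""
    undervalued = price_to_book < 1.0       # trading below book value
    underperformed = six_month_return < 0.0 # negative recent return
    not_too_large = market_cap_bn < 10.0    # market cap under $10bn
    return undervalued and underperformed and not_too_large

print(buy_signal(price_to_book=0.8, six_month_return=-0.05,
                 market_cap_bn=4.2))  # True
```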
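Finally, a minimal sketch of the "white box proxy" idea from insight 10, often called a global surrogate: fit a shallow tree to the predictions of an opaque model so that the tree's rules approximate how the black box decides. The choice of models and the reuse of the training data are simplifying assumptions made for brevity.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. Fit the opaque model on the real labels.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Fit a shallow tree on the black box's *predictions*, not the labels,
#    so the tree mimics the opaque model's decision behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Fidelity: how often the proxy agrees with the black box. High
#    fidelity means the printed rules are a fair approximation.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to the black box: {fidelity:.3f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

A surrogate explains the black box only approximately, so its fidelity score should always be reported alongside its rules.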



REFERENCES


François Candelon (Boston Consulting Group), Theodoros Evgeniou (INSEAD) and David Martens (University of Antwerp), "Must AI Accuracy Come at the Cost of Comprehensibility?", INSEAD Knowledge, July 21, 2023.



ABOUT THE AUTHORS

François Candelon


François Candelon is a Managing Director and Senior Partner at the Boston Consulting Group. He is also the Global Director of the BCG Henderson Institute.

Theos Evgeniou


Theos Evgeniou is a professor of Decision Sciences and Technology Management at INSEAD and director of the INSEAD Executive Education program "Transforming your Business with AI".

David Martens



David Martens is Professor of Data Science at the Department of Engineering Management, University of Antwerp, Belgium, where he teaches data science and data science ethics to postgraduate students. His research focuses mainly on explainable AI, and he has collaborated with large banks, insurance companies and telecom companies, as well as with various technology startups. He is the author of the book "Data Science Ethics" (Oxford University Press).



Extra-condensed knowledge







Bard's summary of the INSEAD Knowledge series "AI: Disruption and Adaptation"




  • Artificial intelligence (AI) is rapidly transforming the world, disrupting industries and creating new opportunities.
  • In the business consulting sector, AI is being used to automate tasks, improve decision-making, and provide insights that were previously unavailable.
  • In education, AI is being used to personalize learning, provide feedback, and assess student progress.
  • In the media, AI is being used to generate content, personalize recommendations, and detect fake news.
  • AI also raises a number of regulatory and ethical challenges, such as the need to ensure that AI systems are fair and unbiased, and that they do not violate people's privacy.

The "AI: Disruption and Adaptation" series from INSEAD Knowledge explores the impact of AI on these and other sectors, and examines the regulatory and ethical challenges posed by this powerful technology.

Here are some of the key insights from the series:

  • AI is having a disruptive impact on many industries, but it also creates new opportunities.
  • The success of AI depends on the quality of the data that is used to train the models.
  • AI systems need to be transparent so that people can understand how they work and make informed decisions about their use.
  • AI raises a number of ethical challenges, such as the need to ensure that AI systems are fair and unbiased, and that they do not violate people's privacy.

The "AI: Disruption and Adaptation" series is a valuable resource for anyone who wants to understand AI's impact on the world and the challenges this technology poses.




Condensed knowledge






Bing Chatbot's summary of the golden knowledge (GK) article "Must AI Accuracy Come at the Cost of Comprehensibility?"


The article "Must AI Accuracy Come at the Cost of Comprehensibility?" discusses the potential trade-off between the accuracy and comprehensibility of machine learning models, and how it relates to their safe and trusted adoption. The article argues that companies looking to integrate AI in their operations should think twice before turning their backs on simpler, more explainable AI algorithms in favor of complex ones.

The article explains that there is a real risk when organizations have little or no insight into how their AI tools are making decisions. Research has shown that a lack of explainability is one of executives' most common concerns related to AI. It also has a substantial impact on users' trust in and willingness to use AI products.

The article also discusses the difference between so-called black box and white box AI models. White box models typically include a few simple rules, possibly in the form of a decision tree or a simple linear model with limited parameters. The small number of rules or parameters makes the processes behind these algorithms more easily understood by humans. On the other hand, black box models use hundreds or even thousands of decision trees (known as "random forests"), with potentially billions of parameters (as deep learning models do). But humans can only comprehend models with up to about seven rules or nodes, according to cognitive load theory, making it practically impossible for observers to explain the decisions made by black box systems.

Contrary to the common belief that less explainable black box models tend to be more accurate, the study shows that there is often no trade-off between accuracy and explainability.



g-f(2)1222: The Juice of Golden Knowledge






Some relevant characteristics of this "genioux Fact"

  • BOMBSHELL KNOWLEDGE
  • Category 2: The Big Picture of the Digital Age
    • The Lighthouse of the Big Picture of the Digital Age
      • The "Positive Disruption: AI Revolution" has accelerated
    • The internal title
      • g-f(2)1222 The Lighthouse of Knowledge: Must AI Accuracy Come at the Cost of Comprehensibility?
  • [genioux fact deduced or extracted from geniouxfacts + Insead Knowledge + Bard + Bing Chatbot]
  • This is a “genioux fact fast solution.”
  • Tag "GKPath" highway
    • GKPath is the highway where there is no speed limit to grow.
    • GKPath is paved with blocks of GK (golden knowledge).
    • "genioux facts", the online program on "MASTERING THE BIG PICTURE OF THE DIGITAL AGE", builds The Golden Knowledge Path (GKPath), a digital freeway to accelerate everyone's success in the digital age.
  • Type of essential knowledge of this “genioux fact”: Essential Analyzed Knowledge (EAK).
  • Type of validity of the "genioux fact": Inherited from sources + supported by the knowledge of one or more experts.


References


"genioux facts": The online programme on "MASTERING THE BIG PICTURE OF THE DIGITAL AGE", g-f(2)1222, Fernando Machuca, July 21, 2023, Genioux.com Corporation.


ABOUT THE AUTHORS


Fernando Machuca holds a PhD with honors in computer science from France.

Fernando is the director of "genioux facts". He is the entrepreneur, researcher and professor who has a nondisruptive proposal in The Digital Age to improve the world and reduce poverty + ignorance + violence. A critical piece of the solution puzzle is "genioux facts". The Innovation Value of "genioux facts" is exceptional for individuals, companies and any kind of organization.
