Friday, December 3, 2021

g-f(2)703 THE BIG PICTURE OF THE DIGITAL AGE (12/3/2021), MIT SMR + BCG, ME, MYSELF AND AI, EPISODE 304, is an exceptional "Full Pack Golden Knowledge Container"




ULTRA-condensed knowledge


"g-f" fishing of golden knowledge (GK) of the fabulous treasure of the digital ageArtificial Intelligence, Full Pack Golden Knowledge Container (12/3/2021)  g-f(2)426 


Opportunity, MIT SMR + BCG

EXCEPTIONAL “Full Pack Golden Knowledge Container”, geniouxfacts  


      • Within the fabulous treasure of the digital age, containers of rare golden knowledge can be fished.
      • The “ME, MYSELF AND AI, EPISODE 304” podcast is an EXCEPTIONAL “Full Pack Golden Knowledge Container” because it discusses relevant and inspiring use cases of responsible Artificial Intelligence (AI) at Salesforce, a large enterprise technology company.
      • Paula Goldman has been a passionate advocate for the responsible use of technology for her entire career. Since joining Salesforce as its first chief ethical and humane use officer, she’s helped the company design and build technology solutions for its customers, with a focus on ethics, fairness, and responsible use.
      • In this episode of the Me, Myself, and AI podcast, Paula joins hosts Sam Ransbotham and Shervin Khodabandeh to discuss her specific role leading the ethical development of technology solutions, as well as the role technology companies play in society at large.


          Genioux knowledge fact condensed as an image






          ABOUT THE HOSTS


          Sam Ransbotham (@ransbotham) is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for MIT Sloan Management Review’s Artificial Intelligence and Business Strategy Big Ideas initiative. Shervin Khodabandeh is a senior partner and managing director at BCG and the coleader of BCG GAMMA (BCG’s AI practice) in North America. He can be contacted at shervin@bcg.com.

          Me, Myself, and AI is a collaborative podcast from MIT Sloan Management Review and Boston Consulting Group and is hosted by Sam Ransbotham and Shervin Khodabandeh. Our engineer is David Lishansky, and the coordinating producers are Allison Ryder and Sophie Rüdinger.



          Extra-condensed knowledge


          Lessons learned, MIT SMR + BCG


              • Paula Goldman:
                • Within that, my role, I’m chief ethical and humane use officer, which I know is a bit of a mouthful. It’s a first-of-its-kind position for Salesforce. I work with our technology teams and more broadly across the organization on two things. One is, as we’re building technology, thinking about the impact of that technology at scale out in the world, trying to avoid some unintended consequences, [and] trying to tweak things as we’re building them to make sure that they have maximum positive impact. Then, secondly, we work on policies that are really about the use of our technology and making sure that we are putting [up] sufficient guardrails to make sure that our technology is not abused as it is used out in the world.
                • Having done that for a long time, I think we started to see this shift in the role of technology in society. For a long time, I think the technology industry viewed itself as a bit of an underdog, a disrupter. Then, all of a sudden, you could sort of look and see the writing on the wall, and technology companies were not only the biggest companies in whatever financial index you want to name, but also, technology was so pervasive in every aspect of all of our lives, and even more so because of COVID. And I think we just saw the writing on the wall and saw that the sort of famous adage, “With great —” oh, I’m going to mess this up: “With great power comes great responsibility.”



              Condensed knowledge




              Lessons learned, MIT SMR + BCG


              • Paula Goldman: It’s time to think about guardrails, particularly for emerging technologies like AI, but across the board, how to think about “What do these technologies do at scale?” In any industry that goes through a period of maturation, that’s where I think tech is. That’s my motivation around it. As part of that role, I was leading a tech ethics practice. I was asked to be on Salesforce’s ethical use advisory board, and through that, they asked me to come lead this practice.
              • Shervin Khodabandeh: You’ve been in this field for some time as one of the pioneers in this area. How do you think the nature of the dialogue — guardrails and ethical use — has changed, let’s say, from 10 years ago versus now?
              • Paula Goldman: I would say, 10 years ago — and let’s start by giving credit where credit is due. Ten years ago, certainly there was a ton of leadership in academia thinking about these types of questions, and I think if you would go to a campus like MIT, you would find a lot of professors teaching classes on this and doing research on this. It has been a long-standing field: society and technology, science and technology — call it what you will — and many other disciplines. But I don’t think it was as widespread of a topic of public conversation. Today, you can hardly pick up a newspaper without seeing a headline about some sort of technology implication, whether it’s AI or a privacy story or a social media story or whatnot. Certainly, it was fairly rare 10 years ago to think about companies hiring people with titles like mine.
                • AI is really an automation of human intelligence, and it’s as good as the data that it gets fed; that data is the result of human decisions, which makes it imperfect. That’s really important for us to look out for. It’s very, very important for companies that are using AI to automate processes or, especially, make decisions that could impact human outcomes, whether that’s a loan, or access to a job, or whatnot. I’m sure by now you’ve heard many times about the research that was done — I think actually partly at MIT — about facial recognition by folks like Joy Buolamwini and Timnit Gebru, showing that facial recognition is more accurate on lighter-skinned people versus darker-skinned people, which can have catastrophic impacts if it’s in a criminal-justice setting. There’s a lot of stuff to look out for to make sure, in particular, that the questions of bias are appropriately safeguarded when developing this technology. [A minimal illustration of such a per-group check appears after this list.]
              • Shervin Khodabandeh: That is only going to get more complex as AI gets smarter, and there will be more data. Do you think that there is a possibility of AI itself driving or being a contributor to more ethical outcomes or to more equity in certain processes? I mean, there’s clearly a case for making sure AI doesn’t do something crazy. Then, is [it] also possible for AI to be used to make sure we humans don’t do something crazy?
              • Paula Goldman: Of course. I think that’s the flip side that maybe doesn’t get talked about as much. Humans making decisions about who gets a loan or who gets a job are also very subject to bias. So I think there is the potential, if done right, when AI is used in those circumstances, in combination with human judgment and appropriate guardrails, for the three of those things to actually open up more opportunities together.
                • I’m just giving you examples of use cases, but I think that’s probably across the board. Going back to the health care example, a doctor could be tired the day he’s looking at a scan for cancer. That’s why sometimes we get into these polarized discussions of AI versus humans. And it’s not an “either/or”; it’s an “and” — and it’s with a set of guardrails and responsibilities.
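
              The kind of bias audit Goldman alludes to can be made concrete. Below is a minimal Python sketch of a per-group accuracy check a team might run before deploying a classifier. All group labels, records, and numbers here are illustrative assumptions for the sketch, not figures from the research she cites.

```python
# Minimal sketch (hypothetical data): comparing a classifier's accuracy
# across demographic groups, in the spirit of the facial-recognition
# audits discussed above. The data below is invented for illustration.

from collections import defaultdict

# Each record: (group label, true label, model prediction)
predictions = [
    ("lighter-skinned", 1, 1),
    ("lighter-skinned", 0, 0),
    ("lighter-skinned", 1, 1),
    ("darker-skinned", 1, 0),   # a miss
    ("darker-skinned", 0, 0),
    ("darker-skinned", 1, 1),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, truth, pred in predictions:
    totals[group] += 1
    hits[group] += int(truth == pred)

accuracy = {g: hits[g] / totals[g] for g in totals}
for group, acc in sorted(accuracy.items()):
    print(f"{group}: accuracy = {acc:.2f}")

# A large gap between groups is the kind of disparity a guardrail
# (an audit run before deployment) is meant to catch.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"accuracy gap between groups: {gap:.2f}")
```

              In practice such a check would run over a real evaluation set, and the gap would be tracked as one of the release guardrails Goldman describes.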


              Some relevant characteristics of this "genioux fact"

              • Category 2: The Big Picture of the Digital Age
              • [genioux fact deduced or extracted from MIT SMR + BCG]
              • This is a “genioux fact fast solution.”
              • Tag: Opportunities, for those travelling at high speed on GKPath.
              • Type of essential knowledge of this “genioux fact”: Essential Analyzed Knowledge (EAK).
              • Type of validity of the "genioux fact":

                • Inherited from sources + Supported by the knowledge of one or more experts.


              References


              “genioux facts”: The online programme on MASTERING “THE BIG PICTURE OF THE DIGITAL AGE”, g-f(2)703, Fernando Machuca, December 3, 2021, blog.geniouxfacts.com, geniouxfacts.com, Genioux.com Corporation.


              ABOUT THE AUTHORS


              PhD awarded with honors in computer science in France

              Fernando is the director of "genioux facts". He is the entrepreneur, researcher, and professor who has a disruptive proposal in The Digital Age to improve the world and reduce poverty + ignorance + violence. A critical piece of the solution puzzle is "genioux facts". The Innovation Value of "genioux facts" is exceptional for individuals, companies, and any kind of organization.






              Featured "genioux fact"

              g-f(2)3219: The Power of Ten - Mastering the Digital Age Through Essential Golden Knowledge

                The g-f KBP Standard Chart: Executive Guide To Digital Age Mastery  By  Fernando Machuca   and  Claude Type of knowledge: Foundational Kno...

              Popular genioux facts, Last 30 days