Transcript of the video.
>> I think one of the things we've all gotten a little obsessed about at the moment around AI is what's technically possible. And I think we need to be paying much more attention to what's culturally appropriate, socially acceptable, and works inside our laws and governments.
>> So, you're focused on "will robots share our values," not "will robots take our jobs." Will they? Will they share our values?
>> Oh, another really good question, right? So, listen, we know that artificial intelligence is going to go to scale. We know it's going to end up in lots of different places. The question becomes, how do we ensure that that's something that we're comfortable with, something that we feel good about, something that reflects the things we care about? And that means asking questions beyond just what can we do technically: what are the values we want these objects to enshrine, who gets to decide what those values are, and how do we regulate them?
>> Are these questions being asked as often as they should be?
>> Well, I'm an anthropologist so, you know, I think the answer is no. We should ask them all the time. At least the good news is I think they're starting to resurface. So, the more you hear talk about AI and ethics, AI and public policy, AI and governance, those are at least the beginnings of conversations about what's the world we want to build and how we're going to live in it.
>> So, let's take the pro side. Let's say these questions are asked as often as they should be. What is the potential of AI to affect our lives in positive ways?
>> So, I think you have to think through where the places are that AI can be most useful. And frankly for me, again, as a social scientist, the question I always want to ask is not can we do it technically, but should we do it socially. So, are there places where AI makes better sense, not because it's about efficiency, but because it has a way of making decisions that's a little less messy than humans making them? By the same token, depending on who programs it and what data they use, sometimes these technologies have the potential to reproduce and enshrine really longstanding inequities and bias. And that seems like not a good trend at all.
>> Right. So, what are the gravest dangers? What are the gravest dangers if these questions do not get asked?
>> I think the gravest dangers are that we take the world that we live in now and we make it the world in perpetuity moving forward. So, all the things about the current world that don't feel right are what the data reflects, right? It's a world where women aren't paid as much as men, where certain kinds of populations are subject to more violence, where we know that certain decisions get made in manners that are profoundly unfair. If you take all the data about the way the world has been, and that's what you build the machinery on top of, then we get this world as our total future. And I don't know about you, but I'd like something slightly different.
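The mechanism described above, where historical data becomes future policy, can be made concrete with a minimal sketch. Everything below is hypothetical: the records, the group labels, and the function names are invented for illustration, and the "model" is just a frequency table, not any real hiring system.

```python
# A minimal sketch (hypothetical data) of how a model trained on
# historical decisions reproduces the bias baked into those decisions.

from collections import defaultdict

# Hypothetical historical hiring records: (years_experience, group, hired).
# The past decisions were biased: equally experienced "B" candidates
# were hired less often than "A" candidates.
history = [
    (5, "A", True), (5, "A", True), (5, "A", True), (5, "A", False),
    (5, "B", True), (5, "B", False), (5, "B", False), (5, "B", False),
]

# "Training": count the hire rate the past data assigns to each group.
hires = defaultdict(int)
totals = defaultdict(int)
for years, group, hired in history:
    totals[group] += 1
    hires[group] += hired  # True counts as 1, False as 0

def predict_hire_probability(group: str) -> float:
    """Predict by imitating historical rates, which is, at its core,
    what most supervised learning on past outcomes does."""
    return hires[group] / totals[group]

# Identical candidates, different groups: the learned "model" simply
# carries the old disparity forward as tomorrow's policy.
print(predict_hire_probability("A"))  # 0.75
print(predict_hire_probability("B"))  # 0.25
```

Two candidates identical in every respect except group membership get different predicted outcomes, because the only thing this model knows is how such candidates were treated in the past.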
- Imagine if, starting tomorrow, U.S. corporate hiring and promotion programs began using American-made AI software written by a young, white male who built subtle stereotypes against minorities, women, and older workers into the algorithms. What types of hiring and promotion conditions would these different populations expect to face? Would the changes in the hiring and promotion practices be a planned change or an unplanned change?
- Continuing from the first question, would the hiring and promotion situation eventually become an internal force for organizational change? If so, how?
- If the bias in the AI hiring and promotion algorithm is discovered, analyzed, and set to be corrected, which type of employee would be likely to have a reactance (a negative reaction) about the subject? Of the eight reasons people resist change listed in the text, which would be the most likely reason these employees would have a negative reaction?