
Meredith Broussard, "Artificial Unintelligence" (C-SPAN, March 3, 2019, 10:30am-11:00am EST)

10:30 am
... fought the entire civil war, stands up, points his finger, and says: fellow Democrats, if any member will order me to remove this dictator from this position of power upon the podium, I will do so by force forthwith. He says, the honorable gentleman from Texas is out of order. The next day Martin shows up, takes out a 16-inch-long bowie knife, sits in front of the Speaker of the House, and methodically sharpens it on his boot sole. I don't remember Pelosi doing this in 2011. I just don't. We've been here before. We'll get out of it again. Thank you. >> You can watch this and any of our programs in their entirety online; type the author's name in the search bar at the top of the page. >> Our first speaker is Meredith Broussard, a professor at NYU.
10:31 am
Last week, Professor Broussard's book, "Artificial Unintelligence," was released as a paperback edition. It is a guide to the inner workings and outer limits of technology, and to why we should never assume computers always get it right. We have copies you can get signed by her just outside. Her research focuses on artificial intelligence and investigative reporting, with a particular interest in using data analysis for social good. Her newest project explores how we will read today's news on tomorrow's computers. A former features editor at "The Philadelphia Inquirer," she has also worked as a software engineer at AT&T Bell Labs. Her features and essays have appeared in The Atlantic, Harper's, Slate, and other outlets. Please join me in giving a warm welcome to Professor Broussard. [applause] >> Thank you, Luther. Hi, everybody. I'm delighted to be here today. I'm going to talk a little bit
10:32 am
about artificial intelligence, what it is and isn't, and I'm going to talk about how this relates to antitrust, which is the topic of our gathering today. So I come to this discussion as a data journalist. I began my career as a computer scientist and eventually became a journalist. And what data journalism is, is the practice of finding stories in numbers and using numbers to tell stories. One of the things I do as a data journalist is build AI systems in order to commit acts of investigative reporting. And when I tell people this, they often look at me and say, well, what do you mean? And then I have to explain what AI is. The best way to explain AI, or one way to explain AI, is to explain what AI is not. So think about what you default to when you think about
10:33 am
AI. I would like to assure you that that is probably not what AI actually is. It's not the Terminator. It's not something out of Star Trek. It's also not this, right? There are no robots roaming around anywhere, not even at CES, the Las Vegas trade show. And it is also not this. So Hollywood colors a lot of what we think about AI, and it colors our defaults about AI. But what is AI really? AI is a branch of computer science. Now, inside artificial intelligence, the branch of computer science, there are lots of different subfields, the way that inside the field of mathematics there are lots of subfields, like algebra. So AI is a subfield of computer
10:34 am
science. Inside AI there are other fields, like machine learning or expert systems or natural language generation or natural language processing, subfields whose keywords you have probably heard recently, and those are legitimate kinds of AI. However, we have some linguistic confusion happening here, because when you say artificial intelligence, and you say machine learning, it sounds like there's a little brain inside the computer, right? One time I was demonstrating a program I wrote at a kind of science fair for grown-ups, and this undergraduate came over and said, you built an AI. I said yes. They said, is it a real AI? I said yes. Then he kind of starts looking under the computer, as if he was looking for a little
10:35 am
person, or a little something. And I realized that really smart people get confused about whether the machine is learning or whether something more sophisticated is going on. Artificial intelligence sounds like there's something more sophisticated going on. This is really common, but we need to push back against it in order to have clarity. So inside artificial intelligence, let's differentiate between the real and the imaginary. Imaginary AI is everything that falls under the realm of general artificial intelligence. So that is the killer robots, that is the singularity, that is the paperclip machine that decides to start making paperclips and makes so many that it crowds out all of the human beings. All of the wacky theoretical stuff that you've heard of, that is all general AI, and it is imaginary.
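The contrast with that imaginary general AI can be made concrete in code. What actually ships under the "AI" label is statistical calculation. Here is a minimal sketch, in plain Python with illustrative toy data (the variable names and numbers are mine, not the speaker's), of what a trained "model" amounts to: a least-squares line fit, where training is arithmetic over the data and prediction is a multiply and an add.

```python
# "Narrow AI" demystified: a one-feature linear model, trained and
# used with nothing but arithmetic. Data and names are illustrative.

def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var          # slope: how much y moves per unit of x
    b = mean_y - w * mean_x  # intercept
    return w, b

def predict(w, b, x):
    # "Inference" is one multiply and one add: literal computation.
    return w * x + b

# Toy data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]
w, b = fit_line(hours, scores)
print(round(predict(w, b, 6), 1))
```

Commercial systems fit millions of parameters instead of two, but the operations are the same kind: turning data into numbers and computing on them.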
10:36 am
Narrow AI is what's real. And narrow AI is just math. So another way of thinking about narrow AI is that we could call it computational statistics on steroids. So we've got what is real, and we have what is imaginary. I want to talk about another concept from the book besides the definition of artificial intelligence, and that concept is something I call technochauvinism. It's the idea that technical solutions are superior: the idea that technology itself is superior to other solutions. It's also the idea that technology is superior to people. Now, this idea of technochauvinism is, at its core, an idea that comes from a very small and homogeneous group of people. What people are really saying is that math is superior to people,
10:37 am
because computers are machines that compute. We kind of forget this sometimes because our phones are so much a part of our everyday lives, and what the computer does can seem magical if you don't understand it all the way down. But it's just mechanics. It's just math. Everything a computer does is ultimately turning real life into math, and it literally computes, okay? This comes from a tradition of mathematicians saying, hey, our fancy pursuits are superior to the ordinary human pursuits. Math has this cult of the lone genius inside it. We can push back against that. That is not true. Mathematicians are not better than other people. They are awesome. Everybody is awesome, but math
10:38 am
is not superior to people. It's not a competition. But let's find out who told us that math is superior to people. Let's find out who told us that technical solutions are better than human solutions. And, in fact, let's find out who gave us all of the ideas that we have today about technology and society. So here we have a couple of inventors of this field. We've got, on the left, Claude Shannon, who is the father of information science. We have Alan Turing; everybody knows who Alan Turing is. We've got Marvin Minsky, with the fancy hands there; Marvin Minsky is referred to as the father of artificial intelligence. And then we've got John von Neumann, who was a mentor of Minsky and is responsible for the underlying architecture of every
10:39 am
computer everywhere. And then we've got Larry Page and Sergey Brin, the founders of Google. What do these folks have in common? They are dudes. What else? They are white dudes. Anything else? How about their background? They are English-speaking. Anything else about their educational background? They are math guys. These are white dudes with mathematical backgrounds from Western countries. They have short hair, they are white, and they are Ivy Leaguers, more or less. There is nothing wrong with being an Ivy League-educated white dude mathematician. Some of my best friends are, in fact, Ivy League-educated white dude
10:40 am
mathematicians. But there's a problem here, which is that everybody embeds their own biases in technology. People embed their own biases in technology. So when we have a small and homogeneous group of people making decisions about technology for the whole world, we run into problems. We also have another phenomenon that happens in groups. That phenomenon is called positive asymmetry, and it is the tendency of people in groups to go along with the group and not bring up things that are icky. People want to focus on the positives. When you're in a group at work, you don't want to be the person who brings up the fact that, hey, this algorithm we're writing is going to discriminate against black people who are trying to
10:41 am
get mortgages. You don't want to be that person, right? It's a psychological phenomenon. So the issues that are embedded in AI, the issues that are embedded in algorithms, the cultural issues, the social issues, get overlooked or tend not to be discussed. And another thing we can identify about these folks, the core group who have given us our ideas about technology and society, is that their homogeneity comes from a very specific place. It comes from the Summer of Love. It comes from the new communalism, the hippies in the 1960s. You can see up on the screen now we've got some folks on a commune in a dome, and we've got the Magic Bus chronicled in
10:42 am
"The Electric Kool-Aid Acid Test." Our ideas about the internet, about what the internet is supposed to be, come from the Whole Earth Catalog. Steve Jobs once said he was trying to re-create the community of the Whole Earth Catalog. The Whole Earth Catalog had this section in the back where people wrote in letters and talked about their domes and their homegrown recipes for LSD, and it was a very vibrant community, which is really interesting. That is the community people were trying to re-create online. That's the community people were trying to re-create in creating comment sections. A lot of this comes down to one internet pioneer, Stewart Brand, who is right in the middle. This all comes from Fred Turner's book, "From Counterculture to Cyberculture," which chronicles the relationship between today's
10:43 am
high-tech world and the new communalism of the 1960s. The problem is, some of this is just wrong. And a lot of this ideology is antigovernment, anti-regulation. And what this does, together with ideas about technochauvinism, is muddy the waters around antitrust. So we have things like people believing, falsely, that tech companies are different from other companies, that the tech industry should not be regulated because it's going to do such a really great job of self-governance. We have ideas like: things that happen on the internet are different from things that happen in real life. And what we can do now, 20 years into the internet era, is say that tech is a pretty mundane
10:44 am
part of everyday life now. We can critique these ideas, and we can ask: do these ideas match up with what we want out of a democratic society? And so if we think about antitrust, we can ask ourselves: this world that we have made, is it really a world in which we have competition? Now, I hope you've all read the recent series about what happened when a reporter tried to cut the tech giants out of her life, right? Raise your hand if you read this series. I see a lot of hands, fantastic. What she finds is that it's practically impossible not to use the tech giants in your everyday life. We can also think about discrimination. Specifically, I'm going to talk about price discrimination. I'm not going to talk about racial discrimination today, although that is happening, but I will talk about price discrimination, which happens
10:45 am
pretty frequently in the unregulated online world. Here is a "Wall Street Journal" article from 2014 that explores why you can't trust that you're getting the best deal online; major retailers were found in this case to be guilty of price discrimination. What can we do? We have made this world. We need to make it better. What can we do? The first thing you can do is buy my book. Easy first step. And the next thing you can do, and you know I've got to say this because I'm a professor, is read widely. I have a couple of books up here I would also recommend. "Weapons of Math Destruction" by Cathy O'Neil breaks down the way that mathematical modeling has been
10:46 am
weaponized and turned against folks. We've also got "Automating Inequality" by Virginia Eubanks, which looks at the ways that technical systems are used to police, profile, and punish the poor. There is also "Algorithms of Oppression," which is about racism embedded in search engines. So, for example, when you googled "white girls," you would find images of white girls, whereas if you googled "black girls," you got porn. Profound inequality. And then finally we have "Programmed Inequality," which is about how Britain lost its edge in computing by edging out its community of women programmers after World War II. And another thing we can do is start having
10:47 am
conversations. We are a democracy. We can empower ourselves to no longer think technology companies get a pass because they're doing this fancy thing that we don't understand. We can understand. We can empower ourselves, and then we can have conversations as members of a democracy about how we want to be governed, what kind of world we want to live in, and we can start to ask the question: should we govern online the way that we govern real life? Because online is real life. Thank you very much. [applause] And we have time for a few questions. All right, all the way in the back. >> I really appreciate you coming here. [inaudible] In the movie "A Beautiful Mind," it
10:48 am
wasn't the math that was the issue in the end. Your point, it's interesting that he came to that conclusion, not the one that you so clearly made. I just wanted to thank you for that statement. >> Great, thank you. So the comment was about the movie "A Beautiful Mind" and about the reversal the character has at the end. There are lots of things in the world that count but can't be counted, and love is one of them. So this is a good reminder that we can't program our way out of every social problem. Technology is not going to save us from every social problem. So let's take homelessness, for example. The fix for homelessness is not making an app to connect people with services better. The fix for homelessness is
10:49 am
giving people homes. So we need to think about pushing back against technochauvinism and using the right tool for the task. Sometimes that tool is a computer and sometimes it's not. It's not a competition; both ways, both strategies, are equally valid. All right, also in the back. >> You mentioned governing online like we do in real life. There's lots of debate in D.C., and not just D.C.: there's California, and the GDPR going on in Europe. Any high-level thoughts about any of those consumer privacy frameworks, anything missing from the discussion, or things you would urge policymakers to focus on if they're not today? >> I would urge policymakers to get more educated about how the internet actually works, because there's been a lot of obfuscation by tech companies, who say we can't possibly enforce national boundaries on the internet because the internet is
10:50 am
global. There are national boundaries on the internet. So when I travel to Europe, I can't stream Hulu on my computer, because there are very practical things in place that show where I am. They work okay, and they show where I am. So my behavior as an individual is governed by the law of the land where I am; the fact that that is not enforced on the internet was a strategic decision by the creators of the internet and the creators of internet culture. It doesn't have to be that way. So I would like us to open up a dialogue about changing that. More questions? Right there.
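The "very practical things in place" can be sketched in a few lines: a service maps the visitor's IP address to a country and gates content on it. The prefix table, country codes, and function names below are illustrative assumptions of mine, not any real provider's data or API.

```python
import ipaddress

# Illustrative prefix-to-country table; real services use large
# commercial geolocation databases, not a hand-written dict.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "US",
    ipaddress.ip_network("198.51.100.0/24"): "DE",
}

LICENSED_COUNTRIES = {"US"}  # where the catalog may be streamed

def country_of(ip: str) -> str:
    """Look up which country an IP address appears to be in."""
    addr = ipaddress.ip_address(ip)
    for net, country in GEO_TABLE.items():
        if addr in net:
            return country
    return "UNKNOWN"

def may_stream(ip: str) -> bool:
    # The national boundary, enforced in a few lines of code.
    return country_of(ip) in LICENSED_COUNTRIES

print(may_stream("203.0.113.7"))   # an address in the "US" range here
print(may_stream("198.51.100.7"))  # an address in the "DE" range: blocked
```

Real services also handle VPNs and proxies, but the gating logic itself is this simple, which is the speaker's point: national boundaries on the internet are already enforceable.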
10:51 am
>> Adding to that question: one of the things policymakers often hear when there's any proposed change, for instance to govern the internet the way we govern real life, is that any small tweak will break the internet. I would love your thoughts on how to respond to this consistent talking point. >> That is a very computer-scientist response, that it will break the internet. Computer scientists rely a lot on the technological ignorance of the people around them in order not to do things that we don't want to do. This is one of the big secrets of the tech world. If you don't understand how the thing I built works, and I try to explain it to you and you don't get it, then I get frustrated and say, all right, well, you just don't understand; I can't do the job because it will just break things. It's a way of cutting off communication. It's a way of saying, I don't really want to do that.
10:52 am
And it's hard to tell whether something that somebody doesn't want to do is something that can't be done, or something that they don't know how to do. And teasing out the difference requires a lot of communication. And you know what programmers are really not good at doing? Communicating. We are really good at code but less good at communication. Not all programmers, of course, but. There's this cartoon that has a man and a woman in an office. The man comes in and says to the woman, I want you to make an app where the user uploads their photo and it tags whether or not it was taken in a national park. The woman at the computer, the programmer, says no problem.
10:53 am
Easy lookup, give me like an hour. Then he says, well, and I want you to make the program tell you whether or not the photo is of a bird. She says, sure, give me a research team and five years. So the caption is that in CS it can be hard to explain the difference between the difficult and the impossible. So it's really a communication issue. People in both directions are responsible: the person who knows less about technology needs to educate themselves about technology, and the person who knows more about technology needs to educate themselves about communicating better, in that situation and overall. In the front. >> [inaudible] -- to where we will be in five years, individual data ownership. >> Individual data ownership. Okay, where are we? We are in a big mess.
10:54 am
Where should we be? We should be in less of a mess. Where are we going to be in five years? I don't have the faintest idea. I think GDPR is a big deal. I think GDPR has made companies react in a way that is surprising. I mean, I did not know it was going to be as effective as it has been. So that feels like an important step in the right direction. Is it exactly the right thing? I don't know. It hasn't actually been in place long enough, and I personally don't understand enough about what's going on in Europe to make a pronouncement about it. What do you think about the future of data ownership? >> I think the balance of power is shifting. I think it's a question of how long before people have a lot more control. >> I like that vision of the
10:55 am
future. Okay. And we are out of -- sorry, one more question. Go for it. >> I think some of the anxiety about these computer-driven models, whether it's the search results you get when you type in "black girls" or whether it's these pricing mechanisms, is that the public cannot see the decisions being made behind the scenes by the math. Do you think it would be both helpful and doable for there to be some kind of transparency about how these decisions get made, for these insurance prices or these image search results? >> Absolutely, yes. More transparency would really help. One of the reasons I wrote my book, "Artificial Unintelligence," was so that people could be empowered around technology. Because you can't actually understand the writing about fairness and transparency in insurance models until you
10:56 am
understand some basics about artificial intelligence, and those basics are not easy to access. So the book breaks it down in plain language that most people can understand. That, I think, is the important first step to understanding more about fairness and transparency. And there's been a really exciting conversation going on around fairness and transparency and ethics in machine learning, and in artificial intelligence in general. There's a conference called the fairness, accountability, and transparency conference, FAT*, that just happened, I think two weeks ago, in Atlanta. It's lots of computer scientists and mathematicians and policymakers and lawyers all getting together to say, what do we want in terms of fairness and transparency? It's really hard to explain what's going on inside a machine
10:57 am
learning model, and we don't have any kind of unanimity about how to communicate that yet. We don't have good visuals for communicating it yet, and lots of the explanations that I've seen have been over most people's heads. So I think that communication issue is another one we are going to have to solve. We are going to have to be able to communicate about these things at a high mathematical level and at the level of everyday communication, so that everybody can understand, because that's a really important part of democracy: that everybody can understand what's going on. Thank you very much. It's been a pleasure speaking with you today. [applause] >> Here's a look at some books being published this week.
10:58 am
10:59 am
>> Look for these titles in bookstores this coming week, and watch for many of the authors in the near future on Book TV on C-SPAN2. >> C-SPAN: where history unfolds daily. In 1979, C-SPAN was created as a public service by America's cable television companies, and today we continue to bring you unfiltered coverage of Congress, the White House, the Supreme Court, and public policy events in Washington, D.C., and around the country. C-SPAN is brought to you by your cable or satellite provider.

