
Jennifer Neville Discusses Artificial Intelligence | CSPAN | December 30, 2016, 6:56pm-8:01pm EST

6:56 pm
chatbots. the speaker is jennifer neville, who focuses her research on machine learning and data mining. this is just over an hour. >> well, good morning, ladies and gentlemen, and welcome. it's my pleasure, my great pleasure, to introduce jennifer neville to you. she is an associate professor and chair of computer science and statistics. she has come to us even though her department is undergoing an external review right now; she is giving up a lot to be here and i appreciate that. her research interests lie in the development and analysis of relational learning algorithms and their application to real-world tasks. today she will present a talk entitled "ai easy versus ai hard."
6:57 pm
if you haven't already done so, i would ask you to please silence your electronic devices, but don't put them away. as we learned in the last session, 15 years ago, 10 years ago, if you were heads down on your phone during a presentation that was regarded as a bad thing, because you weren't paying attention to the talk. now it's regarded as a bad thing if you are not on your phone, because you are obviously not tweeting about it and communicating. snapchatting, what not. but please do silence your devices. we hope to see you tweeting; the hashtag is dawn or doom. any posting to instagram or snapchat or whatever social means you prefer. please join me in welcoming dr. jennifer neville.
6:58 pm
[ applause ] >> thank you, gerry. is the mic on? can you hear me okay? can you hear me now? >> yes. good. okay. thanks for having me here again at the dawn or doom symposium. this is one of my favorite places to come and talk, because i can think at a higher level to a more general audience than at my typical research conferences. i have a joint appointment between computer science and statistics. my area of research is machine learning and data mining, and specifically i focus on very complex domains where you need to take into account interactions between many entities in order to make accurate, efficient predictions. but today i'm not going to talk about my own research; i'm going to really try to give you a sense of what's going on in the ai community right now. there have been a lot of interesting breakthroughs over the last few years. there's a lot of excitement about ai, and what i'm going to
6:59 pm
try to do today is compare and contrast two events that happened earlier this year, and talk to you about how to think about what's hard and what's easy in this space given those two occurrences. so really i want to contrast these two recent events from march of this year. the first major breakthrough, you can see here, was on march 15: google's alphago system, a computer game-playing system that plays go, won over a professional go player, lee sedol, and that was a major breakthrough for the community. it was something that, at least when i started in the field of ai, i thought about doing. i taught myself how to play go and realized i couldn't even learn how to play go well, let alone learn how to program a computer to play it.
7:00 pm
so success happened much earlier than people anticipated, and in the news this was touted as a major breakthrough and something that was going to transform what we saw in terms of successes in ai. less than 10 days later we had another event happen, which you may or may not have heard about in the news. this is march 24. microsoft released an ai chat bot onto twitter. it was called tay, and it was designed to mimic a young teenage girl; the ai robot was supposed to interact with millennials and learn how to engage them in conversations on twitter. and this experiment went horribly wrong, if you didn't see this in the news. within 24 hours this pleasant teenaged chat bot turned into a
7:01 pm
racist, abusive, sexist entity, and microsoft removed it from the internet within 24 hours of releasing it and apologized profusely about the event. the important thing here is that they didn't anticipate this was going to happen. some of the news coverage that explained what was going on blamed us as humans for being horrible people on twitter and turning this cute little chat bot into a horrible mess. what you might not realize is that microsoft didn't do this unknowingly: they had a similar type of chat bot called xiaoice that had been used very successfully in china for two years previously, interacting with as many as 40 million people. so what was the difference between when they rolled it out in china versus when they rolled something out on the general internet, for the general public, on twitter?
7:02 pm
that's what i'll talk about when i get more into the systems, but the important thing i want to contrast from a technological standpoint is that algorithmically, there's a lot shared between these programs. the computer that is going to play go would be different than a chat bot that is going to interact conversationally with people, but many of the methods underlying these are the same. so why was one system a major success and the other a major catastrophe? to understand this, let's go back to the beginning of ai to learn about the history. ai has been around for 60 years now. it was started in 1956, when john mccarthy, marvin minsky and claude shannon proposed to have a two-month summer conference at dartmouth to explore and define
7:03 pm
what this field should be, under the conjecture that all aspects of human intelligence could be encoded in a computer system, including aspects of learning. so when they jump-started the field with this conference, there was a lot of excitement and a lot of effort in many different directions in the field. i'll focus mostly on machine learning because that's my area, but there are a lot of other areas included that i won't touch on today. the early work in the field was inspired by considering how we as humans might be learning. one of the first machine learning methods was the perceptron. this is a mathematical model where they tried to mimic neurons: once a neuron reached a threshold it would fire a signal, and they tried to fold this mathematically into the perceptron.
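to make that concrete, here is a minimal sketch of the classic perceptron rule in python. it is my illustration of the idea, not any historical implementation: a weighted sum of the inputs, a threshold "firing" rule, and an error-driven weight update.

    # a minimal perceptron sketch: weighted sum of inputs, a threshold
    # "firing" rule, and the classic error-driven weight update.
    def predict(weights, bias, x):
        # the "neuron" fires (outputs 1) when the weighted sum crosses the threshold
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        return 1 if activation > 0 else 0

    def train(examples, n_inputs, lr=0.1, epochs=20):
        weights, bias = [0.0] * n_inputs, 0.0
        for _ in range(epochs):
            for x, label in examples:
                error = label - predict(weights, bias, x)  # -1, 0, or +1
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # learn the logical AND function, which is linearly separable
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, b = train(data, n_inputs=2)
    print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]

the AND function is used here because it is linearly separable, which is exactly the class of problems a single perceptron can learn.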
7:04 pm
another was samuel's checkers program in 1959. it was one of the first game-playing systems; it formulated the problem of learning how to play a game as a game tree, where you have to look ahead to what the outcome of the game might be in order to decide a strategy of what moves to play. the methods he used to develop the system were the precursors to the subfield of machine learning that is now called reinforcement learning. in that case there's not an immediate signal that tells you whether you're right or wrong in terms of what decision you made; you have to wait for some amount of time before you get feedback as to whether you're headed in the right direction or not.
7:05 pm
and that is the case with games. finally, the third thing i want to point out here is that at the same time there was another area of ai that was focused on developing dialogue systems. these were the precursors to what is being used in chat bots. in these systems there was input from users, and the ai system would have to figure out how to make a response. one of the first examples was the eliza system, invented by weizenbaum in 1964. this was built, initially, as a parody of a rogerian psychotherapist. what the system would do is interpret some information about what the user put in, through some pattern matching and processing of the language they had inputted,
7:06 pm
and then pick responses based on certain templates they had in the system. if you were going to parody a psychotherapist, you can imagine that a lot of the answers were things like "how do you feel about that?" and "tell me more." at first, when weizenbaum created the system, he created it to show people how difficult the problem of an ai system interacting with humans would be, and he was surprised at how many people actually were fooled into believing that it was a person. so this jump-started work in that field.
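as an illustration of that template style, here is a toy eliza-like responder in python. the patterns and canned templates are made up for this sketch; they are not weizenbaum's actual rules.

    import re, random

    # toy eliza-style responder: match the input against hand-written
    # patterns and fill a canned template with the captured text.
    # (the real eliza also reflected pronouns, e.g. "my" -> "your".)
    RULES = [
        (re.compile(r"i feel (.*)", re.I), ["why do you feel {0}?", "tell me more about feeling {0}."]),
        (re.compile(r"i am (.*)", re.I), ["how long have you been {0}?"]),
        (re.compile(r"my (.*)", re.I), ["tell me more about your {0}."]),
    ]
    DEFAULTS = ["how do you feel about that?", "tell me more.", "please go on."]

    def respond(text):
        for pattern, templates in RULES:
            match = pattern.search(text)
            if match:
                return random.choice(templates).format(match.group(1).rstrip(".!?"))
        return random.choice(DEFAULTS)  # fall back to a generic prompt

    print(respond("I feel sad about my job"))  # e.g. "why do you feel sad about my job?"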
7:07 pm
so how do these basic systems work? let's consider the checkers scenario. the way we frame these as learning systems reflects how you might think if you self-reflected about how you yourself learned to play a game; that is really what we put into our computer algorithms as well. if you were going to teach somebody how to play checkers, you would tell them about the board, you would tell them about the pieces and what kinds of moves they can make, how you win the game and so on. then they might watch other people playing the game to figure out choices of moves and strategies for scenarios, or you might play yourself and lose a lot at the beginning, but you would start to understand which moves were good choices and which moves were bad choices, so you can improve your strategy over time. that's what we do when we put this into an algorithm. i'm sorry, i messed up my order here: before i tell you how the algorithm would do that, let me tell you about the history of the successes we've had. this is the area where a lot of the general public is very excited to see what is being achieved by these computer programs.
7:08 pm
in 1994 the chinook checkers program that was developed at the university of alberta won the world checkers championship. and td-gammon, a system from ibm that learned backgammon through reinforcement learning, reached the level of top human players, and it had a surprising style of play -- moves people initially thought were mistakes -- that eventually was incorporated into human play as well. so the humans learned from the computer program about better strategies than they had thought about before.
7:09 pm
thought about before. some of you may be old enough to remember in 1997, deep blue, the program from ibm also achieved this level with chess when they beat gary kssparov. much of the success was from architectural computation nal improvements that allowed them to do brute force search much faster than we had previously, so that success was very different than the ability to learn with td gammon. now recently we've seen with alpha go, those two types of successes have been combined together and so the system alpha go system that was developed by deep mind and google has just now beat this professional player in go and again had a surprising move that people initially thought was a mistake, it's called move 37 in game two,
7:10 pm
if you want to look it up. no human would ever have made that move, and at the time the system made it, it was very surprising, but it turned the tide of the game and resulted in a loss for the human player. again, people were surprised by what the program ended up doing. so how do the programs do this? they frame this problem of learning how to play a game in a search tree like this. what this search tree, this game tree, shows is all possible board configurations; this one shows you the game tree for tic-tac-toe. up at the top you can see the board is completely empty, and everybody knows the rules of tic-tac-toe: you have xs and os. x chooses to move somewhere on the board. these three positions represent all possible positions of where x could go, because this one in the middle could be here, here, and here and you have the same outcome.
7:11 pm
same for the corner positions. so x has to choose where to go, and then at the next level o needs to make a choice based on the available spaces still left on the board. you can imagine following through this game tree: a sequence of choices by the two players forms a branch in this tree, all the way to the bottom, where you end up with something winning or losing. so this is how we'd frame this as a computational problem, in terms of deciding what strategy to play. at the beginning of the game you have no idea whether a move is a good choice or not until you get to the end of the game. so you make choices through the game and only get a signal at the end, and from that you develop a strategy based on your experience.
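to give a sense of that tree in code, here is a small sketch (my illustration, not from the talk's slides) that enumerates the complete tic-tac-toe game tree and counts the finished games at its leaves:

    # enumerate the full tic-tac-toe game tree and count the leaf nodes
    # (completed games). lines, symbols, and the recursion are just the
    # standard rules of the game.
    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def count_leaves(board, player):
        if winner(board) or all(board):        # game over: this is a leaf
            return 1
        total = 0
        for i in range(9):
            if board[i] is None:               # branch on every legal move
                board[i] = player
                total += count_leaves(board, "o" if player == "x" else "x")
                board[i] = None
        return total

    print(count_leaves([None] * 9, "x"))       # 255168 complete games

without exploiting board symmetries it finds 255,168 complete games, the same order of magnitude as the 100 thousand figure quoted below.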
7:12 pm
so the complexity here algorithmically comes from the size of this game tree: how easy it is to encode it all, get to the bottom, and propagate signals back up. one way to look at the complexity is to look at the size of the game tree. tic-tac-toe is a relatively simple game, one you don't play any more unless you have children because you know you can't win, and it has about 100 thousand leaf nodes at the bottom of the tree. so that still is fairly large. checkers has about 10 to the 31, and then you can see with go, this is many orders of magnitude larger than checkers: we have 10 to the 360 here. and just to give you a sense of how big these numbers are, 10
7:13 pm
to the 28 is about the number of atoms in your body, and 10 to the 80 is the number of atoms estimated to be in the entire universe. these numbers are astronomically big; if you think of exhaustively searching those game trees, there is really no hope. to give you a sense, here's a back-of-the-envelope calculation of how long it would take to search these trees. say you could evaluate one million board positions every second, which is a lot -- maybe our current cpus would be able to do that kind of evaluation. then you would be able to search the entire tic-tac-toe tree in a tenth of a second. but when you move on to checkers, it would take you 10 to the 18 years to search that tree. it's entirely impractical to think about searching these entire trees.
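the arithmetic behind those estimates is easy to reproduce; a quick sketch using the tree sizes quoted above:

    # back-of-the-envelope: time to search a tree with 10**exp leaves at
    # 10**6 board evaluations per second (about 10**7.5 seconds in a year)
    for game, exp in [("tic-tac-toe", 5), ("checkers", 31), ("go", 360)]:
        log_seconds = exp - 6          # divide leaf count by 10**6 per second
        log_years = log_seconds - 7.5  # log10(3.15e7 seconds/year) is about 7.5
        print(f"{game}: about 10^{log_seconds} seconds, about 10^{log_years:.0f} years")

    # tic-tac-toe: 10^-1 seconds, i.e. a tenth of a second
    # checkers: 10^25 seconds, about 10^18 years
    # go: 10^354 seconds -- beyond astronomical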
7:14 pm
so what we need to do to be able to solve these problems is resort to machine learning. we need to be able to learn whether a particular move in a particular board configuration is a good move without having seen the entire tree, and the way we do that is through learning. so how does the alphago system work? there are papers and many talks on this; i'll try to characterize it very simply for you. it's a combination of ideas that have evolved from the two historical methods here. deep learning is really a very complicated version of a perceptron -- many perceptrons combined together in many layers -- so the learning is much more complicated than it was with a single node, but that's what deep learning has grown out of. what alphago does is combine deep learning with reinforcement learning, so the ideas that came from the checkers game playing -- where we have to wait to understand what the
7:15 pm
reward is going to be after we've won or lost the game -- are also combined into their system, and the way they do this is by learning multiple models and combining them together. in particular, they have two neural networks they learn with deep learning. i think i have an animation here; here are the details from their paper. one neural network -- you can see these are much more complicated networks here -- is designed to predict the next move given the current state of the board. that is very immediate feedback, because they're not trying to predict whether they're going to win or lose, they're trying to predict the next move. and they train that model on 30 million positions that they acquired from traces of expert games, so human decisions about which moves to make. they learned a second model that is going to predict the likelihood that they're going to win the game given the current
7:16 pm
state of the board. so this is a much longer-range prediction of how good the situation is that they're in at that point. then they update these two models using reinforcement learning and combine the two together in a complicated monte carlo tree search procedure. i mention this only to tell you that it's not a simple approach; it's a very complicated system to solve this problem, but it's really using many of the simple foundational methods that we've built up in the areas of machine learning and ai.
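in code, that division of labor might be sketched like this. it is a loose illustration of the idea -- a policy model proposing a few moves and a value model scoring positions inside a shallow look-ahead -- not deepmind's algorithm, and all the named functions are hypothetical stand-ins:

    import random

    # stand-ins for the learned components and game rules; in alphago these
    # are deep networks trained on expert positions and self-play
    def policy_net(board, move):   # how expert-like does this move look?
        return random.random()

    def value_net(board):          # long-range estimate of p(win) from here
        return random.random()

    def legal_moves(board):
        return list(board)

    def play(board, move):
        return [m for m in board if m != move]

    def search(board, depth, maximizing=True):
        # at the horizon, ask the value network instead of playing out the game
        if depth == 0 or not board:
            return value_net(board)
        # the policy network prunes: expand only its top few suggested moves,
        # not all branches of an intractably large tree
        moves = sorted(legal_moves(board),
                       key=lambda m: policy_net(board, m), reverse=True)[:3]
        scores = [search(play(board, m), depth - 1, not maximizing) for m in moves]
        return max(scores) if maximizing else min(scores)

    def choose_move(board):
        return max(legal_moves(board),
                   key=lambda m: search(play(board, m), depth=2, maximizing=False))

    print(choose_move(["a", "b", "c", "d"]))

the real system replaces this fixed-depth recursion with monte carlo tree search and refines both networks with reinforcement learning, but the division of labor -- policy network for immediate move prediction, value network for long-range win prediction -- is the one described above.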
7:17 pm
so when you think about these game-playing systems, the two dimensions of complexity that decide whether the problem is easy or hard are the size of the search space -- as we go from small, tractable trees like tic-tac-toe to things like go, things get harder -- and also the amount of delay you have until you get some feedback that we can use in the algorithm to do learning. i think i might have forgotten to say one of the factors in these games that impacts the amount of delay: it depends how long the game is. if your game only requires 10 or 15 moves, then you are going to get feedback after 10 or 15 moves, but if it takes on average 150 moves, then you have to go much further down the game tree and your reward is much more delayed. so now let's contrast this with dialogue systems and what went on with tay. in figuring out how to learn how to interact with humans in a dialogue system, the basic learning procedure is very much the same as what i described for games, but in this case you need to learn about language and behaviors and interactions with other humans.
7:18 pm
and so what are the experiences that you are going to have? they're not going to be games with a fixed length and a very clear signal of whether you won or lost at the end. you are going to have conversations with people, and you are going to have subtle feedback about whether they like you and want to continue engaging with you or not. this is something that maybe even humans are not all that good at, or we're not uniformly good at: you hear about things like social intelligence or emotional intelligence, and people who are able to interpret those signals from other people more effectively are valued as having more emotional intelligence. from those signals, though, once you interpret them, you again should learn from the feedback good strategies of how to engage with people and how to extend and make that engagement more effective. so in terms of the history of
7:19 pm
dialogue systems, we've also had a number of successes along the way in the ai community. the history really started with eliza, the psychotherapist model i talked about before. then we had the alice system, which was able to convincingly mimic conversational exchanges -- this was the inspiration for the spike jonze movie "her" that came out in 2013 and was aired at the dawn or doom conference back in 2013. then, you might think this next one should have gone in the game-playing list: in 2011, ibm's watson system beat top players at jeopardy. but much of the technology it was using is the same type of technology you need for dialogue systems. they needed to understand the
7:20 pm
input from the clue and figure out how to answer the question using natural language processing and retrieval techniques. although there wasn't the same sort of interaction, it was really a big success for the natural language processing community. then, something you may have heard about: in 2014 there was this system called eugene, which nominally passed the turing test. it was a system developed by three russian programmers, and in a competition, in trying to convince a set of judges that it was human, it was able to convince 33% of the judges that it was human. and so they claim that it passed the test. there are some people that disagree with that claim, because this chap here, eugene, was developed as a teenage ukrainian boy, and they used a lot of particular tricks
7:21 pm
to hide the limitations of its processing system. because it was a young boy from another country, it was very forgivable for it to not quite understand the language or the questions people were asking it, and it could also be more curious, able to ask more questions and deflect the conversation when it didn't know what was going on. so while people think maybe this system hasn't really passed the test, it did use a lot of techniques like humor and deflection in a very interesting way that was able to convince these judges that it was human. and so really that should be considered an achievement in this area. and then in 2016 -- i guess you could disagree whether this is a success or not -- tay was released by microsoft, and really it was successful; in fact maybe it was a victim of its own success.
7:22 pm
tay acquired 50 thousand followers and tweeted 100 thousand times in the hours after it was released. there was a lot of interest in tay from many types of groups on the internet, but really it was vulnerable to a coordinated troll attack: hundreds of users identified a way to interact with tay in order to change the language and structures that the chatbot was using, in a way that was not anticipated. so how do these systems work? i can't describe them with a simple game tree representation for you; they're much more complicated architectures with a lot of components. the main components are these: the user has a message, typed in or spoken. there is some sort of speech recognition and natural language processing that tries to understand the context and the
7:23 pm
intent and the task that the user is trying to achieve, and this goes into some sort of dialogue management system, which has all the guts of the natural language processing and response generation. there are two basic ways that methods have tried to generate responses. the older method is a rule-based matching system that takes some words from the input, tries to match them to certain templates for the types of response it should generate, then goes to a database of responses, figures out how to generate one, and sends it back to the user. the more current methods are retrieval-based methods, based on what happens in the information retrieval systems used at google and bing to identify which documents to return. they take the same basic technology, treat the input as an information need, use language models and the context, and figure out from the candidate responses what to return to the user.
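a toy version of that retrieval idea, assuming a small hand-written bank of candidate responses and bag-of-words overlap as the relevance score (real systems use learned language models over far larger response banks):

    # toy retrieval-based responder: score a bank of candidate responses
    # against the user's message and return the best match. real systems
    # replace the word-overlap score with learned language models.
    RESPONSES = [
        "i love talking about music. who do you listen to?",
        "the weather here is always sunny inside a data center.",
        "tell me more about your day.",
    ]

    def tokens(text):
        return set(text.lower().split())

    def respond(message):
        query = tokens(message)
        # rank candidates by word overlap with the input, like a tiny search engine
        return max(RESPONSES, key=lambda r: len(query & tokens(r)))

    print(respond("what music do you listen to?"))  # picks the music response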
7:24 pm
so what is the complexity in developing these dialogue systems? the two dimensions that contrast with the game-playing systems are these. first, although language has structure and rules like games do, it's not as clear-cut as the actions and the outcomes in games. for one thing, it's continually evolving: words are being added to our language all the time, rules are broken, and the way people speak doesn't necessarily follow those rules. they invent slang, there are colloquial terms, there is use of sarcasm, irony and humor, ways we use words in which they were not intended.
7:25 pm
so this means that the search space these algorithms have to consider, in learning how to behave given user input, is effectively unbounded. there are an infinite number of possible scenarios the user can put the system in, and you have to develop a system that will be robust and able to adapt to scenarios it has not seen before. the second type of issue is that the feedback about whether you are doing something correct is much more vague and unclear. you would know that from your own social interactions and how you try to interpret who is happy with you, who is not, who likes you, who doesn't. the feedback is often unclear, and often there is a much longer delay before you get such feedback. and you can see this as an issue in terms of what we put as objective functions into our systems: right now they would be trying to optimize something like the length of conversations that people have
7:26 pm
with the chatbots, to use as a proxy for things like satisfaction or user engagement. so what does tay do as a system? again, just as with alphago, i'll try to characterize it simply. the microsoft people have not written papers describing the system, so the information i have here is gleaned from talks i have seen from people at microsoft research about other component technologies they're developing, as well as some of the information that's been discussed about tay. it's pretty clear that they're using very complex natural language processing that is based on deep learning techniques as well. what they've really done is move beyond the simple matching and retrieval-based systems to use complicated deep learning models to predict what kinds of responses are likely to be a good match to particular
7:27 pm
user inputs. again they're using massive amounts of training data; just like the alphago system, they're using millions of examples of interactions between users that they get from twitter and other online sources, where you can see these kinds of engagements over time. the big difference here is that the chatbot system is an open system versus a closed system. there are no clear bounds to the types of interactions it might have, or the types of behaviors it might see from people, and this makes it very vulnerable to attack. for example, they never anticipated this kind of coordinated trolling attack would happen when they were developing this system. so the dimensions of complexity here with respect to dialogue systems are similar to the things we talked about with games, but in this case we now have a third size of space, an open system with an unbounded search space, and now we have feedback that is not
7:28 pm
just delayed but vague and unclear. and so we see, as we go on in the development of ai, we tackle harder and harder problems with these systems. when we compare the two, the game-playing systems and the chatbots, we can see that when the problem can be formulated with clear, immediate feedback in a tractable search space, we're able to solve it pretty easily. those are really the situations where it's considered easy, and where we have our major successes in the community. but as we go further up to this upper right-hand corner, that is where our problems are still hard and really where the current work needs to happen. so if we go back to discussing the difference between alphago
7:29 pm
and tay, when i said they were built on the same underlying technologies, they both use deep learning methods, which have shown significant and impressive improvements in machine learning lately, as long as they have massive amounts of data to train on. one major difference between what alphago did and what tay could do is that tay can't learn from playing against herself, which is what alphago did. in the alphago system, because the rules of the game are clear and whether you win or lose is very clear, they can take different versions of their system and play them against each other, to generate even more experience of the different types of sequences of moves and outcomes you would see. but that's impossible to do with tay -- or you could do it, but it wouldn't be enough to make the system robust to the kind of interactions you would get in the real world with real users, because of the variety of behaviors you would see from users.
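here is a bare-bones sketch of the self-play loop that a crisp win signal makes possible. the trivial "game" is a made-up stand-in, since the point is only the loop structure: two copies of a policy play each other, and every position gets labeled with the eventual outcome.

    import random

    # bare-bones self-play: two copies of a policy play each other, and
    # every position is labeled with the eventual game outcome. the "game"
    # here is a trivial stand-in; in alphago it is go itself.
    def play_one_game(policy_a, policy_b):
        state, history = 0, []
        for turn in range(10):
            policy = policy_a if turn % 2 == 0 else policy_b
            move = policy(state)
            history.append((state, move))
            state += move
        winner = "a" if state > 0 else "b"      # crisp, unambiguous outcome
        return [(s, m, winner) for s, m in history]

    def random_policy(state):
        return random.choice([-1, 1])

    # generate labeled training data from a thousand self-play games --
    # possible only because the rules and the win condition are encodable
    training_data = []
    for _ in range(1000):
        training_data += play_one_game(random_policy, random_policy)
    print(len(training_data), "labeled positions")

nothing like that crisp outcome label exists for a conversation, which is why this trick was not available to tay.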
7:30 pm
and so to get the kind of feedback and experience that alphago is getting, tay needs to be out in the real world interacting with users and using that to figure out how to behave. that meant that when tay was released onto twitter, she was still learning. so what happened with this coordinated troll attack was that users created conversations with tay and used language in a particular way that eventually made tay think: this is how people talk, and this is what i should say. and so eventually, after enough interactions with these kinds of people, she was turned into a horrible person. and what i like to use as an example
7:31 pm
to explain this to students is this: although tay was very successful as an algorithm and had learned a lot of things about interacting with humans, you might think her amount of knowledge was equivalent to a 7 or 8-year-old child who has had some experience with interactions with people in a very limited environment. and you don't let a 7 or 8-year-old child out on twitter, right, to interact with the general public. the difference when xiaoice was operational in china is that in china there are a lot more restrictions on what kinds of information are on the internet, because of the cultural control, and so it didn't really get the same kinds of interactions as you might on twitter. and so if you think about this, you might say, as i did: from an algorithmic perspective we
7:32 pm
should know this would happen, and the algorithm should be able to tell that people are changing their behavior and how they talk in order to make this chatbot behave differently -- that they can change the input in a way that changes the output in a terrible way. but this is very complicated for the algorithm to detect. it's easy to say there were hundreds of people that participated in this coordinated attack, but when it started it was very subtle, and it's very hard to detect and know that one individual interaction is not a valid one, and to figure out how to identify it and adapt. this is why, when we have young children and we're teaching them how to interact with people, we don't let them go on twitter right away; we send them to
7:33 pm
kindergarten, and they interact in a restrained environment with people who are kind and loving to them. maybe there are bullies in the class, but they get only small doses of that until they figure out how to interact in those situations. so in terms of what we need now for future research, really what we have to think about is that we have to push in this direction where we're moving to unbounded kinds of situations in which our systems have to behave. and this is something that humans are able to do very easily -- well, maybe not very easily; some people are able to do it easily. we expect people to be able to adapt to new situations and learn how to behave with relatively few examples, after they have developed over time into an adult. and so the way that microsoft is trying to encode this information in their
7:34 pm
chatbot system is by working with improv actors. this is something very non-computer-sciency, right? but think about what improv actors have to figure out how to do: they're thrown into new situations all the time, and they have to figure out how to keep the act going, right, which is exactly what you want a chatbot to do. so they're working with humans who seem to have this skill, to figure out how to encode that ability algorithmically, and maybe if we learn more about how they view their interactions, we'll be able to put that into our algorithms to deal with these open-ended systems. the second issue is this dimension here of feedback. we really have to figure out how to deal with this subtle, inconsistent, maybe long-delayed feedback. but that is also something that we do fairly well -- maybe people with emotional intelligence do it
7:35 pm
better than others, but it is something that we can figure out how to do ourselves, so it should be something we can figure out how to do in the algorithms. the new areas of computational social science or computational humor are the kinds of research directions that are trying to take these into account. computational social science is nominally the area i'm in. when i try to come up with machine learning algorithms that take into account interactions between users, really what i want to move to are complicated situations where many people are interacting with many other people and we would like to learn how to predict their behavior -- where you have continual one-on-one interactions over time, people interact with different sets of people and then move and change groups and things like that. so the ideas from social science of interpersonal communication, impression management,
7:36 pm
trisism, all of these kinds of ideas, need to be put into our algorithms and learning methods. computation al humor is another area where we'll have a few speakers later on in this conference this afternoon talking about how do we -- how do we understand what is funny, how do we know what is funny and how do we put that into -- how might we put that into an algorithm is another interesting direction to try to deal with the situations where there is not clear feedback. okay. so to wrap this up, we've made really great progress in ai over the last 60 years. we have things now that are starting to become reality where we have self driving cars, that are able to automatically identify what they see from sense sors and figure out how to adjust the car as it's driving. we have smart buildings that are
7:37 pm
able to automatically adjust air flows, and we have the beginnings of personalized medicine, where instead of deciding one treatment plan for large populations or sub-populations of the general public, we're starting to have methods that are personalized to someone's genome or history of illnesses over time. so this is really an exciting time for this field -- computer science in general, and ai and machine learning specifically. you might be tempted to say, well, these dimensions of complexity that i talked about, they're not too hard; we will eventually get to that upper right-hand corner without much effort. but i would caution you here that as a community, we've always notoriously underestimated the difficulty of the problems that we're addressing. so i will end here with a story that
7:38 pm
all ai students know about but you might not. in 1966, ten years after the beginning of ai, papert, one of the group of mit faculty that started the field, told his graduate students to solve computer vision as a summer project. why did he think they could solve it over the summer? he said because, unlike many other problems in ai, computer vision was easy: it was clear what we were trying to solve, and it would be easy to encode that algorithmically. now, 50 years later, we're still working on this problem -- and it was supposed to be a summer project. but maybe we're close to solving computer vision fairly soon, based on all the work going on in ai. so i will close here with this
7:39 pm
image from facebook's vision system, which is able to take images that are uploaded to facebook and automatically identify components of the image and know what they are, so it can produce a textual representation of what is in the image, and that can be used to produce a verbal description for blind people. this shows you the capabilities that we have with our systems now in terms of how much can be identified. from this image here, these objects are going to be identified as sheep -- all the little blobs are surrounding the objects in the image -- and not only can we segment the image and identify the important components, we can also decide what those things are and produce labels for them, so that the image can be described to people who can't see it. and obviously facebook can use that information for many other aspects of their system, but the benefit to larger
7:40 pm
society is clear from this. so i will stop there. thank you very much. [ applause ] >> that was a great presentation. can we take some questions? anybody who would like to ask -- because we're recording the event, i would ask you to come up to the microphone like i did. we'll go until quarter of, okay? >> hi jennifer. so you mentioned emotional intelligence, and i have heard some scientists talking about this, specifically the biologist church, who is a futurist. he has mentioned before the idea of trying to teach ai emotional intelligence. it seems like if we could have taught tay to have morality or a sense of emotion, then maybe she wouldn't have fallen into the trap she fell into.
7:41 pm
and i wonder, is there a way to preprogram morality, or is that itself something we still don't understand well enough to do? >> yeah, that's a very good question. i think that issue of morality also comes into play when you think about self-driving cars: the current requirements on how safe self-driving cars will have to be before they're allowed on the road set a threshold of safety that we as humans can't even meet. somehow we think that the system has to be much, much safer; we can't possibly release a system out into the world that we think will conceivably end up killing someone, even though we as humans might go out and get into a car accident and have somebody die. so i think there's complexity in that. absolutely there are people thinking about how to encode this algorithmically, but we have to think differently about
7:42 pm
it than we would with game-playing systems. with game-playing systems, everybody agrees on which outcomes are good and which outcomes are bad: you either won or lost. when we predict credit card fraud or detect spam in your e-mail, there is still a fairly clear signal as to whether a transaction is spam or fraudulent. but when it comes to something as complex as morality, we don't even all agree on what the answer is. and so that is something that probably we could encode very easily, if we were willing to be satisfied with that kind of encoding -- if you somehow decided how to value human life and what kind of action you would take if you were about to get into a car wreck with, say, ten people in front of you. if you keep driving, you get into a car wreck and possibly kill ten people, or
7:43 pm
there is one person on the side of the road, but it's your grandmother -- what would you decide to do? we would all make different decisions. we would probably like to think that we don't want one single answer, but we could come up with a distribution of what humans would find acceptable. we have to think a lot more carefully about acquiring that feedback and encoding it in order to use it. so i think it will come, but i think there are going to be some tough philosophical discussions amongst us as humans as to how we actually value these things, because what the algorithms are doing is simply valuing different kinds of outcomes. once you put a value on things, we can optimize for it. so if you as the general public can value human life, then we can start developing systems to
7:44 pm
optimize it. >> at various points in your talk, the ideas came to me of the possibility or probability of things like computational propaganda and computational surveillance. i wonder if you would comment on that. >> absolutely, yes. those are two good issues to bring up. computational propaganda would be the situation where we would inject bots into the world that would try to persuade people of something.
7:45 pm
if i go too far in doing that persuasion, you might think of it as propaganda. related to what we just talked about: trying to probabilistically identify those scenarios is going to be much more robust than making a strict yes-or-no decision, and having the system self-monitor, or interact with humans to say, this is a situation where i think this is going on, let's check it. that is something that happens in fraud detection right now. some of the systems are automatically detecting whether particular credit card transactions are fraudulent, and
7:46 pm
they might immediately shut down your card when it's clear it's a fraudulent transaction; in other cases they'll notify you as a user to try to get your feedback about it. what was the second topic? >> surveillance. >> surveillance. that's a very important thing to discuss as well. you might not realize that these systems are really tracking everything about what you do -- everything you say online, every search that you do -- and the security and privacy people have shown it's very easy to re-identify you from your electronic traces of behavior online. if you had access to my search history from my computer: there are very few people who live in west lafayette, travel to the bay area a lot, look at machine learning topics and are vegan. quickly, from a few aspects of my
7:47 pm
profile, you would be able to narrow down that it's probably me from that search history. and so it's a sort of scary thing, and i guess my answer to that is: once that data is out there, we can choose to use our powers for good or evil. there are always people who are going to try to use them for evil, but that doesn't mean we shouldn't be developing the computational methods, because if we understand how things happen computationally, maybe we can identify when this type of thing is going on. and i would say that the privacy-preserving machine learning and data mining community is really focused on developing methods that can be applied for learning in a situation where personal information is aggregated, but where you can guarantee there is no leakage of information about any one individual being in that data set. i think those are the types of methods that should give some
7:48 pm
solace to people. i also say this to my students: you enjoy being targeted. in your systems, when things get personalized to your own specific behavior, you enjoy the fruits of that analysis all the time. your e-mail is figuring out which people are valid senders and receivers for you, so that it can identify spam. the search engine is looking at your history, so once you type a few letters it already knows what you are searching for. this is making your life easier, maybe without you even knowing it. so that is the good from it. but if people get to the point where they deny you health insurance because you've searched for something particular online and they've identified that you might likely have a preexisting condition, that would be a very bad scenario. >> that was a really great talk, thank you. to me you were
7:49 pm
describing a kind of 50-year process of interaction between data and computation and algorithms, where the data sets and rules move from a relatively finite area to more complexity and richness, and the computation keeps up because of moore's law and better algorithms, and that holds and explodes with search, which kind of gives birth to cloud computing and large unstructured data. and social media does that too, so we have these big watersheds over the last ten years. with tay you get into this whole other interesting dynamic you touched on, which is not simply the activity of speech but speech as it is received, speech in its social context. this is the next complexity we'll have to take on for algorithms to handle.
7:50 pm
it seems like there are two fixes to this. one, the elementary one, is to recognize that, you know, it's bad to be a nazi, and so cross that stuff out. the other one is to look at social circumstances -- this is coming from the gamergate guys, one suspects. or even more, create countervailing messages that will correct tay. part of the problem was that once she started to get a little nazi, all the nice people stopped talking to her. so one counter would be: more nice people, come here and talk to this poor lady. i guess the question is, as we move into this kind of ai, aren't we also reprogramming ourselves? >> that's a very good observation. i think the observation you
7:51 pm
made, that we should have more nice people talk to tay, affects not only the algorithms. you hear about people leaving twitter after they've had horrible interactions with other people, and we can't even fix that problem for humans. but maybe, just maybe, if we understood how to fix it for the algorithms, we would be able to improve things for ourselves as well. some of the social science research that i was alluding to focuses on things like encouragement, positivity, creativity, and how to foster those kinds of feelings and behaviors in people. and if we understood better how that worked in humans, we may be better able to put it into algorithms. and if we come up with solutions algorithmically that would fix tay's divergence into this state,
7:52 pm
that might inform the social scientists to look at the structures they've been thinking about more computationally. something like ten positive interactions a day might make everybody feel happier -- i don't know. but it's an important thing to think about, and i guess my point about interacting more with people in social science and philosophy and the humanities is that as engineers, we tend not to think about these things. we like to put everything into a mathematical equation. i did not put any math in this talk, but usually every talk i give has equations. i just want to distill everything into a very precise mathematical equation, but all of these issues are maybe hard to codify in a mathematical equation. so it's really from discussion with people with more
7:53 pm
social and emotional intelligence than we as engineers have that we would figure out how to put that into our algorithms and systems. if we have the right kind of back and forth between those two communities, i think we will make great progress in the systems while at the same time learning a lot about ourselves. >> professor, i was wondering about your opinion on tay. what if she was trained in isolation, with just kindergarten conversations, where kind people talk to her and she learns from them -- the people you usually talk to when you're a kid -- and then she grows up a teenager and finally an adult. what would have happened if we had tried to train tay as a perfect human, rather than just laying it out open in the --
7:54 pm
>> well, i guess i'm not sure quite how to answer that. absolutely, one of the issues with tay is that tay was doing continuous learning. there are two things we could have done. we could have had more training to reinforce tay's behavior in kind, trustworthy environments before rolling it out to the general public, and then stopped the learning, right? so we could say, we've learned only in these very fixed environments and then we'll stop, so we'll be robust to these attacks moving forward. but that kind of system would be brittle, because it would not be able to adapt to new situations it hasn't seen before. so we really need to be able to adaptively learn from the new situations we see. but one of the issues that could be adjusted, and probably is being adjusted in new developments of these
7:55 pm
systems, is that the algorithms learn based on the data they have. so you could imagine a scenario where we weight the training data coming to us based on trustworthiness, so that for particular kinds of interactions -- if they involve words like nazi or genocide -- maybe we say that's fairly untrustworthy, so we should weight those very low and we won't update those aspects of the model very much when we see those interactions. if there are particular people we trust, like our parents or siblings or relatives, and we see new interactions with them, we'd say, now adjust to these new kinds of interactions, because those are probably valid. there is work in the area of reviews and online systems on trying to automatically identify how trustworthy something is, so we could potentially use those things to make the system adapt and learn more robustly, by basically understanding what kind of data is coming in.
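as a sketch of that weighting idea -- my illustration, not microsoft's method; the flagged words and trust scores are made up -- each example's learning-rate contribution could be scaled by a trust score:

    # sketch of trust-weighted learning: scale each example's influence on
    # the model by how trustworthy its source and content look. illustration
    # only; the flagged words and trust scores are made-up stand-ins.
    UNTRUSTED_WORDS = {"nazi", "genocide"}

    def trust_score(message, sender_trust):
        # down-weight messages containing flagged words; trust known senders more
        if UNTRUSTED_WORDS & set(message.lower().split()):
            return 0.01
        return sender_trust  # e.g. 1.0 for a parent, 0.3 for a stranger

    def weighted_update(model, example, base_lr=0.1):
        message, sender_trust = example
        lr = base_lr * trust_score(message, sender_trust)
        # placeholder for the real gradient step: a low-trust example barely
        # moves the model, a high-trust one updates it at the full rate
        model["updates"].append((message, lr))
        return model

    model = {"updates": []}
    weighted_update(model, ("hello, how are you today?", 1.0))
    weighted_update(model, ("the nazi party was great", 0.8))
    print([lr for _, lr in model["updates"]])  # [0.1, 0.001]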
7:56 pm
>> i had a question. the difference between artificial intelligence and intelligent machines is great; you never know exactly where the line is drawn. so, for example, you have been discussing tay, and you have ideas like specific talks, specific people, specific situations. if we do that, can it really be called artificial intelligence? because if we have been training it, it's just another intelligent machine and not exactly -- >> yeah, sure, the definition of what is intelligence, what is learning, is very fuzzy. these early systems like
7:57 pm
eliza were not doing anything that you would think of today as being intelligent, but at the time they talked about it as an ai system. there are things that we do in machine learning -- i even had this discussion with my grad students last week -- where if we simply write out an equation and optimize it, but we hand-crafted that equation to reflect a particular scenario, are we learning? i don't know. we have a system that does something based on optimizing that equation, but maybe it hasn't learned anything about the environment. in everything that we develop as computer scientists, we tend to go back and forth between the two. when we focus more on the engineering and making a system that works, we often put in things that are hand-coded or manually specified in order to just get the outcome we're looking for, and tweak it to behave well in the scenarios we want it to. and then, as we try to abstract that theoretically, we try to
7:58 pm
move to a more general concept. those two kinds of things always go hand in hand in these scenarios: when we're making better progress at understanding something at a higher level, we're pushing toward the more theoretical abstractions, but we also make a huge amount of progress by just making specific decisions. so i would say they're both useful systems, but it's maybe a philosophical issue to decide whether you think of them as actually intelligent or not. okay. >> so, professor neville always gives a fabulous presentation. i think i've discovered my next career, which will be providing consultation to social media users. i'd like to get an early start by telling you, you are all better looking and smarter because you attended this talk -- with no fear of contradiction. i'd like you to give a very warm thank you to jennifer. [ applause ]
7:59 pm
sunday "in depth" will feature a live discussion on the presidency of barack obama. phone calls, tweets and e-mail questions during the program. the panel includes april ryan, author of "the presidency in black and white." up close view of three presidents and race in america. princeton university professor eddie glaude, author of "democracy and black." and pulitzer prize-winning journalist david maraniss, author of "barack obama the story." watch "in depth" live from noon to 3:00 p.m. on sunday live on book tv on c-span 2. c-span, where history
8:00 pm
unfolds daily. in 1979, c-span was created as a public service by america's cable television companies, and is brought to you today by your cable or satellite provider. coming up on c-span3, programs about the impact of world war ii on the u.s. and the world. next, examining the origins of the cold war. then, u.s. democracy and international relations. later, a look at the legacy of the cold war. now, from the world war ii museum in new orleans, a discussion on the origins of the cold war. historian alexandra richie looks at josef stalin and the reasons behind expansion, while professor conrad crane examines the western response to a leader who had been an ally during the war. this is an hour and 20
