book tv: Ajay Agrawal, "Prediction Machines" CSPAN June 16, 2018 4:15pm-5:30pm EDT
>> ready to get started. if people out having breakfast could come in and join us, that would be great. we have some folks from the entryway downstairs. i realize that an early morning tech event is a little of an
oxymoron, so our apologies for the timing, but thank you all very much for coming. my name is nick beim. i'm a partner at venrock, and i have spent much of the last five years trying to figure out the economics of artificial intelligence, and i have to confess, i am not yet there. i'm still working on it. but thankfully our speaker today, ajay agrawal, has spent a lot of time on this emerging subject and has written a book about it. i have had the opportunity to work with ajay on multiple projects, and what i think is particularly interesting is that he is both a distinguished academic and a successful entrepreneur in the field of artificial intelligence. he is a professor of entrepreneurship at the university of toronto, where he focuses on the economics of artificial intelligence. he is also a research associate
at the national bureau of economic research in boston. on the entrepreneurial front, he has founded three entities, all associated with artificial intelligence, and all really interesting. the first, the one that is about to open up operations in new york, is called the creative destruction lab, and i have been an adviser to the organization since it started in toronto. it is an accelerator that is home to 150 a.i.-driven startups, the largest concentration on the planet -- so a really interesting achievement -- and they're going to be expanding operations into new york. the second is next a.i., a program for young a.i. entrepreneurs in canada, and the third is kindred, which is a startup that builds machines with human-like intelligence and was recently named
to the m.i.t. tech review's 50 smartest companies and the cb insights a.i. 100. due in part to his work on a.i., in 2017 the globe and mail, the national canadian newspaper, named ajay one of the 50 most powerful people in canadian business. today he is going to talk about his new book, "prediction machines: the simple economics of artificial intelligence," which is easily understandable to everyone and was published worldwide this week by harvard business review press. so here's how the logistics work. ajay will speak for an hour, then we'll have 15 minutes of q & a, and then we'll have coffee and discussion for everyone outside afterwards. as you probably saw on the way in, there's a big stack of books in the back. feel free to take one on your way out. with that as background, join me
in welcoming ajay agrawal. >> thank you very much. thanks for hosting this, and the folks at appnexus, thank you very much for hosting here. i don't know if kelly is here, but she coordinated all my logistics, and i'd like to thank her. also thanks to jim and beth, the literary agents who helped us bring this book to fruition. thank you for being here. just so i can calibrate, how many people here have an a.i. company? i suspect there's a few. okay, entrepreneurs. and then just in terms of familiarity, who would feel comfortable standing up and, in a few sentences, defining what a.i. is? three or four.
and for those of you at organizations, if you were to characterize where you would place your organization in terms of having an a.i. strategy: category one is, we're fully prepared, we have an a.i. strategy in the works. category two is, we're putting a team together to start working on that. and category three is, we have no idea where to start. those in category one, a show of hands. one, two -- three people. category two, the medium category. okay. and then the category three people. all right. so hopefully by the end of this, at least for the first question, which is defining a.i., everybody will leave the
room feeling comfortable that you could define it for anybody. and hopefully most people will also feel comfortable on the second one, that you have at least a sense of where to start. okay. so i don't need to tell this audience -- you have all self-selected to be here. i think the reason there is a fair amount of enthusiasm about this technology is that it's what economists call a gpt, a general purpose technology. it seems to be everywhere. it's now hard to find any market where there's not some a.i. application rolling in to advance the productivity of whatever they're doing. at the same time there's a fair amount of angst, people trying to figure out what it is and what the implications are, how they can deploy it and what that means for humans. so my role this morning, and what we tried to accomplish with the book, is to remove some of the
anxiety by way of providing some clarity on how we can think of a.i., particularly for those people who don't have a computer science background. and to do that, what we do is take a topic that is really the domain of computer science and put a different lens on it, from a discipline we don't usually associate with clarity, and that's economics. by putting an economic layer on top of the computer science, we think we can provide a fair amount of insight into how we can think about the development of a.i. so my background is, i'm an applied economics professor at the university of toronto, and i teach at the business school, and the reason that matters is because two blocks down from my building is the computer science building. and due to some serendipity and
a hiring mistake 25 years ago, toronto became really the epicenter of the recent renaissance in a.i. so today several of the most powerful industrial a.i. groups in the world are headed by people who, ten years ago, were at toronto. the person who originally led the a.i. group at facebook here in new york was, ten years ago, at the university of toronto. the person who leads a.i. research at apple in cupertino: ten years ago, the university of toronto. the person who leads a.i. research for elon musk's openai, a billion-dollar effort: ten years ago, the university of toronto. and so the reason that's relevant is that i'm the founder of a program at toronto that is
opening up here in new york, called the creative destruction lab. the focus of the lab is to transition science projects into massively scalable companies. and so five years ago, what started as a trickle and turned into a flood was graduate students coming out of the computer science lab -- the companies now come from around the world to the creative destruction lab, but out of this lab on our campus, the first one that came was a graduate student who said, i'm going to use this new technique for predicting which molecules will most efficiently bind with which proteins for the purpose of drug discovery. and then right after him, another one came and said, i'm going to use this technology for predicting which credit card
transactions are fraudulent versus legitimate. and then another one came and said, i'm going to use this technology to scan medical images and predict which tumors are benign versus malignant. and then another one came and said, i'm going to use this technology to predict which automobiles have defects before they roll off the production line. and so on. and so as we were sitting in the lab, my two colleagues and i -- it didn't take a rocket scientist to figure out there was something unusual coming out of this computer science department. the same underlying technique, which we now refer to as deep learning and reinforcement learning, is being used to give a material lift to this very wide set of applications. and so after a while, we just started to document what we were
seeing. then we compiled our notes into the general lessons to be learned that are applicable across all of these different industries, and that resulted in this book that hit the bookstores yesterday. so what i'm going to do now is just give you the key points from the book. okay. so i don't need to say to this audience that a lot of people are feeling that today in a.i. feels a lot like 1995 felt with the internet. most people will remember that 1995 was a real transition year for the internet. we had had the internet for at least a couple of decades before '95, mostly used by the military and academics. and it was growing over time,
and then in 1995 there was a big jump. that was the year that bill gates wrote his famous internet tidal wave e-mail. microsoft launched windows 95. nsfnet was decommissioned, which removed the last barriers to carrying commercial traffic on the web, and in august of that year, netscape went public -- a $3 billion ipo for a company that generated almost no profit. and right after this, so '96, early '97, the language around the internet started to change. we stopped referring to the internet as a new technology and instead began referring to it as the new economy, and the point here was that it had permeated so many parts of our lives that we stopped thinking of it as a technology and started thinking of it as a different way of interacting in the economy. and so ceos and journalists and entrepreneurs and investors,
politicians, all started referring to this as a new economy. everybody was referring to a new economy except for one group of people, and that was the economists. economists said, wait a minute, this isn't a new economy. this is exactly the same old economy. in fact, we won't have to change a single word or a single page in an economics textbook. all the underlying economic models still hold. everything is still driven by supply and demand, production and consumption, prices and costs. it's all the same. the only thing that has changed is that the relative costs of a few key inputs have fallen dramatically: the cost of digitally distributing goods and services, the cost of search, the cost of communication. and that is the way that economists view the world. the thing that economists are
quite good at is taking a new technology, stripping all the fun and wizardry out of it, and distilling it to a single question, and the question is: what does this reduce the cost of? and surprisingly, that often gives us some great insights. so, for example, the heart of silicon valley is the semiconductor industry, and if you ask a computer scientist or an electrical engineer, can you please describe to me the rise of the semiconductor industry, they'd have an image like this in their head, and they will describe to you the underlying science behind cramming more transistors onto a chip. they'll explain the number of transistors doubling and the implications -- that's what the technologists will explain to you. if you were to ask the same question to an economist, can
you describe to me the rise of the semiconductor industry, this isn't the image. they'll have this other image, and the reason economists will say that semiconductors are so foundational -- while there are many things happening in the bay area, semiconductors are the engine of the innovation economy in silicon valley -- is because the thing for which the cost fell was such a foundational input, and in the case of semiconductors that was arithmetic. so economists think of the rise of semiconductors as dropping the cost of arithmetic. how many people saw the film "hidden figures"? the women came to work and they computed. and there's a scene in the film where they roll in the big
machine, the big new computer, and everyone is trying to figure out what that means for them. okay. so when the cost of something falls, three important things happen, and by understanding them we gain insight into how this will affect the economy. you go back to economics 101, and the very first thing everyone learns is downward-sloping demand curves, and the key insight is that when something gets cheaper, we use more of it. and so, for example, for things for which we already used arithmetic -- things like the types of calculations they were doing at nasa in the film, and at the census bureau and the military -- we started using a lot more arithmetic, because it was better, faster, cheaper, and we started to do more of it. so the cost falls, we use more of
it. we also -- and this is where things become interesting -- use more of it not just for things that were traditionally arithmetic problems, but we start taking things that weren't arithmetic problems and converting them into arithmetic problems to take advantage of the new, cheap arithmetic. an example is photography. photography used to be a chemistry problem. we solved it with chemistry, and made film. but as arithmetic became cheap, we transitioned to an arithmetic-based solution for photography, and then music, communication, banking -- one thing after another, we transitioned to an arithmetic-based solution. now, on to a.i. if we were to ask a technologist, an engineer, a computer scientist, can you please describe to me the rise of a.i., they would describe the rise of the science and the statistics underlying
neural networks and talk about inputs and outputs and the nodes and the links and the weights on the links, and explain to you the development of a process called back propagation, and so on. but if you were instead to ask an economist, can you please describe to me the rise of a.i., they wouldn't have this in their head. they would have this image. at this point people will be thinking, wait a minute, economists are single-minded and they see everything the same. at some level that's true. but this is the key to understanding why economists think of a.i. as in a category of its own. if you go to the consumer electronics show in las vegas and see a rainbow of new technologies -- robotics and drones and virtual reality and so on -- you see so many different kinds of tech, and you say, there are so many things; why is a.i. so special? the
reason economists will say it's in a different category than all the other things is because the thing for which the cost falls is a foundational input into such a wide range of activities we conduct. in the case of a.i., that means prediction. so we can think of the rise of a.i. as a drop in the cost of prediction. what i find is a useful exercise: anytime you're reading a magazine article about a.i., replace the words "a.i." as you're reading with the words "cheap prediction," and the article will suddenly seem less magical and more practical, and you can make sense of exactly what they're trying to do. so first of all, how do we define prediction here? prediction we define as taking information you have to generate information you don't have. so that includes what most people would traditionally call prediction, like demand
forecasting, taking the last five years of sales to predict or forecast next quarter's sales. so that's an obvious form of prediction. but a less obvious form that we would still call prediction is something like classification. i mentioned looking at a medical image. the information we have is the pixels in the image, and the information we don't have is whether the tumor we're looking at is benign or malignant, and so the a.i. generating that classification is what we would call prediction. so we have a substantive drop, a plummeting of the cost of prediction. what does that mean? what are the implications for businesses and for society? okay, so the three big implications. implication number one is downward-sloping demand curves: when the cost of something falls, we use more of it. so in all the things we
currently use prediction for -- like demand forecasting, supply chain management, insurance -- all of these things will simply use the new super-powerful prediction: better, faster, cheaper. so we'll just start seeing a.i. coming in and replacing the traditional statistical techniques we use for doing predictions. at the same time -- and this is where it becomes interesting, where in my view a.i. on the business side, separate from the computer science side, becomes an art more than a science -- we will start converting non-prediction problems into prediction problems. so we start using more prediction by converting things into prediction problems, and an example of that, just like we did with arithmetic, is driving. this is the one everyone's most familiar with. so driving: we have had autonomous vehicles for a long time, 30 or so years depending on how you count. but we always deployed our
autonomous vehicles in a controlled setting, so a factory or a warehouse. and the way we did it, simplified version, is that an engineer would have the floor plans of, let's say, the factory or warehouse and would program a robot to move around the factory floor. and then they'd give the robot a bit of intelligence: put a camera on the front, and then they would tell the robot, if somebody walks in front, then stop. if the shelf is empty, then move to the next shelf. if, then. if, then. so a series of logic, of rules, that gave the robot some intelligence. the problem was you could never take that robot out of the controlled environment and put it in an uncontrolled environment, because there were too many ifs. in fact, there's an infinite number of ifs. if it's dark, if it's raining, if a child runs up to the edge of the road, if there's an oncoming car
with its left-turn blinker on. if, if, if -- and the ifs are interactive, and because of the infinite number of ifs in an uncontrolled environment, as recently as six years ago the experts in the field were saying, we will not have an autonomous car on a city street in our lifetime, because we simply cannot program all the ifs. until people in the machine learning field reframed the problem and said, rather than programming an infinite number of ifs, what if we change the problem to instead making one prediction, and that prediction would be: what would a good human driver do? and so the simplified version of how an autonomous vehicle works, a way to think about it, is to imagine a car, and you put a human in the driver's seat and imagine putting an a.i. in the passenger seat. so what we do is we tell the
human, who is sitting in the car: drive. just drive. drive for a million miles. and so the human sits in the car and starts to drive. and as they're driving, they have data coming in through the cameras on the front of their head and the microphones on the sides of their head, and as the data comes in, we process the data with our monkey brains and then take an action, and the actions are very simple -- a very small set of thens. we can turn left, turn right, brake, and accelerate. that's it. we have many, many ifs coming in, and we have a very small number of thens. okay, now on to the a.i. so imagine the a.i. is sitting beside the human. the a.i. doesn't have its own eyes or ears, so we give it its own sensory inputs: cameras, radar, around the car. imagine the ifs are coming in, the data is flowing in as you're driving, and every fraction of a second the a.i. is looking over and trying to predict: what will the human driver do in the next
second? and so in the beginning, the a.i. is not a very good predictor. it has big confidence intervals -- it's not very accurate. and so it says, i think she's going to turn left, going to go straight, going to brake. and then something happens: either she turns left or she doesn't turn left. so every time, the a.i. makes a prediction and then observes what the human driver does. if it was right, it doubles down on its model. if it was wrong, it updates its model, and then it potentially makes a different prediction the next time. okay. so as they're driving, in the beginning the a.i.'s confidence intervals are wide and it's making a lot of mistakes, but as they drive and it learns and corrects its mistakes, the confidence intervals get smaller and smaller, until at some point the a.i. is such a good predictor of what the human driver would do that we say the a.i. can drive itself. so the a.i. has become a
prediction machine for driving. and that is really where i think a lot of the enthusiasm is: it's converting problems that weren't traditionally prediction problems into prediction problems to take advantage of the new, cheap prediction. so driving is an obvious one, but we have done this in so many other areas. translation used to be a rules-based problem. we had trained linguists who would do translations, but we converted translation into a prediction problem, and now, for those who use google translate, for example, even between a year ago and today the improvement is significant, and it feels like it's not too far away that we'll have a commercial-grade translator based on predictions rather than rules. okay. in our lab, the creative destruction lab, i mentioned that this is where we have all these a.i.
companies, and as far as we know, the creative destruction lab is home to a greater concentration of a.i. startups than any program on earth. they're coming in from all over the place. and these a.i. companies, each one is working on solving a prediction problem. now we're having a lot of corporate visitors, large companies who want to better understand a.i., and an interesting group are the heads of h.r. the common conversation we have is, the head of h.r. will fly to toronto saying, we're trying to learn more about a.i. we need to know, for recruiting, what types of skills we should recruit for, how we should train our staff, and in order to prepare them for a.i. we need to know for other parts of our company, like sales, manufacturing, design -- but not for my department, because i work
in h.r. h.r. is a very human business, but for the other parts of the company we need to learn about a.i. for recruiting. now, most of you know where this is going. one by one, a.i. companies are transforming the h.r. process into a series of predictions. what do h.r. people do? they recruit. recruiting is effectively now a prediction problem: we get a series of resumes and cover letters and interview transcripts, and then we predict, from the set of applicants, which applicant would be best for the job. once we've hired people, the next thing we do is promotion. what's promotion? a prediction problem. we have a set of people working in the company, and we need to predict which of those people would be best at the next level up. then our next issue is retention. say we're a 5,000 or 10,000 person organization and we need to retain our best people. what's that? it's a prediction problem.
we need to predict which of our stars are most likely to leave and what types of incentives would be most effective at keeping them, and so on. one by one, these roles are being converted into prediction problems so that a.i. can tackle them. so, item number one: when the cost of something falls, we use more of it. we use more of it both for traditional things and also by converting new things -- in this case into prediction problems -- to take advantage of the better, cheaper, faster prediction. here are numbers two and three. when the cost of something falls, it affects the value of other stuff. in economic language we call that cross-price elasticity. you can think of things related to the focal thing as complements or substitutes. so think of coffee. if the cost of coffee were to
fall, the complements to coffee, the things we use with coffee, like cream and sugar -- the value of those things goes up, because if the cost of coffee falls we consume more coffee and also consume more cream and sugar, and therefore the value of cream and sugar will go up. or if the cost of golf clubs falls, the value of golf balls goes up. so those are complements: the value of complements goes up. the value of substitutes goes down. so in the case where the cost of coffee falls, at the margin some people switch from tea to coffee, and the value of tea falls because the demand for tea diminishes. tea is a substitute; cream and sugar are complements. now, how does that work in a.i.? we can take any task and break it down into these components.
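[editor's note: the cross-price arithmetic in the coffee example can be sketched in a few lines. the elasticity values below are invented purely for illustration, not estimates from the talk or the book.]

```python
# illustrative sketch of cross-price elasticity: how a drop in the price
# of coffee shifts demand for complements (cream, sugar) and substitutes
# (tea). all elasticity numbers here are hypothetical.

def demand_change(pct_price_change: float, cross_price_elasticity: float) -> float:
    """approximate % change in quantity demanded of a related good
    when the focal good's price changes by pct_price_change percent."""
    return cross_price_elasticity * pct_price_change

coffee_price_change = -20.0  # coffee gets 20% cheaper

# complements have negative cross-price elasticity: demand moves opposite
# to the focal good's price, so cheaper coffee means more cream demanded
cream = demand_change(coffee_price_change, cross_price_elasticity=-0.5)

# substitutes have positive cross-price elasticity: cheaper coffee means
# less tea demanded
tea = demand_change(coffee_price_change, cross_price_elasticity=0.4)

print(f"cream demand: {cream:+.0f}%")  # complement: value goes up
print(f"tea demand:   {tea:+.0f}%")    # substitute: value goes down
```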
any task. and so it doesn't matter what part of the organization you work in; there will be a series of tasks you do, and every task can be broken down into bits. so let's say as i'm giving this talk i bang my knee on the lectern, and three days from now my knee is really sore, so i go to the doctor and tell her my knee is sore. she asks me questions and maybe sends me for an x-ray; that is input -- collecting input. then she makes a prediction, and her prediction might be: i think with 90% probability you have bruised your knee, and with 10% probability there's a hairline fracture. then she applies judgment. the way to think about her judgment is: how costly would it be for this patient if they actually have a bruise but i mistakenly treat it as a fracture, versus if they actually have a fracture and i mistakenly treat it
as a bruise? that is her judgment. she is taking all the things she knows about me and trying to figure out the cost of a mistake. that's judgment. then she takes an action. her action might be saying, i've decided i think it's a bruise and i'll treat it as a bruise: put some ice on it, raise your leg, and if it still hurts in a week, come back and see me. that's the action. then there's an outcome. the outcome would be, a week later, let's say my knee is better, i'm good to go, and so we have learned that she was right, and that feedback data in this case strengthens the model. or let's say she was wrong, and a week later my leg is worse; then that's feedback data and we update and change the model for the next time. okay. so any task we can break into these components, and we find this very useful for designing
strategy around a.i., and for thinking through the implications for jobs and the economy and so on. here's how. if you look at this diagram, it's very clear: what is the substitute for machine intelligence? as machine intelligence increases, as the cost of machine prediction falls, what is the substitute for machine prediction? it's the box in the middle, human prediction. the value of human prediction will fall as the capability of machine prediction increases. we are quite poor predictors. we're slow, we're noisy, we have all sorts of systematic biases in our predictions. it's been very well documented in books like "thinking, fast and slow" and "predictably irrational," documenting how terrible humans are at making predictions. but we still make them all the time. and so as the capabilities of
machine prediction increase and the cost of machine prediction falls, the value of human prediction will plunge. our human prediction capabilities become less and less valuable because the machines can do it so much better, faster, cheaper. and that is the part i think the press has been fascinated with, and that's led to a mischaracterization: a complete wiping out of the role of humans. what they miss when they focus on that are all the other boxes. the other boxes are the complements. they're the cream and sugar. they're the things that will increase in value as the price of machine prediction falls. the one that the press has talked about is the first one, input. how many people have heard the phrase "data is the new oil"? most people. that is effectively talking about the first box, input. we have always had
data, and we have had it for a long time. why is it the new oil? what makes it new? it's way more valuable -- the same data is more valuable today than it was ten years ago. why? because the cost of prediction has fallen, and so the value of the data has gone way up. it's a complement. the data is a complement. it's more valuable now because we can do more things with it, because prediction is better, faster, cheaper. so the press has done a good job of talking about the input box. the press has not done a good job so far of describing the other bits. human judgment: a.i.s do prediction. they don't do judgment. they don't know what to do with their predictions. we have to give them guidance on what to do with the predictions. we decide. that's judgment. so, for example, the doctor who is deciding what is the cost of a mistake -- she's using her judgment. and what is interesting is that we're always applying our
judgment, but the value of our human judgment goes up as the cost of prediction goes down. as we start getting better, faster, cheaper prediction, the value of human judgment goes up because we're applying our judgment to much better predictions. the next one is action. the predictions are generally used to inform an action. what action should we take? operating companies very often now own the action. so it's not just that data is the new oil; your actions are valuable, because now you'll be able to take better actions since you're basing them on higher-fidelity predictions. so in our lab we have a.i. companies, and one issue is they build and sell predictions but
they don't own the actions. and so they have to sell their predictions to someone who does own the action, because without the action, their predictions are worthless. so actions are a complement; the value of actions goes up. where we have the greatest amount of negotiation with all the a.i. startups, a lot of whom are selling predictions to larger enterprises, is on feedback. that's the gold dust. when we take an action, we find out later whether that action was a good one or a bad one, and that's how we learn to update the a.i. so all the a.i. startups trying to license their prediction output, selling predictions to enterprises, are trying to get their hands on the feedback data, and some of the companies realize that's the gold dust. that feedback data is to some extent much more valuable than the training data. the training data, using the oil analogy,
you use it and you burn it; it's gone. you use it to train your a.i. the first time, but once you've trained your a.i. -- there are a few caveats here, but the main one is you use it to train your model and then it's done. the value is gone. the ongoing value comes from the feedback data that allows the a.i. to continue to learn. okay. so the key point here, in thinking about strategy and thinking about the implications for jobs, is that the value of human prediction falls but the value of all these complementary assets goes up. a useful way to think about this, an example, is to think about when spreadsheets rolled into town and what happened to the accountants. accountants had two broad skill sets. one is they would type in the numbers and add, so we valued their ability to type fast and add fast, and the second skill was to ask good
questions. so let's say they were doing a model to estimate the present value of some asset, and they might ask a good question by saying, what would happen if interest rates went up by 1%, or if sales went up by 4% in the fourth quarter? but as soon as they asked the question, they'd have to start at the beginning, retype everything, and re-add the whole set of numbers to test the new scenario. when spreadsheets rolled into town, the value of the human ability to type fast and add fast went down, because the machine could now do that. but the value of being able to ask good questions went up, because it became quicker and easier to ask a question. so if you were good at asking questions, at doing scenario analysis, your value as an accountant went up. if your key skill was adding fast, your value went down.
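[editor's note: the accountant's scenario analysis can be sketched in a few lines -- once recomputation is cheap, asking "what if interest rates go up by 1%?" is one function call instead of retyping the whole model. the cash flows and rates below are invented for illustration.]

```python
# sketch of spreadsheet-style scenario analysis: present value of an
# asset's yearly cash flows under different what-if assumptions.
# cash flows and discount rates are hypothetical examples.

def present_value(cash_flows, annual_rate):
    """discount a series of yearly cash flows back to today."""
    return sum(cf / (1 + annual_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

cash_flows = [100.0, 100.0, 100.0, 1100.0]  # e.g. a simple bond-like asset

base = present_value(cash_flows, annual_rate=0.05)
# the "good question": what happens if interest rates go up by 1%?
shocked = present_value(cash_flows, annual_rate=0.06)

print(f"pv at 5%: {base:,.2f}")
print(f"pv at 6%: {shocked:,.2f}")
print(f"change:   {shocked - base:+,.2f}")  # higher rates lower the pv
```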
so, if a.i. is just prediction, if the current renaissance in a.i. is all about the falling cost of prediction, not the robots from westworld or c-3po, then why is there so much fuss? why is there so much fuss about a.i. if it's just a prediction tool? the answer is because prediction is a key input to decisionmaking, and decisionmaking is everywhere. it is riddled throughout our business lives and our personal lives. and so this is the structure of the book, which is in five sections. in section one we talk about prediction; we explain what is so interesting about the new method of prediction compared to traditional prediction tools, statistical techniques. section two is decisionmaking, and while a.i. is new, decision theory is old. we have 50 years of well-developed decision theory, and so we can actually have a pretty good sense of what happens when we take this new
prediction technology and put it inside a process of decisionmaking that we understand quite well, that has been studied for a long time. that's section two. those are the two background parts of the book. then we get into the practical parts. section three is on tools, which is the actual building of a.i. tools, and in that section -- i'm not going to go deep, i guess i'll give it a couple minutes on the tools -- the basic idea is we take any workflow -- workflows are inside an organization, turning an input into an output, so a line of business is a workflow -- and we take the workflow and break it down into tasks, and every task is predicated on a decision or a prediction or a couple of predictions, and a.i.s do tasks. they don't do workflows, they don't do jobs, they do tasks predicated on a decision. so this is an article many people would have read, by the cfo of goldman sachs, and they open up
with a dramatic statement by saying in 2000 the u.s. cash equities desk employed 600 traders; today there are two left. that's the dramatic opening, but later down in the article they have a more interesting sentence, where they talk about how now they're working on more complex areas of trading like currencies and credit, and they want to emulate as closely as possible -- replace that with predict -- what a human trader would do. goldman has already mapped the 146 steps taken in any ipo. the idea is they've taken the ipo workflow and broken it into 146 different tasks. and so what we do when we're building a.i. is we take each task and we estimate the return on investment, the roi, for building an a.i. to do that particular task, and then we just stack rank order the tasks by roi.
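the task-ranking exercise just described can be sketched as a few lines of code. the tasks, benefits and costs below are invented for illustration; in practice each estimate comes out of the mapping exercise the talk describes.

```python
# stack ranking candidate a.i. projects by estimated roi: map the
# workflow into tasks, estimate the benefit and cost of building an
# a.i. for each task, then sort and work from the top down. all
# figures are hypothetical.

tasks = [
    {"task": "document review",    "benefit": 900_000, "cost": 300_000},
    {"task": "pricing updates",    "benefit": 250_000, "cost": 200_000},
    {"task": "demand forecasting", "benefit": 800_000, "cost": 160_000},
]

for t in tasks:
    # simple roi: net benefit per dollar spent building the a.i.
    t["roi"] = (t["benefit"] - t["cost"]) / t["cost"]

ranked = sorted(tasks, key=lambda t: t["roi"], reverse=True)
for t in ranked:
    print(f"{t['task']}: roi = {t['roi']:.2f}")
```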
we start with the highest roi at the top and work down. this is not for the googles and facebooks, who are already well into this, but for the 99.9% of other organizations just getting into a.i., this is the approach we use to get started. we map out the tasks, we rank them, and then we start at the top and work down. okay. so sometimes there are companies that come to creative destruction lab, large companies, and say, hey, we have three a.i. pilots at the company. are we at the frontier of a.i.? so just to calibrate, google now has almost 2,000 a.i. tools under development, and they're probably the high water mark. so, we built this thing in the book, it's called the a.i. canvas, and we've found this a very useful tool for getting companies started with a.i. so what we often do is, let's say there will be 50 people at an offsite, and they come from all
parts of the company, usually vice president level and above, and they sit in tables of four, and halfway through the day they go into breakouts and fill out this page. and very often not a single person in the room has ever written a line of code, so you don't need to have any technical background. but they go through their workflows and they pick out tasks and they say, okay, what if we built a prediction machine, and this is the prediction -- so they specify the prediction -- and this is the human judgment applied to that prediction, and this is the action that the prediction motivates, and then here's the outcome that will result from that, and the training data so the a.i. can learn. ...
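the canvas exercise above can be captured as a simple record type, one per candidate task. the field names follow the talk (prediction, judgment, action, outcome, training data), plus the feedback data emphasized earlier; the churn example filled in below is hypothetical, not from the talk.

```python
# a minimal data-structure sketch of the a.i. canvas: the fields a
# breakout group fills in for each candidate task. the example values
# are invented for illustration.

from dataclasses import dataclass

@dataclass
class AICanvas:
    prediction: str     # what the machine will predict
    judgment: str       # the human judgment applied to the prediction
    action: str         # the action the prediction motivates
    outcome: str        # the result we observe and care about
    training_data: str  # data the a.i. learns from initially
    feedback_data: str  # ongoing data that keeps improving the a.i.

churn = AICanvas(
    prediction="which customers are likely to cancel next month",
    judgment="how much a retention offer is worth for each customer",
    action="send a targeted offer before the renewal date",
    outcome="renewal rate among contacted customers",
    training_data="historical accounts labeled cancel/renew",
    feedback_data="whether each contacted customer actually renewed",
)
print(churn.prediction)
```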
>> i'm going to talk about strategy. so most of the time when we build a.i.s, we are building tools that are very specific, and their role is just to make a process more efficient in the service of executing against the organization's strategy. so just like microsoft word or excel, an a.i. is a tool to make you more productive, executing against a given strategy. but occasionally an a.i. can so fundamentally transform the economics of a process that it changes the strategy itself. so this is what the popular press often refers to as disruption. and so what i'm going to show you now is the process that we
use for helping to give us guidance on which a.i.s might lead to a disruption, a strategy change for the organization. our view is that the main thing, when we're going through what i'm about to show you, is to develop a thesis on time. in other words, thinking about the time it will take for what i'm about to describe to happen will very much influence the investment decisions made today. whether something will happen in three years or ten years has very different implications for the investments you make today. so if i were giving this lecture even two years ago, most of what i'm about to say would be, you know, if this were to happen, wouldn't this be interesting, or if that were to happen, then imagine the possibilities. over the last 24 months, we've had way too many proofs of concept to be thinking about this as an if anymore. in other words, we've had things
such as vision, natural language processing, allowing them to comprehend words and language, effectively predict what those characters are trying to communicate. in motion control, robotics, being able to do things. so this is no longer a discussion of if, it is a discussion of when. so we know it's possible, now it's all about turning the crank and getting those predictions up to commercial-grade levels of accuracy. so we are in most cases now in turning-the-crank mode as opposed to wondering if it's even feasible. so here's the thought experiment. we use a process we call science fictioning. but it's not science fictioning in the sense that you can sit behind a desk and blue sky and think a.i.s can do anything. it's a very specific kind of science fictioning. the thought experiment is: imagine a radio knob, and you can turn the radio knob. but instead of turning up the
volume, when you turn the knob, you're turning up the prediction accuracy of an a.i. so that's the only thing, the only parameter you're allowed to move, is turning up the dial on the prediction accuracy. so here's the thought experiment. everybody's been shopping on amazon, so we use this as an example that everybody will be able to imagine. when you shop on amazon, you are immediately introduced to an a.i. -- that's the recommendation engine at amazon, where they recommend, oh, we think you might want to buy this or that. and right now for myself and my co-authors, on average that recommendation engine is about 5% correct, meaning out of every 20 things it shows us, we end up buying one of them. now, 5% accuracy might sound lousy, but it's not too bad when you think there are millions of things in the amazon catalog, and it's pulling out 20 of them, showing them to us, and we're
buying 1. the recommendation engine shows us some stuff and we shop around and we see things we like, we put them in our basket, we pay for it, and the orders arrive on somebody's tablet in the fulfillment center at amazon. and the robots are moving things around, a human pulls it out, ships it to your house, it arrives at your door, and you bring it inside, and that's how you shop at amazon. we can summarize that by calling that method shopping then shipping. you shop for the stuff and then amazon ships it to you. okay. so here's now the thought experiment. imagine that recommendation engine, which most of us just breeze by -- we don't put a lot of thought into it when we see it on the web site. imagine that -- well, we don't have to imagine, this is just happening. every day at amazon the people in the machine-learning group are working on turning the knob. so let's say right now that knob is at a 2 out of 10.
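the knob-turning dynamic -- nothing happens, nothing happens, then everything changes -- can be sketched as a toy profit comparison. the margin, return cost and per-item profit below are invented for illustration; only the threshold structure reflects the talk.

```python
# a stylized model of the amazon thought experiment: as prediction
# accuracy (the knob) rises, at some threshold pre-shipping ("ship
# then shop") beats the current model ("shop then ship"). all dollar
# figures are hypothetical.

MARGIN = 10.0         # profit per pre-shipped item the customer keeps
RETURN_COST = 4.0     # cost of picking up a rejected item
SHOP_THEN_SHIP = 2.0  # expected per-item profit under the old model

def ship_then_shop_profit(accuracy):
    """expected per-item profit if we ship before the customer orders."""
    return accuracy * MARGIN - (1 - accuracy) * RETURN_COST

for knob in range(1, 11):  # knob settings 1..10 -> accuracy 0.1..1.0
    acc = knob / 10
    better = ship_then_shop_profit(acc) > SHOP_THEN_SHIP
    model = "ship then shop" if better else "shop then ship"
    print(f"knob {knob:>2}: profit {ship_then_shop_profit(acc):+5.1f} -> {model}")
```

with these made-up numbers nothing flips until the knob crosses the break-even accuracy (here, between 4 and 5 out of 10); below it the strategy is unchanged, above it the whole business model changes.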
and they're working, they're improving their algorithms, they're testing different approaches, they're acquiring data assets like buying whole foods so they can learn more about how you and i shop offline. each time they do that, they're cranking the knob. so maybe they're at a 2, maybe they get it up to a 3 and then a 4 and a 5. they are entirely focused on increasing the prediction accuracy. as they increase the accuracy, they don't have to get up to spinal tap levels. maybe it's, you know, a 6 out of 10 or 7 out of 10. there's some number where, when they reach that number -- let's say it's 6 out of 10 they get right -- somebody at amazon says, you know, we're so good at predicting what they want, why are we waiting for them to order it? let's just ship it. and so just go through the thought experiment. as you turn the knob, you get up to some level where amazon says, let's not wait anymore, let's
just ship it. let's say they ship you a box of stuff, and the box arrives, you open it, and you take out the things you want. so let's say you want 6 out of the 10 things, and you leave 4. and you might say, well, wait a minute, why would they ship you things when they know you're not going to take all of them? from their perspective, they are potentially significantly increasing their share of wallet. so whereas they might have sold you 2 of those things, and you might have bought 4 of those other things from their competitors, now they're selling you all 6. and maybe you would have only bought 5 of them, and the sixth one is something you kind of wanted, but now that it's on your doorstep, you know, i might as well keep it. furthermore, now you've put four things back on your porch, the things you didn't want, and now it's in amazon's self-interest to invest in a fleet of trucks that are going to drive down your street once a week and pick up all the things that they delivered to you and your neighbors that you didn't want, in order to lower their
cost of handling the returns. now, i'm not sure whether amazon will ever do this. it's not like they've never thought of it. this is a patent they filed on an approach they called anticipatory shipping, and they've already started testing it in some markets with clothing items. but the point here is that the thought experiment is very powerful. the thought experiment is what happens -- remember, the only parameter we moved was turning the knob. and what's so interesting about turning the knob is that as you turn the knob, it's like nothing happens, nothing happens, nothing happens. in other words, we see it, it's getting a little bit better in terms of recommending stuff as we're shopping, but we don't even really notice. nothing happens, nothing happens, nothing happens, and then it hits some threshold where all of a sudden everything changes. they changed the entire model from shopping then shipping to shipping, then shopping. we shop on our front porch
instead of on the web site. and they vertically integrate into a fleet of trucks and so on. and you can go through that thought exercise where you imagine if you could predict insurance claims or bank loans or acceptances into mba programs. if you could crank that knob up to a certain level, there's some number where you cross the threshold, and it creates opportunities for a completely different approach -- a so-called disruption. so when google announced last year that they were moving from a mobile-first strategy to an a.i.-first strategy, is that marketing or does it mean something? it means something. what mobile first meant was not just that they want to be good at mobile; mobile first means they will be mobile at the expense of other stuff. in other words, they will sacrifice their web site and physical stores or whatever in order to be mobile first. that's their number one priority. so what does it mean to be a.i.
first? it means putting that dial at the very top of your strategy priorities. it means that's your number one priority. they will trade off short-term user experience, they will trade off revenues, they will trade off privacy in order to crank the dial. when asked this question, what does it mean for google to be a.i. first, peter norvig, research director, responded, and the essence is down at the bottom. with information retrieval -- that's when we do a query on the google site -- anything over 80% recall and precision is pretty good. not every suggestion has to be perfect, since the user can ignore bad suggestions. with assistance, there is a much higher barrier. you wouldn't use a service that booked the wrong reservation 20% of the time, or even 2% of the time. so an assistant needs to be much more accurate and, thus, more intelligent, more aware of the
situation. that's what we call a.i. first. so from a computer scientist's perspective, that's how he defines a.i. first. we would add to that the trade-off: when you make a.i. first, you make other things second and third and fourth. so they're putting this as the priority, turning the knob. one example of making it a priority is reshuffling the deck -- moving people who were near the ceo's office somewhere else and moving the people who are working on a.i. right into the same office as the ceo. so a year ago the google brain team of mathematicians, coders and hardware engineers sat in an office building on the other side of the campus, but in the past few months, it now works right beside where the ceo and top executives work. okay. so to conclude here, when people come to our lab, what we notice is this dissonance. and the dissonance is they arrive at the lab and they say, i get it, i get the amazon
recommendation engine and, you know, siri. remember, siri is just a prediction machine. siri doesn't understand what you say. siri hears an audio signal and predicts the vector of words and predicts the response that you want from what you said. and so people say, i get it, these are very clever, they're amazing, but they're not transformational. these are not transforming any business or any economy. but on the other hand, this is a chart of venture capital into a.i., you know, a very steep curve. in the last two quarters of the obama administration, the white house released four reports on how to prepare the american economy for what was coming around the corner with a.i. as far as we know, as far as we can find, it's the only technology where the white house has released four reports in two quarters since the second world war. then there was the google announcement, followed by a series of other companies announcing moves to an a.i.-first strategy. then last july the government of china announced their strategy, an incredible amount of resources
to compete in a.i., with the goal of, by 2020, catching up in some fields of a.i., by 2025 dominating in a few subfields of a.i., and by 2030 dominating across every field in a.i. -- that was the aspiration. then in september the president of russia announced that a.i. is the future, not just for russia but for all of humankind, and the country that leads in a.i. will rule the world. then later that month we hosted in toronto what i think is still the greatest gathering of economists to meet on the economics of a.i. -- the former treasury secretary larry summers, austan goolsbee, the chief economist of google, the chief economist of microsoft, a nobel laureate and others gathered to talk about the economics of a.i., because computer science had gotten so far ahead of economics on this subject. at the end of the meeting, danny
kahneman, who's the author of thinking, fast and slow, concluded with this. he said, i want to end on a story. a well-known novelist is planning a novel about a love triangle between two humans and a robot, and what he wanted to know is how the robot would be different from the people. i propose three main differences. one is obvious: the robot will be much better at statistical reasoning. the second is that the robot will have much higher emotional intelligence. we think of ourselves as -- we think we'll always be better than machines at emotional intelligence. the professor says, not so. a.i.s are already better at detecting minor facial changes, detecting changes in moods; they're better at detecting minor changes in audio to detect when someone's voice reflects that they're getting happier or sadder or jealous. the third is that the robot would be wiser. wisdom is breadth, not having
too narrow a view. it's broad framing. a robot will be endowed with broad framing. when it has learned enough, it will be wiser than people, because we do not have a broad frame. we are narrow thinkers, we are noisy thinkers, and it is very easy to improve upon us. i do not think there is very much that we can do that computers will not eventually learn to do. so on the one hand, we have these things, these a.i.s that we see that look neat, but they're not transformational, they're not disrupting industries. and on the other hand, we have all these learned people and powerful people who are making claims implying that a.i. is going to have this spectacular effect on the economy. how do we reconcile these two things? and in our view, the single thing that reconciles them is time. it is having a thesis on time. the dial in most applications is sitting there at a 2 out of 10, and people are working 24/7 on cranking the dial to a 3, to a 4, and it's
moving faster in some domains and slower in others. and, you know, the last time we had a big technological revolution, some companies had a good thesis on time; some underestimated how fast things would move. in this current revolution, it's not just the googles and facebooks and teslas and apples, it's also older-economy companies that are making bets, companies like gm, john deere and so on. we had one company -- one of the fellows at the creative destruction lab built a company in less than a year, less than 20 people, virtually no revenues, and it was acquired by td bank for just over $100 million. and so to sum up, there was this nice quote that came out not that long ago from a former deputy secretary of defense, and he's referring to the race between the u.s. and china in a.i. and the incredible amount of resources the chinese government is putting behind winning it. and he uses the phrase, this is a
sputnik moment -- of course, referring to the soviet launch of the sputnik satellite that kicked off the space race and, you know, the creation of nasa and so on. and what my co-authors and i think is that this is not just a sputnik moment for defense, it is a sputnik moment for all of us. for the people who are right now in the position of leading organizations or running organizations, we get one of these once in a generation. they don't come every year or every few years. it's once in a generation that something like this comes along that has the kind of potential this has. for the kids in the audience, i think to a large extent, you know, you will be the ones who have the creativity to really apply prediction machines in ways that the rest of us don't even think of. but the main point here is that, in our view, this creates an
opportunity like most of us will never have again in our professional careers. and so for some people, that will pose, you know, a set of opportunities to seize and pursue. at the same time, of course, there will be challenges. the biggest one will be to make sure that as a society everybody benefits, because this has the potential to change a lot of structure socially. that's it. i'll end it there. thanks very much. [applause] so i can take -- nick, questions? questions, i'm happy to take. yes, please. oh, sorry, there's a mic coming just because they're recording it. [laughter] and if you don't mind just introducing yourself. >> my name is elmo. [laughter] no, i'm patrick slattery.
patrick slattery. if we go back to the one diagram you presented on prediction and judgment and action, do you see a role for innovation in the prediction process itself eventually, or is it something that's more constrained to judgment and action? and if there is a role for judgment in prediction, will machines be best at doing that, or is that a human complement? >> i didn't follow what you meant by innovation in prediction. >> i think of it as creativity. i can see a role for creativity in judgment and in action. do you see, in the long run, a role for creativity in prediction? >> so i don't, but i might be missing something, in the sense that i think out in the world, in nature, there are probability distributions that describe the phenomena around us, and the a.i.s will simply
be better than we are at understanding what those probability distributions are. so that's not really a matter of creativity, it's a matter of getting higher resolution on understanding the probability distribution. yes, please. >> hi. >> hi. >> thank you so much. this is super neat, i really enjoyed it. hopefully i'll get to read the book soon. so clearly you're a fan of danny kahneman, so i assume you buy into behavioral economics, right? so that kind of cuts against, like, oh, human beings are rational -- in fact, they definitely aren't. and to me, there's kind of a parallel. i want to hear your opinion, because to me, right now we make very strong assumptions about a.i. and it's like, okay, if you have enough data, and particularly if it's a vision problem or if it has particular correlations, then, boom, you're going to have really good precision and recall, and you're going to have a magical black
box. but my sense is that we are kind of in the rational world. like, the assumption is we have these neural nets, and we have enough data and, boom, it'll work out. to me, it seems a lot like classical economics -- this is just going to work out. and i wondered what your thoughts were around maybe, you know, these neural nets only working a certain way because we have this magical vision data that happens to correlate spatially, and it works properly. and what are your thoughts around, if that's true, then what would be the analogue of behavioral economics? or, a little more concretely, are we being overexuberant about the possibilities around deep learning? >> sure. so i definitely think there's room for being overexuberant, in the sense that, first of all,
people imagine a.i.s can do some things other than prediction, where, effectively, that's all they do. furthermore, a.i.s can only make good predictions where they have good data. i think that was the other part of your question. when you're in domains where we don't have good data, the a.i.s can't make good predictions. the one reason a.i.s sometimes feel magical is because they're able to find patterns in data that traditionally we either couldn't use or didn't have access to. so with the falling cost of sensors, we can sensor up so much stuff. it's not just the lines of data that we have in an excel spreadsheet; you can use so much more data, of all kinds of types, that can complement the process of making a prediction. and furthermore, in the old
methods, before we started making a prediction, we used to have to create a model in our head of what data we were going to use to predict what outcome. now we can virtually take a kitchen-sink approach where we give it everything, and we let the machine figure out, you know, what's related to what to make the prediction. so i think on the one hand we probably have much more predictive capability than many people realize because of the way the prediction machines work. but on the other hand, it's certainly not a silver bullet, and, you know, in that first section on prediction we have a whole section on all the limitations, the types of things that prediction machines are poor at, because they don't have a model -- they're basically statistical correlations, and they rely on the underlying data. i'll take two more questions -- i know people have to get going, and i'm happy to stick around for offline questions afterwards. >> in your diagram you've labeled a whole lot of boxes as complements to prediction or inference.
right? and it's clear to me how some of those complementary boxes you could establish monopolies around -- [inaudible] there seems to be absolutely no reason why you should expect those boxes will be, you know, uniquely retained by human beings, right? every single one of those boxes is subject to research in a subfield of a.i. and as they become relatively more valuable, you can expect the research to go into them, right? so a lot of people say everything will be great, and human judgment will become more valuable. but, you know, what reason do you really have to expect that that's true, apart from it being comforting? [laughter] >> okay. so that's a great question. i will attack it with just one example, which is the judgment box. because in some sense, that's the place where we seem to take
the most refuge: okay, we're safe from the machines because we have judgment, and they don't. so the question was largely, you know, are we just still in the first inning, and a.i. research is moving into these other boxes? at least for the moment, as long as a.i. is prediction, that's all it is, just prediction. but the point is, if a.i.s see enough examples of a particular type of judgment, they can learn to predict that judgment. so an example of that is driving. when we're driving, the a.i. is effectively baking in the prediction of our judgment, so that when we are approaching a yellow light, the a.i. is learning, inferring our judgment of whether we're going to step on the gas or on the brake, depending on if it's raining and how far we are from the light and how fast we're going, and that judgment has been baked into the decision. it's true that today, things we
call judgment, some of those things a.i.s will eventually learn to do, because they'll get enough examples of us making that judgment in order to predict it. i think of this, to some extent, like those russian nesting dolls. once the a.i.s are good at predicting a certain type of judgment, we can now focus on a new type of problem where our judgment is useful, in that new domain. the question is, at some point do you run out of nesting dolls, and there's nothing left? your guess is as good as mine. but what i would say is the thing i think we're very poor at, that our monkey brains are not good at, is anticipating things we've never seen. so just the same way that if we would have asked people almost 100 years ago, when 47% of the population were working in agriculture, if
we would have said, imagine a world where less than 2% of us are working in agriculture -- nobody would have raised their hand and said, well, they're going to be game developers. nobody would have done that, because nobody would have imagined that such a thing would exist. and i think that's what we're very poor at. we're very poor at imagining, when we have high-fidelity prediction machines, what are all the other things we could be doing? most people would say our health care system still is not great, but it's not bad. i think ten years from now people will say, i can't believe people were living in those terrible conditions with that horrible health care system they had back in 2018, because we'll be able to do so much more, so much more efficiently, so much more intelligently, because the machines are able to handle it. same with space exploration -- all things where we feel like we're at the frontier, and in fact we're
just scratching the surface. i'll take one more question and wrap it up. please. >> [inaudible] so my question is around picking which problems to go after. now, we've dabbled a little bit in looking at deep learning and so on to solve different problems, but there are, of course, lots of constraints. is the assumption that eventually we will be able to turn that dial, or are there some ways in which we can make better decisions as to what problems to go after and what not to go after? >> yeah, sure. so that's the process of estimating the return on investment for building an a.i. -- in other words, how long will it take, how much will it cost, how much data do we need to turn the dial far enough for this a.i. to give us, you know, some lift in performance. so in the book we describe some processes for how to do that. i think we go far enough in the book to help people get
started, just far enough for non-technical people to get started, to then know when to bring in the technical people to provide more definitive answers in terms of how much data will we need in order to do this, or what types of sensory information will we need to collect in order to do that. so there's no single answer to your question, but there is a reasonably step-by-step process for coming up with the answer of where to focus, which a.i.s to focus on first. okay, great, thanks very much. [applause] ♪ ♪ >> booktv has recently covered several books on technology, which include talks by former world chess champion
garry kasparov on artificial intelligence, ellen ullman on her 20-year career and brian dear on the precursor to today's online community, the plato system. if this is a topic that interests you, visit booktv.org and type technology book in the search bar. several programs will appear and can all be watched in their entirety online.
with a discussion on the current state of book publishing, that all happens tonight on c-span2's booktv. 48 hours of nonfiction authors and books every weekend. television for serious readers. >> i'm pleased to introduce carlo rovelli to politics and prose. rovelli is an italian theoretical physicist at the marseille university and one of the founders of loop quantum gravity theory. his previous books include seven brief lessons on physics, an international best seller translated into more than 40 languages. and r