Book TV: David Weinberger, "Everyday Chaos" (C-SPAN, June 15, 2019, 2:16pm-3:43pm EDT)
>> Good evening. I'm honored to be with my friend, my favorite living philosopher. He has written other books; the "Cluetrain Manifesto" was a landmark work of internet culture, written in '99, I think.
>> Yeah.
>> That's how I came across David, and his other great books as well. "Everyday Chaos" is a wonderful book that is energizing and, again, even a little frightening; we will look into that shortly. I want David to take you through the book and its ideas, and then we'll move on from there.
>> Thank you, thank you for hosting this event, and thank you all for coming out. So the plan is: I'm going to give a presentation about the book, Jeff and I are going to talk, and then we all talk. Okay, so -- don't fail me now. Excellent. I want to let you know that I'm actually going to say positive things about the internet and about artificial intelligence, which I understand are objects of great fear, in many ways appropriately.
One of the great fears, very real and important, is about the sort of artificial intelligence called machine learning, which is what we are going to be talking about. Machine learning learns from data, and if you feed in biased data, it learns the biases and can amplify them, so biased results come out. If you fed it data about employment, it well might learn that women correlate poorly with senior management positions; it will learn the biases in our culture. There's a lot of work being done to try to address this, or at least to recognize it. It's a very serious, important problem, and it gets worse when the machine-learning system is working in ways we can't talk about, which is fundamental to what my book is about, because you may not even know about its biases. These are really serious issues. It is not the topic of my book; the book references it, but it's not what the book is about. I just want to make sure this is on the table, because it's really important. What the book is about is a way in which our future is changing -- not the content of the future, because that always changes. This is Elon Musk's hyperloop thing; this one is from 1912. What the book talks about is not that; there are no predictions. The book talks about how the way we think the future happens is changing. One of the ways we casually think about the future is that it consists of a huge number of possibilities, and as the future comes closer those possibilities narrow until there's only one left. So our job is to think about what is possible, choose the one that we want, and do everything we can to make sure that that one possibility is the one that becomes real.
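The bias-amplification worry raised a moment ago can be made concrete with a toy sketch (my illustration, not the book's; the hiring numbers are invented). A trivial "learner" that memorizes the majority outcome per group turns a 30/70 tendency in biased historical data into an absolute rule for every future case:

```python
# Toy sketch: a trivial "learner" that picks the majority outcome per
# group from historical data. If the data encode a bias, the learned
# rule reproduces it for every future case -- a tendency becomes a rule.
from collections import Counter

def train(records):
    """records: list of (group, promoted) pairs from historical data."""
    by_group = {}
    for group, promoted in records:
        by_group.setdefault(group, Counter())[promoted] += 1
    # Learned rule: predict whatever outcome was most common per group.
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Hypothetical, deliberately biased history: women promoted 30% of the
# time, men 60% of the time.
history = [("women", True)] * 30 + [("women", False)] * 70 \
        + [("men", True)] * 60 + [("men", False)] * 40

model = train(history)
print(model)  # {'women': False, 'men': True} -- a 30/70 split became 0/100
```

The point of the sketch: nothing "went wrong" in the learning step; the bias came in with the data.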
If you do that, great. The way that we have done this -- the strategy of strategies -- for a very long time has been to try to anticipate what's going to happen and then prepare for it. This is genuinely a strategy, and it's really, really old: when we first started knapping flint axe heads or spears, we were anticipating their use the next day. We would never give it up entirely, because if you do, you get hit by the next bus, because you didn't anticipate and look both ways. I am not going to be saying anticipation is dead, but something is happening here. The book talks both about the net and about AI, because the hypothesis of the book is that our life on the internet -- the success of the internet, the things that we do and enjoy on it -- has been training us to succeed and prosper and thrive.
in a world where lots of things happen beyond our control, where you never know what's going to happen and are constantly surprised. If something suddenly goes viral, it doesn't make any sense; we don't know why. The internet has gotten us used to that. So we'll start with the internet and spend more time on AI. On the internet we are succeeding, and we do very weird things that we take for granted, and I think there's a thread that runs through them. I'm going to point to examples of things we do on the internet that we don't recognize as weird, and then to the phrase holding them together. The first is the minimum viable product. Does anybody know what an MVP is, in this sense?
Well, good, thank you. It's a very popular approach on the internet for launching products: you figure out the one key feature that people want from your product and you hold off on all the others. People pay for that one feature. Dropbox: you can use your files anywhere. One key feature. Then you watch what people do with it and you start adding features; Dropbox is a feature-rich product now. This makes total sense, because you can see what people actually want -- not what they think they want, not what they say they want, but what they really want once it's in their hands, what works and what doesn't. But this flies in the face of how we designed products forever. If you're Henry Ford in 1908 and you launch the Model T and don't change it for 19 years -- it's the same product, tiny little changes over 19 years, 15 million of them -- you've changed the world.
That is a fantastic example of designing by figuring out what people need ahead of time. The MVP, the minimum viable product, says no, throw that out; instead ship one key feature and then see what people do. This approach, along with other weirdnesses on the net, holds possibilities open rather than anticipating; it purposely holds possibilities open. A whole bunch of other things do this. Unconferences, in which the attendees decide on the agenda when they get there, rather than the conference organizers anticipating what attendees are going to want to talk about: open possibilities, basically a new thing in the world. Then there's a second example: open platforms, technically open APIs. With an open platform, you watch your product, and it's going well -- maybe it's an MVP, maybe it's not -- and you decide there are lots of things you could do with your product, but you don't have the staff to do them all, and, more important, you can't even think of them all. Slack is a well-known messaging app used by teams, a great product. Anybody on the web, any developer in particular, can use what Slack has built to create new features: a new way to integrate it into a current working environment, a niche version that satisfies some users. These are things Slack well might not have thought of, and if they did, they probably wouldn't have the staff to do them all. This is very much not the way products are supposed to work. You hand off the power to develop your product to users, who generally, with these platforms, can develop new features without even asking permission; they can just go ahead and do it. But it makes your product far more valuable and more useful; it addresses niche needs and gets integrated. In fact, Slack has an $80 million fund to encourage people to do this sort of work. Very weird, widely adopted, and one of many such instances in which, on the internet, we do this thing of opening up, giving up some measure of control, and allowing others to extend and modify. Not the way we used to build products; very counterintuitive. Both of these sorts of things -- one holds open possibilities, and one makes new possibilities and enables people to create new things -- are examples of switching from insisting on anticipating what will happen to unanticipating, to seeing benefit in not trying to guess what's going to happen. It's as if we collectively, as a species, spent the past 25 years doing everything we can to make the world more unpredictable. It's not "as if" -- that's exactly what we have been doing for the past 25 years: everything we can to make the world less predictable. The result, in some sense, is that this reverses the flow of the future. Rather than a narrowing in which possibilities get eliminated, we are now systemically engaged, in multiple ways, in trying to enlarge the future. This too is pretty new in the world. That's a quick view of how things are changing in practice; now let's talk about how AI may be reframing things. When I say AI, I do mean the particular type known as machine learning, and I will try to explain it as quickly as I
can. Machine learning is a deep art and science -- I'm not a computer scientist, and it's really complex -- so this is the simplest version I can do. If you are a traditional computer programmer and your task is to come up with software that will predict sales for your business, you will figure out the factors that affect sales: the salespeople and their incentives -- if we increase incentives, that might increase sales, and if we sell more, we increase support staff -- and so forth; all of these things are interrelated. Here are the factors and here are the relationships. That's the conceptual model, and you program it up into a working model. If that sounds like a spreadsheet, that's exactly what it is; it's what we do on spreadsheets. It's a really easy way to program a computer, and obviously these things work great, but it is not at all how machine learning works. Machine learning drops the relationships. You give the machine data, and you give it labels for what the data stands for, but the labels are in a sense meaningless to it, because it doesn't know what the relationships are. I want to note that we generally like it when the conceptual model matches the working model, but with machine learning you basically throw away the conceptual model. You take the data, put it into buckets, so to speak, and the system works over that data, looking for correlations among all the different pieces -- and there could be millions of pieces of data -- finding correlations and constraints, one piece connected to thousands of others, until you have a network. This is an artist's rendering: a network in which the pieces are connected in dependencies and relationships with so much detail and complexity that if you wanted to walk through it as a human being, starting with data point one and seeing where it connects, it would take a very long time and you would learn nothing. Nevertheless, if you've trained it correctly, when you feed in input data the system will output predictions -- often better than humans can make, which is why we use it for predicting: more accurately or faster or both. It works. But at one level we simply cannot explain how it works, because it's too complex, and at another level, when we try to make sense of it, we frequently also simply cannot explain how it works. But it does work. Every day something new is being done with the technology that's unexpected and occasionally slightly miraculous. We tend to work starting from broad, general relationships; that's not the way machine learning models work. The old way -- in general, the way we think about how things work when we're modeling -- is that we look for the general principles, the generalizations, and then apply them to the particulars.
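The contrast just described -- a human-authored conceptual model versus a relationship recovered from the data alone -- can be sketched minimally (my illustration with invented numbers, not code from the book):

```python
# Traditional route: a human writes down the relationship up front.
def rule_based_forecast(salespeople):
    # Conceptual model someone decided on: each salesperson adds 10 units.
    return 10 * salespeople

# Data-driven route: no relationship specified; least squares recovers
# whatever linear correlation happens to be in the data.
def fit_from_data(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx   # slope and intercept, learned

# Made-up history: (salespeople, units sold)
xs, ys = [1, 2, 3, 4], [12, 19, 31, 42]
slope, intercept = fit_from_data(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 10.2 0.5
```

Real machine learning finds vastly more complex, nonlinear webs of correlation than this one-line fit, but the inversion is the same: the model comes out of the data rather than going in ahead of it.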
The generalization has to be understandable, because we need to know how to apply it, and we value the principles more: we look to the principles for the truth. The data we put in is transient stuff; it's the principles that abide. That's the old model -- thousands of years old. With machine learning models it's really different, because you pour in particulars. You don't tell it generalizations; it may not even come up with them, or understand them, or notice them. It connects particulars incredibly complexly, densely, and delicately, in the sense that a small change can ripple through and have unexpected results, in ways frequently beyond our understanding. It's a very different sense of models, and of what those models are modeling, which is our world and our future. So if it is the case, as I think it is, that we have always understood ourselves in terms of our technology -- only after the steam engine was invented did we start to feel "under pressure"; that's a steam-engine term, as is having to "let off steam" or "blowing a gasket." We felt under pressure because we adopted the technology's way of thinking about the world. In the age of information, starting in the 1950s, suddenly everything was information, including ourselves. We felt information overload; we feel like we are processing information: "I can't process that right now." We have so thoroughly taken up the metaphors and models of these technologies. If that's the case -- if we understand ourselves through our technology -- and if it is the case, as I'm sure it is, that AI, machine learning, will name the next age the way information named the last one, then we can start to think about what that might mean for how we understand ourselves and our place in the world. That's what a lot of the book is about. I want to give you three examples, two of them pretty quick and one a little longer, of the sorts of effects this might have. When I say "might," I think we already see elements of this now. The first is what happens to strategy. Strategy is a very new idea in the world. Military strategy is a 19th-century idea; business strategy is measured in decades, not centuries. Strategy is a new idea because, for the concept of strategy to make sense, we have to believe the world is pretty stable, that it's run by laws that we can learn and teach and use as our guide.
We didn't have that sense of stability for a long time. In the 19th century we started to get that sense for war, and then for business as well. But in the past 10 or 15 years, in the brief life of business strategy, it's been getting knocked around. The book "The Black Swan" was a very important best seller, very influential. Its basic idea: sure, you have your strategies in business, but at any moment a black swan, an unexpected event, can happen. Your supply chain could blow up; you never know what's going to happen. It could bring down your business, so you should be aware of that and try to prepare for it. It's a little surprising we needed to be told this, because it seems so evident in life, but apparently we did, and it's been an important idea in business. Another quick example along these lines: some recommend that businesses pay very close attention to small changes in the vast amount of information around us, looking for opportunities and risks, rather than only having a long-term strategy, which assumes a pretty stable environment. No: pay attention to the small things around you. Some people call this overall trend the minimum viable strategy -- not the least strategy possible, but a recognition that the world is so unstable, so chaotic, so unknowable that the general principles may actually be totally right, yet applying them in such an environment means you cannot plan as closely and carefully as you would like. I just want to add a note: Plato was the first person to separate tactics from strategy, and what he meant by tactics was logistics. For strategy, though, his go-to example -- the first thing he says, by way of analogy -- is that strategy is like what musicians do when they make up new tunes. It's improv. That is not how we think about strategy these days, but I think we may actually be going back to that, based on what we're seeing happen in the business world. The second thing is our concept of progress. We have a straightforward idea of progress, which is a relatively recent idea; around 1700, in the West, we got the idea that progress makes sense. Ask somebody what progress is, and they'll draw a
line like this -- here's progress, with telephones and other big markers along it. And this is fine, sure, but it is absolutely not how progress works; it's never been how progress works. It leaves out all of the mistakes, many of which were important and helpful, and it leaves out the intersections of each technology with all the other technologies that enabled it, each with its own points that lead out to other points. This is a fiction. Stories are great; this one is just not a true story about how the world has proceeded. Online there's a different picture. This is a screen capture of GitHub, a wildly popular and influential site for developers: software developers post their code so other people can iterate on it, add to it, make suggestions, or fork the code -- take the code and turn it into something else -- without asking permission. This is open source. As a result, each piece of code -- there are tens of millions of GitHub users, and multiples of that in projects -- can become a launching point and be mixed and matched with other pieces of code. The most reused pieces get "forked": it's forked when somebody builds a new branch of it. The most popular, I think, has been forked well over 27,000 times at this point. That's a lot of reuse, and each fork may be using pieces from other pieces of code, and what gets forked may itself be forked. If you drew a map of progress at GitHub, it wouldn't look like a nice straight line. It would look way, way more complex: a lot of dead ends, leaps forward, and splittings. That is what progress actually looks like, and here is a somewhat better picture of it: a picture of the internet -- a small, very old map of the internet, nodes that are connected. The internet's architecture reflects the way progress through reuse has worked on the internet, because the internet allows that sort of thing. Developers and companies do this because they see benefit in it: they use other people's work, legitimately, and make their own work available, and their product -- what they're doing at GitHub -- gets better because other people iterate on it. But if you want to tell the story of progress, it is not a straight line with nice little notches in it.
The third thing that changes is explanation; this will take a minute. Let's say you're on a back road, driving along, you get a flat, and you want to know why, so you take off the tire, look, and figure it out: it's a nail. Great explanation. I'm not going to argue against that; it is a great explanation, an explanation of a particular type, a sine qua non: but for the nail, I wouldn't have gotten the flat. Works great. However, it's not the only sine qua non in the picture. Here's the road again. You're on the road because you were late; if you hadn't been late, you would not have gotten the flat. If you had not swerved for the rabbit: no nail, no flat. If metal were softer than rubber: no flat. If pointy things didn't penetrate: no flat. If people didn't want to go fast, there wouldn't be cars -- especially if there were no car companies eager to serve that need -- and no flat. If there weren't gravity, you wouldn't have gotten the flat. If space aliens had done their job and vacuumed up all the surface metal, because they're a rust-based species, you wouldn't have gotten the flat. Everything had to happen for you to get that flat -- everything in the universe, all of history up to this moment. They're all sine qua nons, so why pick the little nail as the explanation? The reason is that the nail is the one thing you can change in the scenario. You can't get rid of gravity. You can't go back in time and say, "I'll run over the bunny." You can't undo any of that. The one thing you can change is the nail, and that's why it's the explanation, and it's a good explanation. Explanations are tools. They're not part of the world; they're tools we use because we have purposes, and we sometimes need an explanation as a tool. But they're not always the best tool. There are issues now about regulating AI so that everything has to be explainable; explanation is not always the right tool. Explanations always hide more than they show. That's okay, but it's really important to recognize it. When a machine learning model is put in front of us as a problem -- when we worry about these "black boxes," as they're called, these inexplicable systems -- every time we're told that, we're in effect being told there are systems we rely on that do things we cannot. These systems do sort-of-cognitive work -- air quotes here, because machines don't think. Every time we're told that, the idea is reinforced that machines working in inexplicable ways are doing things we cannot. If we look at those not as the exception -- a handful of things we can't explain -- but as part of life, part of the landscape, then we will be enabled not only to make more progress with AI, where that is important and interesting, but also, I think, to start thinking about ourselves differently. I think we are at a disruption. In the West, we started with a covenant saying humans are special: we understand, to some degree, how the world works. God created us in his image -- and I'm sorry for the gendered language -- not in the sense that we look like God, but in our ability to understand and appreciate at least some of God's creation. For the Greeks, the same word, logos, is used for the beauty, harmony, and order of the cosmos and for the human capacity to appreciate that order. There's a unity, a magical unity, that makes us special. That's what the covenant has been.
There's no point in being the rational animal, in the Greek terms, if the cosmos isn't rational, if rational thought doesn't disclose anything about the universe. So we have had this covenant, and insofar as we are beginning to learn, through a technology that does cognitive-ish things -- that does not rely on general rules, doesn't spawn generalities, and doesn't think the way we do -- that covenant is broken. And that, I think, is a big deal. We are coming, I think -- actually, I hope -- to question whether our way of thinking is the way of thinking, whether we're more alien in the world than we thought. Our thinking has evolved more for survival than necessarily for truth. The generalizations are good shorthands, but we now have a system that is able to do more by focusing on the particulars and their relationships. This seems to me an important moment in our history, and I am actually very hopeful. There are tons of problems with both the internet and AI, and we can talk about them in a minute, but this change in the covenant seems to me an opportunity and a possibility: that we are at a new stage in our evolution, a new stage of maturity, where we recognize how small and fragile and incomplete our thinking is. A moment of great humility, I hope, as we reconsider what our relation to the universe is. And great humility enables great awe. I'm hoping that we manage to make our way through the difficulties with the new technology to become both more humble and also more
in awe of where we live. thanks very much. [applause] >> talking about -- machine learning that beats one-off. he says here, [inaudible] algorithms work abuse they capture better than any human can -- >> keep going. >> better than any human can the complexity and even beautoff oto universe which everything affects everything else out at once. >> i want to read a moment from early in david's book, when he
talks about the machine learning of alpha go that defeated humans in winning the game. this is deep learn'ing as algorithms work because they capture better than any human can the complexity, fluidity, and even beauty of a universe in which everything affects everything else all al once. at-wellry see machine learn eggs one of many tools and strategies that have been increasingly bringing us face to face with the incomprehends sable inintrick cassie of our everyday world but this comes at a price. we need give up our insistans an always understanding our world and how things happen it in, i with that i realized that david was pulling the rug, the cognitive rug, out from underneath all of us. he was taking out the idea that we can explain the world.
And going further into the book, through concrete examples, what I learned was that oftentimes a machine learning algorithm, with a lot of data fed in and a lot of connections made, can make predictions better than we can -- and it can't explain them. David starts off with the simple A/B test, which is beloved of the platforms: let's just try it out; test this against that; which one is better? They do it all the time. You're bombarded with A/B tests throughout the day when you're on Google or Amazon or Facebook, and the A/B test says this works better than that, reliably, run again and again. This works better than that; B is the winner. Why is B the winner? We have no effing idea. There's no explanation from the algorithm. The algorithm cannot explain it, and neither can we. That I found a little frightening, because it starts to take away our sense that we can cognize the world, that we can explain the world, that we're the best at explaining the world -- we're the humans. In a sense, what the computer starts to do is take the hubris away from us. Is that right?
>> Say you want to sell your barbecue sauce online, and you have one page design on the left and another on the right, each shown to 100,000 people -- micro-changes between them -- and you discover that the one on the left gets 2% more clicks, which is quite significant. So you say, okay, we're going with A.
And tomorrow you're selling ketchup, and it's not the design on the left that wins; it's the green one at the bottom of the page. We don't need to know why. Why is it that we don't have generalizations for this, that we don't learn from it? For one thing, we don't need to; it's cheaper just to run the test again. Second, it seems likely, given the great variability of these tests, that what determines the success of the barbecue sauce design is some complex set of factors, maybe different for different people, maybe so sensitive -- that's why it's only a 2% difference -- that in cities where it just stopped raining, people are 2% more likely to click, but in cities where the sports team just lost, or where people just heard loud traffic going by, they like the one on the left. Maybe some confluence of factors is so delicately balanced that there's nothing to learn. But if that's the case, it's a really interesting lesson for thinking about how the world works, because it's not just A/B testing, which we happen to be able to harness and validate and access; it's everything. Everything you did on the way here is subject to the same sort of, we think, micro-influences of basically everything that has happened, everything in the world. That seems to me a pretty convincing model of how life works. The second example is up the scale: what causes war. People have theories of war.
It's a popular topic among historians and the like. World War I started when somebody shot the archduke, and that caused it. Maybe. That's the sine qua non: if nobody had shot him, or had missed, it wouldn't have started. But to say that dramatic act caused the war -- you would have to know everything: the economics, the politics of the time, the socioeconomics; everything about it would be required to understand why a war starts. Archdukes had been shot before without starting a war. Maybe things happen for reasons so complex that we're just not set up to think about them.
>> You have the fiction that we can explain things.
We can understand the underlying law. So, I read another book, by Alex Rosenberg, "How History Gets Things Wrong: The Neuroscience of Our Addiction to Stories." In a quick instance, he says that the theory of mind we have had ever since we were on the savannah -- that we can read other human beings' desires -- [inaudible]. He says that Newton robbed our sense of the universe of divine purpose. Why did these things happen? Because of God? No, it's a law. And then -- [inaudible] -- sorry, we got laws to figure it out. And now Rosenberg and Weinberger come along and take away the last part of that: the sense of human agency, that we know what we're doing. The theory of mind doesn't work; maybe we're just players in a grid of A/B testing in life, and who knows why we do what we do. It takes away the sense of storytelling and the sense of explanation. That, I think, is somewhat frightening, and I'm not a technophobe -- I'm accused of being more of a technologist to the extreme than David. But I think there's a coming crisis of cognition here: we think we know how to figure out the world, and if the machines can figure it out better, will we become even more resentful of the systems?
>> Such a rich comment. Jeff has a post from a couple
weeks ago about this, which then applies it to lessons for journalists, because he is a journalism professor. It's fabulous -- and I say that not just because you talk about me in it, although that would be enough. There's been a lot of discussion by lots of people around this idea: philosophers who say, no, thinking is an evolutionary product. There are confluences bringing us to this point. The fact that we have technology that does this, and that we use it all the time, I think teaches us a lot. You all have cellphones? You're all using machine learning all the time.
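As one concrete instance of the everyday machine learning being described, here is a toy naive Bayes spam filter (purely illustrative; real filters are far more sophisticated). Nobody writes rules for what spam "is"; the classifier only counts words in labeled examples:

```python
# Toy naive Bayes spam filter: learned word counts, no hand-written rules.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, is_spam). Returns per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, is_spam in examples:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def predict(model, text):
    counts, totals = model
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        # Log prior plus log likelihood with add-one smoothing.
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return scores[True] > scores[False]   # True means "looks like spam"

model = train([
    ("win free money now", True),
    ("free prize click now", True),
    ("meeting notes attached", False),
    ("lunch tomorrow with the team", False),
])
print(predict(model, "free money prize"))        # True
print(predict(model, "notes from the meeting"))  # False
```

The filter's verdict is just a comparison of two learned scores; like the A/B test discussed earlier, it offers no narrative about why a message is spam.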
Every day: autocomplete as you type, the mapping app you use to get somewhere, the spam filters on your email -- they're all machine learning. With only a little bit of exaggeration, everything on your phone is machine learning. Let me ask you a question, Jeff: how frightened are you of the mapping software you use?
>> I love it. I'm a bridge-and-tunnel guy, and without Waze I would lose another year of my life. I'm frightened of people's fright of it. In the world we have now -- this is outside the book -- I think what we see going on is a panic among people in power who look like me, old white men, who are panicking about the new voices that can be heard thanks to the internet, the new connections that can be made, and the reaction we see to that is what is frightening. So I worry about the moral panic that will arise, that people will become the Luddites against machine learning. We thought it would be robots that scare us: Amazon replacing a box packer with robots, fast food replacing people with robots. We know that; it doesn't change our view of the world. But this stuff --
>> Has other sorts of very deleterious effects.
>> And it takes away our sense of ourselves and our humanity. We feel unempowered; we lose agency. I just worry about people's reaction.
>> very worried but the reacts. do want to reiterate that the problems of bias, if nothing else -- there are other problems as well including the fact the systems can be fooled by adversaries and become more reliant on them and -- these are really problems and should be engage i with them thoroughly but that -- i don't disagree with that. how do. >> how do we calm down mankind, person-kind. >> why its frightening -- let me put it like this. we have been in an enlightenment parenthesis. as a way of -- well go ahead. >> for another time. >> as a way of putting some of the internet changes into -- so we have been in the parenthesis,
this historical parenthesis of thinking we are the masters of the universe, and this is so nonoriginal -- it involves a sense that we can master everything, that we can control the environment. we are painfully learning, maybe fatally learning, that's not the case. why isn't coming to the understanding that we are not in fact masters of the universe -- that we cannot find the magic keys that will unlock everything in terms of knowledge, so that everything becomes understandable and controllable -- why isn't that freeing? i understand why that's frightening to some people, but they are wrong to be frightened of it. >> maybe i'm just full of
myself. it doesn't bother me, but it bothers everybody else. but in this age, one thing struck me, and it goes to your metaphor. you say that which we can explain, we explain: we have laws, we figure it out, we know why. everything else is an accident. and what you're saying is there are no accidents, only things we can't figure out. >> yeah, but i think the idea that the world is a rule-based system that we can't fully figure out is as old as newton. newton knew that everything affects everything. that's actually core to his thinking. gravitation goes on forever, but it weakens so quickly it doesn't matter. i'll tell you an anecdote about
newton, which you more or less raised. newton is the stunning genius who gave us the idea that there are simple rules that humans can understand, that apply equally to everything throughout the universe, and this undid the greek notions that had held on forever. this is a stunning act of genius. he was a very, very deeply committed christian, and he was very disturbed by the fact that his laws could explain everything, because then you don't need god. it's a mechanical universe. god creates it, winds it up, and then you don't need god to explain anything, and that distressed him. one speculation -- comets were a big deal back then.
and he said, you know, the planets in their beautiful orbits are a sign of god's grandeur, so perfect, but gravity -- everything in the universe exerts gravity that affects everything else, even if only minutely. add all that up, and the mass of the universe should maybe be pulling the planets out of their perfect orbits, and maybe, maybe a comet is god's instrument -- god throws a comet into the system to pull the planets back into their orbits. he found a way to try to get god back into it. i'm sorry, i went down this path -- i forget -- he and the newtonians, some of them, understood that the universe is really complex and everything affects everything. but we can safely ignore most of it -- if you want
to know when an eclipse is coming, you don't have to worry about the gravitational pull of a star a gazillion miles away. they were aware of it, but it didn't matter. so the focus went to the universal laws -- that's what is permanent and real and eternal -- and the rest is swirling matter that is determined by the laws, but we can't ever possibly know it all, so we don't pay attention to it. if the machine learning model that focuses on the particulars leads us to appreciate the particulars -- rather than throwing out the laws, which we don't want to do, we want more science -- maybe we can in more human ways appreciate the accidents, the particulars, the details, the things that are all entirely different but all affect everything all the time. politically i'm much happier with that metaphor than the
rule-based one, or the masters -- the people in power and the rest of us -- metaphors. >> we have to share microphones now. >> sure. >> so, professor rosenberg says in the end that machine learning operates like our brains: it tests a whole bunch of possibilities and then settles on one, weighing factors we don't know and can't explain -- but we do that too. in line with that, i would love it if you would tell the crowd about chicken sexers. >> it's an example used by philosophers of knowledge. it's important to chicken producers to be able to tell the sex of a chick, because they want the layers, which are the females, and the males are just
fed, and just an expense. so they want to get rid of the males immediately. and there are chicken sexers who can look at a chick and tell you what sex it is. they can be really fast and incredibly accurate in doing this, and they cannot tell you what the observable differences are between a male and female chick. they just can't. we still don't know. so if you want to become a chicken sexer -- just so you can say you are one, or have a card printed up -- you get trained by an existing chicken sexer. i have never said the words chicken sexer so many times in my life. you pick up a chick and you say, male. the chicken sexer says, no, next one. male? yeah, next one. and you guess, and over time you get this power: you're able to tell what sex it is without being able to tell how.
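the training loop described here -- guess, get told yes or no, adjust -- is roughly how the simplest machine learning algorithms work. a minimal sketch in python, using the classic perceptron update rule; the "chick" measurements and labels below are invented purely for illustration:

```python
def train_by_feedback(examples, epochs=50, lr=0.1):
    """Train a tiny perceptron purely from yes/no feedback.

    The learner is never told which features matter; it only
    hears "right" or "wrong" on each guess, like the apprentice
    chicken sexer, and nudges its weights after each mistake.
    """
    n = len(examples[0][0])
    w = [0.0] * n          # learned weights -- numbers, not an explanation
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if guess != label:               # the mentor says "no"
                delta = lr * (label - guess)
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented data: two numeric measurements per "chick"; 1 = female, 0 = male.
data = [([2.0, 1.0], 1), ([1.8, 0.9], 1), ([0.5, 2.2], 0), ([0.4, 2.0], 0)]
model = train_by_feedback(data)
print([predict(model, x) for x, _ in data])  # → [1, 1, 0, 0]
```

after training, the model classifies correctly, but its weights are just numbers: like the chicken sexer, it cannot tell you how.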
this is a really good example for philosophers of knowledge who are committed to the idea that knowledge is something you can explain and justify -- otherwise it's just a guess. it starts as just a guess, and you can't tell why, when you know you should be able to point out why and say, well, look, the wing tips are pointy, or whatever it is. but we don't know. and machine learning -- this is one case in real life in which machine learning actually sort of works this way. >> so indeed our brains operate without explanation. the chicken sexer and machine learning are very similar, and our brains operate like that. we don't know why, but it works. >> a different example -- there are a lot of examples of this -- you can do a retinal scan, show it to a machine learning system that is trained in a particular
application, and it will tell you things about the person who owns the eye: what gender, i think some age stuff, and a whole bunch of heart-health-related things, cholesterol and the like. the thing that is interesting is that neither the computer scientists nor human doctors can tell what in the scans the computer is paying attention to. maybe they'll figure it out, but as of now they can't tell, and it may just be some odd combination of a set of factors. maybe a chicken sexer could -- train somebody on it; that would be an interesting experiment. so machine learning can work this way. it works without giving us any general rule or things to look for, no pointy wing tips. as a vegetarian you do not want to watch the videos, because the male
chicks are thrown onto a conveyor belt and ground up alive. so enjoy. >> thanks for that. so, i want to talk about the regulatory regime starting up in the world: we must be able to control this. we can't control the universe, but we can control that. when i started working with computers, the comfort of anything that had to do with a computer was that you could always find out what was wrong. there was always a reason, and you could always fix it. usually you blamed microsoft for something, but you fixed it. that's not the way the world operates, but now you have regulators saying we must have accountability, and there's a law being written in congress to look for algorithmic accountability. but make them accountable to what?
>> one thing you hear often is, be transparent, show me the algorithm. no one would know what to do with the source code of an algorithm. it's gibberish; that's just dumb. so another view says, open up the algorithm so we know its impact. we might be able to do part of that, but we don't know how to do all of it. and yet our lives are more and more going to be served by algorithms that predict reliably what we'll want next, and do a good job of it. and they may have bias, as you mentioned before. they may have problems. do you see a way that authorities, either cultural or governmental, will be able to regulate and manage this algorithmic, machine learning, a.i. world, or is it beyond them?
>> i am worried about this as well. there are certainly areas where we want regulation, places where i think it would be completely reasonable to say we don't want to use machine learning, a.i., for sentencing -- where it is currently being used -- or for setting bail and the like, if only because we need to be able to trust in the judicial system. not to mention that there's very strong, convincing evidence that some of the algorithms have amplified systemic prejudices, which is unacceptable. there are certainly areas where we want regulation. with autonomous vehicles we need somebody other than the car manufacturers to decide what these systems should be optimized for. one goal is fewer fatalities, and then you go down the list and there are dozens
of things they could aim for, including lower environmental impact, shorter travel times, comfort. we don't want the car manufacturers to be in charge of choosing among those without anybody looking over their shoulder, because we are allowed to say as a culture, you know what? you're a big rich car company and you're going for comfort, and if it turns out that means worse mileage, we are killing the planet and we have a right to say, no, optimize for something else. and there are tradeoffs among these different values, which makes it very complex. to say what we need is algorithmic transparency -- simple, final -- it's not clear what that means, and it's really not what you need in this case, and it's going to vary from case to case. in the case of autonomous vehicles a lot of it could be achieved -- you could regulate what we want
them optimized for and insist on transparency of the metrics and the outcomes. if the cars are continuing to kill a lot of people, that's an important metric, or if energy use isn't going down. then you do what we do with products, which is we send them back. we do a recall, and the manufacturer is required to get it back up to snuff. we don't care what the manufacturing process is, so long as they can prove that they fixed it and now it's doing what it's required to do. we don't need to insist on transparency of the algorithms; in many instances we'll need data transparency instead. but even that is complex, because i don't think regulators are capable of analyzing data. that's a very specific, deep skill set. maybe the transparency for that may include: are the samples diverse? where did the samples come from,
are they representing the people and the community, what has that process been, and what are the outcomes? there are a lot of different places in the process where you could insist on transparency, and some places where you don't need transparency. we need to hope that regulators get it right. we regulate aircraft safety pretty well despite -- well, okay, fine, but really pretty well. it's going to vary by product, and it will be a nightmare of struggles, because the impetus will always be for legislators to do the simple thing, the thing that crushes everything so long as they get the outcome they need, and we will lose some benefits we need from that. it's a scary time. >> two more questions, and then questions from you, for fears or trepidations or joys you have. >> i'll try to answer yes or no. >> i was at google i/o, the
developers' conference, where they paraded the machine learning, and what struck me most was that they've taken machine learning models that exist, that were 100 gigabytes, and gotten them down to half a gigabyte. that means the entire model fits on your phone, an inexpensive phone. so they showed a heartwarming video of a mother in india who was illiterate. she could point the phone at a sign, and it would read the sign, translate it, and read it out loud to her, because the entire model existed within this small little bit of minerals. and so i think we'll see more of that. by the way, it reduces the fear of privacy violation, because in the old machine learning model everything went into the cloud, and now it can happen locally and
only the conclusion goes to the cloud, so there's a lot of development in this. so you'll see more and more machine learning, which is why you want to buy david's book, to understand the world we're going into. it's going to be in everything. intel inside is going to become ml inside. let's go back to the internet and ask this, and then come out and have a discussion. the net, i think of as a connection machine. that's the relationship of machine learning and the net: i think both enable or use or exploit connections -- connections among data in machine learning, connections among people and information in the net. and i get arguments all the time from people who tell me that the net is a medium, like the one i work in, like what surrounds us, and i argue, no, it's something else.
and i get into this fight all the time. the reason it matters is that i think we don't want to bring the same presumptions and regulations to the net that we had in the other world. >> it's also a power relation. call it a medium and you have set up -- assumed -- a set of power relationships not necessarily based on the net. >> right. so part of what i've been thinking is that if you drew a chart of how most people and governments look at this, there's a big circle that says media, and inside that is print and broadcast and the internet. i think that's wrong. i'd draw a bigger circle and call it the internet, and inside that is a circle called media, and inside that is one called communications, and other circles outside that are being drawn in by newton's gravity: finance, maybe 15% inside; retail, maybe 25% inside; crime, 10% inside; and so on and so on.
and so i'm coming to see that the internet describes society in this way. in my field, in journalism, we keep covering the internet as if it itself is a technology story, and i'm coming to see it's not. it's a story of people. the story of the internet is our behavior on it. it's not the technology that screwed us up; we screwed ourselves up, and maybe the technology amplified it, made it easier or worse. i don't know. but it's soylent green -- it's made of people. so i'm starting to wonder about this. i was talking to a communications school, a media school, and i thought, that's kind of retro, a whole school around media. why isn't that school around the internet? how are we studying the internet as a society, in its entirety? not predicting --
the dumbest job title is futurist -- but understanding where we're going. you have done that through your books. small pieces loosely joined was about how we have connections that aren't as clear as in the organization charts they used to be. everything is miscellaneous. too big to know was about how our brains are too small. and now in this one you're looking at how the machines are bringing their own order to this that we don't have. so what you have done, really, since the cluetrain manifesto, is try to understand this new world that is ruled in great measure by the internet. how should we be studying it? how do we do that in universities? can you make it clearer? >> i don't know how disciplines get started, historically. i know it's complex, and i don't know how you do it,
or what the forces are that actually get universities to commit. i just don't know. i agree that the internet is uniquely important. it has not turned out the way some of us thought initially. i attribute that, at least in part, to my own blindness -- the assumption that everybody who encounters this will be liberated to find more information and make connections. that's an assumption that privileged people make about life. for me it's important for many of the same ways and reasons that machine learning is: that the architecture of the
internet gets repeated and made visible to us even when we're not paying attention. it's an architecture of loose connections that allows you to contribute your own stuff and make new things -- a generative architecture that enables us to do more of that. at its worst, it does the same thing, except what results are all of the evils: the passing around of false information, the bullying, and the gathering together of clusters of bullies. that's also inherent in the internet architecture, but the architecture enables us and makes the connections even when we do it negatively. i think that same repetition of
architecture, having had some defining impact on how we think about the world, is now already being repeated in the world thanks to machine learning. i take that as an arc -- i didn't know my books had an arc, but i think that's why almost all of them have flaky pieces on the cover, loosely joined. >> we have to share one microphone here. does this wired one work? you yell loud enough for my old ears, and i will repeat the question for the recording. any thoughts or desires or fears?
yes, please. [inaudible question] >> the chicken sexing. the gentleman wants to know about chicken sexing, because we all do. >> is that a human being that does this? >> it's a human being. machines can't do it, because the old-style machines would have to be told what to look for, and we don't know what to tell them to look for, so it's a human occupation. it's pretty well paying because, for some reason, a lot of people don't want to do it. it involves gently squeezing a chick -- a newborn chick -- and peering at its insides. so it may not be an occupation that is glamorous, but it's pretty well paying and mysterious. people were
trained during world war ii to recognize enemy aircraft -- the type, at a glance, and at a distance. the way they got trained was they were told: a zero has rounded wings, and that's how you can tell it from the other one, whose wings tilt up -- that sort of distinguishing mark, like identifying birds. that you can program a computer to do; you can tell a computer what to look for. if you don't know what to look for, you can't tell the computer. this is actually a field where machine learning might work -- one assumes that machine learning trained on enough instances would be able to do chicken sexing. as far as i know there's no research being done on this. probably hard to get a grant
for. >> how do they know they're right? >> the question is -- >> once they throw it away, they never know if it was really a female. >> yes. so, how do they know they're right, since if it's a male they discard it? this may not be the full answer, but there are competitions, people who race to see how many they can do and how many they can do correctly, and the experts get near 100%, as judged by other people. and of course when you let a male through and it grows up and it's not laying eggs, you know you got that one wrong. but as far as we know, they are astoundingly accurate at enormous speeds. >> [question from the audience] in an evolutionary process, will there be a transferring of decision-making such that humans
start losing the need to make decisions as the machine makes decisions, as our bodies -- >> let me repeat the question: in an evolutionary process, are we going to lose the ability to make the sorts of decisions we have outsourced to the machines? so, in terms of evolution and genetics -- this has been addressed for a long time. socrates, i think, argues against reading, against literacy -- this is the fifth century bce -- on the grounds that, first of all, you can't have a dialogue with something written, and he is all about dialogue, so why would you want to just read one side of the story? and he says we'll lose
our memories. there's no need for us to exercise our memories, because we'll write everything down, and that absolutely has happened. the odyssey -- the iliad is about 16,000 lines, and it was passed along, before writing, generation to generation. there were people who could sit down and recite the iliad for you. if anybody here can recite the 16,000 lines of the iliad, i'm willing to listen, but we have lost the ability. however -- so socrates was wrong about that. i don't think any of us would give up literacy, on the grounds that we have greater memory because we write things down, and we accumulate knowledge because we write things down -- a huge win, even though we lost the skill. so it's not clear to me -- i don't
know if the sorts of decisions that machine learning systems make are a skill we need, and whether in this case socrates will be right and we'll become stupid, incompetent, and unable to operate in the world. i can't make a prediction. i'll tell you my little experience of this, very quickly. i have a terrible sense of direction -- always have -- and i get lost all the time. i now have and rely upon a cellphone and the route software, and it actually makes me worse. that's now the extent of my sense of space: this voice saying, turn left in 400 feet. i'm pretty sure it has made my already terrible sense of
direction worse. but i don't care, because me and my machine are smarter than i was, and that has always been the case. the philosopher andy clark, a scottish philosopher, makes a fantastic point, which is that we think out in the world with machines. we have been taught to think that we think in our heads -- a very lonely life, just you being, thinking in your head -- but he says we have always thought out in the world with tools, whether it's the old greek shepherd who can't count but takes one stone for each of the sheep he is watching, and then when they come back he drops a stone for each. he is counting with his hands and stones, and he is smarter with the stones than without. likewise, a physicist can't do her work without a whiteboard. does that mean she is stupider because she can't work without a magic marker? no. because we think in the world with
tools, and you have to look at the whole system. me with my navigator, i'm way better at navigation and smarter at direction than i was before. so it may well be that we lose our ability to make those sorts of decisions because we outsourced them to machines, but that is okay. we are not lonely thinkers stuck in our heads. thinking is something we do with our hands and our tools, and they are making us smarter, and if it's machine learning in a smartphone, that will make us smarter. i need to say that i work part-time as a writer in residence embedded at google, in a machine learning research group that is responsible for the marvel you talked about. they're three desks over, a fantastic group of people. so nothing i say represents anything that google might believe, that's for sure. >> i think we'll see a balancing
act going forward. books stole our memory also. google does too -- i don't have to remember anything anymore, and i was never good at it. we'll balance the positives and negatives. the problem is, where do we do that? i think we're in a phase right now where we're looking at all the negatives. i sound like an optimist, but if machine learning means we make more money or save lives, through automated cars or disease detection, then i think we'll say, good, and we'll welcome our new masters. but we'll see. another question before we end. anyone else? >> more of a thought. i read goleman's social intelligence, and it talks about car culture changing the community, because people no longer walk through the town square, they're cruising through
on their own. i don't know if you touched on that -- we think the advent of tech is ruining how humans interact, not necessarily how they operate in a tool setting. does your book touch on that, the depreciation of the interaction of humans? >> so, it's a really good question. to repeat it: people are concerned, you point out, about the degradation of human interaction because of this technology -- that we are worse at it and don't do it as much. does the book talk about that? is that a fair summary? it touches on it. it's not at the heart of it.
let me talk about this briefly in terms of a.i., machine learning, where there are very real concerns. there are cities that turn to machine learning in the hopes it can solve all questions and answer all issues: all we need is machine learning and we'll fix it. there's a lot of positivity about that -- it's amazing technology. but even when machine learning can help, there's been, thankfully, a fair bit of recognition and pushback on the idea that machine learning by itself fixes anything, or that technology in general by itself fixes anything, and especially with machine learning. it is a product of humans in very important and direct ways. machine learning learns from data, and the data represents humans, including in our biases -- and we choose which data to put
in. say you are designing a medical system that warns people if they're coming down with diabetes. you feed in all the data, the medical records, and it starts to make correlations, and it turns out it's able to make predictions about your health that your human doctor cannot, and these predictions are accurate. if it says there's a 65% chance, then there's a 65% chance, so the 35% of the people who say, i didn't come down with diabetes, that prediction was stupid, are missing the point. 65% is 65%. these systems have humanity all the way through them: in choosing which information to put in, in which algorithms we use -- it is art and science. we can tune it and make decisions. even beyond that, the use of
these systems -- i think this is a really important point that many people are making now, some very effectively -- you have to look at the systems in the larger system of our lives: in the case of the city, the follow-on effects, who it affects immediately, the unexpected effects it's going to have, because all the places where we use, in this case, machine learning are themselves systems. they're complex, chaotic systems which are subject to large effects based upon small changes, so you can't just drop this stuff in and walk away and say, problem solved. you have to be really thoughtful and careful, consulting the community seriously. you have to be listening to the community that's being affected by this, and the community next door, and the community being ignored, in order to decide whether this thing is
even working. it may look like it is, but it may be having terrible effects on your city. a very small change can matter. that systems thinking is a crucial part of the change we are going through. it makes life way more complicated, but it's really important. >> i think the other thing that is implicit in everything you write about here is data, and data is -- or data are -- our friend. information is knowledge; it's proof, evidence. that's all good. but then you hear arguments about probabilities, and you feel left out. if you have prostate cancer, you hear the medical community arguing that it really doesn't pay to test, that it doesn't save that many lives, so it's not worth it. in aggregate that may be true, but i only care about my nether regions and whether i have it there or not.
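the tension between the aggregate statistics and the individual case can be made concrete with a base-rate calculation via bayes' rule. a quick sketch; the numbers are made up for illustration, not real prostate-screening statistics:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), via Bayes' rule."""
    true_pos = prevalence * sensitivity                # sick and flagged
    false_pos = (1 - prevalence) * (1 - specificity)   # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: 3% prevalence, 80% sensitivity, 90% specificity.
ppv = positive_predictive_value(0.03, 0.80, 0.90)
print(round(ppv, 3))  # → 0.198
```

in aggregate, roughly four out of five positives here are false alarms, which is the kind of arithmetic behind "it doesn't pay to test" -- and exactly the arithmetic that feels beside the point when the positive result is yours.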
>> so we have to re-assert our humanity. that may be: the probabilities say this, but we don't like it, and we are going to enter into a political debate about it. that's where the political process of conversation, i think, enters into this. we have information to inform our decisions, but it may not settle all of the implications; those we must assert. any other questions? let me ask one here, if i may. i will go back to the gutenberg parenthesis. two scholars at the university of southern denmark came up with the notion of the gutenberg parenthesis, which says that before gutenberg, knowledge was passed through memory, and there was little sense of ownership or authorship. the business model was simple: one scribe, one book, one year.
the aim was to preserve the knowledge of the ancients, because we're dummies. then along comes gutenberg, and now things are contained in a product called the book, an alpha and omega, a beginning and an end. we cognate the world differently. the line -- this sentence is an example -- becomes our organizing principle. the business model is clear once copyright is invented. the aim of that textual world is to honor the experts: the doctor wrote a book, and we honor her. now we're at the other end of the parenthesis, and there's less of a sense of ownership and authorship. and the aim of all of this is not to preserve the knowledge of the ancients or honor the
experts but, as david said, the smartest person in the room is the room itself. it is the network that connects our knowledge, that makes our connections happen. so i want to read one section about books. why have we so insisted on turning complex histories into simple stories? marshall mcluhan was right: the medium is the message. we shrank our ideas to fit on pages sewn in a sequence that we then glued between cardboard stops. books are good at telling stories and bad at letting knowledge tangle, as all knowledge does when we let it. but now the medium of our daily experiences, the internet, has the capacity, the connections, and the engine needed to express the richly chaotic nature of the world. this comes at the price of the comfortable illusion of comprehension, as artificial intelligence has been teaching
us. so, last question for you, my friend, david: as you're surrounded here by books and texts, what happens to us? [laughter] >> how would i know? one might point out that you read that from my book, so there's a complex answer to this. part of it boils down to a single word -- hypocrisy -- and it comes down to recognizing that books are a genre, a form of order that has value because of its limitations. so books seem to have a great deal of persistence. we feel very at home in a book store, eager to browse and buy books, and i don't see why that is going to stop.
books and i don't see why that is going to stop. certainly i hope it doesn't stop before people buy my book. >> so on that night we want to mention you doing that and get its signed by the author. i want to thank our hosts and david wineberger and you for your kind attention. thank you very much. [applause] [inaudible conversations] here are some of the current best-selling nobody fiction books according to pal's book in portland, oregon:
booktv asked representative buddy carter: what are you reading? >> well, two things. i just finished a book i enjoyed very much. it's written by thomas carlyle. it's a book -- actually a biography of a fictional philosopher -- a very interesting book, and i'm studying it now. it's pretty heavy but i reay