National Security Commission on Artificial Intelligence Conference - PART 1 CSPAN November 7, 2019 11:15am-11:56am EST
and now a discussion on artificial intelligence and national security. former and current executives from google are among the panelists discussing the importance of public-private partnerships in developing solutions to challenges. it's hosted by the national security commission on artificial intelligence. it's about 40 minutes.
i hope everyone had a good lunch or is busy finishing up an excellent lunch. i'm joined by two close friends of mine. and i'm probably the only person who can say this in the entire world: i work with and for both of them. i want to make sure i disclose my conflict of interest to start with. so general shanahan went to michigan. >> go blue. >> rotc. entered the service of our country in 1984. he's been promoted a gazillion times. he's worked on a whole bunch of activities, and eventually we needed somebody operationally to implement ai, and he was the perfect choice. i worked with him in my role as chairman of that. kent walker was a federal prosecutor, a law-and-order federal prosecutor, who then chose to come to silicon valley and i think worked at ebay for a while. we, being google, snagged him maybe 15 years ago. >> coming on. >> every day together. and during that time, not only did he set up our legal function, but he's now in charge of all of global policy, pr, all those sorts of things together. so very, very significant players. and what i thought we should do, since you all have heard from me plenty, is simply start, and perhaps, kent, we should have you make some comments about the world as you see it today. >> sure. so thank you very much. general shanahan, it's a pleasure to be with all of you. the topic of today's panel, public-private partnerships, is
extraordinarily important to me. i grew up in this community. my father was in the service for 24 years. i was born and spent my life on u.s. military bases. my father finished his career at lockheed. i feel a profound commitment to getting this right. i want to make sure the defense sector, private sector and universities can work together in the best possible way. i wanted to take on two issues up front. it's been frustrating to hear of concerns around our commitment to national security and defense. and so i want to set the record straight on two issues. first, on china. in 2010 you may remember that google went public about an attack on our infrastructure that originated in china, a sophisticated cybersecurity attack. we learned a lot from that experience. and while a number of our peer companies have significant commercial and ai operations in china, we have chosen to scope our operations there carefully. our focus is on advertising and work supporting an open source platform. second, with regard to the more general question of national security and our engagement in the maven project, it is an area where it's right that we decided to press the reset button until we had an opportunity to develop our own set of ai principles, internal standards and review processes. but that was a decision focused on a discrete contract, not a broader statement about our willingness or history of working with the department of defense and the national security establishment. we continue to do that and are committed to doing that. that work builds on national security generally. it's important to remember that the history of the valley in large measure builds on government technology, from radar to the internet to gps to some
of the work on autonomous vehicles and personal assistants that you're seeing now. just in the last few weekends we had an accomplishment which moved the science forward. that was not an achievement by google alone. it built on research that had been done at the university of california at santa barbara, benefited from research scientists at nasa, and was carried out on supercomputers from the department of energy. those kinds of exchanges and collaborations are really key to what has made american technological innovation as successful as it's been. and just as we feel as though we're contributing to the defense community, the national security community, a lot of that community is a part of google. we have lots of vets who work at google. we go above and beyond to make sure the reservists working at google can complete their military service while having thriving careers. we try to make sure vets transition to civilian life and can make the best use of their military skills in the private sector. we also are fully engaged in a wide variety of work with different agencies. with the jaic we are working on a number of initiatives from cybersecurity to health care to business automation. with darpa we are working on a number of projects to ensure the robustness of ai, identify deepfakes, and progress past the end of moore's law with better software and hardware interfaces. as we take on those kinds of things, we're eager to do more. we are actively pursuing additional certifications that will allow us to more fully engage across a range of these
different topic areas. we think that's extremely important. we think there's a great partnership to be had. the dib announced last week their ai principles, which i thought were very well done. it's a thoughtful document, and it continues the groundwork that was laid by the department of defense back in i think 2012 with directive 3000.09, which talked about the role of humans, the charter of the jaic, the work that d.o.d. has done with its own ai principles. in the private sector we too have been trying to drive forward. we've put out principles in overlapping areas. there's a lot in common. safety, human judgment, accountability, explainability, fairness are all critical areas where different actors in the space each have different things to contribute. and i think that's critically important. this is a shared responsibility to get this right. as the dib report notes, we need a global framework, a global approach to these issues. endorsing the oecd work is extremely important and something we want to support. we're working together to figure out where the complementarities are. we are a proud american company committed to the defense of the united states, our allies and the safety and security of the world. we are eager to continue this work and think about places we can work together to build on each other's strengths. >> thank you, kent. general, take us through what you're up to at the jaic. >> so first of all let me say thanks. it is great to be here and i thank both erik and kent for the opportunity to do this. admittedly, though, i will say i'm a poor substitute for the chairman of the joint chiefs of staff, although that comes with a lower probability of generating a sound bite.
this is undoubtedly the first and last time i will serve as a warmup act for dr. henry kissinger, but hang on for the main event. i not only welcome but relish the opportunity to have this broader conversation about public-private partnerships. when you ask me to reflect back on my two years as the director of project maven and about a year as the director of the jaic, there's one overarching theme that continues to resonate strongly with me. it's the importance, and i would say the necessity, of strengthening bonds between government, industry and academia. this was said this morning, you brought it up and others had also mentioned it, this idea that the relationship should be depicted as a triangle, and it should be in the form of an equilateral one: government, academia and industry. that's largely the form it did take beginning in the 1950s and lasting until the early part of this decade. walter isaacson writes about it in his book. it is what really drove silicon valley. it's not the case today. at best the sides are no longer equidistant. you might say they are distorted or a little frayed in addition to being different lengths. the reasons are complex. snowden, apple encryption, mismatched operating tempos and agility, different business models, general mistrust between the government and industry. we started talking past each other. the task is made much more difficult today by the fact that industry is moving so much faster than the department of defense, in fact the rest of government, when it comes to the adoption and integration of ai. we're playing perpetual
catch-up. some employees in the tech industry see no compelling reason to work with the department of defense. even those who want to work with d.o.d., which i say is far more than is sometimes portrayed, and i'd put everyone in this room in this category, we don't make it easy for them. so i would just reinforce some of the themes that are in the security commission's interim report. and that is this idea of a shared sense of responsibility about our ai future. a shared vision about the importance of trust and transparency. our national security depends on it. and even for those who for various reasons still view d.o.d. with suspicion, or who are reluctant to accept that we are in a strategic competition with china, i hope they would agree ai is a critical component of our nation's prosperity and self-sufficiency. no matter where you stand on this, i submit that we can never attain the vision outlined in the commission's interim report without industry and academia with us together in an equal partnership. there's too much at stake to do otherwise. we are in this together. public-private partnerships are the very essence of america's success as a nation. not only in the department of defense, but across the entire united states government. so the message we want to send today is we have to get this triangle back to what it used to be. >> well, thank you, general. i think i'm going to ask a couple of questions to both of you. let's start with the same one to both. kent, talk about maven some more. >> sure. i think it's no secret that we came up as a consumer company.
we are quickly evolving into also becoming an enterprise company, putting a lot of resources into that. but there are different protocols and ways of engaging. as we go along, i'd be lying to tell you all of our employees have an identical view on these issues. they don't. in some ways that debate and discussion is a positive as well as a negative. in many ways it's not only in our dna but in the dna of america. you can argue that that kind of debate is america's first innovation. look at great research scientists like richard feynman, who was one of the leading thinkers and a freethinking guy. we think out of that comes incredible strength. if we work together well we can actually have a more robust, more resilient framework that helps build social trust as well as a framework that works for the world. so we put forward our ai principles and governing processes, because the principles are the easy part. as the dib report notes, the report devotes a couple of pages to the principles and a long section to the implementation, because you quickly discover that a lot of the hard problems arise when the principles conflict. we've had debates about whether to publish a paper on lip reading. >> say that again. >> we have had debates about whether to publish a paper on lip reading. it's a great benefit to people who are hard of hearing around the world, et cetera, but you can imagine it can be misused for surveillance. we determined it was appropriate to publish because the technique worked reliably only in one-to-one settings, not at a distance. but it's an example of the kind of decisions we have around issues like lip reading or
facial recognition or other challenging questions where we have to come to terms with the reality of the trade-offs that we're making. we think there's room for collaboration on cybersecurity, logistics, health care, many other areas where we're engaged with the military. >> general, same question, tell us about maven. >> our intent was to go after commercial industry. erik and the dib had told us this is where the systems exist. our approach was a simple one. we wanted everybody in the market, from a small startup of 15 people, which is one of the companies we got on contract, to the biggest internet, data and cloud companies in the world. one of them happened to be google. why did we go to google with project maven? we wanted to take the best ai talent in the world and put it against our most wicked problem set. an extraordinarily difficult problem to go after. we did a successful collaboration with the google team on this. what was happening internal to the company and how that played out is a little bit of a different story. but we got all the way to the end of the contract and we got products that we were very pleased with. it was unfortunate for some of the software engineers; they almost felt a little bit ostracized because others criticized them for working with the department of defense. but day to day, we got tremendous support in maven from google. what we found, though, and this is the critique on both sides, is we lost the narrative very quickly. part of this was that the company made a strategic decision not to be public about what they wanted to do. our approach was that we were willing to talk as much as the company wanted us to talk.
we'd do whatever the market would bear. we didn't want to get into operational specifics. this was a project doing reconnaissance on a drone that had no weapons on it. it was not a weapons project. it is not. but what happened is we started hearing these wild stories and assumptions about what project maven was and was not, to the point where if you google it today, no pun intended, the adjective controversial has now been inserted in front of maven. it was not controversial to us or the team or anybody right now beyond some people who just don't like what we're doing. i guess to bring it all the way full circle, and this is an interesting point i've thought a lot about and i'm not sure everybody appreciates or agrees with me, i view what happened with google and maven as a little bit of a canary in a coal mine. the fact that it happened when it did, we've gotten some of that out of the way. you've heard kent talking a little bit about a reset here and how much of the company and all the other companies that we deal with want to work with the department of defense. i think that's important. it happened. it would have happened to somebody else at some point, but this idea of transparency and a willingness to talk about what each side is trying to achieve may be the biggest lesson of all that i took from it. >> it's a real tragedy that we don't wear hats anymore, because i could borrow three hats and show you which hat i'm wearing. with my dib hat on i can tell you when i met general shanahan the real problem was we take these exquisitely trained people and put them in front of mind-numbing tasks. they watch screens all day. it's a terrible waste of the human asset that the military produces. there's a huge opportunity to try to get them to work at a higher level. and that's why the dib
recommended the maven project and the joint center for ai, which you both stood up and now head. another question for both of you, which has to do with ethics. in the middle of the kerfuffle that went on inside of google, kent had the good idea of a formal ai proposal. he drove an ethics process and we produced a remarkable document -- now i have my google hat on -- which i think was quite definitive. maybe you can talk about that. similarly the dib produced a proposal to the military, and i believe you are the customer for the proposal we wrote on military ethics. i assume both of you are in favor, since kent wrote the first one and all the other companies have now copied variants of your approach. what are the consequences of these ethics things? does it really work? for example, does google prevent -- does google turn off things or stop doing things, like in the last little while? i mean, how does this actually work? and same question for you, general. there are people who have claimed that the military won't operate under ethics principles. in our report we cite the rules that the military is required to operate under. >> i think as the general noted, having frameworks in place, both the set of principles but also the review processes and escalation opportunities, is a critical part of internal and external transparency. it's right, among our principles we've talked about surveillance concerns. we want to make sure that the recognition tools and image software are deployed in appropriate ways. we want to make sure we know the
scope of the project that we're developing, and when we're licensing that for commercial uses, have a sense of the direction of travel. i think that's valuable for both sides in terms of making sure expectations are clear but also building trust internally and across society. another example would be when it comes to general purpose apis for facial recognition, where you don't know what use is going to be made. until we develop more policy and safeguards, we're going to be very cautious about proceeding in that area. another example, when it comes to weapons, we have said this is a nascent technology. we want to be careful about ai in this area. we're not pursuing this area; given our background we recognize the limits of our experience. the military is going to be deeper and have more understanding of the safety implications and the like. we're going to continue to work through these. i think there's a lot of -- we're seeing the european commission saying they're coming up with regulations. this will be an interesting exercise as we all pursue a common mission of how we build technologies. >> looking at it from the d.o.d. lens, this may be the best starting point. when you talk -- kent mentioned these areas of convergence. the ai principles probably drive a stake in the ground: do we agree on all of these, some of these? let's get the conversation going. the other part is, and i need to state the obvious, i can tell you with certainty that china and russia did not embark on a 15-month process involving public hearings and discussion about the ethical, safe and lawful use of artificial intelligence. they're not and i don't expect they ever will.
people may question why we're doing it. we've just embarked on this long process to make sure we took into account all of the different voices on the ethical use of artificial intelligence. i would say the product that's been delivered is an excellent product, shaped by a lot of people who have spent time and attention against this. i've said this in other settings: in over 35, 35 and a half years in uniform, i have never spent as much time on the ethical use question. the department of defense actually has a long and commendable history, despite flaws along the way, of looking at the use of emerging technologies. there are differences with artificial intelligence. what the dib report does is start with here is what's similar to every technology, here are some areas that may be different, and here are some substantive differences, like systems that work on their own. we have a way of looking at this no matter if it's artificial intelligence or any other technology; history, processes, our approach and training are in place to look at a technology and how we bring it into production. now that this report has been presented to the secretary of defense, i get two questions. one is, what do you think about the report? it's excellent, provides the best possible starting point. number two, what are you going to do about it? that's complicated. we have to come up with an implementation plan. it will be department-wide, taking these recommendations, putting something together through my boss, the chief information officer, and making some recommendations on how we implement this across the entire department of defense. that is not an overnight task. we now have an outstanding starting point. >> so that's a wonderful framing
for where we are. i'd like to push on this. openai developed technology which would allow arbitrary writing of text that was sufficiently good that they became concerned, and they didn't release it except in certain models and to certain researchers. i asked them -- i said, did anyone put pressure on you? they said no, we thought it was our good judgment. you said earlier you're going to avoid the facial recognition thing as a general idea because of the dangers. where will the industry end up on this self-restraint thing? is it going to be a common set of principles? is it going to be a common ai ethic of being careful? how will this play out? >> i think you already see some efforts to work across the industry, with the partnership on artificial intelligence, to exchange information on some of the work being done. it's going to be an evolving question as we develop more infrastructure, about the appropriate limits, the appropriate safeguards and checks and balances, throughout a whole variety of different areas. i'm hopeful that with a common groundwork, the way that we have started to lay already, we're on the path to doing that. this is true of any new technology. with communications platforms from radio to television to the internet, you've needed new regulatory infrastructures, new social conventions about how you use these. this is powerful. we are at the early days. i think it's understandable that you're seeing a variety of views come together, but notable that you're seeing the degree of convergence you're seeing. >> so general, you have talked inside the pentagon about this notion of a new kind of warfare.
and i think the term that you all use is algorithmic warfare. take us through, in the same sense kent talked about this new emergent thing, what's new and powerful about this in a military context, with your long experience and understanding of how the military frames this? what's the language and positioning? >> i go back to when we were formed. then-deputy secretary of defense bob work was in the room, i'll never forget it, it was like yesterday, designating us: you are now formed as the team that's going to figure out how you actually field ai. get away from the research piece of it, which was all happening wonderfully behind the scenes. now we needed a team focusing on the warfighter. the name he gave us was the algorithmic warfare team. it's become so much easier to say. >> your acronyms are going to kill me. >> let's focus on -- >> why don't you tell us what it is? >> we're going to face a fight in the future. we're used to fighting, over 20 years, a certain type of fight. we are going to be shocked by the speed, the chaos, the bloodiness and the friction of a future fight in which this will maybe be playing out in microseconds of time. how do we fight it? it has to be algorithm against algorithm. it's a boyd ooda loop: how fast can you get through it? >> remind people what that is? >> colonel john boyd described how you get through the cycle of decision making; it was more about the observe and orient phases. in the future, this will be happening so fast. if we're trying to do this with humans against machines and the other side has machines, we're at an unacceptably high risk of losing. this is a challenging one, because i think part of what you're getting at in that future scenario is how people are going to be assured that our algorithms are going to work as intended and don't take on a life of their own. what we will fall back on, and this is the starting point, is test and evaluation, validation and verification. we have to do a lot more work on the front end, by the time systems are fielded, so we know what's being fielded. if we think we're going to be pure human against machine, it will be human and machine teaming on one side and the other, but the temporal dimension, this fleeting superiority that you may be facing, it might be algorithm against algorithm. >> to me the key question is, what
happens when the whole scenario is faster than human decision making? because i understand the way the military works: when there's a threat, in general people check with their superior. there's human judgment. it's built around some number of minutes, not some number of nanoseconds. how will the military adjust its procedures to deal with this real possible threat? >> it won't be driven from above. the innovation will happen at the lowest level. what we have to be able to do in places like the jaic or maven is give people the policies and authorities to do what they need. the people who will say i have a solution, i'm going to write code, develop an algorithm; if you give me the things i need, we can do that. it's the idea that in that fight it will be more decentralized than a lot of people are comfortable with today. and that brings risk with it. we're talking about higher risk, higher consequence, but it's either that or risk losing the fight. it's this idea of decentralized development, experimentation, innovation. the innovation, as was described in one of the panels this morning, happens at the bottom. we've got to give them the push from above to make it succeed. >> in addition, there are new fronts in cybersecurity and defense. we're seeing already efforts to destabilize with disinformation campaigns and the like. the more we can work together across a wider battlefield, the better for everybody. >> kent, do you have a model for how the industry -- one of our themes is that the industry and the government need to work together broadly. obviously we have a senior general here, but i'm referring to the government as a whole. there's a lot more than the d.o.d. that needs ai. do you have a model for how the industry should work with the federal government, the states, the d.o.d. and so forth? >> i've already talked about two important elements that the dib report talks about. the first is the notion of trying to build broad trust. the second is the need for a global framework. the third, as general shanahan alluded to, is a more operational, administrative question of how do we make it as easy as possible for new companies to enter into these partnerships? a lot of the research being done in silicon valley is done not by large but by small companies. it's a rich ecosystem of innovation, and it's challenging for a company of google's size to start to get more involved in that environment. it's doubly difficult for some of these smaller companies. we need to look at modernizing procurement, from the military side and congress as well, to make that as quick and nimble as possible, responsive to new
needs. looking at increasing r&d funding, because that's been a fertile ground for a lot of these collaborative enterprises to move forward. looking at human resources. there are a lot of authorities out there which authorize private sector people to come into the government, but in practice it's harder than you would think. a lot of that hard work on the ground i think is important to making this a success. >> for both of you, because we'll make recommendations that will end up in legislation a year from now, are there specific things we could do that would promote private-public partnership? as you all know, the d.o.d. has diux, the sco and a number of others, there's in-q-tel, the extraordinary help that darpa has provided, the sum of all of that. do you have a model of -- and i'll ask kent the same question -- do you have specific things that would be helpful, that would decrease the friction and increase the cohesion between small companies, large companies, the federal government, procurement, the d.o.d.? >> so much has started to happen over the last couple of years with places like the defense digital service and diu, all of what i call these beginning insurgencies to get things moving. >> these are each small teams of software people inside the d.o.d. that have had an outsized impact in changing procedures in the air force, for example, and things like this. >> that got it started. we have to figure out how to institutionalize it, make it systemic. when you ask about the ways, i'll tell you: talent. bringing in talent from the outside, from academia. jill christman, who's here, has spent time in the government and was working for a startup, 25 years in the valley,
this man comes in and takes a different view of what we're trying to do. we need people coming in for a year or two, then going back out to the outside, and us putting our people out. it needs to scale to the next level to really start to understand what we're each talking about. me going out to the valley and talking to the c-suite only gets so far. it's the peer-to-peer relationships that i think are going to be more important than anything else. >> i agree. we are priming the pump on a variety of really important areas, whether that's training or models and simulation. another important component of this is i.t. modernization. it comes embedded within a larger environment of software that's oftentimes very difficult because you have to get security clearances and appropriate certification for all elements of that piece. there's that combination of successful individual experiments and trial runs to build the familiarity at the peer-to-peer level, but also the systemic change. >> it's time to put to bed the notion that silicon valley won't work with the government. we can move forward and build this collaboration between the public and private sectors. can you summarize the key message you want to offer us? why are you here?
>> we approach that task thoughtfully. we want to be thoughtful and make sure we have clear frameworks and transparency as we move forward. i think that is a mission that the u.s. government and the military share, and we are looking toward the future. >> you're at the top, you're the fellow who's going to sort of make this change happen across 3.2 million people, $660 billion, an enormous bureaucracy. >> one person at a time. it was said on the previous panel you must have the full support of leadership from the very top to show that it is a priority for the department. that's critical but also insufficient. you have to have the bottom-up innovation, the people pushing from below. >> just keep plowing ahead, and with the resources and the commitment of the department behind us, i know we'll get there. >> i think it's worth saying i've worked with kent for 15 years. with my google hat on i'll tell you i could not be more proud of the impact he's had.