
CSIS Discussion on Artificial Intelligence and National Security  C-SPAN  November 10, 2018, 9:10pm-11:03pm EST

9:10 pm
i will keep looking at c-span. that will not change. >> we appreciate that. >> the florida secretary of state has ordered a recount in the u.s. senate and governor's races. the secretary issued the order because both races fall within the margin that, by law, automatically triggers a recount. the recount in the governor's race will be done by machine. in the senate race, there will be a hand recount of ballots that cannot be read by the tabulating machines. the results are expected to be turned in by thursday at 3:00 p.m. new congress, new leaders, watch the process unfold on c-span. democrats reclaimed control of the house. republicans retained majority control of the senate. as the parties organize a new congress, watch it unfold on c-span. now, a look at how artificial
9:11 pm
intelligence is affecting national security. this is just under two hours. >> good morning, everybody. congratulations on getting through the rain. i know it has been wet and soggy, and usually when we have rain like that, 90% of the people skip it. i am very pleased and proud to see everybody here. thank you for coming. my name is john hamre, i'm the president at csis. my role is largely ornamental: to say welcome to all of you. i do have one kind of safety announcement. we do this when we have groups from the outside. i want to say briefly, if we do hear an announcement, and we have never had it happen in the five
9:12 pm
years we have been here, but if we do, andrew will be responsible for you. we're going to exit through that door or this door. they will both take us to the stairs that are right by that door. we will go down to the alley, take two left-hand turns and a right-hand turn, and we will go over to national geographic and assemble there. they have a great show right now at national geographic. [laughter] it's about the titanic. people don't know that discovering the titanic was actually a cover story for a secret navy mission. we lost the scorpion, a submarine. we wanted to find it and learn more about it, so the titanic search was a cover story, if you can imagine. after they had done all of the finding of the submarine, they spent one week and found the titanic. it's a great show. i'll pay for the tickets if anything happens. if nothing happens, you pay for your own ticket. but do go see it. we're delighted to have everybody here and i want to say
9:13 pm
thank you to those making it possible for us to have this conference with you today. it's about artificial intelligence. these are the most often spoken buzzwords that nobody knows anything about in washington. we go through phases like this. i remember when big data was the buzzword in washington. right now, artificial intelligence is kind of a buzzword. everybody is thinking about it. there's really not enough intellectual context to understand what we're talking about. that's what the study is about. in a few minutes, andrew is going to give you a background on the study we are releasing today. a video will introduce it, but this is one of those very interesting questions where an enormous amount of momentum is moving all around us, and what are the governance issues associated with what we are
9:14 pm
discovering, and how are we going to manage it? these are open-ended questions for which we really don't have answers. the purpose of the conversation is to lay out a framework and then to hear from three experts who are going to help us learn about this in a more structured way. before we turn to them, let me ask alan to come up on the stage. [applause] >> hello, thanks for coming out. we do sincerely appreciate the opportunity to be with you and with csis. i just have a few minutes, so let me make a couple points before we get on with the program. we are very proud to be a sponsor of this report and this event. i've had a chance to read an early draft of what you are about to see. i can tell you it is filled with what we consider very important
9:15 pm
and very interesting insights into artificial intelligence and its impact. i'd like to use a few minutes here to tell you why thales felt that sponsoring this project was important, not just to us but to our entire industry. for those of you not familiar with us, we are a very large technology company, european-based but with a global footprint, especially here in the u.s. and north america. we serve five very large vertical markets. we are a large provider of satellites and payloads, including for the space station; aerospace, both commercial and military aircraft and air traffic management; ground transportation -- if you ride the new york subway system or the london underground, you have had a chance to experience our products in action; security, both cyber and physical; and of course defense, where we are one of the 10 largest defense providers
9:16 pm
in the world. in each case, you can appreciate that we address some of the most challenging and complex problems that are faced, those that really impact critical decisions and those that occur at the most sensitive times. in other words, what we're involved with affects lives. now, we realized a handful of years ago that there are emerging technologies that will disrupt our businesses and our markets. they include areas like the big data analytics that john referred to, but also cybersecurity, the internet of things, and especially artificial intelligence. we as a company have made significant investments in each in order to stay in front for our businesses. in the case of artificial intelligence, we're already incorporating this technology to help us solve some important, what we would call, use cases. just to give you one example, as a large collector of satellite imagery data, we now apply
9:17 pm
artificial intelligence to help decipher or make sense of satellite imagery, to detect sensitive or perhaps threatening items among a very large set of imagery data. the same goes for things like airport security, where we're doing facial recognition to help detect potential threats. given the nature and complexity of the problems we help solve and the nature of the customers we serve, our success is going to depend not just on technology but on other topics. we needed to better understand the role of government in artificial intelligence, and issues such as artificial intelligence reliability. these are especially important topics and critical for defense applications. none of us wants to apply artificial intelligence to create the next terminator, for
9:18 pm
example, and these are particularly important topics. we need to better understand the hazards and risks, and most importantly, at thales we want to help shape the conversation around ai because we think it's important. we want to be part of the collective force for good and ethical applications of ai, to really help address our world's most foundational challenges. and so with those objectives in mind, we embarked on this initiative partnering with csis. we felt they would bring and could mobilize the best breadth and depth of strategic thinking and expertise to successfully address this topic at this stage. so from a thales perspective, we look forward to this presentation and panel, and to the continued engagement with csis and all of you. we thank you for coming, and i thank you for the few minutes here this morning. we are going to see the brief video that john mentioned.
9:19 pm
i think it's about three and a half minutes, and i think it will provide you a good summary of some of the initial insights you will hear the panel expand upon shortly. so thank you very much. [applause] ♪ >> artificial intelligence is a uniquely complex field. we have thought about it for centuries, worked toward the modern version of it for over 60 years, and made significant breakthroughs, especially in machine learning, since 2012. compared to what it could someday achieve for us, ai is
9:20 pm
only getting started. investors will spread their bets across the field, hoping a few pay off big enough to justify the rest. ai will deliver the greatest rewards to those prepared to make long-term investments, and investing in ai applications alone will not ensure results or success. ai's advanced capabilities depend on an ai ecosystem. when properly executed, this enables ai to take root, to develop, and to improve on human performance. a fully developed ai ecosystem and the kinds of results that justify its expense don't happen overnight, and in many cases they don't happen at all. most public and private organizations, including the department of defense, are woefully underinvested in the supportive structures that ai depends on. this creates a debt that must be paid upfront to allow for successful ai adoption. until it is tackled, the debt of an underdeveloped ai ecosystem
9:21 pm
will only grow, undermining long-term success. the smartest investors have begun to understand how this ai startup debt must be dealt with in their initial investments across the field. since ai emerged as a practical reality in the past decade, the private sector has dominated ai investment. tech companies are perfecting the painstaking process that made famous leaps like watson possible. meanwhile, commercial adopters have begun to think critically about what can make their ai applications worthwhile long-term. many have seen the consequences of ignoring deficits in the ai ecosystem and have allocated resources accordingly. most government ai adopters start out far more underinvested than commercial users. the department of defense exemplifies this issue. if the nation's strategic goals for ai are to be realized,
9:22 pm
investing in the ai ecosystem must be a top priority. this investment will lay the groundwork for wider government adoption of ai. while government users can and do leverage commercial ai for strategic uses, there are some areas where commercial developers will not invest. the technology required to deliver and verify ai results for national security applications differs from what is expected of commercial ai: in a high-risk area it must be extremely secure, assured, reliable, and explainable. the development of this technology is vital to the national interest and must be fast-tracked. by doing this, the government can also make a critical difference in ai in the wider commercial market. this public sector development could yield breakthroughs in the field. a viable national strategy for artificial intelligence would require investing in the ai ecosystem to pay down debt,
9:23 pm
especially in the workforce, and spreading bets across the ai field. for the public sector, it will be critical to focus on ai reliability. if the government works closely with the commercial sector to drive the technology forward, the u.s. can leverage ai to achieve its strategic objectives. >> thank you again for coming today. john opened by saying he was ornamental. i think i may be a little redundant. we tried to pack as many of our report's major findings as we could into that video. i hope you enjoyed it. it will be out there on the web, on our website, on youtube, and possibly a couple of other outlets for those who want to follow up and watch it again. i am going to briefly run you
9:24 pm
through the top-level findings of our report. there's a lot more there. it's about a 78-page report, so i encourage everyone -- if you didn't get a copy on your way in, there will be more available through the website shortly. let me give you that top-level overview. i want to thank thales for the support for this project. i also want to thank lindsay shepherd, who was a lead author on the report and really the engine behind the project, and two other contributing authors who also worked very hard to make it look and sound good. i also want to thank the attendees at our many workshops on this project. we had six workshops. there was a hard core of about 20 folks who came to most of them if not all of them, and another
9:25 pm
group of 10 to 15 who came for some of the sessions. none of the errors in this report are their fault -- i hasten to say that. they can disown every aspect of it, but their insights were deeply valuable as we went through this process. john touched on the fact that one of the foundational questions when you're going to do an ai study is what you are actually talking about when it comes to artificial intelligence. it can be a meaningless term, depending on the level of knowledge of the person talking about it and the problem they are trying to bring a solution to. there's a lot of good work being done on artificial intelligence, so i want to begin by saying we do define it specifically for this
9:26 pm
report -- not because we want to critique or criticize anyone, but because we needed to have a sense of what our scope was to do a useful project. our focus was narrow ai. we didn't try to get into questions of general artificial intelligence and the issues and problems that causes, largely because our timeframe was relatively near-term focused: the next 5-10 years. our judgment is that during that time frame, the issues of narrow ai are going to dominate how this field develops, and the significance it has for people trying to implement ai solutions and for government actors trying to understand and capitalize on the technology. by narrow ai, we mean artificial intelligence as a technology that provides problem-specific, task-dependent solutions to cognitive problems. the way that ai operates is very different in many ways from what we would normally think of as human intelligence. in the kind of
9:27 pm
problem solving that a lot of these algorithms engage in, there's little to no resemblance to what we would think of as human cognition, as we have looked at it here. the various technologies within the ai field are not things that are trying to mimic human intelligence; they are trying to perform tasks and solve problems in whatever way they can. we also approached this study with a fairly broad look at issues of ai adoption, investment, and management, and we did a bit of a survey of international activity in ai. it was a very broad study of ai, and i think there would be a lot to be gained by going deeper on each of these topics. what you will see is the
9:28 pm
highlights, in some ways, because of that very broad look. one of the things we tried to do with our first effort was to come up with a conceptual framework for understanding the arc of ai, if you will. how does it progress? how is it likely to proceed? there are a lot of different ways to look at it. there are increasing degrees of collaboration in the way ai operates that lead to higher-order applications of ai over time, we hope. we tried to capture that in a framework that we can visualize. as you see along the bottom, we look at ai capability starting with the very narrowest possible tasks, like a telephone switchboard, which is trying to move communication between two channels or two users in an intelligent and accurate way, connecting the right people at
9:29 pm
the right time, and then build up towards broader and broader, more general-purpose tasks over time. in some cases -- one way to think of it is the ai acting on its own and becoming increasingly autonomous. that is not the only path, and it may not be the most important path, because you have heard senior leaders articulate that we are not at the point where anyone is willing to sign up for completely autonomous execution of critical defense missions. there needs to be a human in the loop. one of the other dimensions is how the ai takes in the context of the problem and of the world in which it is acting: actors, human actors, other ai. they may be collaborators in the process. and so the exposure, or the ability of the ai to move up through this hierarchy of behaviors, means becoming more
9:30 pm
interactive, more collaborative, and then more able to act in a closer approximation of how a human-based intelligent actor would act over time. there is a lot more discussion in the report, and i encourage you to go into it. as for how it informed our work, i would say we tried not to be overly focused on increasing levels of autonomy as the only dimension of ai progress. we have a break in this chart, because there is a long way to go. if we drew the chart without that break, out to some of the applications, you would see we are very much in the early stages of ai development. there is a long way to go to get to some of the more advanced applications that have been discussed and imagined.
9:31 pm
i wanted to revisit this chart on the importance of the ai ecosystem. if there's nothing else we want you to walk out of this room thinking about from this report, it's the importance of the ai ecosystem. this was our biggest takeaway: for all of the importance of the different ways of implementing ai technology -- machine learning, computer vision, other elements of the field -- there is something more fundamental that gets at how this works, how it can be usefully implemented and managed over time. it's this collection of things
9:32 pm
that we have termed the ai ecosystem. there are the people: the workforce that is developing, engaging, maintaining, and using the ai technology. you see the little symbol here for the ability to secure the data on which the ai operates. that is a foundational piece of the tool set -- being able to secure the data, and also being able to work on gathering data and data quality. there is the ability to have a network: having the computing power to process the data and a network to share it, so the critical applications get the data they need when they need it, are able to do the training, and then are able to do the mission-specific tasks. and there are the policies required to actually manage ai. there is a lot of work to be done there. there is a very good report put out earlier this year that looked a lot at policy issues and strategy for ai, and i recommend that to everyone.
9:33 pm
they touch on a lot of things we look at as well. last but not least, the ability to verify and validate what ai tools actually do, particularly for government users with high-consequence missions, many of which we see at the department of defense. the ability to actually validate the performance of ai as it grows and evolves is not only critical but incredibly challenging. in other words, we don't know how to do that at this point in time. i think we will hear more about that in the panel discussion. here, to give credit, we leaned heavily on data from mckinsey, and also on some other work that looked at ai in the federal government space and where it lies. then there is a great report called ai for the american
9:34 pm
people that the white house put out, which summarizes some of the white house investment. one of the takeaways is that there was very rapid growth in ai investment starting in about 2010, and it really started to skyrocket in 2012, as the curve went vertical -- driven a lot by machine learning, as its takeoff as a technique for ai was delivering significant results. in 2016, you see companies across the space, inclusive of private equity and companies investing internally, investing about $26 billion to $39 billion. we also see, in terms of government investment, looking at the broadest category, which is networking and information technology research and development, about $4.5 billion per year from 2016
9:35 pm
to 2018 on average. it was a very broad category. one of the things you can get wrapped up in is what constitutes real ai. we sidestepped that debate, because our argument is that the importance of the ai ecosystem means that in many ways an investment in critical computing capability or critical networking is so foundational and supportive of what you need within the ai ecosystem that it is worth considering as an ai-related investment. we have not tried to split hairs and say, what is specifically ai? what is an algorithmic investment that is not ai because it doesn't meet some criteria? we have chosen not to get into that kind of fine distinction-making, because we think the categorical look is important. we also looked at ai adoption. what does it take? what does a user really want to
9:36 pm
know or need to know, or need to have, to make an effective implementation of ai? there are a variety of areas where you can see ai being implemented. i mentioned that in the commercial space, machine learning has taken off in a big way. we see ai in the financial industry, the insurance industry, and the advertising industry in a big way, especially online. self-driving vehicles are another huge area of commercial investment driving the field forward. on the government side, there is also quite a bit of ai progress and effort, forward momentum of ai going on. we've seen that with image recognition and with unmanned applications. project maven is a relatively well-known government effort which has made a lot of progress in recent years and has seen significant investment and leadership interest.
9:37 pm
the sea hunter unmanned surface vessel has likewise been driving the field forward on the government side, and there has been some pretty innovative work done in the marine corps on the logistics side to try and capitalize on ai. for us, one of the big things that jumped out as we talked through ai adoption was the significant debt that has to be dealt with. this chart says technical debt; it should say technical debt and workforce debt. that debt is at least as much, if not more so, on the workforce side as it is on the networking and computer infrastructure and data housing and data collection side. without the workforce, you really have very little to work with.
9:38 pm
you can gather the data -- but it is not as easy as saying you can just gather the data. there's a lot of work to be done to make sure it is quality data, useful data, and it is the workforce that has to do that work. when we got into ai adoption, what we heard over and over was that the startup debt faced by people trying to implement ai is the biggest issue out there. the specifics of the technology are not what is holding the field back right now; it is the startup debt. we tried to incorporate that into the idea of the ai ecosystem. there are sure to be debts to be paid across all aspects of the ai ecosystem. that is what we hope will be the major takeaway of our look at this effort. we also had a session looking specifically at, if we really have ai in hand and are starting to use it, what are the issues with managing that usage? we did not think of ai as a general intelligence
9:39 pm
entity, but if you think of it as a unit or application, to use terminology within the force, how would you command and control that force? what are the issues that would come with using it to achieve missions in a real tactical and operational context? what we discovered is that there are issues at the tactical level for how you use ai, and also at the operational level -- the middle-manager level -- where a lack of familiarity or understanding of ai can completely stymie the ability to use these tools and frustrate the tactical folks who are young and innovative and able to embrace and grasp this technology. then there is the strategic level: broad organizational-level policies -- in the dod context, dod policies,
9:40 pm
procedures, guidelines, and legal issues. one of the things that came out on that side is that there is a lot of work to be done on understanding intellectual property and the significance, ownership, and licensing of intellectual property in an ai context. when you have algorithms generating intellectual property, how does the management and ownership of that manifest? there is a tremendous amount of work to be done at each of these levels to really make ai useful for high-consequence missions. trust, reliability, and security were something hit hard in the video. for an operational user of ai to truly put it into a high-consequence scenario, they need to understand how it operates: what is it doing, why is it doing what it is doing, and how do we know it will do it in a way we expect, or at least in a safe way, over time? our study also looked at some
9:41 pm
of the international activity in ai. this has gotten a lot of attention, and we dig into it in some detail across a survey of countries. i do not want to run you through a death march of all of them. china obviously stands out as a huge investor, russia as well. one major takeaway from the international look is that the number of countries making significant efforts on ai is vast. there is a global competition going on. the chinese are heavily committed, as are others. what these countries are seeking to do with ai varies very much. there are almost as many different ideas of how to use ai to promote national interests as there are countries engaged in doing it. here we return to the importance of the ai ecosystem. from a global competitive aspect, i walk away from this study not concerned that there
9:42 pm
is going to be a specific technology developed in china or russia that will give them some indomitable advantage that will take them out ahead of the u.s. or others, or vice versa, so much as convinced that the ai ecosystem will be what confers advantage over the longer term for countries engaged in work on ai. then, on recommendations, i refer you to the report for the full list. we think issues of ai trust and security are critical areas for u.s. government investment. the need for and degree to which this is required for high-consequence government missions is, i think, in excess of where the field on its own will be able to go. there is a critical need for investment here, particularly, as i said, on verification and validation, where there is theoretical work to be done to understand how this is even possible.
9:43 pm
ai challenges some of our traditional methods of doing test and evaluation, verification and validation. i have already made our point about the workforce: the criticality of developing and nurturing the workforce, and being able to get the workforce into the organization. there is a lot the commercial industry is going to do in ai development; they will take the lead. but if we think we can usefully use ai on government missions without having this talent organically in government as well as in the private sector, we are kidding ourselves. although there are some strengths in the government's digital capability, which you see reflected in early adopters of technology such as the intelligence community, where there has been a series of investments, there is still a tremendous amount that needs to be done on the government side. lastly, on policies and being able to manage and safely use ai
9:44 pm
in a government context -- in terms of cooperating with the private sector and successful acquisition of software, which ai is -- there's a lot of work to be done. i will leave our slide up on the ai ecosystem and its importance to try and hammer that home. that concludes our broad overview of our findings and recommendations. i want to turn to our panel. they will join me up here at the podium. i will let you know up front, we had four panelists lined up, but unfortunately our representative from ibm was flying down this morning and his flight was delayed and possibly not even going to take off. unless he comes in, "the graduate" style, in the back of the room and hammers on the glass, i don't think he will join us today. i think we have a fine set of panelists, and i will join them at the table.
9:45 pm
>> thank you, ladies and gentlemen, for joining us today for the discussion. i will introduce our panel and then we will move into it. to my left is ryan lewis, the vice president of cosmiq works, which is an in-q-tel lab dedicated to helping u.s. national security agencies, international organizations, academia, and nonprofits leverage emerging remote sensing capabilities and advances in machine learning technologies. erin hawley is vice president of public
9:46 pm
sector at datarobot. she can tell you more about how that works. she works closely with federal government users, and they have a very substantial commercial business as well, using their ai to draw insights from large volumes of data. to her left is david sparrow. he is a researcher at the institute for defense analyses. they were regulars at our workshops, so i want to thank them for that. he has a phd in physics from m.i.t. he spent 12 years as an academic physicist and then joined ida in 1996, where he worked on technology insertion in ground combat platforms. recently he has gone on a deep dive into the challenges of autonomous systems, technological maturity, and intelligent machines, and into test and evaluation, verification and validation of
9:47 pm
autonomous systems driven by artificial intelligence. thank you for joining us this morning. we will start by giving each of our panelists an opportunity to give a few thoughts on their perception of ai challenges. if you reference the report, that is great, as long as the references are good. [laughter] we are looking for all kinds of feedback. then we will get into smaller, more specific questions as we go. ryan, why don't we start with you? ryan: thank you for the opportunity to speak here today. i had an opportunity to read the report over the weekend and loved all 78 pages of it. within the labs, we go one step further and focus on applied research projects. that offers us a unique
9:48 pm
perspective in terms of not just what is happening within the artificial intelligence market in terms of activity and startups, but also in terms of what some would argue is the leading edge coming out of academia or the national laboratories. i think when we compile all of our experiences into one, perhaps the simplest way to summarize what we are seeing is that ai in this general sense offers a fundamental chance for the intelligence community and the military to rethink some of their applications. the key word there is "offers." as mentioned, a lot of these technologies are in their very early stages, with some very early and attractive results. there is still work to be done. it opens a perspective
9:49 pm
of how we think about applications both near-term and long-term for this type of technology. it is important to set the stage because that comment is in stark contrast with all the hype we hear around ai in general. how many people here have heard of computer recognition pattern ?onference if you have not heard of it, it sold out faster than washington capitals playoff tickets. that should be startling to all of us in some ways because the question is why? wire people so excited to go to a conference that just a few years ago was not heard of. the reason is because for researchers, they are just now having the opportunity to have
9:50 pm
niche, focused areas are just now having the opportunity to expand beyond them. when we think about applications from the national security perspective, what that means is: how do we harness some of that excitement? as is well put in the report, how do we design the infrastructure for the ecosystem? more generally, as we look at specific applications, what is the human-machine interface? what does that look like? in some cases that is a user using different tools, specifically robots; in others it is a data scientist building models. what are our expectations for employees, and what do we anticipate the lifecycle of those tools to be? it is a different way of thinking about a problem from a workforce perspective.
9:51 pm
the other piece, from an applications view, is that these technologies are still in experimental stages. this can be at times frustrating, as my colleagues will tell you. what is really cool is that it allows us now, in these early days, to begin to figure out which processes we want to change and which ones we think are strong. i know it is hard to believe, but deep learning is not a solution to everything. i know that is sacrilege in some circles, but it is important to know when these tools should be applied and when they shouldn't. as we go through some of the questions, we can highlight examples. i think the main takeaway is that these technologies allow for early experimentation, which could have drastic effects in certain applications or processes, and may only be tertiary in others. >> next up is erin. i should say, because i want to make sure we're not misunderstood, we're talking about a broad problem for
9:52 pm
government in making effective and useful ai. there are companies doing it, and erin works for one, but there is this larger systemic issue. erin, you can lead us off with some insight. erin: we are thrilled to be here today. datarobot goes back to 2012, when our ceo was working in the insurance industry. he is a data scientist. many data scientists, including some who are currently working at facebook, google, and amazon, were spending weeks to months to develop some of the strongest algorithms in the world to help predict some of the things that could happen in the business place. our ceo looked at these competitions, where every data scientist or data analyst who was interested in partaking in the contest, whether it was from allstate or netflix, worked on a certain
9:53 pm
challenge. he and his partner realized -- even working for this insurance company -- that it takes weeks and months to develop a single algorithm. that is just too long. we will be beaten day in and day out by china's and russia's advances if we don't try and take a step ahead. thanks to in-q-tel and their investment, datarobot was formed. the idea behind it was: we need to make a software platform that takes the pieces of what data scientists bring to the table -- a combination of really strong domain expertise, a strong background in computer science, and being a very strong mathematician and statistician -- and brings them together in one platform. so rather than taking weeks to months to answer a question you need information for, instead of being
9:54 pm
limited to a single algorithm, we offer the chance for people to put data in and generate hundreds of different models within a few minutes, versus the weeks or months we do not have. with the folks we have in our organization and the investments we have across the community, instead of putting a product out to market immediately, the executive team decided to take the first three years and $30 million to make sure we built the strongest platform, which is what we view as automated machine learning. we sit under the umbrella of artificial intelligence: machine learning, natural language processing, deep learning. in doing that, it is fascinating some of the things we have been able to do, especially in the commercial industry. i started the federal public sector team a few years ago, and we are seeing some nice ways to get started for both the military and intelligence communities.
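to make that concrete, here is a minimal sketch -- generic scikit-learn code, not datarobot's actual platform -- of the search that automated machine learning does for you: fit many candidate algorithms on the same data and rank them on a common score.

```python
# a minimal sketch of an automated model search: try many candidate
# algorithms on the same data and rank them on a shared metric.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

# score every candidate with 5-fold cross-validation and rank by auc,
# producing the kind of leaderboard a platform would show in minutes
leaderboard = sorted(
    ((cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean(), name)
     for name, m in candidates.items()),
    reverse=True,
)
for auc, name in leaderboard:
    print(f"{name}: mean auc = {auc:.3f}")
```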
9:55 pm
in commercial, it is really outstanding to see what we are able to do to help across a variety of markets, including banking. money laundering is one of the biggest risks to our financial and economic future. the fact that we are able to identify money laundering schemes before they get started has saved banks, and those of us who are consumers, hundreds of millions of dollars in a short timeframe. what we are trying to do is help people understand what is possible. how many people in this room know how many people are truly data scientists in your organization? you might find one or two within a massive organization dealing with huge volumes of data. when you look at and deploy capabilities like ours, we can help take your one or two data scientists and those massive amounts of data and make solid,
9:56 pm
good answers and solutions for you. what we have been able to do in the public sector mirrors what we are seeing happen day in and day out in the commercial industry. >> thanks. david? david: from the perspective of building these tools and making the investments, this is very much in keeping with the ecosystem approach, and i'm going to bring it way down to what i would call clarification. i'm delighted to be here. this is a real treat for me. andrew said something about my window into this: it is from an evaluation perspective. my entry to ai is as an element of a system of some sort, either technology assessments or test and evaluation and validation. when i went through, prior to getting the report, i asked, what would i say on my own? i will try to compare.
9:57 pm
one of our mantras, or soundbites, at our place is that ai is not a thing. andrew's report says it's a buzzword, but it's not even a technology or set of technologies. it refers to everything from mathematical research on the provability of software to aspirations for the good of society. it is important to keep reminding yourself that you cannot do anything with anything that broad. you have to narrow it down like they did in the report, to purpose-built algorithms, and you cannot lose sight of the bigger issue if you want to have the parts building the ecosystem. the second point i would like to make, which is largely overlooked, is that we don't have anything like a predictive theory here.
9:58 pm
we are doing very useful things, and there is very profound work going on on the theoretical underpinnings, but we are not there yet. this has implications across the board: implications for how you want to do development, and legal and ethical implications. i would class the need for a predictive theory as part of the technical debt that has to be paid down. you can do very, very useful things without the predictive theory. part of what happens if you work on ground combat is you work on artillery systems. we had useful artillery systems while we still thought heat was a fluid, a hundred years before the periodic chart of elements, which told us where energetic materials came from. so there can be much utility without the theory. but i would point out that in the artillery business, when they were still thinking about these things and did not know what the periodic chart was,
9:59 pm
they had a lot of accidents. this gets me to point number three, which is about risk. i like to think about risk in terms of two different things that have gotten a lot of attention. one is alphago, and the other is self-driving cars. alphago is not going to hurt anybody. there are no severe consequences. there is no downside risk. that is incredibly freeing for the developers. they are freed from verification and validation, because if you get something that is hard to validate, you don't care.
10:00 pm
the report talks about shifting tasks from the human to the
10:01 pm
machine. but i think when you do that, aspirationally, you would want to shift responsibility from the human to the machine. and this is going to be important in very, very many ways, particularly for defense, because what it is going to do is impose the need for experimentation that hasn't been done before. we don't know how it's going to work out as we shift the boundaries of responsibility, and there is going to be a lot of experimentation that has to be done. this is already going on in the self-driving car regime in a certain sense: at least what tesla is doing is beta testing the software with real cars that are driving around on the road.
10:02 pm
first of all, recall that tesla gets beat up for low production, the fact that they have had trouble gearing up for production -- and that still means they have made 100,000 of these vehicles and have hit as high as 5,000 in a single day. switch from that to the department of defense. the biggest platform program we have at the moment is the joint light tactical vehicle. at no point did they expect to make 5,000 in a year. so you are in a completely different regime, a completely different learning regime, from what you are doing with your fielding. you're going to have to front-load experimentation if you operate in that kind of space, in a way that the commercial sector, which thinks in terms of millions, doesn't have to. the fifth thing i wanted to talk about was data -- and i promised dr. hamre -- coming out of the
10:03 pm
research area in physics, this is an area which has an underlying theory, and you still have to have 90% of your money and 80% of the people doing the data part. however hard you think data is, it is probably harder than that. i want to close with two other comments. the report labels this area as semantically problematic, which i agree with, and i have two semantic issues to raise. very, very commonly in the community -- and i think the report lapses into it in a couple of places -- trust is treated as an unalloyed good. trusting things that are not trustworthy is not an unalloyed good.
10:04 pm
the psychology field has the term calibrated trust, and it is important to keep reminding yourself that trust going up is not necessarily a good thing. trust going up to exactly the right place -- where you trust the system, you know what missions and what environments it will perform well in, and you also know which ones it will not perform in -- is critical. it is routine, i think it is part of human nature, that people assume the system is going to work and, therefore, trust it. well, no system works perfectly everywhere. so the idea of calibrated trust, as opposed to freestanding trust, is important.
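that idea of calibrated trust can be made concrete: compare a model's claimed confidence against its observed track record. a minimal sketch, with hypothetical scores standing in for a real system's output:

```python
# a sketch of checking whether trust in a model is "calibrated": bin
# predictions by the model's stated confidence and compare each bin's
# claimed confidence to the accuracy actually observed there.
import numpy as np

def calibration_table(confidence, correct, n_bins=5):
    """report claimed confidence versus observed accuracy per bin."""
    confidence = np.asarray(confidence)
    correct = np.asarray(correct, dtype=float)
    # bin index 0..n_bins-1; clip so confidence == 1.0 lands in the top bin
    bins = np.minimum((confidence * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"bin {b/n_bins:.1f}-{(b+1)/n_bins:.1f}: "
                  f"claimed {confidence[mask].mean():.2f}, "
                  f"observed {correct[mask].mean():.2f} (n={mask.sum()})")

# hypothetical scores from an overconfident classifier: the true hit
# rate rises with confidence, but more slowly than the model claims
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 2000)
hits = rng.random(2000) < (0.4 + 0.4 * conf)
calibration_table(conf, hits)
```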
10:05 pm
the other issue has to do with explainability and transparency, and i would add instrumentation to this. one of the things you'll need to do to build these systems is to be able to look into the decision-making processes of the ai. this also very, very frequently gets treated as: well, once we can explain it, it will work, people will trust it, it will be adopted. but one of my favorite lines is that explainability is not a panacea. there are a lot of things people might explain to you which would cause you to reject rather than endorse their position. so with that, again, i'm delighted to be here, and i think my turn is up. host: thanks, david. i've got a handful of questions i'm going to throw at the panel to try to drive some discussion, and then we'll open it up for audience questions after that. i want to start with what i consider some of the unfinished work from the project; there is still a lot more to be done. we started our project with the perspective that if we looked at how ai was actually being used in the commercial space and the government space, it would give us a lot of insight into the areas where progress was going to come fastest. i think there is a lot to that, but honestly it was frustrating trying to do it. one of my assumptions before really digging too deeply into
10:06 pm
this project was that ai was going to be really good and useful at doing things that humans really struggle with. the things humans struggle with are making decisions in microseconds -- i'm thinking of missile defense and some other areas -- and dealing with absolutely vast, unknowable sets of millions of data points. we do see ai making a substantial contribution on that volume-of-data question. but my own perspective is that we don't see ai giving as much assistance yet on these time-critical kinds of things. you see them in the commercial sector in the financial industry, but in defense, not so much, because it turns out a lot of these time-critical things are also really high-consequence things, and we run into this problem that we don't really understand how these algorithms achieve the solutions we are after, and we're not highly confident that they
10:07 pm
will make the right call. so you would think that there are lots of areas where we could make progress with ai, but there are not as many as you would expect. i would like to challenge the panel to discuss the contributions you see ai making in national security missions, and where the momentum is most likely to be in the near term. ryan: so obviously, we invest across a lot of different areas, and we have labs that focus on everything from audio data to cybersecurity. i am inclined to focus more toward geospatial applications, mainly computer vision. from our experience so far, we do see, in the next five to ten years, these sorts of technologies having a fundamental impact on what the industry has called the tcped
10:08 pm
process: tasking, collection, processing, exploitation, and dissemination. what we mean by that is that it's no longer just a process of finding things in an image and then reporting that out; these technologies offer, albeit at a very early stage, a chance to quantify and systematically explore each part of that chain. so, going to the beginning with the tasking piece: do we know what we're asking for specifically? do we know what has a high enough value? think about it from an artificial intelligence perspective -- say a machine-learning model where you want to find building footprints, something we focus on a lot. you want to know early on what sort of resolution you need, what sort of spectral coverage you need, and also what sort of temporal collection you need. it's one thing to ask a person who has looked at this particular application for years; it's another to have specific models tuned for those different types of data.
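one hedged sketch of what "models tuned for those different types of data" can mean in practice -- the names and numbers below are illustrative, not any real tasking system's api -- is to attach explicit collection requirements to each task so a request can be checked mechanically:

```python
# a hypothetical sketch of making each model's collection requirements
# explicit, so a tasking request can be validated before collection
from dataclasses import dataclass

@dataclass(frozen=True)
class CollectionRequirement:
    max_gsd_m: float        # coarsest acceptable ground sample distance, meters
    spectral_bands: tuple   # bands the model was trained on
    max_age_days: int       # how fresh the imagery must be

REQUIREMENTS = {
    "building_footprints": CollectionRequirement(0.5, ("red", "green", "blue"), 30),
    "vessel_detection":    CollectionRequirement(3.0, ("red", "green", "blue"), 1),
}

def can_task(task, gsd_m, bands, age_days):
    """check whether an available collection satisfies a task's requirement."""
    req = REQUIREMENTS[task]
    return (gsd_m <= req.max_gsd_m
            and set(req.spectral_bands) <= set(bands)
            and age_days <= req.max_age_days)

# 0.3 m rgb+nir imagery from last week is good enough for footprints...
print(can_task("building_footprints", 0.3, ("red", "green", "blue", "nir"), 7))
# ...but week-old imagery is far too stale for vessel detection
print(can_task("vessel_detection", 0.3, ("red", "green", "blue", "nir"), 7))
```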
10:09 pm
that for us is really exciting. one of the ways we try to explore this with industry and with the government is an initiative we have launched in coordination with digitalglobe and radiant solutions, and with hosting services from amazon web services, called spacenet, modeled after imagenet. the intent there is that we have open sourced a large amount of curated data -- and i agree with david's comment, the data curation piece is the most painful part -- we host machine learning competitions, and we also work with others to publish open source tools built on that data set. i think one of the things we have been continuously surprised by, with every competition and every data set we release, is that some of our assumptions are always
10:10 pm
challenged. what we think makes the most logical sense isn't always the case, depending on the model. what is even more exciting, and sometimes frustrating, is that the results will vary greatly between different models. we recently released a blog post that highlighted the difference in machine learning performance on essentially the same data but at different nadir angles. looking at building footprint detection from one angle, and then at the exact same area from the other side, where you now have a shadow effect, performance varies greatly. and this is very subtle. this is one input: you are looking for building footprints in one resolution type, and you have two different images of the same area, yet your performance is very different. this extrapolates out with more
10:11 pm
sources and greater search areas. it's these sorts of things that we want to explore across each part of the chain. so when we think about long-term implications in the geospatial domain for something like a.i., it's allowing us to say, what is most valuable for this specific problem, and does it help us answer the question in the most impactful way? from our view, whether you're a startup or an incumbent providing services, what is most compelling right now is still being in this experimentation stage, figuring out what is best and what isn't. there are a lot of lessons being learned that, whether we know it or not, will probably serve as a foundation for a lot of our decision-making going forward. there are a couple of things we would highlight, though, that are critical to shaping that outcome. the first is data -- we already mentioned data and i won't belabor the point, but when we think about key applications in the national security environment, think of a specific question we want answered, whether it's foundational mapping or
10:12 pm
finding a very specific object. having a strategic and dedicated focus around building a core data set is a nontrivial task. ask anyone on our team or anyone on our investment team, and they will tell you that is step one: whether you're building a curated set from real-life information or trying to use synthetic information, that is a critical first step. one of the other things we've seen, without getting too much into the nuts and bolts, is the importance of core tools and some standardization of data formats. just being able to search across different file formats to say what is in this image is still a very tough task. if you look at amazon's open data repository, which is really rich in terms of mostly government-provided satellite
10:13 pm
data from nasa and from noaa -- and our spacenet data is also hosted there -- right now one cannot seamlessly search across all those different repositories to say, i want an image of atlanta, georgia, which is currently one of our competition cities for spacenet. the fact that an end user can't do that means that right out of the gate, an analyst or end user, regardless of their technical skill, is going to have to step through multiple functions just to put a data set together, to then start answering questions or use tools to figure out which models apply. so if we think about what's key -- whether it be data sets, tool standardization, or just having some basic evaluation metrics that we agree upon for certain questions -- the core focus should be around how we can have these fundamental building blocks, so that we can start asking more complex questions and then have even more complex
10:14 pm
analytical techniques as things mature. host: erin? erin: thank you. i would agree on the geospatial side. here is an example of something we have been able to work on right now. you might think the government is not very far ahead, but there are specific areas where we are seeing really interesting applications, and one is geospatial. for instance, we have a lot of information about isil holdouts, as an example. we have been able to take that data, because machine learning is about two things: training the model, and actually scoring or predicting with whatever model you choose. so in a geospatial example, we were able to take isil holdout locations that we knew about historically, in order to help protect the warfighter and to identify future isil holdouts. we would not have been able to do that if we hadn't collected this information and
10:15 pm
built out machine learning models so that we could more accurately predict where a warfighter may be going that might be a sensitive area. that information is available to us. the government has massive amounts of data available. we need to use that data and build out really strong machine learning models. as far as applications we can see now and in the next five to ten years -- i know this is a hot button for me -- the fact that there's a queue of 740,000 people waiting for security clearances is mind-boggling to me. why is it that indeed.com, which is the world's largest job search engine -- if we were all applying for a job right now, we would probably go to the internet -- can filter your resume, quickly mine it for the information it contains, quickly identify the organizations that are the best fit for you, and throw out the resumes that do not make sense for an organization? why is that same approach not being used in something as critical as national security
10:16 pm
and clearances? there are three areas you could work in. you could bring in all the applications and, using historical data on which ones were cleared and which were thrown out in the past, immediately get a good indicator, as current applications come through, of the folks where we might not need a senior investigator to spend as much time. of these 740,000, there are at least 200,000 who are good citizens: they have not been arrested, they are not at risk. then take the same idea with the applications that do need more time spent on them and more information identified in them, where your investigators would spend a greater amount of time. we should not have a queue of 740,000 applications when the commercial world today is able to do what it is doing across the board with machine learning.
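a minimal sketch of that triage pattern, on synthetic stand-in data since the real adjudication features are not public: train on historically adjudicated cases, then score the incoming queue:

```python
# a sketch of queue triage: train on historically adjudicated
# applications, then score new ones so senior investigators spend
# their time on the genuinely risky cases. all features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# hypothetical features per application: prior arrests, unreported
# foreign contacts, years of verifiable history
X_hist = rng.poisson(lam=[0.2, 1.0, 8.0], size=(2000, 3)).astype(float)
# label: 1 if the historical case ended up needing a full investigation
y_hist = (X_hist[:, 0] + 0.3 * X_hist[:, 1]
          + rng.normal(0.0, 0.3, 2000)) > 0.9

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# score a new queue; low-risk cases get the fast track
queue = rng.poisson(lam=[0.2, 1.0, 8.0], size=(5, 3)).astype(float)
for i, p in enumerate(model.predict_proba(queue)[:, 1]):
    print(f"application {i}: risk {p:.2f} ->",
          "full review" if p > 0.5 else "fast track")
```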
10:17 pm
we also see it within fraud. in the commercial industry, our customers today are shaving off tens of millions of dollars by identifying fraudulent claims the minute they come in the door, because of being able to use machine learning and artificial intelligence in their process. that same idea could be applied inside a medicare or medicaid environment. lastly, an example that is going on right now, something we're very proud of, is with homeland security. they are very definitely addressing how they can bring in artificial intelligence in certain areas, and an example would be better safety and passenger security. there's a program, the global travel assessment screening system, which we are part of, and being able to identify high-risk passengers based on machine learning has been something we have been working on with them for just the past, i would say, six to nine months, and the results are pretty outstanding. we are going to be able to share that information with countries that don't have the capabilities that perhaps the united states government does, so that we can provide those
10:18 pm
same models, so that passenger screening globally is easier and benefits the world. i think in general, what we are able to see in the next five to ten years cuts across a variety of spectrums, but there is this big fear at most of the agencies that they have to have all of their data ready to go today. that is just not going to happen. instead of trying to boil the ocean, take data you historically have information on, build out strong models in minutes instead of weeks and months, then go out and make some good, strong, accurate predictions. it is something that is absolutely relevant and available to do today, and that is what we are seeing some of our agencies doing. across commercial, we have thousands of use cases. in federal, i think we will see more as we run across some of those. host: dave? david: so i have a narrower
10:19 pm
interpretation of the question. i think the obvious area in which the microseconds matter is cybersecurity. that's not just a national security issue, it's a national economic security issue, and that feeds back into national security as well. industrial espionage is a substantial threat to national well-being. i think there is a belief that this kind of rapid-timescale capability can also work in combat situations in electronic warfare. i'm not sure we're quite as ready for that, partly because it goes back to the issue of when we are going to be ready to go beyond the human in the loop. and i would add a cautionary remark about that, related to your question about managing artificial
10:20 pm
intelligence. there's a whole lot of artificial intelligence which is lost because of decisions made by 23-year-old programmers in the middle of the night, who have not been in the strategy meetings and are making decisions embedded deeply in code, often tacit, often based on assumptions. there is an issue there, which i think ties back to the experimentation issue, of how you get coherence from top to bottom, because it's even harder in these software-dominated things than in hardware. one of the other issues -- and i will reverse course for a minute -- is that there are places where there's a lot of data, and it's pretty good, or at least plenty good enough. for the department of defense, one of those is the personnel management arena: using machine learning or other techniques on the vast amount of current and past data on the behavior of the uniformed military. when do they leave, how often do they leave, what are the
10:21 pm
predictors -- not necessarily for individuals but for the population as a whole -- and what do they need as incentives, so you've got the right number of doctors and not too many lawyers and all that sort of thing? that's a field which is ripe for exploitation, and it's an area in which, for other reasons, we're already investing in the data curation. we have to keep the data good anyway. so those were the two thoughts i had; those are the opportunities i had in mind. >> so we're actually seeing that same thing. i think personnel management and human capital management is the number one use case for us. we thought it would probably be in the cybersecurity space, but that's a lot like saying artificial intelligence.
there are so many things there that we won't go down that separate arena, but workforce analytics is fascinating to us. the number of agencies asking us to help them identify who is going to retire, and when, is striking. there is one agency in particular, which i will not name, who helped early-retire an entire group of folks, and then they realized five years later that was unfortunate, because those were the russian linguists, or whatever it was they happened to know; it wasn't necessarily russian. then they ended up having to go out and hire a number of contractors in order to fill those roles, so it sort of backfired on them. now they are taking this approach of, let's understand what all the factors are. it is really fascinating. it comes down to, in many cases, they were losing a lot of folks in this one agency not based on age or because they wanted to retire; it was that they didn't have any flexibility. they weren't allowed to work from home. their commute was too long, or it was their boss.
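a hedged sketch of that workforce-analytics idea: fit a model on historical separation records, then rank which factors actually drive attrition. every name here (the file, the columns, the target) is a hypothetical stand-in, and real personnel data would demand far more care.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("personnel_history.csv")   # hypothetical historical records
features = ["age", "years_of_service", "commute_minutes", "telework_eligible"]
X, y = df[features], df["separated_within_year"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# permutation importance asks: how much does held-out accuracy drop when one
# factor is scrambled? a big drop marks a strong attrition driver (commute may
# matter more than age, as in the anecdote above).
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:20s} {score:+.3f}")
```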
we looked at the division level and said, this division doesn't have anybody leaving and this division does, and it ended up coming back to where they were able to put in some changes to the environment that helped. we also find, in the department of defense, to your point, we have been able to help a group in the military understand who the best individuals are for certain special ops roles. why spend months and years of a person's life going through something they perhaps might not be strong at? and how do we understand who the best candidates for special operations are, versus wasting a lot of time and taxpayer dollars going through those processes? so those are examples of current customers that we're working with today, and it is because we have troves of historical information that help us pinpoint that the best special ops person looks like x, y and z, so we can better define future requirements. host: i'm going to hit the panel with one more question, then we will open it up to the audience after that. i will get to you, i promise. i'm going to ask you to talk about our big thing, the ecosystem.
this is something that came up, the idea that we should talk about an a.i. ecosystem, in our first session, but it didn't necessarily translate or impact my brain until we got to the fourth or fifth workshop. it ended up becoming an overarching thing to me that connected our findings on international competitiveness, investment, adoption, all these issues; we were able, i think, to anchor them on this idea of the a.i. ecosystem. you don't have to buy into that framing necessarily, but my question is, what do you think needs to happen in the a.i. ecosystem as we defined it, or if you have a modification to that, feel free to highlight it, in order for the use of a.i. to become something really compelling for people making decisions: that yes, this is a use case i want to invest in, i want to implement in my agency, in my command, in my mission area. where do you see
the critical elements of that ecosystem, or where would you dispute that framing? >> i would say that question kind of brings me back almost four years, to when we first started seeing this at the lab. it comes down to a central question with anyone, whether it's government or commercial customers, that has a very high-consequence mission, which is: what is good enough? more specifically, or put a different way, what are you trying to do? i remember one of our first meetings, and i'll rephrase it so you can have the same sort of general confusion the end user did. we had just released and open-sourced one of our first computer vision models and met with a government end user. we walked up to him and said, what f-1 score is sufficient for you, with an intersection-over-union threshold
between .25 and .5? and the customer looked at us and said, i have no idea what you are talking about. and we sat back and didn't know what to say, either. all joking aside, why is that a good story? it's a good story because at the time, and even now, so much of the work that is really compelling in model development is still in what one might call the non-applied realm. so if you are going to write a paper, or even do early testing, you are deeply involved in your metric, in this case an f-1 score. but if you are an end user, particularly in a high-consequence area, maybe you care about that, and about the explainability component, but what you care about more is: does it answer my question? the reality is that different questions require very different fidelity in models, and thus everything that's associated with that, all the way down to the data set.
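for readers outside the computer vision world, here is a self-contained sketch of the two metrics in that exchange. iou (intersection over union) scores how well a predicted footprint overlaps the ground truth; a detection counts as correct once iou clears a chosen threshold (the .25 to .5 range above), and the f-1 score then combines the resulting precision and recall. the boxes and counts are made-up toy values.

```python
def iou(box_a, box_b):
    """intersection over union for boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def f1(tp, fp, fn):
    """harmonic mean of precision and recall over true/false positives and misses."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# toy example: one predicted building footprint vs. ground truth
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143: a miss at a 0.5 threshold
print(f1(tp=80, fp=20, fn=10))              # ~0.842
```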
so for truly compelling examples of deriving value from machine-learning models, i think that question, what are you trying to do, is the first place we always want to start. we have already highlighted some examples of that occurring, and a really good way to illustrate this is: if one is just interested in general building counts after a natural disaster, and we are trying to figure out generally what the level of impact could be, not how many exact buildings, not what the material damage on the buildings is, just give me a count, then that problem seems fairly manageable, with some error bars. if something is higher consequence, with a very, very low acceptability for error, then that's something we need to work on. and i think perhaps one of the most exciting pieces in the next two to three years is going to be fleshing out the entire work flow for applications that have
pretty good models built for them. a really good example of this would be some of the folks we have worked with at a couple of different organizations, including a company called development seed. if you look at what they're doing with humanitarian openstreetmap, they are thinking about how to integrate model predictions into, in this case, just telling me which tiles are the most complex to label after a disaster. it's still early days and all of that is still in the prototype stage, but as that work flow matures, it is a great use case highlighting not only how a machine learning model is deployed, in this case all open source, but also how humans interact with it. what is the feedback loop if the severity ratings in those chips are wrong, or if they are right? throughout that entire cycle, it will then lay the groundwork for equally compelling work in more complex scenarios where maybe the error rates or the acceptability of error is lower.
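a hedged sketch of that human-in-the-loop triage idea: score tiles with an upstream model, then send the tiles the model is least sure about to human labelers first. the tile ids and scores are invented for illustration.

```python
# confidence, from some upstream model, that each tile contains damaged buildings
predictions = {
    "tile_031": 0.97,   # confident: damaged
    "tile_032": 0.51,   # nearly a coin flip, so a human should look first
    "tile_033": 0.08,   # confident: undamaged
    "tile_034": 0.44,
}

def uncertainty(p: float) -> float:
    # 1.0 at p = 0.5 (maximally unsure), 0.0 at p = 0 or 1 (fully confident)
    return 1.0 - abs(p - 0.5) * 2

# labeling queue, most uncertain tiles first
queue = sorted(predictions, key=lambda tile: -uncertainty(predictions[tile]))
print(queue)  # ['tile_032', 'tile_034', 'tile_033', 'tile_031']
```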
host: erin? erin: what we're finding is that the most important thing is to understand, at a high level, that an agency needs to have senior sponsorship. when you talk about the people that are part of this equation, if you do not have senior sponsorship, if you don't have the person at the highest level embracing the fact that you're trying to embark on some sort of journey with a.i., you are going to fail. so you need that senior leadership. we spend a lot of time doing workshops just to lay the groundwork: a.i. is the big bubble, and within it there's machine learning, there's deep learning, there are neural networks. we are going to try to focus in on what you can do with what we call supervised machine learning. how can you take something, and not try to boil the ocean, but
take a small subset of something you are trying to do, that you really believe you want the answer to? having that senior sponsorship is incredibly important. from there, you need a business owner at the next level who understands the data. we are never going to take the people out of this equation; that's the most important thing. you have to have somebody who has domain expertise and understands the data better than anybody else, and who, as their senior leader, has their back and wants them to go off and try to accomplish something. then you have your technical folks, your data analysts; they are really strong in tableau and in visualization tools, but they do not have a degree in computer science or in math and stats. trying to find that unicorn is incredibly hard. what you do have is people with domain expertise, and then you bring in the capabilities, whatever tools you're using, and it could be from the data management side all the way through to the consumption side, where you are actually doing your
visualization. what you want to accomplish, in our view, is having that senior sponsorship run through your business level and down to your technical level, and ultimately, when you're building out supervised machine learning models, you need full transparency behind them, because you need to be able to answer: how did you get to this answer? how did you determine that this group of patients is at risk for infection in your hospital because of these factors? you need somebody with domain expertise who can read behind the algorithms and the machine learning that's created and really decipher it. that helps you with your people piece and your transparency, and having that full, open communication plan is really important, as is thinking of some of the other ecosystem pieces. of course, there are the policies that enable it: you need to make sure the right people have access to the information, and that the insights they are trying to gather have the right guidance behind them. we find a lot of times, especially in the intelligence community, there's this big fear that if you have a machine do everything, there's going to be this cross-contamination between secret and top-secret data. so you need to make sure the
policies and the governance are there throughout. i completely support everything in your a.i. ecosystem; what you laid out is really important, but it starts at the people level, and for us it then goes through the trust and transparency in what you're creating and what you are actually going to produce as your results, and having the people tied to it is very important. david: because of the senior sponsorship point, i'm inclined to tell a story from 20 years ago, when john hamre was deputy secretary of defense and i was assigned to one of the organizations. i was one of the advisers on modeling and simulation for jwars at the time, which was an attempt to include logistics in combat modeling.
i was at the front of this horseshoe table, like i'm important, and these two kids come in to talk about the configuration control of the software. they talk for 20 minutes, the lights come back on, and they say, any questions? so i look around at the advisory group, and all i'm seeing is deer in headlights. and i think to myself, i'm the only one sitting at this table that ever wrote code for a living, and i stopped doing it 10 years ago. now, fortunately, the guy who was handling the meeting saw the same thing i did, picked up the gavel, banged it down: with no questions, we move on to the next speaker. [laughter] but nothing that i have seen, and none of the people i've shared this story with, indicate that it has gotten any better. so the senior sponsorship is important, but within the department of defense, we don't have a mechanism to get the
people, even with my level of experience, which is now not 10 years out of date but 30 years out of date, into these positions. i don't see a solution, and nobody has told me one, but you need people who are informed about this in the same way they are informed about budgeting, or about combat, or about aviation issues. in terms of the ecosystem, which is not my natural way to think about the problem, i thought about it in terms of america's historical strength at integration issues, and since we're probably moving into an era of great power competition, we want to think about this ecosystem in those terms. what is the nature of an ecosystem that would support artificial intelligence? what are the elements of an ecosystem that would support the liberal democracies? i don't have an answer; i'm not
a political scientist. the equations i do are way easier. and i don't know where we have the advantage there. i think it ought to be an international intellectual leadership role that we try to take, and i think within our own nation, our own community, we have to encourage broader literacy and try to tighten the terminology down so that the non-experts can actually grapple with the problems as well. but i think a critical issue is this issue that we want to find an ecosystem in which the liberal democracies are competitive. ryan: i would inject one more level of complexity into your comment, which is that, especially in the computer vision domain, but in machine learning writ large, unlike historical analyses of the defense industrial base, if you look at a lot of the work
that's occurring in the machine learning domain, so much of it, both on the tools and frameworks side and on the algorithms side, is in the open source, which is a very different environment than what we're used to dealing with historically in terms of how we think about national power and national assets. it's something that comes up continuously, even just in the purview of our lab, in the sense of what makes sense to open source and what does not. and we continuously come back on the side of being more open, simply because there is still so much early work to be done; it's hard, at least from our view, to determine where we have surpassed a foundational capability such that the work should move into the realm of the proprietary. i know both of you have to deal with that, so i'm just curious about your thoughts on it. david: well, to go back to the
international aspect, the openness is natural to our country in ways that it is not natural to others, and there may be a way to capitalize on that and make it a strength rather than a weakness. but again, i'm punching above my weight class talking about the national political sphere. host: right, let me turn to audience questions. you have been waiting very patiently. we have a microphone that will be brought around. ask one question, keep it brief, make it a question, and tell us who you are. i see one hand here. >> steve winters, independent consultant. i think i will direct this to david. it's just a minor point, but you made a remark comparing self-driving cars, where people could be hurt in an accident, to the case of alpha-go, where nobody is going to be hurt. isn't there an argument to be
made that alpha-go is so much more dangerous? because what everybody drew from the hype over that was, my gosh, this is how you win wars. i mean, new tactics were coming out that the go players hadn't seen in the whole history of humanity. of course, that's a game, a deterministic game, but then you have the a.i. people having a very good result, so can you say something about the danger there? maybe the openness is the danger. >> i accept your suggestion that what i was talking about was physical risk, not intellectual risk. and i think there were elements of hype about it. i think go is an intensely digital game with rules, and in
fact, the rules are the same on both sides, which frequently is not the case in warfare. i don't know; i work at an institute that to some extent was invented 60 years ago to counter hype, so i'm constantly living with the dangers. it tends to be self-regulating over time, but yes, i think there was a tremendous amount of enthusiasm; one comparison made to people was, all we need is curated data on 30 million wars, and we are ready to go. so the scale was very different. that said, it was a very powerful accomplishment, and one that was not expected, even by many in the field. shortly after it happened, my wife and i were driving out in shenandoah, and there was some npr segment we heard for a few minutes where they were talking about this as, the
computer beat the world's best player of go. that is one way to look at it, but i think the right way to look at it is: 300 of the best computer scientists, with unlimited budget and unlimited access to computing power, most of whom were decent go players, were able to pool their resources and beat the best single individual at go. and when you describe it that way, the hype is stamped out. but you made an interesting distinction between intellectual risk and physical risk that i had not made before. thank you. host: ok, i will come here, the blue blazer, three rows up. there you go. thanks. >> jennifer sims. i know virtually nothing about a.i., and i haven't had time to read the report, but i heard the mention that every country has a different purpose in developing a.i.
i heard that china is more advanced than america in this, so i wonder, what do you think is china's purpose in developing this, where are they now, and how is that going to impact or affect the united states? thank you. >> well, from the work we did in the report, i would say there is a pretty extensive application of a.i. projected in the strategy that china has been discussing, as part of broader efforts they have toward seizing the technological high ground in a range of industries. so a.i. sort of complements or supports their efforts along a number of dimensions in their plans; made in china 2025 is one of the documents that describes that, and there are others as well. where i would say there is tremendous strength: they have invested in a number of
institutes focused on a.i. they have recruited heavily for an a.i. work force, some of it folks who have come to study in the united states, gone back to china or been recruited back, and others generated right there in china. they produce literally hundreds of thousands of engineers every year out of their graduate schools and universities, so they have some real advantages there. they have advantages in the quantity of data they gather through constant surveillance of the population, and there are very few limits on aggregating, sharing, and exploiting that data of the kind we have here, so there is real strength there. one element that i think can sometimes be overblown is the amount of money they're putting into it. the truth is, we really don't know the amount of money. there is this $150 billion figure that's out there, but that's a multi-year number, and it's a projection of the size of the a.i. industry that it is their goal to achieve, so it's a little less clear exactly how
much in terms of real currency is being invested in a.i. there's little doubt that it's substantial, and that it's at least comparable to our investment and perhaps stronger. the way we came down in thinking about it was to focus less on specific dollar investments and more on the health of the ecosystem, because our view is that a capability applicable to doing facial recognition at airports, allowing them to monitor people in the population, may not be at all, or equally, applicable to warfare applications that we would consider more important in a battlefield scenario. so it's not clear that there's a transferable advantage from one to the other. but we do think there's a transferable advantage to having a really robust a.i. ecosystem, where you can apply people and infrastructure and policy to multiple different kinds of problems and carry over some advantage.
i still think that silicon valley represents the most robust a.i. ecosystem that we see today. that's a good and important advantage for the united states. it's a perishable advantage, though, so it is not to be sat upon. other thoughts from the panel on that? >> i have one thought, which is that most of these companies think of themselves as international companies. i'm not sure silicon valley is american. it's located here, and that confers some advantages, but it is striking to me that the google employees seem much more squeamish about project maven than they do about the massive surveillance state that is growing up in china. so, is that ecosystem on our side? it's not immediately apparent to me.
host: other questions? here in the middle. >> thank you. federation of american scientists. kind of a segue from your last comment: with regard to artificial intelligence and national security, how is the talent acquisition problem being addressed, when the technology is not as secretive as many other programs, yet government pay doesn't account for the fact that the salaries being paid in silicon valley for top talent are, by government standards at least, outrageously high? i got this from a recent "new york times" article and discussions with a well-known venture capitalist in the valley. how are we going to change that, to bring that talent into the national security arena? thank you; sorry it is a long question.
erin: so we see this challenge all of the time. i would agree that in silicon valley we are in jeopardy, because there is a massive push from china to gather as much information as they possibly can, whether by getting our technology or by having their folks, like you said, study in this country and then go back. one of the things that we're looking at is the fact that it is possible, it's not that farfetched, this idea of creating an environment of citizen data scientists. i'm not a data scientist. i work with four of the top data scientists in the world at data robot. we are a company of about 500 people, and we are trying to hire those folks just like google is, just like amazon and facebook. but the idea behind it is that you shouldn't make the technology, which is one piece of the whole ecosystem, so difficult that people like you and me, who are not data scientists, can't leverage the
benefits from it. like i said, data scientists are that unicorn: so hard to get and so expensive to get that they are especially unlikely to be hired into the federal government, because they can get much higher salaries in the commercial world. so what we are trying to do is create an environment of citizen data scientists, where you have the domain expertise, you understand your data better than anybody, but you don't need the computer science background or the math and stats background to get really actionable intelligence from your data. think about the way you're operating with the internet every day. you're on your phone, and you're not a trained expert in coding; you didn't need to know computer science in order to log into your social media account this morning. that same idea and that same movement is happening within the environment of artificial intelligence. we need to make the tools and the capabilities and this entire ecosystem much easier to understand, through education and through the ability for all of us who have the understanding of our
data to gather the information and turn out actionable intelligence from it, without having to have these massive degrees and very expensive people within your workforce. so we call it a citizen data scientist: just bringing the power down to the common people, if you will. >> to extend that thought, think about capabilities on a spectrum, which erin highlighted very well. if the intent is for the u.s. government to hire individuals who can build foundational networks from the ground up, then yes, that is a monumental task for anybody, no matter the organization. but what's been particularly compelling for us, both to invest in and to see through participating in open source and through the experience of other partners, is that the evolution of tools has been drastic just in the last couple of years. data robot is a great example of this.
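to illustrate the shape of that "citizen data scientist" automation, here is a toy sketch: the tooling tries several model families and reports a leaderboard, so the domain expert never writes modeling code. the dataset is a stand-in, and real platforms (data robot among them) do far more than this.

```python
from sklearn.datasets import load_breast_cancer   # stand-in for agency data
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# the automation: several model families tried with no hand-written tuning
candidates = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# the "leaderboard" a citizen data scientist would actually read
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:20s} {score:.3f}")
```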
a step further back, not fully at the product level, we have seen entry-level tools that help end users who are perhaps not skilled at or familiar with building out their own models still learn how to work with a model. a great example is what aws calls sagemaker. it's one example of a cloud service offering, but essentially these are tools that allow end users to quickly spin up a model and look at some results. it does require some scripting skills, but it is something that we have seen government end users start to work with pretty aggressively. this is compelling because, to erin's point, it starts increasing that literacy drastically. when we started at cosmiq, none of us except for one was a geospatial expert. and the reason i bring that up is that we started where everyone else started, looking at opencv.
this was before tensor flow was open-sourced, and we learned through experimentation. i think what's so great about a lot of these new tools, and the reason we and others contribute to open source, is that it allows those tools to become more robust, and it allows entry-level people, or folks who are interested in learning more, to start with experimentation and then maybe become a stronger end user of a tool like data robot, or build their own models as they gain greater familiarity. david: one of my colleagues, who actually comes out of the machine learning business and spent some time in the pentagon while the joint artificial intelligence center was being set up, basically came back and said, everybody's worried about how you are going to get the very best a.i. people into the jaic, but i don't need the best;
second tier is plenty good enough. what they need are the best contracting officers and the best lawyers in the jaic. and this fits into my mantra, which as a physical scientist i keep reminding myself of: the united states government is primarily a resource allocation organization. when you think of its role in the ecosystem, that's a big part of it, and that's contracting and law and ethics and those things. i think that's an element of the ecosystem that's worth bearing in mind. it goes to the point that you don't need your government users to be power users. so there's an element there, in terms of building the ecosystem, of deciding which pieces of it the government needs to be the best at, and which pieces can just be good enough, because you're not going to be able to be the best at everything.
andrew: i would just add to that from the perspective of our report, to tie it to dave's earlier comment about silicon valley. silicon valley may be the most robust a.i. ecosystem, but it's a private-sector entity and doesn't necessarily report to any nation per se. that's true, and it's one reason why in the report we talk about the government needing an a.i. ecosystem that is organic to it, not to compete with or outstrip silicon valley by any stretch of the imagination, but enough of one to be an intelligent user, to push forward militarily critical applications, and to work with those in industry who want to take on the burden of security and the threshold of trust and explainability needed to do the kinds of high-consequence work the government needs. i will say, from our perspective here at csis, we have a data team that works for me in the defense-industrial group; we do a lot of work on contract data, trying to draw policy conclusions and implications from it, and we have seen in
really the last two years, i would say, a dramatic increase in the availability of young people coming out of college or out of graduate programs with really significant, serious data-analytic skills. so the academic world is out there responding to the call, and from what i've seen, there's a pretty robust market for those folks in the private sector, so there is some room for hope. i'm going to make room for one last question, then we will have to stop. i would like to balance the room, so let me head to the right side, which i haven't touched yet. >> i am from usdi. i have a question about ethics. given the backdrop of a number of private-sector companies leading the way on a lot of ethics writing, and i think of deep mind having their own group specifically dedicated to that, what do you think are some of the main a.i. ethics principles for the national security community, specifically the dod?
>> i'm going to dodge your question. there was a reference to the national a.i. r&d strategy, and as it turns out, while you were putting this report together, they actually put out a request for comments on an update that is under way; the comment period closed just a week or so ago. one of our remarks was that this area in particular was one that required additional attention and an even greater u.s. focus for international leadership. i would tie it back to the values of what we described as those of the liberal democracies. i think we need to run our country, and prevail as needed, on that basis.
>> just one thing to add on that: model sensitivity and model bias, regardless of the application, is a critical issue for any end-user team. even in the geospatial domain, one thing we have to think about, whether it's internal work or work in collaboration with our partners through spacenet, is how we incorporate enough geospatial diversity that the models we or others release can operate in different domains. it's a really niche example, but arguably it translates to a variety of applications. as data sets grow, whether they are open source or in-house, and as algorithms become increasingly benchmarked, that's an important factor that should always be kept in mind, from data generation all the way to the deployment of the end-use application.
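a sketch of the kind of geospatial-diversity check being described: break evaluation out by region, because an aggregate score can hide a model that fails outside the areas it was trained on. the region names and labels are invented (spacenet's public datasets do span cities such as vegas and khartoum, which inspired the toy values).

```python
import pandas as pd
from sklearn.metrics import f1_score

# model predictions joined with ground truth and a region tag (invented values)
df = pd.DataFrame({
    "region": ["vegas", "vegas", "vegas", "khartoum", "khartoum", "khartoum"],
    "truth":  [1, 0, 1, 1, 1, 0],
    "pred":   [1, 0, 1, 0, 0, 0],
})

# an aggregate score would blend these; the per-region view exposes the gap
for region, group in df.groupby("region"):
    print(region, f1_score(group["truth"], group["pred"], zero_division=0))
```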
erin: we look at it the same way. one of the reasons that's so important is the transparency behind the models, being able to see how the result was arrived at. so for instance, we can tell you that in this jurisdiction of ohio, the opioid crisis looks like it will cause x number of deaths next year, and they ask, why is that? and it's not limited to just one model; our platform allows for tensor flow from google, python libraries, it doesn't really matter. what you really want to understand is that your data science community, and the folks who are building out platforms like data robot, look at it this way: you still own your data, and you have to have an understanding of what it is you're providing to the
system to go off and build models from. we had a cio at one of the intelligence agencies say, you all are a lot like switzerland: you don't go out and pick a particular model because you are the company that developed that model. tensor flow is developed by google, but we have it inside our platform, so when we turn out our models, we are spinning up hundreds and sometimes thousands of models, some of which are ensemble models, and then you can open up the blueprint, which is complete transparency behind every step in the process, so you can see what happened.
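a rough sketch of that multi-model, inspectable-blueprint idea in generic scikit-learn (this is not data robot's actual mechanism): combine models from different families into one ensemble, then walk its named steps.

```python
from sklearn.datasets import load_iris            # stand-in data
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[
        ("logit", make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=500))),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",   # average predicted probabilities across the models
).fit(X, y)

# the "blueprint": every named step of the fitted ensemble can be inspected
for name, estimator in ensemble.named_estimators_.items():
    print(name, estimator)
```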
as far as the ethics and governance behind that, that's really very dependent on the organization you're working with and the experts in the data field there. if you look at tools like ours, whatever you provide the system to develop and build the models from, it's something your organization hopefully has vetted and approved before we turn out a result for you. and at the end, your data scientists, or a senior executive, or whoever it might be, need to look at the data results and the intelligence we provided and say yes or no, good or bad, to that answer. andrew: i would just add to that: we already have a lot of ethical policy. we have rules of engagement in a military context; we have requirements that our personnel system generate outcomes relating to diversity or non-bias across a range of dimensions. so we have a lot of ethical policy in place across the national security world. the question to me is, how do you translate that into something the machine can meaningfully comply with? comply with is probably not the right word; can meaningfully address. and in many ways right now we are challenged to measure and say, is this algorithmic output complying with our ethical policies? we have to do some translation: what does that mean in the context of what the algorithm, what the machine intelligence, is actually being tasked to do? it gets addressed in that other report i mentioned, on the national strategy for a.i., that
came out of our strategic technology program; i would ask you to look there. but this is really one of the central challenges, and it's not so much a lack of ethical policy or guidance as how we translate that into something the machine intelligence can meaningfully address. and then there is the point that i think david has really brought home for me on a couple of occasions: if we don't know how the a.i. is doing what it is doing, and we aren't able to say, if we see one outcome in one instance, whether we can assume the same outcome with the exact same inputs, we have a problem. we are going to have to stop there. i want to really thank our audience for sticking with us for a long but hopefully interesting discussion. i really enjoyed the discussion, especially with the panel.
the report is available on our website; if you didn't get a hard copy, or you're watching online, you can download it electronically, and it will look just like this, or you can order one for yourself. we have the video we showed on our website as well. i should have mentioned there is a second video: we did an earlier video in this project, released a couple of months ago, that is more of an introduction to the concept. i'm in the video you saw today, which summarizes the work of our report. i want to thank alan for joining us this morning. please join me in thanking our panel for a great discussion. [applause]
announcer: tomorrow is the centennial of the armistice that ended world war i. president trump and first lady melania trump will attend a ceremony in paris to mark the occasion. french president emmanuel macron will speak. live coverage starts at 4:30 a.m. eastern on c-span. >> i thought about the presidents before the book. it occurred to me there might be something all these presidents might have in common. perhaps they were significant in some way. announcer: this week on q&a, university of north carolina constitutional law professor michael gerhardt talks about two of his books, "the forgotten presidents" and "impeachment." >> clinton did a lot to merit his own impeachment. i think he knew members of congress were looking for him to make mistakes, and when he made those mistakes, and later testified under oath in a way that was false, for which he was later held in contempt by a judge for perjury, bill clinton made his impeachment inevitable. announcer: sunday night at 8:00 eastern on c-span's q&a. now, c-span's interview with u.s. senator orrin hatch of utah. he is retiring after 42 years in office. he talks about his childhood, his love of music, and his friendship with the late senator ted kennedy. this is about 30 minutes. host: senator hatch, the longest-serving republican senator in u.s. history, 42 years in the senate. what has kept you here in the senate all of these years? sen. hatch: i am one of the few
