tv Artificial Intelligence CSPAN3 June 26, 2018 6:03pm-7:53pm EDT
next on c-span 3, house members discuss the potential dangers of artificial intelligence. this hearing runs an hour, 45 minutes. >> the committee will come to order. without objection, the chair is authorized to declare recesses of the committee at any time. good morning and welcome to today's hearing entitled artificial intelligence, with
great power comes great responsibility. i now recognize myself for five minutes for an opening statement. first i would like to note that one of our witnesses, dr. carbonell from carnegie mellon, is unable to be here due to a medical emergency. we will ensure his written testimony is made part of the hearing record without objection. one of the reasons i've been looking forward to today's hearing is to get a better understanding of the nuances of artificial intelligence and its implications for a society where ai is ubiquitous. since the term was first coined in the 1950s, we've made huge advances in the field of artificial narrow intelligence, which has been applied to familiar,
everyday items such as the technology underlying siri and alexa. called ani for short, such systems are designed for specific and usually limited tasks. for example, a machine that is good at playing poker wouldn't be able to park a car. artificial general intelligence, or agi, by contrast, refers to systems that could perform at or above human level across many tasks. if you enjoy science fiction movies, this definition may conjure up scenes from any number of classics such as "blade runner," "the matrix" or "the terminator." agi invokes images of robots. we are decades away from realizing such agi systems. nevertheless, discussions about agi and a future in which it's commonplace lead to interesting questions worthy of analysis. for example, elon musk has been
quoted as saying that ai, quote, is a fundamental risk to the existence of human civilization, and poses vastly more risk than north korea. does that mean that agi may evolve to a point one day when we will lose control over machines of our own creation? as farfetched as that sounds, scientists are certainly discussing such questions. for the short term, however, my constituents are concerned about evolving technologies, cyber security, protecting our privacy and impacts to our nation's economy and to jobs. today i will introduce legislation to help our workforce prepare for the ways ai will shape the economy of the future. i'll also introduce legislation today to reauthorize the national institute of standards and technology, which includes
language directing the development of artificial intelligence and data science. there's immense potential for agi to help humans, to help our economy and to address many of the issues we're dealing with today. that potential is also accompanied by some of the concerns we will discuss today. i look forward to what our panel has to share with us about the bright as well as the challenging sides of a future with agi. i now recognize mr. lipinski for his opening statement. >> thank you, chairwoman comstock. thank you for holding this hearing to understand the current state of artificial intelligence technology. ai's capacity to perform new and more complicated tasks is quickly advancing.
ai is the stuff of dreams or nightmares, depending on who you ask. i believe it's definitely the former and i strongly fear it could also be the latter. the science fiction fantasy worlds depicted on hollywood's big and small screens alike capture imaginations about what the world might be like if humans and highly intelligent robots shared the earth. today is an opportunity to move forward. this is a hearing we may remember years from now, hopefully as the bright beginning of a new era for ai in manufacturing, transportation, energy, health care and many other sectors. artificial intelligence can be classified as artificial general intelligence or artificial
narrow intelligence. from my understanding, applications of the latter, such as machine learning, are the underlying technologies that support services and devices widely used by americans today. these include siri, alexa, google translate and autonomous vehicles. while technology developers in the industry look forward to making great strides in ai, i want to make sure my colleagues and i in congress are asking the tough questions and carefully considering the most crucial roles that the federal government may have in shaping the future of ai. federal investments in ai are longstanding. we must consider the appropriate balance and scope of federal involvement as we begin to better understand the various
roles ai will play in our society. in 2016 the white house issued the national artificial intelligence research and development strategic plan, which outlines seven priorities for federally funded ai research. these include making long-term investments in ai, developing effective methods for human-ai collaboration and addressing the ethical, legal and societal implications of ai. additional issues to address are safety and security, public data sets, standards and workforce needs. earlier this year, the government accountability office issued a technology assessment report, led by one of our witnesses, titled artificial intelligence, emerging opportunities, challenges and
implications. in addition to examining ai applications in areas such as finance, transportation and cyber security, the report also noted areas where research is still needed, including how to optimally regulate ai, ensuring the availability of high-quality data, understanding ai's effects on employment and education and the development of computational ethics. these are all critical issues, but more and more i hear concerns about ai's impact on jobs. ai can make some workplaces safer and more efficient but can also replace workers. what are the long-term projections as ai grows? in this context we also need to compare our efforts to those of other countries. what education, skills and retraining will the workforce of the future need? these are very important questions as we think about ensuring a skilled workforce for the future that helps solidify u.s. leadership in ai as other countries vie for dominance in the field. if ai threatens some careers, it likely creates others. we need to make sure americans are ready for it and that the benefits of ai are distributed widely. one other obvious issue of major concern when it comes to ai is ethics. there are many places where this becomes relevant. currently we need to grapple with issues regarding the data we feed to machines. biased data will lead to biased results. many difficult questions are being raised about a world of humans and intelligent robots.
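the point made above, that biased data will lead to biased results, can be illustrated with a toy calculation. the sketch below uses a deliberately trivial majority-vote "model" and entirely hypothetical numbers; it is not from the hearing, only an illustration of the mechanism.

```python
# Toy illustration of "biased data will lead to biased results."
# A trivial majority-class "model" trained on a skewed sample
# simply reproduces the skew of its training data. All numbers
# are hypothetical.

from collections import Counter

def train_majority_model(labels):
    """'Learn' by memorizing the most common label in the training data."""
    return Counter(labels).most_common(1)[0][0]

# Suppose loan decisions are sampled mostly from one group's history.
biased_training_data = ["approve"] * 90 + ["deny"] * 10      # group a, well represented
underrepresented_data = ["approve"] * 10 + ["deny"] * 90     # group b, mostly unseen

model = train_majority_model(biased_training_data)

# The model's single answer matches group a and misrepresents group b.
print(model)  # approve
errors_on_group_b = sum(1 for label in underrepresented_data if label != model)
print(f"errors on underrepresented group: {errors_on_group_b} / {len(underrepresented_data)}")
```

real systems are far more sophisticated, but the failure mode is the same: whatever pattern dominates the training sample dominates the output.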
these are questions we'll likely be called on to deal with in congress and we need to be ready. i want to thank all of our witnesses for being here today and i look forward to your testimony. i yield back. >> thank you, mr. lipinski. i now recognize the chairman of the subcommittee, the gentleman from texas, mr. weber, for his opening statement. >> madam chair, can i defer to the chairman of the full committee for his statement? >> yes, you may. >> thank you. >> thank you, madam chair. thank you, mr. chairman. didn't know you were going to do that. often unknown to us, advances in artificial intelligence or ai touch many aspects of our lives. in the area of cyber security, ai reduces our reaction times to security threats. in the field of agriculture, ai monitors soil moisture and targets crop watering. ai powers self-driving cars and manages intelligent traffic systems.
multiple technical disciplines, including quantum computing, converge to form ai. tomorrow the science committee will mark up the national quantum initiative act, to accelerate development. the ranking member and i and others will introduce that today. my hope is that every member of the committee will sponsor it, or at least a majority. transformative research will create scientific and technological discoveries, especially in the field of artificial intelligence. these discoveries will stimulate economic growth and improve our global competitiveness. important considerations in light of china's advances in artificial intelligence and quantum computing. by some accounts china is investing $7 billion in ai. the european union has issued a
preliminary plan outlining a $24 billion public/private investment in ai between 2018 and 2020. and russian president putin has noted, quote, the leader in ai will rule the world, end quote. no doubt that's appealing to him. yet the department of defense's unclassified investment in ai was only $600 million in 2016, while federal spending on quantum totals only about $250 million a year. the committee will mark up a second piece of legislation to reauthorize the national institute of standards and technology to continue supporting the development of artificial intelligence and data science, including the development of machine learning and other artificial intelligence applications. it is simply vital to our nation's future that we accelerate quantum computing and artificial intelligence efforts. thank you, madam chair.
i yield back. >> thank you. and i now recognize -- let's see where we are. the gentleman from texas, mr. veasey, for an opening statement. >> thank you for holding this hearing today and thank you to all the witnesses for providing expertise on this topic. i'm looking forward to hearing what everyone has to say today. america, of course, is a country of innovation. in the digital world of today, more and more industries are relying on advanced technologies and connectivity to overcome challenges, impacting every sector of commerce. ai has the ability to mimic cognitive functions such as problem solving and learning, making it a critical resource as we encounter never-before-seen problems. those in the energy sector have seen improvements in productivity and efficiency and can expect to see even more advancement in the coming years.
ai can be used to process and analyze data in previously unexplored ways. technologies such as sensor-equipped locomotives and wind turbines are able to monitor wear and tear and detect failures before they occur, saving money, time and lives. through the use of analytics, ai can be used to manage, maintain and optimize systems from energy storage components to power plants to the electric grid. as digital technologies revolutionize the sector, we must have safe systems built on continuous modeling of interaction and data feedback. protections must be put in place to guarantee the integrity of these mechanisms as they
evaluate mass quantities of machine and user data. with americans' right to privacy under threat, security of these connected systems is of the utmost importance. overall, i'm excited to learn about the value ai may be able to provide for our economy and well-being alike, with one research study reporting that ai will generate 2.3 million jobs by 2020. that's a lot of jobs. i look forward to the growth ai will bring to health care and so many other sectors, helping to ensure the prosperity of our nation. i look forward to seeing what we can do to promote these technologies. i yield back the balance of my time. >> thank you. i now recognize mr. weber for his opening statement. >> thank you, madam chair. today, we will hear from a panel of experts on next generation artificial intelligence, ai as we've all heard it described. it's likely everyone in this room benefits from artificial intelligence. for example, users of voice and music recommendation services are already utilizing aspects of this technology in their day-to-day life. these capabilities are enabled by vast data sets and computing hardware that allows for the powerful parallel processing of this data. the field of ai has broadened to
include advanced techniques like neural networks, deep learning and natural language processing, to name a few. these learning techniques are key to the development of ai technologies and can be used to explore complex relationships and produce previously unseen results on unprecedented time scales. the department of energy is the nation's largest federal supporter of basic research in the physical sciences, with expertise in big data science, high performance computing and advanced algorithms, and is uniquely positioned to enable fundamental research in ai and machine learning. doe's office of science advanced scientific computing research program, or ascr, as we call it, develops next generation supercomputing systems that can achieve the computational power
needed for this type of critical research. this includes the department's newest and most powerful supercomputer, called summit, which just yesterday -- just yesterday -- was ranked as the fastest computing system in the entire world. in chemistry, ai helps to discover new compounds faster than ever before. in physics, ai helps find particle collisions previously unseen by scientists. in fusion energy research, ai modeling predicts plasma behavior that will assist in building reactors, making the best use of our investments in this space. even in fossil fuel energy production, ai systems will optimize efficiency and predict needed maintenance at power-generating facilities.
ai technology has the potential to improve computational science methods for any big data problem. any big data problem. and with the next generation of supercomputers, the computing systems that doe is expected to field by 2021, american researchers utilizing ai technology will be able to tackle even bigger challenges. we cannot afford to fall behind in this compelling area of research. big investments in ai by china and europe already threaten u.s. dominance in this field. with the immense potential for ai technology to answer fundamental scientific challenges, it's quite clear we should prioritize this research. we should maintain, i will add, america's competitive edge and american exceptionalism. this will help us to do that. i want to thank our accomplished panel of witnesses for their testimony today.
and i look forward to hearing what role congress can play and should play in advancing this critical application. i yield back. >> thank you. i will now introduce today's witnesses. our first witness today is dr. tim persons, chief scientist at the u.s. government accountability office. he also serves as the director of gao's center for science, technology and engineering. he holds a bachelor of science in physics from james madison university and a master of science in nuclear physics from emory university. he also earned a master of science in computer science and a phd in biomedical engineering, both from wake forest university. next we have mr. greg brockman, our second witness, who is co-founder and chief technology officer of openai, a nonprofit artificial intelligence research company. mr. brockman is an investor in over 30 start-ups and a board
member of the stellar digital currency system. he was previously the cto of stripe, a payments start-up now valued at over $9 billion. he studied mathematics at harvard and computer science at m.i.t. and our final witness is dr. li, chairperson of the board and co-founder of ai4all. in addition, dr. li is a professor in the computer science department at stanford and director of the stanford artificial intelligence lab. in 2017, dr. li joined google cloud. she holds a bachelor of arts degree in physics from princeton and a phd in electrical engineering from the california institute of technology. i now recognize dr. persons for five minutes to present his testimony. >> good morning. thank you, chairwoman comstock, ranking members lipinski and
veasey. i'm pleased to be here today to discuss artificial intelligence. to ensure the u.s. remains a leader in ai innovation, special attention will be needed for our education and training systems, regulatory structures, frameworks for privacy and civil liberties and our understanding of risk management in general. ai holds substantial promise for improving human life, increasing the nation's economic competitiveness and solving some of society's most pressing challenges. yet as a disruptive technology, ai poses risks that could have far-reaching effects on, for example, the future labor force, economic inclusion and privacy and civil liberties, among others. today i'll summarize three key insights arising from our recent work. first, the distinction between narrow versus general ai. second, the expected impact of ai on jobs, competitiveness and workforce training and, third, the role the federal government
can play in research, standards development, new regulatory approaches and education. regarding narrow versus general ai, narrow ai refers to applications that are task specific, such as tax preparation software, voice and face recognition systems and algorithms that classify -- general ai refers to a system with intelligence on par with or possibly exceeding that of humans. while science fiction has helped general ai capture our collective imaginations for some time, it is unlikely to be fully achieved for decades, if at all. even so, considerable progress has been made in developing narrow ai applications that outperform humans in specific tasks, raising economic, policy and research considerations. regarding jobs, competition and the workforce, there is considerable uncertainty about the extent to which jobs will be displaced by ai and how many --
how much any losses will be offset by job creation. in the near term, certain jobs such as call center or retail work may be particularly vulnerable to automation and displacement. however, in the long term, demand for skills that are complementary to ai is expected to increase, resulting in greater productivity. to better understand the impact of ai on employment moving forward, several experts underscored the need for new data and methods to enable greater insight into this issue. regarding the role of the federal government, it will continue to have a role in research and data sharing, contributions to standards development, regulatory approaches and education. one important research area the federal government could support is enhancing the explainability of ai, which could help establish trust in the behavior of ai systems. the federal government could also incentivize data sharing, including federal data that is subject to limitations as to how it can be used, as well as frameworks to improve the safety and security of ai systems. such efforts may include supporting standards for explainability, data labeling and safety, including risk assessment and benchmarking of ai performance against the status quo. it's always risk versus risk. related to this, new regulatory approaches are needed, including the development of regulatory sandboxes for testing ai products, services and business models, especially in industries like transportation, financial services and health care. gao's recent report found, for example, that regulators use sandboxes to gain insight into key questions, issues and unexpected risks that may arise out of emerging technologies. new rules governing intellectual property and data privacy may also be needed to manage the deployment of ai. finally, education and training will need to be reimagined so workers have the skills to work
with and alongside emerging ai technologies. for the u.s. to remain competitive globally and effectively manage ai systems, its workers will need a deeper understanding of probability and statistics across most, if not all, economic disciplines -- not just the physical, engineering and biological sciences -- as well as competency in ethics and risk management. in conclusion, the emergence of what some have called the fourth industrial revolution, and ai's key role in driving it, will require new frameworks for business models and value propositions for the public and private sectors alike. even if ai technologies were to cease advancing today, no part of society or the economy would be untouched, directly or indirectly, by their transformative effects. thanks to the leadership of the committee for holding a hearing on this very important topic today.
madam chairwoman, ranking members, this concludes my opening remarks. i would be happy to answer any questions that you or the subcommittee members have at this time. >> thank you. i now recognize mr. brockman for five minutes. >> chairwoman, ranking members, i'm greg brockman, co-founder and chief technology officer of openai, a san francisco based nonprofit with a mission to ensure that artificial general intelligence, which we define as highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. now i'm here to tell you about the generality of modern ai, why agi may be in reach sooner than commonly expected and what action policymakers can take today. yesterday we announced major progress toward a milestone that we have been trying to reach, which is solving complex
strategy games, which start to capture many aspects of the real world not seen in board games. our openai five system learned to make long-term plans and navigate scenarios far too complex to be programmed in by a human, in order to play a massively popular game. in the past, ai technology was written by humans in order to solve one specific problem at a time. it was not capable of adapting to solve new problems. today's ai is based on the artificial neural network. it's proving able to match a surprising amount of human capability, something that was shown by my fellow witness dr. li's work. what artificial neural networks learn depends on the data that they're shown. further along the spectrum of generality is agi. rather than being developed for any one use case, it would be developed for a wide range of important tasks and also useful for noncommercial applications,
such as thinking through complex disputes or city planning. how should we think about the timeline? all ai systems are built on three inputs: data, computational power and algorithms. they're starting to rely less on conventional data sets where a human has provided the right answer. for example, one of our recent neural networks learned by reading 7,000 books. we released a study showing the compute used in the largest training runs has been doubling every 3 1/2 months since 2012. we expect this to continue for the next five years, using only today's proven hardware technologies and not assuming any breakthrough technologies. to put that in perspective, it's as if your phone battery that lasts for a day started to last for 800 years and five years later started to last for 100 million
years. it's this torrent of compute that makes today's progress possible. we've never seen anything like it. will this massive increase in computational power, combined with algorithmic advances, yield agi? we can't confidently rule it out given the progress we're seeing. so what should we be thinking about today? what can policymakers be doing today? the first thing to recognize is the core danger of agi: that it has fundamentally the potential to cause rapid change, whether through machines pursuing goals misspecified by their operator, malicious humans, or an economy that grows in an out-of-control way for its own sake rather than to improve human lives. we spent two years developing our charter. it contains three sections on our ideas for safe agi development. the first is avoiding a race to the bottom on safety in order to reach agi first. the second is to ensure people at large, rather than any one small group, receive the benefits of this transformative technology. and the third is working together as a community in order to solve safety and policy challenges. our primary recommendation to policymakers is to start measuring progress in this field. we need to understand how fast the field is moving and what capabilities are likely to arrive when, in order to plan for agi's challenges with forecasts rather than intuition. measurement also lays the groundwork for making international coordination actually viable. this is important if we want to spread safety and ethical standards globally. thank you for your time and i look forward to your questions. >> thank you. we now recognize dr. li. >> thank you for the invitation, congresswoman and congressman. my name is fei-fei li. i'm here as chairwoman of ai4all, a national nonprofit that brings ai research and education to high school students who have been traditionally underrepresented in the field -- in the stem fields -- such as girls, people of color and members of low-income communities. our program began at stanford university in 2015. this year, ai4all has expanded across north america to six university campuses. i often like to share with my students that there's nothing artificial about artificial intelligence. it's inspired by people. it's created by people. and, most importantly, it has an impact on people. it's a powerful tool we're only just beginning to understand. and that's a profound responsibility. i'm here today because the time has come to have an informed public conversation about that responsibility. with proper guidance, ai will make life better.
but without it, it stands to widen the wealth divide even further, make technology even more exclusive and reinforce biases we have spent generations trying to overcome. this will be an ethical, philosophical and humanistic challenge, and it will require a diverse community of contributors. it's an approach i call human-centered ai, and it's made up of three pillars that i believe will help ensure ai plays a positive role in the world. the first is that the next generation of ai should reflect more of the qualities that make us human, such as a deeper understanding of the context we rely on to make sense of the world. progress on this front will make ai much better at understanding our needs but will require a deeper relationship between ai and fields like neuroscience, cognitive science and the behavioral sciences. the second is an emphasis on enhancing and augmenting human skills, not replacing them. machines are unlikely to replace nurses and doctors, for example, but machine learning that assists in diagnosis will help them do their jobs tremendously. similar opportunities to augment human capability abound, from health care to education, from manufacturing to agriculture. finally, ai must be guided by a concern for its impact. we must address challenges of machine bias and security at the technical as well as the societal level. now is the time to prepare for the effects of ai on laws, ethics and even culture. to put these ideas into practice, governments, academia and industry will have to work together. this will require better understanding of ai in all three
branches of government. ai is simply too important to be owned by private interests alone, and publicly funded research and education can provide a more transparent foundation for its development. next, academia has a unique opportunity to elevate our understanding and development of this technology. universities are a perfect environment for studying its effect on our world as well as supporting cross-disciplinary, next generation ai research. finally, businesses must develop a better balance between their responsibility to shareholders and their obligations to their users. commercial ai products have the potential to change the world rapidly, and the time has come to complement this ambition with ethical, socially conscious policies. human-centered ai means keeping humans at the heart of this technology's development.
unfortunately, a lack of diverse representation remains a crisis in ai. women hold a fraction of high-tech positions and even fewer at the executive level, and the numbers are even worse for people of color. we have good reasons to worry about bias in our algorithms, and a lack of diversity will be among its primary causes. one of my favorite quotes comes from a fellow technologist, who says that there are no independent machine values. machine values are human values. however autonomous ai becomes, its impact on the world will always be our responsibility. with a human-centered approach we can make sure it's an impact we'll be proud of. thank you.
>> thank you. and i now recognize myself for five minutes of questions. dr. li, there is a generally accepted potential for ai-enabled teaching to provide a backup for traditional classroom education. so as ai technology advances, it seems reasonable to assume that traditional education, vocational training, home schooling and even college coursework will need to change and adapt. could you maybe comment about how education in general, and for specific groups and individuals, might be transformed by ai, and how we can make that positive and really have sort of a democratization? >> thank you for the question. of course i feel passionate about education. i want to address this question from two dimensions. one is how could we improve the
education of ai and stem in general for more students and the general community. second is what can ai as a technology do to help education itself? on the first dimension, as our work at ai4all shows, we really believe it's simultaneously a crisis and an important opportunity to involve more people in the development of ai technology. humanity has never created a technology that so tries to resemble who we are, and we need the technologists and leaders of tomorrow to represent this technology. put simply, we need to democratize ai for girls, women,
and minorities. at ai4all we've created an alumni population of more than 100 students, and through their own community and outreach efforts we have touched the lives of more than 1400 youth, ranging from middle schoolers to high schoolers, in disseminating this ai knowledge. and we need more of that in higher education. the second dimension of your question is that ai as a technology can itself help improve education. in the machine learning community -- i'm sure, greg, you also agree with me -- there's an increasing recognition of the opportunity for lifelong learning, using ai as an assistive technology. i have colleagues at stanford who focus on research in reinforcement learning and education, on how to bring more technological assistance into the teaching and tutoring of students. i think this could become a huge tool, as i was saying, to augment human teachers and educators so our knowledge can reach more students and a wider community. >> excellent. and for the other witnesses, could you maybe comment on how academic institutions and industry could work with government on ai? >> so our recommendation is really about starting with measurement, to understand what's happening in the field. i think it's really about, for example, the study that we did showing the 300,000 times increase. we need more of that. we need to understand where things are going, where we are. the government is in a unique position to set the goalposts as well. the work that's happening at gao and diux has had some success with this. it's about starting in a low-touch way for the dialogue to start happening, because i think right now the dialogue is not happening to the extent that it should. >> thank you for the question. as the committee has pointed out, it's a whole-of-society issue. it's going to be government in partnership with the private sector, with academia, to look at things. so i think there is room for thought about how to learn by doing, creating internships and ways to try to solve real-world problems so that you have a mix of the classroom experience as well as making, building. you'll fail a lot, of course, with these things, but learning in a safe environment and being able to grow expertise in that way. >> thank you. dr. li, did you have anything you wanted to add to that also? okay. well, thank you. and i now will recognize mr.
lipinski for five minutes. >> thank you. this is a fascinating topic. i'm going to try to move through some things quickly, but i hope to get some good answers here. it seems to me that, mr. brockman, you have a different view of agi and how quickly it can come than the gao report. is there -- is there a reason for this? is there something you think that gao is missing? and then if dr. persons could respond to that. >> so, i don't know if i can comment directly on the report, just not being familiar enough with all the details in there. i can certainly comment on our perspective on agi and its possibility. a lot of it comes down to -- i think there's been a lot of emotion or intuition-based argument. to your opening remarks, i think
that science-based reasoning in order to project what's happening in this field is extremely important. that's something we've spent quite a lot of effort on since starting three years ago, looking at the barriers to progress as compute, data, and algorithms. the data is changing rapidly, and the computation, the power there, is growing at a rate we've never seen. over the course of this decade we're going to be talking about orders of magnitude, and if you were to compare that to the typical growth of compute under moore's law -- we saw that 300,000 times increase. we're being projected into the future a lot faster than we realize. it means we can't rule it out. for the next five years, as long as this hardware growth is happening, we're in a fog and it's hard to make confident projections, and so my position is that we can't rule it out.
we know we are talking about a technological revolution on the scale of the industrial revolution, something that could be so beneficial to everyone in this world. and if we aren't careful in terms of thinking ahead and trying to be prepared, we could be caught unaware. >> thank you. dr. persons, do you have a response to that? >> sure. with all respect for our silicon valley innovators, the upstarts -- i think it's great that we have that ecosystem -- the key thing that we're seeing is the convergence of these technologies, as mentioned by my fellow panelists: the exponential power of computing and the sophistication of algorithms are all coming in. that said, many folks in the community are mildly skeptical about the rate at which general ai may come, for several reasons. one is just the way we think about the problem now, the super
complexity that is manifest in addressing the various challenges, looking at large data sets and all the facets of them. it's much easier to say than to do. and, again, i think a lot of the -- as you pointed out, the driving force here is the concern about general ai and a taking-over-the-world kind of thing. and it's just much harder to mimic human intelligence, especially in an environment where intelligence isn't even really defined or understood. as dr. li pointed out, a lot of this is about augmentation. it isn't a replacement of humans; it is how can we become better humans, more functional humans, in doing these things. so a lot of it just gets down to the -- >> i have a short time. sorry. >> thank you. >> i wanted to throw out quickly, there have been very different -- vastly different opinions about the replacement of jobs and the disappearance of
jobs and what the impact is going to be. mr. brockman, what do you think the impact will be? >> so i think that with new technologies, in the short term we always overestimate the degree to which they can make rapid change. but in the long term they do -- with technological changes like the internet there has been a lot of job displacement, and i think ai will be no different. i think the question of which jobs and when, i don't think we have enough information yet, and that is where measurement starts to come in. we view this as an open question and a very important one. >> and, as a bottom line, no one really knows the effect of this. our experts were saying that to know more, we might need to encourage, for example, the federal government to help provide
different data to help try to answer the question of what the impact is as this technology continues to propagate. there is also a history -- the era of british industrialization, when workers destroyed machines out of concern about the loss of jobs -- but many times throughout history, across an array of technologies, net jobs actually increased; they were just more sophisticated jobs, and work with more productivity. so there is hope in this technology as well. >> if the chairman will allow, i'd like to hear from dr. li. >> technology inevitably has the impact to change the landscape of jobs, but it is really critical that we invest in the research of how to assess this change.
it is not a simple picture of replacement, especially when this technology has a great potential to empower and augment humans. i just spent days in the hospital icu with my mother in the past couple of weeks. and with ai research, you recognize that a nurse in a single shift is doing hundreds of different tasks in the icu unit, where they are fighting for life and death. these are not a simple question of replacing jobs, but of creating better technology to assist them, to make their jobs better and their lives better for everyone. and that is what i hope we will focus on in using this technology. >> thank you. >> thank you, dr. li. that is a wonderful example, vividly explaining to us how ai can be used -- for the aging population, that is the challenge we are all facing -- and helping them be able to do
a better job. thank you for mentioning that. i now recognize mr. weber. >> dr. li, is your mother okay? >> thank you. i am here. that means she is better. >> otherwise we would be missing two witnesses. >> she is watching me right now. >> you are doing an excellent job, and she is a proud mother. we are glad for that. mr. brockman, you said in your statement that your mission was to make sure that artificial intelligence benefits people and is better at the most economically valuable work. do you remember that? it is in your written statement. >> our definition of what agi will be, whether created by us or anyone else, the milestone is a system that can outperform
humans. >> let me read it to you real quick: "the mission is to ensure that artificial general intelligence, by which we mean systems that outperform humans at most economically valuable work, benefits all humanity." how would you define most economically valuable work? >> i think that, first of all, agi is something that the whole field has been working toward since the beginning of the field 50 years ago. so the question of how to define it is something that is not entirely agreed upon. you think about high intellectual work, and also things like going
in and cleaning up disaster sites that humans would be unable to handle today. >> i noticed that, mr. lipinski -- you called them silicon valley upstarts. that is an advantage. thank you for doing that. but you are literally looking at a new industry; even though the workforce is shifting, you are creating jobs for another industry. and going back to dr. li's example and how much the nurses do, how do you train for those jobs if it is moving as fast as you think it is? >> one thing i think is very important is that we don't have the ability to change the timeline, and there are a lot of different pieces of the ecosystem. and what we do is we step back and look at the trends and we
say what will be possible when. and the question of how to train, we are not the only ones that are going to have to answer that question. but i think the place to start, it comes back to measurement. if we do not know what is coming and we cannot project well, we will be taken by surprise. i think there are a lot of jobs that are surprising -- think about autonomous vehicles. we need to make sure that the systems do what we expect, and that there are humans that will help manage the systems. >> we would all agree, i hope, that the jobs they're going to create are well worth the transformation into all that technology. dr. persons, would you agree with that? >> let me give you a quick example if i may. speaking with a former secretary of transportation recently -- a simple example of tollbooth collectors. you have the easy pass and you have less of a workforce there
that could've had an impact for a short period on the loss of jobs, and yet it freed them up and enabled them to do other things that were needed. >> you were shaking your head. you agree with that? >> i think the purpose of technology is to improve people's lives. >> dr. li, i see you shaking your head as well. >> in addition to that example, i think about the jobs that are currently dangerous and harmful for humans, from firefighting to search and rescue to natural disaster recovery. not only should we not put humans in harm's way, but we don't have enough help in these situations, and this is where technology should be of tremendous help. >> very quickly, i'm out of time, but just yes or no: if we lose dominance in ai,
that puts us in a very bad spot. would you agree? >> yes. >> yes. >> thank you. >> thank you. good question. >> thank you, madam chair. we have heard from your testimony some of the advantages of ai and how it can help humankind and advance us as a nation. but, as you know, there are people who have concerns about ai. there have been a lot of doomsday-like comparisons about ai and what the future of ai can actually mean. to what extent do you think this worst-case scenario that people have pointed out about ai is actually something that we should be concerned about?
and if there is a legitimate concern, what can we do to help establish a more ethical, responsible way to develop ai? and this is for anyone on the panel. >> so the way i think about that is thinking about the internet in the late '50s. if someone was going to describe it to you and all the weird things that were going to happen, you would be very confused, and it would be hard to understand what that looks like, and we would be talking about security. imagine that whole story that played out over the last 60 years playing out on a much more compressed timescale. that is the perspective that i have toward agi. it can cause this rapid change that is already hard for us to cope with -- it is going to be a disaster if the technology
itself is not built in a safe way, or if the deployment -- who owns it and the values it is given -- is not something we are happy with. all of those, i think, are real risks, and those are things you want to think about today. >> thank you, sir. i think we need to be clear-eyed about what the risks are, and not necessarily be driven by the entertainment narrative around these things, or go to extremes and assume more than where we actually are in the technology. it is understanding the risks as they are. and the risks in automated vehicles will be different than in financial services. so it is working symbiotically with the community of practice in identifying what are the things there and what are the opportunities -- and there are going to be opportunities -- and what undesirable things do we want to focus on and optimize
on how to deal with them. thank you. >> mr. brockman, in your testimony you referenced a report outlining some malicious actors in this area. could you elaborate on some of your findings? >> so i was a collaborator on a research report projecting, not necessarily for today, but looking forward to what are the malicious activities that people could use ai for. and so that report -- let's see. i think maybe the most important things here are information, privacy, and the question of how do we ensure the systems do what the operator intends. think about autonomous systems that are taking action on behalf of humans being subverted. this report focuses on -- you
think about autonomous vehicles and some of the bad things that can happen. i think this report should be viewed as things we think about today, before they are a problem, because a lot of these systems will deploy in a large-scale way. all of the problems we have seen to date will have a different flavor, where it is not just privacy anymore; it is systems deployed in the real world that have to factor in human well-being. >> i yield back. >> thank you. i now recognize mr. rohrabacher. >> thank you very much. as in all advances in technology, what can be seen as the great hope for making things better brings the idea
that there might be new dangers involved, and/or that the new technologies will help certain people but be very damaging to others. i think the place where the fear would be most recognizable is in terms of employment and how, in a free society, people earn a living. are we talking here about the development of technology that will take over the tedious and menial -- the lower-skilled jobs that can be done by machine -- or are we talking about a loss of employment to machines that are designed to perform better than human beings in high-level jobs? what are we talking about here?
>> so i can use healthcare as an example, because i am familiar with that area of research. if you're looking at recent studies of employment and ai, there is a recognition that we need to talk in a more nuanced way than about an entire job -- about the tasks under each job. the technology has the potential to change the nature of different tasks. again, take the job of a nurse as an example. no matter how rapidly we develop the technology, in the most optimistic assessment it is hard to imagine that the entire profession of nursing would be replaced, yet there are many opportunities where certain tasks can be assisted
by ai technology. for example, a simple one that costs a lot of time and effort in nursing jobs is charting. nurses spend a lot of time charting into a computer system, and that is time away from patients. so these are the kinds of tasks under a bigger job description that we can hope to use technology to assist and augment. >> are we talking about robots or a box that is able to make decisions? >> so ai technology takes many different forms. in this particular case, speech recognition, possibly in the form of a voice
assistant, would help with charting. but delivering a simple tool on a factory floor might take the form of a simple delivery robot. so there are different forms of machines. >> there are many dangerous jobs where i could see that we would prefer not having human lives put at risk in order to accomplish the goal. for example, at nuclear power plants it would be a wondrous thing to have a robotic response to something that could cause great damage to the overall community but would kill someone who actually went in to try to solve the problem. i understand that, and also maybe with communicable diseases, where people need to be treated but you are putting people at great risk in doing that. however, with that said, while
people are seeking profit in a free and open society, i would hate to think that we are putting people with lower skills out of work. we need the dignity of work and of earning your own way, and we know that when you take that away it has a major impact -- a negative impact -- on people's lives. so i want to thank you for giving us a better understanding of what we are facing, and let's hope that we can develop this technology in a way that helps the widest variety of people, and not just perhaps a small group that keeps their jobs and keeps the money. thank you very much. >> thank you. >> first i want to note that our nation has some of the best scientists and researchers and engineers in the world, but
without stronger investments in research and development, especially long-term foundational research, we risk falling behind, especially in this important area. i hope that the research continues to acknowledge the socioeconomic aspects of integrating ai technologies as well. in my home state, at the university of oregon, we have the urbanism next center. they bring together interdisciplinary perspectives, including planning, urban design, and public administration, together with academic sectors, to discuss how leveraging technology will shape the future of our communities, talking about emerging technologies like autonomous vehicles and the implications for equity, the economy, and the environment. dr. persons, can you discuss the value of such partnerships to help identify
and address the consequences, intended and unintended, as ai becomes more prevalent? >> quickly, the short answer is, per our experts and what we are seeing, there is value in public-private partnerships, and it would be a mistake to look at this technology in isolation -- it needs to be an integrated approach. the federal government has its various roles, but, like you were mentioning at the university of oregon, there are key research questions, many things to research and questions to answer. and industry has an incredible amount of innovation and thinking and power to drive things forward. >> dr. li, i have a few questions. you discussed the labor disruption and the need for retraining. we have a skills gap issue here, because we want to make sure there are enough people who have the education needed
for the ai industries, but we are also talking about workers, like you mentioned -- the workers in tollbooths who will be displaced. but with the rapid development of technologies and the changes in this field, what knowledge and skills are the most important for a workforce capable of addressing the opportunities and the barriers to this development? this is an important issue: how do we educate people to be prepared for such rapid changes? >> ai is fundamentally a scientific discipline, and as an educator i believe in more investment in stem education from an early age on. in our experience at ai4all, when we invited high school students from the ages of 14 to 16 into ai research, their capabilities and potential were amazing. we have high school students
who have worked in my lab and presented at this country's best ai conferences. so i believe passionately that stem education is critical for the future, for preparing -- >> i always talk about arts in education -- students tend to be more creative. also, you talked about how ai engineers need to work with neuroscientists and cognitive scientists to help ai systems develop a more human feel. i know that in his testimony he wrote that ai is the ability to create machines that perform tasks normally associated with human intelligence. i am sure that was an intentional choice to humanize the machines, but i wanted to ask you, dr. li, in your
testimony you talk about laws to codify ethics. how is this going to be done? can you go into more depth about how these laws would be made, and who would determine what is ethical? would it be a combination of industry and government determining standards? how are we going to set the stage for an ethical development of ai? >> thank you for the question. i think for a technology as impactful as ai is to human society, it is critical that we have ethical guidelines. different institutions, from government to academia to industry, will have to participate in this dialogue, together and also by themselves. >> are they already doing that? is someone convening -- >> there are efforts, and in industry in
silicon valley, we are seeing companies start to roll out ai ethical principles and responsible ai practices. in academia we see ethicists coming together with technologists, holding seminars and symposiums and classes to discuss the ethical impact of ai. and, hopefully, government will participate in this and support and invest in this kind of effort. >> thank you. >> the gentlelady from arizona is recognized. >> thank you, mr. chair. i want to thank the witnesses testifying today. this is a very interesting subject, and something that spurs the imagination about science fiction shows and those types of things. i do have a question on what
countries are the major players in ai and where the u.s. ranks in competition with them. that is for any or all of the panelists. >> today, i think that the u.s. actually ranks possibly at the top of the list, but there are a lot of other countries that are investing heavily. i think it is very clear that ai is something that has a global impact, and the more we can understand what is happening everywhere and coordinate on safety and ethics in particular, the better it will go. >> yes, thank you for the question. i think wherever there are large amounts of computing, large amounts of data, and a strong desire to innovate and continue to develop in this
fourth industrial revolution -- that points certainly toward china, and toward our allies and colleagues in western europe. thank you. >> if i could just add, the most important thing to continue to lead in the field is the talent, and right now we are doing a great job of bringing the talent in. we have a wide mix of backgrounds and origins. we will be in very good shape if we can keep that up. >> mr. chair, i have one more question. what steps -- i think this has been asked in different ways -- what steps are we taking to guard against espionage from, let's say, china, which is involved in this? that's basically my question: espionage, hacking, those types
of things. who is preventing this? is it the private companies themselves? is government involved? >> one thing that is atypical about this field, because it grew out of an academic setting, is that all of the core research and development is being shared pretty widely. and so i think that as we start to build these more powerful systems -- and this is one of the parts of our charter -- we need to think about safety and think about things that should not be shared. so as this is being built, it is up to each company, and that is something that we are starting to develop. but having a dialogue of what is okay to share and what things are too powerful -- that is the dialogue that is starting now. >> certainly intellectual
property protection is critical. we are at a time of unprecedented theft of intellectual property, just because of the blessing and curse of the internet: the blessing is it is open, and the curse is it is open. in that category, in terms of what is being done, it is something our experts pointed out and said is an issue. as this committee knows, it is easier said than done: who has jurisdiction in the u.s. federal system over a private company and the protection of that? what is the role of the federal government versus the company itself in an era -- the big data era, where data are the new oil -- when we want to be open so we can innovate? so managing that dialectic will be a critical issue, and there's no easy answer.
>> mr. chair, i yield back. >> thank you. i want to thank the witnesses for this extremely informative and important conversation that we are having here today. i hail from the state of connecticut, where we see a lot of innovation at yale and lots of spinoffs on the sort of narrow ai question. but for us, the issue is more about general ai. and, mr. brockman, your discussion of the advances is really where i want to take this conversation. your discussion, which i think is incredibly important, about diversity -- you saw what happened to lehman brothers from not being diverse. i am concerned about the implications: if it's a very narrow set of parameters
and thought patterns and life experiences that go into ai, we will get very narrow results out. so i want to get your thoughts on that, and on this broader ethical question. i remember when i was a young lawyer, we began to look at these issues, and this committee has been grappling with them. if you can opine on both of those questions: whether we need centers to really bring in ethicists as well as technologists, and the importance of diversity on the technology side, so that we get the full range of human experience represented.
>> just now someone used the term doomsday scenario. if we wake up 20 years from now and we see the lack of diversity in our technology and its leaders and practitioners, that would be my doomsday scenario. so it is critically important to have this, for these reasons. one is jobs: this is a technology that can create jobs and improve quality of life, and we need all talents to participate in that. the second is innovation and creativity: like you mentioned, we need that kind of talent to add to the force of ai development. and the third is moral values
that if we do not have a wide representation of humanity creating this technology, we could have face recognition algorithms that are more accurate in recognizing white male faces, and we could have the dangers of biased algorithms making unfair loan application decisions. there are many potential pitfalls of ai technology that is biased and not diverse enough, which brings us to the conversation about the dialogue of ethics and ethical ai. you are right: previous disciplines like nuclear physics and biology have shown us the importance of this. i don't know if there is a single recipe, but i think the
need for centers, institutions, boards, and government committees are potential ways to create openness in this dialogue, and we are starting to see that. but i think you are right; these are critical issues. >> i agree completely with my fellow witness. diversity is crucial to our success. we have a program called openai scholars, where we brought people from underrepresented backgrounds into the field and provided mentorship. one thing we found that is encouraging is that it is possible to take people who have no background and make them into first-class researchers and engineers very quickly, the technology being so new. in some ways we are all discovering, and there is not that high a bar.
everyone is putting effort in, and it is up to them to make sure that they are bringing in the rest of the world. on the ethical front, that is core to my organization: who owns this technology and where the dollars go. we think it belongs to everyone. so one of the reasons that i am here is that this should not be a decision just made in silicon valley; it should not be in the hands of people just like me. it is important to have a dialogue, and again, i hope that will be one of the outcomes of this hearing. >> thank you very much. >> i think -- i think the witnesses have given very interesting testimony. one of the things that is important here is how does the
government react to ai? do we need to create a specific agency? does that agency report to congress or the administration? i think those things are important. mr. brockman, you said that we need a measure of ai progress. do you have a model or some description of what that would look like? >> first of all, i don't think we need to create new agencies for this. i think that existing agencies are well set up. again, gao and diux are starting to work on this -- diux had a satellite imagery -- the kinds of things we think would be great for the government to do, where
academics and the private sector can test robotic approaches and set up competitions toward specific problems that various agencies want to have solved. i think that can be done without a new agency, and i think you can get benefits directly to the relevant agencies and build ties between the private and public sectors. >> i am one of the founders of the grid innovation caucus. will we see positive -- >> gao has done a good deal of work on this issue, the protection of the electrical grid in the cyber security dimension. what our experts say, based on the leadership of
this committee and the importance of cyber, is that ai will be a part of cyber moving forward. so protection of the grid in the cyber dimension is there. also the word optimization: how we optimize things, and how algorithms might be able to compute and find optima faster and better than humans, is an opportunity for grid management and production. >> so ai will be used as a cyber weapon against infrastructure, or potentially used as a weapon. >> there are concerns now. when you look at a broad definition of ai and you look at bots and things like that, that exists now, and unfortunately you have to assume that as ai becomes more sophisticated, the black hat
side of things, the bad guys, will also become more sophisticated. so that will be the cat-and-mouse game moving forward. >> another question. in your testimony you mentioned that there is considerable uncertainty in the jobs impact. what would you do to improve that situation? >> our experts were encouraging specific data to be collected on this. again, we have important federal agencies like the bureau of labor statistics that work on what is going on in the labor market, and it may just be an update to what we collect and what questions we ask as a government. that is very important to our understanding of unemployment metrics and so on. so there are economists that have thoughts about this, and we had input on that.
there is no easy answer at this time, but the idea is that there is an existing agency doing that sort of thing. the key question is how do we ask more or better questions on this particular issue of artificial systems. >> thank you. dr. li, you gave three conditions for progress in ai being positive. do you see any acceptance, or wide acceptance, of those conditions? how can we spread the word so that industry is aware of them, the government is aware of them, and they follow those sorts of guidelines? >> yes, i would love to spread the word. i do see the emergence of efforts in all three conditions. the first one is about a more interdisciplinary approach to ai: ranging from universities to industry, we see the recognition of neuroscience and cognitive science cross-pollinating with ai research. i want to add that we are all very excited by this technology, but as a scientist i am humbled -- it is only 60 years old, as compared to the traditional classic sciences that are making human lives better every day: physics, chemistry, and biology. there is a long way to go for ai to realize its full potential to help. that recognition is important, and we need to put more research into that. the second is augmenting humans: a lot of academic research is looking at this technology, from assisting people with disabilities to helping humans. and the third is what many of us focused on today, the social impact: from having the dialogue to working
together through different industries and government agencies. so all three are the elements, and i see that happening more and more. >> thank you. i yield back. >> the chair recognizes the gentleman -- the chair recognizes the gentleman, mr. palmer. >> i would like to know if ai can help people who are geographically challenged. >> the gentleman's time has expired. >> i do have some questions. in my district we have an institute that deals with cybercrime. what i am wondering about is, with the emergence and
evolution of ai, what are you putting in place given the potential for committing crimes and solving crimes? do you have any thoughts on that? >> one of the areas we did look at in general was criminal justice -- the risks that are there in terms of the social risks, and making sure that the scales are balanced exactly as they ought to be. that was the focus of that. in terms of criminal forensics, it could be a tool that helps us figure out what happened, but it is an augmentation that is helping the forensic analysts who would know what things look like. and the algorithm would need to know what the risks are going forward, so that you could identify things more proactively, in near
or real-time. so that was a key message that we heard moving forward. >> so today we are already starting to see some of the security problems with the methods that we are creating. for example, there is a new -- adversarial examples, were researchers can craft a physical patch that you can print out and make a computer think it is whatever you want it to be. you can put on a stop sign and confuses self driving car. so these ways of subverting these powerful systems are things will have to solve, just like we've been working on six -- security. if you could successfully build and deploy that, and in many ways it is like the internet in terms of being very deeply
integrated in people's lives, but having this autonomy and representation -- then you have this question of how do you make sure it is something great for security. if the systems are well built and have safety at their core and are hard to subvert, great. but if it is possible for people to hack them or cause them to do things that are nonaligned, then you can have very large-scale disruption. >> it also concerns me, in the context of the announcement a few weeks ago that the u.s. plans to form a space corps. do you have any thoughts on how artificial intelligence will be used in terms of space? communication systems are highly vulnerable. i think that there is some additional vulnerability that would be created. any thoughts on that? >> in terms of the risks in space, obviously, one of the key concerns is weaponization, which i think is part of that. i know that our defense department has key leadership thinking on this and is working strategically on how we operate in an environment where we have to assume there will be an adversary that might not operate in the ethical framework that we do, and how to defeat that. there is no simple answer at this time other than that our defense department is thinking about it and working on it. >> and he is not here, but dr. carbonell made a statement that we need to produce more ai researchers
and i think that plays into that issue of how we deal with ai in space. that is why i have been pushing for a college program to recruit people into the space corps in these areas, and to start identifying students when they are maybe even in junior high and give them scholarships through college to get them into these positions. any thoughts on that? >> i will answer quickly and say that, as i think dr. li pointed out, it is an interdisciplinary thing. there will be a need for the stem specialists who focus on this, but nearly any vocation will be impacted one way or the other. you could imagine rewinding a few decades to when the advent of the personal computer came in and how that affected work. so this is not unusual, but at the time you had to learn how to augment your tasks with it. >> and one final thought. we have had to deal with major hacks of government systems, and part of what we are faced with is that we are competing with the private sector. we are going to find ourselves in the same situation with ai experts, the truly skilled people. that is why i am suggesting we might need to think about how to recruit these people and get them as employees of the federal government. that was my thought on setting up an rotc-type program where we recruit people in and we scholarship them, whether it is for cybersecurity or ai, and they would have a four- or five-year commitment to work with the federal government. we have a very hard time competing for those types of people. >> now the chair recognizes the gentleman from new york. >> i thank our respective chairs and ranking members for this very important hearing, and thanks to our witnesses. i am proud to represent new york's 20th district, where our universities are leading the way. we are the home of groundbreaking research developing neuromorphic circuits that could be used for deep learning such as pattern recognition, and also useful for ai or machine learning. in addition, the institute has established an ongoing research program. rpi is pushing the
boundaries of artificial intelligence in a few different areas. they are focusing on improving people's lives and patient outcomes by collaborating with albany medical center to improve the performance of their emergency department, using ai and analytics to reduce the recurrence of er visits by patients. rpi researchers are collaborating with ibm to use the watson platform to help people with prediabetes avoid developing the disease. in our fight to combat climate change and protect our environment, researchers at rpi are working with computer science and machine learning researchers to apply cutting-edge ai to climate issues. in the education space they are exploring ways to improve teaching as well as new approaches to teaching ai and data science to every student at rpi.
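the kind of risk-prediction analytics described above can be sketched in miniature. this is a minimal illustration, not rpi's or ibm's actual system: a plain logistic-regression model, trained on a tiny, made-up dataset with hypothetical features (prior er visits, number of chronic conditions), scoring the likelihood of a repeat emergency-department visit.

```python
import math

def sigmoid(z):
    # squash a raw score into a probability between 0 and 1
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """plain stochastic-gradient-descent logistic regression.
    rows are feature lists; labels are 0/1 outcomes."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    # predicted probability of a repeat er visit for one patient
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# toy data: [prior er visits, chronic conditions]; label 1 = returned soon after discharge
patients = [[0, 0], [1, 0], [3, 2], [4, 3], [0, 1], [5, 2]]
returned = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(patients, returned)
```

a real deployment would use far richer clinical features and a validated model, but the shape of the problem -- learn from past visits, score new patients, intervene on the high-risk ones -- is the same.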
all that being said, there are tremendous universities across our country that are excelling. what are some of the keys to helping ai institutions like them excel? what do we need to do? what would be the most important? >> i think, just as we recognize that ai really is a widespread technology, one thing to recognize is that it is still so critical to support basic science research and education in our universities. this technology is far from being done. of course, industry is making tremendous investment and effort in ai, but it is a nascent science and we have many unanswered questions, including the social impact of ai, including ai for good, for healthcare and many other areas. one of the biggest things i see would be investment in basic science research in our universities, and encouraging more students to think in interdisciplinary terms and take courses across fields. it is not just for engineers and scientists; it could be for students with a policymaking mind, and i hope to see the universities participate in this in a tremendous way. >> do you have thoughts? >> i agree with dr. li, but i would also point out that it is becoming increasingly hard to compete as an academic
institution, because if you look at what is happening, industry is doing fundamental research, and that is different from most scientific fields. the salary disparity between what you can get in academia and in industry is very large. and in order to do the research you need massive computational resources. the work that we just did required basically a giant cluster of something around 10,000 machines, and in an academic setting it is not clear how you can access those resources. for the playing field to be level, there has to be a story for how people in academic institutions can get access to that. the question of where the best research is going to be done is something that is playing out right now in the industry's favor. >> i would just add the fact that, as our experts have said, you don't know what you don't know. so in addition to access to data, you need to be able to test. one thing for sure is that a lot of times these things come out with surprising results, so that is the whole reason for creating safe environments to try things out and de-risk those technologies. that will be important to enable moving them -- possibly into the market -- and to hopefully solve critical, complex, real-world problems. >> i yield back. >> the chair recognizes the gentleman from illinois. >> thank you for coming to testify. i have been interested in ai for a long time.
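the adversarial-examples attack raised earlier in the hearing -- a crafted perturbation that flips a classifier's output -- can be sketched with a toy model. everything here is hypothetical (the weights, the input, the step size), and real attacks target deep image networks rather than a three-feature linear scorer, but the mechanism is the same: push the input in the direction that most changes the model's score.

```python
def score(w, b, x):
    # linear decision function: positive score -> predicted class "stop sign"
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_like(w, b, x, eps):
    """move each feature by eps against the sign of its weight -- the
    direction that most decreases the score (fast-gradient-sign style)."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.6, -0.4, 0.8], -0.2   # hypothetical trained weights
x = [0.9, 0.1, 0.7]             # input the model classifies as "stop sign"
assert score(w, b, x) > 0       # clean input: correctly classified
x_adv = fgsm_like(w, b, x, eps=0.8)
# x_adv now scores negative: a small, structured nudge flips the prediction
```

the printed-patch attack described in the testimony is the physical-world analogue: the perturbation is confined to a patch region, but it is found the same way, by following the model's gradient.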
back in the 1990s, working in particle physics, we were using neural network classifiers to try to -- when i couldn't stand it during the government shutdown, i'd work through part of the tutorial on -- the algorithms are not different from what we were doing in the 1990s, but the computing power difference is breathtaking, and i resonated with your comments on the huge increase in dedicated computing power for deep learning. that is likely to be transformative, and we have to think through that, because even with no brilliant new fundamental algorithms there will be a huge leap forward. i am the cochair of the future of work task force, where we have been trying to think through what this means for the workplace of the future. i would like to submit for the record a white paper, and i will be asking, for the record, if you can take a look at it and see what sort of coverage you think this document has for the near-term policy responses, because this is coming at us faster than i think a lot of people in politics really understand. i will also be asking, for the record, for the best sources of information on how quickly this will be coming at us. there are conferences here and there that you all attend; i would be interested in where you think the technical experts and economic experts and labor economists and people like that come together in the same room. i think it is something we should be putting more effort into. on another note -- i have been involved in congress in trying to resurrect the office of technology assessment. what the gao did was very good: bringing a group of experts in. you brought in a good set of experts, and now we are getting a report on this. congress needs more bandwidth than that on all technological issues, and this is a perfect example -- a group of experts whose opinions are current. the office of technology assessment for decades provided immediate, high-bandwidth advice to congress on all sorts of technological issues. and so we are coming closer and closer every year to get
refunded after it was defunded in the 1990s. so -- to ask you a question, is there anyone on the panel who thinks that congress has enough technological capacity, as it currently stands, to deal with issues like this? >> i can answer that. it's a huge problem. it has been aggravated by the fact that people have decided, in their wisdom, to cut back on the size and salaries available for congressional staff. one of the previous members talked about the difficulty the federal government will have in getting real professionals, top-of-the-line professionals, in here, and we are seeing that members of congress will do anything but give them the salaries that will be necessary to actually compete for those jobs. >> mr. brockman, i would
advocate that everyone take a look at your reference five in your testimony. i stayed up way too late last night reading that. members of congress have access to the classified version of a national academies of science study on the implications of autonomous drones, and this is something that i think has to be understood by the military. we are about to mark up a military authorization bill, an appropriations bill, that is spending way too much money fighting the last war and not enough fighting the wars of the future. and then, finally, dr. li, on the educational aspects of this -- one thing i struggle with: if you look through the bios of people who are the heroes of artificial intelligence, they tend to come from physics, math, places like that, and a huge fraction of -- is ai like that? are there a small number of heroes who really do most of the work while everyone else sort of fills in? >> like i said, it is a very nascent field. ai as a science is still very young, and as a young science it starts with a few people. i was also trained as a physics major, and i think about the early days of newtonian physics, and that was a smallish group of people as well. it would be too much to compare directly, but what i really do want to say is that we are still in the pre-newtonian days of ai. we are still developing this, so the number of people is still small. having said that, there are many people who have contributed to ai. their names may not have made it to the news, but these are the names that, as students and experts of this field, we remember. and i want to say that many of them are members of underrepresented minority groups. there were many women in the first generation of ai experts. >> two or three clicks down in the references cited by your testimony, if you look at the papers and the author lists, it is clear that our dominance is due to immigrants. okay? and, dr. li, i expect that you did not come to this