tv Washington Journal Gopal Ratnam CSPAN June 27, 2019 6:57pm-7:29pm EDT
>> "washington journal" continues. host: it is our spotlight on magazines segment here on "washington journal."
gopal ratnam is a senior staff writer with cq "roll call." he has a piece in the june edition of the cq weekly: as artificial intelligence takes hold, the government struggles for the upper hand. in this substantive piece of yours, you have a description of what ai is, so let's start there. when we talk about artificial intelligence, what are we talking about? guest: well, thanks for having me. like i explained, the two basic things that go into making an artificial intelligence system are large amounts of data and then a software program that can look at patterns in that data and either extrapolate from that data or make predictions, similar to what human beings might do.
host: your piece looks at several different areas: data, algorithms, machine learning, and deep learning. what do the terms "machine learning" and "deep learning" mean? guest: machine learning is computer scientists and engineers training a computer to understand what is in the data. one of the examples in the piece is that there are two kinds of learning. one is where these computer scientists are actively showing the computer an image, for example, and helping it learn whether what they are showing is a cat or a dog, so it can make that decision. in deep learning, the computer tries to find that on its own. i offer an example of that, where if you feed the computer millions of images -- bits and pieces of, for example, a cat, over 7 million pieces of that data -- the computer will figure out, ok, so that is a cat, and this is how i can distinguish it from, let's say, some other animal. these things do not happen with human beings; a small child does not need to be shown millions of pictures before it can understand what a cat is. host: artificial intelligence has brought advances including voice and text recognition, with machines learning to achieve cognitive tasks once reserved for humans.
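the supervised case the guest describes -- showing the computer labeled examples until it can decide on its own -- can be sketched in plain python. everything below is invented for illustration: the two-number "features" stand in for measurements taken from an image, and the classifier is a toy nearest-centroid rule, not any particular production system.

```python
# Toy supervised learning: we show the computer labeled examples
# ("this is a cat", "this is a dog") and it learns a rule.
# Feature values are invented stand-ins for image measurements,
# e.g. (ear pointiness, snout length).

def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(examples):
    """examples: list of (features, label). Returns per-label centroids."""
    by_label = {}
    for feats, label in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, feats):
    """Classify new features by the nearest learned centroid."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], feats))

labeled = [
    ((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),   # pointy ears, short snout
    ((0.2, 0.9), "dog"), ((0.3, 0.8), "dog"),   # floppy ears, long snout
]
model = train(labeled)
print(predict(model, (0.85, 0.25)))  # lands near the cat examples -> "cat"
```

the deep-learning case the guest contrasts this with would drop the human-provided labels and let the system find the cat/dog distinction itself from millions of raw images.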
in your reporting on this, is it happening faster than experts predicted? guest: i would say it is happening slower. that is, it has taken a lot longer for computer scientists to get to this point because of varying factors. it required very advanced machines and, obviously, a lot of data, which was not available years ago, and now you are seeing a bit of acceleration. the most obvious area is facial recognition, which is spreading over mobile devices and in public, and law enforcement agencies are trying to use that to identify potential suspects. there is even a possibility that commercial enterprises, companies, could use your face. already, if you have an iphone x, your i.d. is your face. the next development could be that stores use your face as you are walking around to send you advertisements and so on. so there is some concern about whether that is getting out of control, and there is some legislation to address that -- how can your face be used as data, who can send you messages -- and civil rights and privacy advocates are also asking for legislation to control how law enforcement agencies use that kind of information. host: gopal ratnam is our guest. the piece in cq weekly looks at artificial intelligence. we want to hear your thoughts, particularly on the federal role, the regulation of artificial intelligence, ai. republicans, call in at (202) 748-8001. democrats, (202) 748-8000. and for independents and all others, (202) 748-8002. in terms of technology in general, how prepared is congress for artificial
intelligence? how up to speed are their staffs and the members themselves? they were criticized recently for not knowing how facebook functions, for example. guest: i would say they are still catching up, i would imagine, in terms of understanding the complexities. at the same time, i will point out there is an artificial intelligence caucus in congress, and more knowledgeable members are trying to educate the less knowledgeable members on the technology and its applications. but it is still a steep climb, and as i point out in the article, congress kind of did not get its arms around the internet era -- all the things we know in terms of google searches, twitter, facebook -- and now, on top of it, we are overlaying all of these advanced technologies that can do decision-making on their own. so i think members of congress are kind of grappling with both at the same time. they are trying to get up to speed on the last set of technologies while at the same time trying to stay on top of what is emerging right now. host: back to facial recognition for a second. one of the jurisdictions, and i think it is in california, has beaten the federal government to the punch in banning the use of it entirely? guest: san francisco, as i point out in the article, is the one that has banned the use of facial recognition software by its police department. for example, police would take a video of a crime scene in which there is an individual's face they think is a suspect, then they would run it through a database to see if there is a match. civil rights advocates are saying that is being done without a warrant. it is almost like getting your
fingerprint without your permission and doing a search based on that. host: have we, as consumers, kind of given that up -- in terms of not just facial recognition but fingerprints? you use your thumb to open some iphones and other technology, and, as you pointed out, facial recognition in the latest version of the iphone. have we, as consumers, already sort of ok'd that technology? guest: that is the heart of the debate, because executives like mark zuckerberg, the ceo and founder of facebook, have argued that the old idea of privacy is a relic of an earlier era -- that consumers are willing to give up some of their privacy for greater security. so that is the argument of the other side. and that is kind of where members of congress and lawmakers are trying to draw a line -- have we crossed the line too far? do we need to scale it back a little bit? host: we are talking artificial
intelligence. let's hear what our viewers have to say. we go to hot springs national park in arkansas. j.d., good morning. you are on the air. caller: good morning. i am not a religious person, but i hear these christian people talking about the antichrist, the antichrist. i do not know if there is such a thing, but i think if there is, it must certainly be the internet. i mean, there is more crime going on on the internet than anything i could ever imagine -- and i am an old man -- in my lifetime. you hear about these cities being held for ransom. you are an expert on this. explain to me how these people can hold a city for ransom, like atlanta, georgia, and make them pay millions of dollars to get their internet files back. can you explain that? host: and the city of baltimore, in maryland. guest: thanks for that question. the caller is talking about instances, in atlanta and more
recently baltimore, where hackers have taken over control of the city's computer systems, have encrypted all the data on those systems, and have demanded a ransom from the city to, you know, give the data back. and this is happening now, it seems, with some frequency. of course, that is not quite what we are talking about in the article, but it is a concern, and there are a couple of reasons why. number one, i think cities are not as well equipped as bigger agencies are in terms of protecting their computer systems from that kind of an attack. and number two, there has been some debate on whether the tools that the united states government is developing to essentially attack adversaries outside the country are somehow leaking, getting into the hands of the bad guys, and being used to attack american cities. so it is as if we developed a master key to unlock, you know, the bad guy's door, and now copies of those keys are floating around, and the bad guys are coming around to open our door. that is kind of the analogy. host: let me ask you about a piece of the legislation you are reporting on, specifically the algorithmic accountability act. what could that do in terms of technology and ai? what are they proposing? guest: we talked about the importance of data in developing artificial intelligence systems. if, for example, you look at housing data and loans being made for housing -- if you look at 50 years of data -- you would find there is a lot of bias. there is a history in this country of redlining and declining loans to minorities, and african americans in particular. if you now overlay an artificial intelligence system making decisions based on this data, it is likely to continue those biases. human biases will get transferred to machines.
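the bias-transfer point the guest makes can be illustrated with a toy sketch. all records below are invented: a "model" that merely learns historical approval rates per neighborhood will faithfully reproduce past redlining when it decides on new applicants.

```python
# Toy illustration of bias transfer: historical loan decisions that
# redlined neighborhood "B" become the training data, and the learned
# rule reproduces the same pattern on new applicants.
# All records are invented for illustration.

history = [
    {"neighborhood": "A", "approved": True},
    {"neighborhood": "A", "approved": True},
    {"neighborhood": "A", "approved": True},
    {"neighborhood": "B", "approved": False},  # redlined in the past
    {"neighborhood": "B", "approved": False},
    {"neighborhood": "B", "approved": True},
]

def train(records):
    """Learn the historical approval rate for each neighborhood."""
    counts = {}
    for r in records:
        n = r["neighborhood"]
        total, approved = counts.get(n, (0, 0))
        counts[n] = (total + 1, approved + (1 if r["approved"] else 0))
    return {n: approved / total for n, (total, approved) in counts.items()}

def decide(model, applicant):
    """Approve when the learned neighborhood rate clears 50% --
    the human bias in the data is now an automated rule."""
    return model[applicant["neighborhood"]] > 0.5

model = train(history)
print(decide(model, {"neighborhood": "A"}))  # True  -- inherits old favoritism
print(decide(model, {"neighborhood": "B"}))  # False -- inherits old redlining
```

this is the pattern auditing proposals like the algorithmic accountability act are aimed at: the rule looks neutral, but the data it learned from was not.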
if a machine then starts doing very rapid decision-making, we might not be able to find those discrepancies. so what they are trying to do is make sure the underlying data is free of bias. host: let's hear from jack in salem, oregon. republican line. caller: good morning, c-span. good morning, america. i have questions and comments. my comment would be -- well, i guess it is a question -- can you expand upon how mark zuckerberg became so involved with the google project? and what is happening with china collecting massive amounts of medical data from us? i just read about that recently, that they have been collecting massive amounts of our medical data, so that it can be used in their, um, i don't know what it is called, where they have a system of, uh -- host: all right, jack, we will let you go, since we got the first part, in terms of the medical data issue with china and mark zuckerberg. guest: i am not sure about china getting american medical data. the caller may be referring to the mass amount of data that china is collecting on its own citizens through mass surveillance, and that is something lawmakers in this country and privacy advocates are worried about -- china is doing this mass surveillance of its citizens to make sure there is no movement
in that country, and they are using things like facial recognition and artificial intelligence. and that is, again, built from a lot of data that china has on its own citizens. so people here are worried that if we do not address those issues, we could end up in a situation somewhat similar to that, where law enforcement agencies use the technology for those purposes. host: are law enforcement agencies or other entities pushing back? guest: they are pushing to make sure that their hands are not tied in terms of being able to use the data. but at the same time, i think there is concern that, you know, as some of the studies have pointed out, there is a potential for one in two americans to be scanned by a police department for one situation or another. host: and your piece points out the rise of all sorts of interested parties in terms of legislation on artificial intelligence -- a graphic, "influencing the future of artificial intelligence," shows more industry lobbying players seeking to shape the federal debate over ai. you can graphically see the rise in that. we go to sue next in syracuse, new york. independent line. caller: yes, sir, frankly, i tend to think of artificial intelligence as one of the major existential threats to the human species, basically because, as has been pointed out, the machines are learning themselves, ok? they are teaching themselves to do things at a rapid rate, and the things that they are teaching themselves to do, again, increasingly, are going to be out of the purview or the ability of human beings to
control, and the problem is that the more we put ai in charge of so many things, the more we are taking human beings out of the equation. and as the gentleman pointed out, the people who formed the algorithms that are being introduced into the process have an enormous amount of power, ok, and the ability to, as he already pointed out, really alter so many aspects of our life. i really think it is an existential threat, along with climate change, and it is something that, instead of cheering on, we should be limiting severely. host: a number of technology leaders have used that very same phrase. guest: that is right. for example, even the founder of microsoft, bill gates, has said that the technology needs to be controlled, and so has the founder of tesla, elon musk. so that is kind of the heart of the debate. i am not sure if we are at a point where we are losing control of the technology and the machines are taking over -- that is kind of the fear -- but we are not at that point yet. the other point she is making is that the people writing the algorithms have a lot of power. having talked to technologists for this story, it is not that they are seeking this type of power. i mean, these are technologists who are just trying to enhance their capacity and ability, and if the outcome of that is large, powerful machines, that is seen as an unintended consequence. host: you write in the piece that, in the absence of federal law, microsoft has offered six voluntary principles in developing facial recognition technologies. are other technology companies following suit,
not just on facial recognition but in other ai practices? guest: it is not clear. there is a broad, sort of industrywide movement to do that -- self-regulation, self-policing -- but microsoft is kind of an outlier on that score. they realize, for example, that if facial recognition technology, the algorithm, is trained on faces that are mostly caucasian, the system may not be able to recognize a darker face, a minority person's face -- and the same thing with gender, too -- so that is one of the things they are trying to fix. host: do you think u.s. officials are learning a lot from the chinese experience, however much we may disagree with it? guest: i am not sure they are from firsthand accounts, but from media reporting, yes. host: let's go to california, democrats line. good morning. caller: i do not have a whole lot to say about ai, but it makes a lot of guesses about things. there was a book written by kai-fu lee talking about the superpowers in ai. it said ai was going to eliminate about one million jobs over the next 20 years or so. in addition, all of the information, the massive amounts of information being collated and collected and managed, is being used in ways that are more like something you would read out of a george orwell novel or something. the machines truly are in charge, whether we would like to believe it or not, and it is an unfortunate thing, because there are a lot of people that need to work. and the smarter we get, seemingly, the fewer people work. so i thank you very much for
listening. host: thanks. guest: lots of interesting points there. as far as making a lot of guesses, that is perhaps true, because, if you are doing a google search today, the moment you start typing the first couple of letters or words, the machine is already guessing what it might be, based on the kinds of things people have been searching along those lines. so you can actually see, when there is specific news breaking, that the computer is able to guess what people might be searching for. the caller also mentioned the superpowers. obviously there is a global race to be, like, the world leader in artificial intelligence. china has said it wants to be the leader, and they are leading the effort, and that is why there is a push in the united states to get ahead on these advanced technologies, including through regulation.
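the search "guessing" the guest describes can be sketched as a prefix lookup over past queries, ranked by how often each was searched. the query strings and counts below are invented for illustration; real autocomplete systems are far more elaborate.

```python
# Toy search autocomplete: suggest past queries that share the typed
# prefix, most frequently searched first. Counts are invented.

past_queries = {
    "weather today": 120,
    "weather radar": 80,
    "washington journal": 45,
    "artificial intelligence": 30,
}

def suggest(prefix, queries, k=3):
    """Return up to k past queries starting with `prefix`, most frequent first."""
    matches = [q for q in queries if q.startswith(prefix)]
    return sorted(matches, key=lambda q: queries[q], reverse=True)[:k]

print(suggest("wea", past_queries))  # ['weather today', 'weather radar']
print(suggest("wa", past_queries))   # ['washington journal']
```

when news breaks and many people suddenly search the same phrase, its count jumps and it rises to the top of the suggestions -- which is why, as the guest says, you can watch the machine guess what people are searching for.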
you also mentioned jobs, and that is also a concern. i would point out that, just as technological advances have led to losses in manufacturing jobs, it is likely that artificial intelligence advancements will lead to the loss of white-collar jobs -- loan officers, insurance specialists, legal analysts -- because ai could handle a heck of a lot more cases a lot faster. so those are obviously concerns. host: let's hear from pittsburgh. independent line. good morning. caller: good morning. the question i have is, we learn about nature versus nurture and such, and with the algorithms presented to these computers, ai, at what point do we actually consider it to be sentient, with a mind of its own? and i will hang up and listen. guest: um, that is a great question. again, like i point out in the article, right at the beginning, when computers were just kind of making their appearance on the world stage, scientists and technologists were looking at and hoping for that kind of a sentient machine. but i think we are very far from that, because, for example, computers are still at a point where they are trying to distinguish a cat from a dog or a dog from a wolf, and so i think, although it is potentially possible, we are far from the time when a machine is as sentient as a normal human being, although that is the fear. host: we talked about this a little bit. in september, ai now issued a report documenting technologies
where governments are using ai for decision-making, such as teacher assessments, criminal risk assessments, and eligibility for medicaid and other government benefits. in detroit, this is james. you are on the air. caller: is this me? host: i'm sorry, william, you are on the air. caller: i have two questions. one is specific, and one is general. i am participating in a website called quora. i guess it is social media. there is no reason people would know who i am or want my opinion, so i assumed i was having a lot of communications with ai, and then there was evidence that they were real people, like me. so my question is, how much of this interaction is with artificial intelligence on this particular website? and the other question, the general question -- what is your opinion on this? host: i am going to let you go, because you are coming in and out. did you catch most of that? guest: he talked about posting questions on a site called quora, and are they coming from ai -- host: or a machine, yeah. guest: i think today it is possible that if you are on any commercial website and a little box pops up that says, we will find the right thing for you, tell us what you are looking for -- you have a system with a box that pops up and somebody typing, and that does not have to be a human being. it can be a machine.
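the kind of scripted "chat box" the guest describes can be as simple as a keyword-to-reply table with no human behind it. the rules and replies below are invented for illustration; real customer-service bots layer statistical language models on top of this idea.

```python
# Minimal scripted chat box: match keywords in the customer's message
# to a canned reply. Rules and replies are invented for illustration.

RULES = [
    (("price", "cost", "how much"), "Prices start at $10. Anything else?"),
    (("ship", "delivery"), "Orders ship within 2 business days."),
    (("return", "refund"), "Returns are free within 30 days."),
]

def reply(message):
    """Return the first canned reply whose keywords appear in the message."""
    text = message.lower()
    for keywords, answer in RULES:
        if any(k in text for k in keywords):
            return answer
    return "Let me connect you with a representative."

print(reply("How much does shipping cost?"))
```

from the customer's side of the pop-up box, output like this is hard to distinguish from a human typing, which is the guest's point.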
both in terms of being able to respond in a text fashion and in an oral fashion -- so, like, with a voice -- we have gotten to a point where it is hard to tell if you are talking to a machine or a human. host: in preparation for our segment this morning, a viewer tweets, "it is a big day today. i had a scientific manuscript rejected by a robot. it detected a high level of overlap with previous literature." guest: that is one of the cases where machines are doing decision-making. and also, you are seeing it on social media, where companies are trying to remove objectionable content -- i mean, their machines might flag or respond to objectionable content. host: at the end of your timeline of ai, you put where we are now -- facial recognition, amazon's recognition engine -- giving a certain amount of leeway to these pieces of technology to make those decisions for us, or at least guide us along to making the decision. guest: this has been a struggle not just with this piece of technology. as with any other piece of technology, we will always want more convenience, and in order to get convenience, we have to trade away some amount of freedom and privacy and so on. so we are at the point where we are evaluating how much privacy we are trading away, and how much for convenience. host: the question, i guess, for you is -- are lawmakers hearing from their constituents saying, "no, we want more privacy," or, "we want more ease of access, information, etc."?
guest: we were talking about teacher evaluations and law enforcement, so yes -- lawmakers are hearing from advocacy groups saying, if you are doing this in order to create efficiency for government agencies, it could cause harm for certain communities of people. so we are hearing that. host: does your article get into predictive technology? in terms of likelihood -- because reportedly the chinese get into that as well -- the likelihood of individuals to be up to nefarious activities. guest: yes, we do. as i point out in the story, in the criminal justice system now, certain kinds of artificial intelligence systems are being used to decide whether someone should be granted bail, based on certain characteristics the system has developed. advocates are pointing out that that unfairly targets minorities and african americans, who do not get bail and end up spending time behind bars, and they do not have a way to respond to it, because the decision has been made by a machine, which is seen somehow as something that cannot be challenged. host: appropriately, we are receiving a call from fort meade -- james in fort meade, maryland, independent line. go ahead. caller: thanks for taking my call. one thing i wanted to ask your guest is, where do you think this whole thing where self-driving, autonomous vehicles have become the holy grail came from? it is not a societal need. it is not an economic need. where did this thing where everyone wants to do this come from? what was the drive for this? so that is the question. i would really appreciate your guest's insights on this. host: thank you. guest: an interesting question. i would agree that there is probably not an economic need, but i think people would argue that there is a social need, because it is part of, like,
further along the line of, convenience. if you think about the effort involved in driving a car, one can imagine a situation where a group of elderly senior citizens who cannot drive have the capacity for a car to drive itself; that could be of huge benefit to the community of senior citizens, for example. so i think it is debatable whether there is no social or economic need. host: autonomous vehicles will have to be the subject of another "washington journal" segment. thanks so much. you can find out more