
Sen. Marco Rubio on Deep Fake Technology | CSPAN | July 27, 2018, 4:42am-5:46am EDT

4:42 am
4:43 am
good morning and welcome to the heritage foundation. it is my privilege to lead this event here, and we are glad you came. what happens when seeing is no longer believing? when public figures are recorded saying and doing things that they never said or did? when companies have fake and altered content placed on their platforms, content that is significantly disruptive socially, economically, and politically? what happens to a society whose foundations of truth are continually subverted by our own lying eyes? this is what we will talk about today.
4:44 am
the looming challenge of so-called deep fake media that portrays things that never happened, words that don't correspond with reality. joining us for this conversation is an amazing group of people. please allow me to briefly introduce all of the participants, so we can quickly move through the formalities and into the heart of the discussion.>> first we have senator marco rubio, who i trust most of you know. he was first elected to the senate in 2010 and represents the great state of florida. among other committees, senator rubio serves on the committees on intelligence and foreign relations. thank you for being here. >> after his comments we will hold a discussion with the other guests. we will be joined by professor danielle citron. she is a law
4:45 am
professor at the university of maryland, where she teaches and writes about privacy, speech, and civil rights. her book, hate crimes in cyberspace, tackles cyber civil rights, and another project continues the exploration of privacy in the context of the internet. she worked with our next guest. her partner in crime is bobby chesney. bobby is the james baker chair at the university of texas, where he is a member of the law faculty and the director of an interdisciplinary research center that focuses on the integration of security, technology, policy, and law. he is one of the cofounders of lawfare and cohosts a weekly show and podcast.
4:46 am
finally, we have our incredibly qualified technologist. chris is a senior staff scientist and an engineering manager at google ai. he has been a professor at new york university and stanford university, and among his many awards chris has an academy award for special effects and visual effects work in movies including star wars, star trek, the avengers, and others. please join me in welcoming and thanking our guests.[ applause ] i appreciate this opportunity to be here. thank you for coming, and i want to thank the heritage foundation. we have hired so
4:47 am
many people we like to call it the rubio training center. i want to thank you. this is an interesting issue. in our culture today we react to things after they happen: something bad happens and we react. it is rarer to see what the capabilities are and what the trends are, put them together, and anticipate how bad actors could utilize technology and advances in the years to come. i am grateful to talk about it. this is the beginning both of thinking about what we can do on policy and of building awareness at every level, from the media to academia to us as individual citizens. we are here to talk about deep fakes. 99% of the american population does not
4:48 am
know what it is. frankly, for years they have watched deep fakes in science fiction, where special effects are as realistic as they have ever been, but never before have we seen the capability become so available right off the shelf. right now if you go online you will find comedy sites and places that put up videos of individuals doing things, and they range from bad lip syncs all the way to other things that are designed to look real, and you would not know if they were. and you look at the trend in the 21st century with the weaponization of information. there has always been propaganda, and information has always been a powerful tool.
4:49 am
what we have never had in human history is the ability to spread information so instantaneously, before you are capable of reacting. long ago you had to pay for it and put it on television and hope it reached people. now you can reach millions of people in seconds, and if it is not true, correcting it can take weeks or months. what does this mean in the 21st century? it means a lot of things. take the fact that you are an individual. you are up for a job, or someone is unhappy with you, and someone wants to cost you a job or an opportunity, or wants revenge, and they find a way to post a video of you doing or saying something that you never did or said, and it is highly realistic and no one can tell that it is not real, and it is embarrassing
4:50 am
and you have no way to track down who did it, no way to disprove it, and people will say i saw it with my own eyes. it is one thing to deny you ever wrote something, but when someone hears you and sees you say it, you have no way to fight back. even leaving doubt could be problematic. imagine that applied to a business. you are on the verge of a public offering, or a competitor has a reason to knock down your share value or destroy your business, and suddenly they post videos about your business, or maybe of the ceo of the company saying something they never said, and this is a real opportunity for those who want to damage the business community. in the case of politics, imagine for a moment, and i have thought about this, if someone were to go to any media outlet, cnn or fox or cbs
4:51 am
or nbc, any of them, and sent them a video of me saying something outrageous, like the patriots beat miami or something like that. or they sent them a video of a public figure taking a bribe or saying something racially insensitive or outrageous in a private setting, the kind of thing that gets leaked as a secretly recorded video. what was the number mitt romney used? 47%? imagine something like that, except it did not happen. that information is given to the media outlet. they will call you for a reaction, and you tell them you never said that and it did not happen, and they say, listen, we have a video, and it is your voice and face. it was you
4:52 am
and we are running with it. you tell them, it is not me, i was never there, or maybe i was there but i have 15 people who will tell you that did not happen. i think that is a difficult decision for the media to make in our current environment. the likely outcome is they would run it with a quotation saying we contacted the senator and he denied it was him. the vast majority of people watching will believe it. if that happens two days before an election, or the night before an election, it could influence the outcome of the race. the capability to do that exists now. a culture that will perpetuate it and instinctively want to believe that stuff exists now, because the nature of the coverage is driven by conflict. every single morning, starting at 6 am or 7 am, every news outlet begins the day by pointing to
4:53 am
some outrage they want you to be fired up about and proceeds throughout the day to put people together to fight about it on-screen. that is the cycle we have in media coverage: find an outrage, hire a couple of commentators to fight over it, and get people to react to it. that is fine. we are a free country with a free press. but imagine in a culture like that how a manufactured outrage would spread like wildfire online. i don't believe any individual or any campaign organization in america has the capability to knock down the spread of false information fast enough. i think overall it is a very attractive weapon for someone who seeks to interfere in our politics. traditionally we would look at that and say it is something someone else
4:54 am
would use, one of these political groups. but what about a nation? a nation with capabilities that exceed those of any political party, whose agencies decide to weaponize an instrument like this. i don't just believe, i know for a fact, that the russians tried to sow chaos. not necessarily, as some might report, to elect one candidate over the other, but so that the next president of the united states would face a cloud of controversy for weeks and years to come, to weaken us. we would be dealing with the russia issue right now no matter who was elected. i think vladimir putin sits there and says it worked.
4:55 am
we have a society at each other's throats. it was happening already, but i was able to pour fuel on the fire and weaken them further. they did that through twitter and through a couple of other measures that will come to light. they did not use this. imagine using this. imagine injecting this into an election. i want people to understand these are threats that go beyond annoyance. i want to take you back to florida in the year 2000. we had a race in florida in the year 2000 that was decided by less than 600 votes in one state. it had an additional complexity, and that is that the candidate for president, his brother was the governor. imagine for a moment that at some point on the day of the election, in some county of the
4:56 am
state, 600 democrats went to vote and were not allowed to vote because they no longer appeared on the registration rolls of the county, and that information was fed to cable news outlets or groups who jumped all over it as an effort by the republican-controlled state to deny them the right to vote. 600 democrats did not vote because they were not allowed to vote, and the candidate won by less than 600 votes. people would call it an illegitimate election. the courts would decide, and prove that the people got provisional ballots, but that would get lost in the broader debate. you would have a significant percentage of the population, and of the armed forces, who at a minimum doubted whether the person that won was the person the voters actually wanted.
4:57 am
that is what it would mean if someone could change voter registration. add to that the ability to influence the outcome by putting out a video of a candidate on the eve of the election, doing or saying something, strategically placed and altered in a way to drive some narrative that could flip enough votes to cost someone the election. put that together and what you have is not just a threat to our elections but a public crisis unlike any we have faced in the modern history of the country. this sounds fantastical and exaggerated and hyperbolic, but the capability to do all of this is real and exists now. the willingness exists now, and all that is missing is the execution, and we are not ready for it. not as a people, not as political branches, not as media, not as a country. we are not ready for this threat.
4:58 am
maybe it will be russia. they are the likeliest culprit, but it could be anybody taking us on, because one of the ironies of the 21st century is that technology makes it cheaper to be bad. in the old days, if you wanted to threaten the united states, you needed aircraft carriers and missiles. today you need access to our internet. increasingly, all you need is the ability to produce a realistic fake video that could undermine an election, throw the country into crisis internally, and weaken us deeply. i am grateful you provided us this forum to have this conversation. you all look scared, but that is good. it is a threat we should be
4:59 am
aware of, and one for which i don't have every answer. awareness is part of it: educating people and political figures, awareness that this exists. from there we will talk about how we can balance the right to privacy and free speech and all of the things that come with constitutional protections against our obligation to protect our country and our republic. it is a 21st century threat that no one has ever been presented with, and we have a lot of work to do, and i hope today is the beginning. thank you for giving me the forum to do it. thank you very much.[ applause ]
5:00 am
i would love to tell you it gets lighter from here. it does not. but we do think we can provide helpful context that will enable all of us to think more deeply, and hopefully more profitably, about the challenge. chris, i want to begin with you from a pure technology standpoint to give us context and to flesh it out a little bit. fake videos and altered media have been around for a while, but there does seem to be a perception that the challenge of deep fakes is coming into its own. i wonder if you could help us understand what the drivers are behind it.>> thank you for
5:01 am
inviting me to the panel. as senator rubio mentioned, it is nothing new to generate fake faces, but it used to be very hard to do. you needed visual effects artists and complicated systems. recently, several universities and other entities started publishing systems, some of them called puppeteering systems: you take lots of video of somebody and use machine learning to change the lips or other parts of the face, so it looks like the person said something different. last year there was an entire session on this at the main computer graphics conference; it was a leap forward. also, you have heard about deep
5:02 am
fakes. the deep means deep learning. neural networks are as old as 50 years, but they used to have a very small number of units with a small number of connections. about 10 years ago people figured out, because computers got more powerful and there was so much more memory available, that you can build deep networks. the deep comes from hundreds of layers and in some cases billions of connections, and they become powerful. this happened, and academics started using it to generate better images of faces, but still the visual effects industry was better at generating them. visual effects do a good job if
5:03 am
you cannot detect it as a visual effect. you have seen lots of movies where it is fake. what happened with these face generators is they were not that accessible. they were written by graduate students at universities and hard to use, and you could debunk the results really easily. i don't want to put down those groups; there is a lot of progress there. but just recently, in december of last year, somebody posted code on reddit, it was actually discussed a few months before, but somebody posted code that can do this. deep fake code. a deep network, and if you have some software engineering skills you can download it, turn it into an application, collect images of the
5:04 am
faces that you want to replace, buy a graphics card that costs less than $1000, and let the system run on a home computer or laptop, sometimes for a day or several days, and it creates a deep fake. more recently an entire community evolved out of this, generating more and more deep fake techniques. networks historically were very good at detecting things: this is a face, this is a house, this is a car. the recent advance is that now i can generate an image of a face. that is something completely new that we did not know how to do five years ago.
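for a sense of what that kind of posted code does, here is a minimal sketch of the shared-encoder, two-decoder autoencoder idea behind the hobbyist face-swap tools. it is written in pytorch; the layer sizes, names, and training loop are illustrative assumptions, not the actual code from reddit.

```python
# a minimal sketch (pytorch) of the shared-encoder / per-identity-decoder
# idea behind hobbyist face-swap tools. all layer sizes, names, and the
# training loop are illustrative assumptions, not the code posted on reddit.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """shared across identities: learns face structure (pose, expression)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(), nn.Linear(128 * 16 * 16, 512))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """one per identity: renders that person's face from the shared code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # person a, person b
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) +
    list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(batch_a, batch_b):
    """batches of 64x64 rgb face crops, one batch per identity."""
    loss = (loss_fn(decoder_a(encoder(batch_a)), batch_a) +
            loss_fn(decoder_b(encoder(batch_b)), batch_b))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    """the actual swap: encode person a's frame, decode with b's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

because both identities pass through the same bottleneck, the encoder is forced to represent pose and expression generically, so decoding person a's code with person b's decoder renders b's face performing a's motion. run for a day or more over thousands of face crops on a consumer gpu, this is roughly the loop those downloadable tools automate.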
5:05 am
out of that community they generated this one thing called fakeapp. you just download it on your pc and run it. that changed the game, but it has a lot of parallels with what happened 20 years ago when photoshop came out. people photoshopped photos and people believed them. now we are on to it: we say, this is a photoshopped image. with deep fakes there is arguably already a lot more awareness: this might be a deep fake; we don't trust videos anymore.>> you mentioned ways to tell it was not real, but how difficult is it currently to detect a deep fake video? and is it safe to
5:06 am
assume the detection capability will evolve as quickly as the deep fakes themselves? will our ability to find these out as fakes always keep pace? >> that is a good question. most of the deep fakes out there are easy to detect, even with the untrained eye. what happens is the faces flicker a little bit. the eye blinks are inconsistent. when i speak i have head motions, and a deep fake does not do that. and there is amazing stuff: mit developed something where you can enhance the face, so if you have a real video you can detect, just from the change of redness in the skin, what the pulse rate of the person is.
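as a rough illustration of that pulse cue, here is a toy sketch in python. the frame rate, frequency band, and threshold are assumed values, and it presumes face crops have already been extracted from the video:

```python
# a toy sketch of the pulse cue described above: the average skin redness of
# a real face oscillates at the heart rate, which many synthesized faces lack.
# the frame rate, frequency band, and snr threshold are assumed values.
import numpy as np

def mean_redness(face_crops):
    """face_crops: list of hxwx3 uint8 rgb face images, one per video frame."""
    return np.array([crop[..., 0].mean() for crop in face_crops])

def has_pulse(face_crops, fps=30.0, band=(0.7, 4.0), snr_threshold=3.0):
    signal = mean_redness(face_crops)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])   # plausible heart rates
    out_band = ~in_band & (freqs > 0)
    # a live face should show a clear spectral peak inside the heart-rate band
    snr = spectrum[in_band].max() / (spectrum[out_band].mean() + 1e-9)
    return snr > snr_threshold
```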
5:07 am
>> they came up with techniques that can detect the area of pixels that are wrong, and as soon as deep fakes came out they started building systems that can detect whether something is a deep fake and tell you where the manipulation is. another interesting research direction is tracing where a video originates from. i think you can say it is a cat and mouse game. depending on the expert you talk to, the detection algorithms are ahead of the fake algorithms
5:08 am
right now. >> in addition to those types of things, we see several companies coming up with digital solutions like watermarks and better metadata in the videos. will that solve it? >> no. the reason it will not automatically solve it is that it depends on the uptake of the technology. the main thing we need to be concerned about is deep fakes that can propagate quickly and widely, spread as misinformation, and have the effect the senator described. that will be a function of the platforms, whether we are talking about what the nightly news carries or what is on facebook or twitter etc.
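to make the watermark and provenance idea concrete, here is a minimal sketch of one way such a scheme could work: the capture device signs a hash of the video, and a platform verifies the signature before treating the file as authentic. this is an illustration of the general concept, not any specific company's system; it uses the third-party `cryptography` package.

```python
# a minimal sketch of a provenance-signing scheme of the kind described: the
# capture device signs a hash of the video file at record time, and a platform
# verifies the signature before showing a "provenance confirmed" indicator.
# illustrative only; which real system wins adoption is still unsettled.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

def video_digest(path: str) -> bytes:
    """sha-256 of the raw file, streamed in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# on the capture device: sign at record time
device_key = Ed25519PrivateKey.generate()

def sign_video(path: str) -> bytes:
    return device_key.sign(video_digest(path))

# on the platform: verify against the device maker's published public key
def verify_video(path: str, signature: bytes, pubkey: Ed25519PublicKey) -> bool:
    try:
        pubkey.verify(signature, video_digest(path))
        return True
    except InvalidSignature:
        return False   # file altered after signing, or wrong key
```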
5:09 am
if a platform has a gatekeeper, and they embed, as a form of filter, a check that the imagery has the right watermark or a provenance-confirming validity system, great. but there is no reason to think that will happen immediately. there are a lot of entities developing watermarking and validity solutions, and which one gets to be the winner? there is going to be a lot of variability, and insofar as there is, let's say, a coordinated action, that presents its own issues: we will all settle on this new thing that chris has invented and we will all use it. if it is cumbersome and acts as a friction point for users like
5:10 am
all of us that are putting this content up on the sites, then unless they all do it, you might find it to be a bit of a pain. it is easier to download this other app; myspace does not have the filters and it is more fun and interesting. it could be the case that that protection will be built into the platforms that spread this content, but i am skeptical that it will happen anytime soon.>> you can imagine the emergence of, and you mentioned this in your paper, embedded verification and personal tracking of your location and where you have been in the past. if you are someone of public importance and this video comes up, you might want to demonstrably show that i
5:11 am
was never in the building. if you consider those types of solutions, what privacy concerns come up? are you concerned? >> that motivated us to write this paper: the worry that we would need to account for where we are at all times, and that we might unravel our own privacy to protect ourselves from deep fakes. we will see a market for essentially logging where we are all the time, engaging in all sorts of activities where we are tracked and traced and categorized in all sorts of ways, the idea being that we would need to do it as a matter of self-defense, and that if we didn't do it, it would suggest we were hiding something. what troubled bobby and me was, we know there is
5:12 am
incredible harm, and we can talk about the capabilities and the harms for individuals and for societies; the harm is real and palpable and significant. but there are longer-term concerns that we might unravel our own privacy in the longer run, and that is incredibly troubling: it gives an extraordinary amount of power to the companies and governments that hold the data. that is worrisome.>> talk a little bit, because i want to come back to the privacy and broader implications, about the point you raised in terms of the personal implications. if you are an individual citizen, anyone in this room, and one of these videos is generated, what are the implications? you wrote about cyber stalking. what happens next? >> this comes up and grabs our
5:13 am
attention because we saw this with celebrities and pornography: exploiting someone's image in ways that make it look like they are having sex when they are not. being reduced to a sex object is something you never consented to, and especially for people who are not public figures, imagine the damage it does. it is not even that you shared a nude image or allowed an ex to take a picture, though victims are punished for having done so even though they have done nothing wrong. now we have apps where anyone can easily download the capability. it creates the harm without you ever having to take the
5:14 am
selfie or agreeing to share anything. we can create sex videos about anyone, and you can see it being misused in abuse. look at the narrative in the comments when people write about these tools. the comments i am constantly reading are from people saying, i have an ex-girlfriend with 500 pictures that i can harvest, help me make a deep fake tape. there were so many comments from people saying they can't wait to use the technology. there is a desire, unfortunately. the damage is profound. it is a deep fake, and it is prominent in a google search of your name.
5:15 am
90% of employers use google searches. it is not even that they believe the person made the sex video; it is cheaper and easier to hire someone without the baggage. it is psychologically damaging to be put into a sex video, in ways you could never imagine, and it has an impact on your life. victims lack the resources to fight it; it is almost impossible for the everyday person. we saw google in the summer of 2014 announce they were going to de-index nude photos posted without consent from searches of people's names, the revenge porn problem. that was an incredibly important move at the time,
5:16 am
because what you want is for it not to be searchable. it does not solve the problem, but at least the employer, the clients, the friends will not see it. >> i will take those implications and pull up a little bit, think about the context, and frame the question. senator rubio mentioned russia for good reason. a little context for the audience: in the past the russian government has used manipulated information and falsified data to marginalize and constrain political opponents. just recently they were deliberately targeting the cell phones of nato soldiers and deliberately feeding them false information as a means
5:17 am
of directing and shaping a battlefield. do you think it is realistic that a hostile competitor like russia or someone else might leverage this type of capability, not just for its own domestic consumption and purposes but even in the international environment? where are we then? >> clearly yes. misinformation warfare is not unique to the russians, but they are masters of it. imagine, if you will, and this will sound far-fetched, imagine the president of the united states going into a one-on-one meeting with the head of russia with no one there to take notes other than the president. things were said, and everyone wonders what was said. lo and behold, there is audio that is hard to debunk that sounds like the president
5:18 am
saying, don't worry, we will never lift a finger over, fill in the blank with the vital national interest. the ability to go beyond the fakery that is already possible, where your own ears tell you what you heard, makes it more powerful. there are many ways to disrupt international relations with this technology, and there are other ways too. lawfare, as many people know, is a term that originates with charlie dunlap, talking about the ways someone can make use of legal rules to hamstring an opponent. imagine fabricated evidence that our forces have done this or that, say a
5:19 am
claim about killing civilians or harm to civilian populations. you could have actors play the role or impersonate someone, but how much better if you can use the technology of deep fakes to inflame tensions. or perhaps domestically, where tensions between the police and local communities are running high, add the chief of police saying racist things. the potential for mayhem is off the charts. we can generally debunk it, but the truth does not ever quite catch up with the initial lie if it is emotional and juicy enough.>> it is important to think of this outside of what happens inside our own borders. we will use russia because it is realistic.
5:20 am
you can imagine how they might use this in developing regions to shape political situations. it does not land on us specifically, geographically, but it is a situation where we are engaged, and it may influence a decision as to what type of support or aid or deployment might be forthcoming in the wake of it. we are talking about a capability, in this media age, that i don't think it is an overstatement to say could fundamentally shape the environment in which we are making policy, both domestic and foreign.>> it makes me think of a line we have in the paper. we have a concern about trust decay and truth decay. a lack of trust in each other
5:21 am
.>> i think this is something senator rubio did not have an opportunity to address. are there any challenges that come with greater awareness? as we educate and have this conversation, are we raising a new challenge alongside it? >> the more successful we are at getting people to be aware of the nature of the problem and cultivating skepticism, the more we empower the shameless liar. think of the case where the video is real and the audio is real and it exposes something wrong, and the person has the shamelessness to deny it. this has already happened. how much more room is there to deny real evidence when people have been pounded with the message, by us: beware of deep fakes, because
5:22 am
pictures and video can be manipulated. it becomes the cry of deep fake news. that is the other edge of getting people to be on guard.>> we will push a little bit more and then release. let's get into this. >> how bad could this get, as you have thought about it? >> i think you have got two sides. they are total opposites, but they exist together. on the one hand, nothing is believable. that is what we call the liar's dividend: nothing is believable, so everyone gets to deny everything.
5:23 am
there is no truth; we have really done incredible disruption to each other. on the other hand, the deep fake itself is damaging. that is the kind of thing where we say, i believe that. it plays on gender stereotypes: for a woman, you see the sex tape and you think, she is available, and i am not going to hire her. it is costly to individuals, but at the same time, culturally and politically, we lose faith in public discourse. here we are, and i think to me that is the nightmare scenario. let's add what drove us to write the paper, which is the audit trail we would create.>> as
5:24 am
you said earlier, it will be everybody trying to purchase a service from a third party. you've got a token on your jacket that is recording everything. maybe at first it is only a few people running for office or in sensitive positions, then employers want employees to wear it, because lord knows what they could get up to. you run the risk of accelerating surveillance trends, an omni-recording-of-everything trend, and that is disturbing. it is not a world we would be accustomed to or want to live in.>> in a world where we don't trust, who gets
5:25 am
to define what is true? if we have created a life log for ourselves, imagine the ways we can be controlled. >> that is disturbing.>> it is pretty ugly.>> the robots are in charge.>> at every conference i go to.>> chris, bail us out.>> that is the worst-case scenario. i could pile on, but there is no need to. how is this technology likely to present itself? what is the next evolution of this? >> i said it before: the current deep fakes are easy to debunk. what happens next is fixing certain shortcomings of the deep
5:26 am
fakes. people are making things consistent that were not consistent before, like head motions in sync with what the person is saying. when we talk with the research community and the platforms, there is optimism out there. i don't want to go into theory in front of this audience, but there is a lot of discussion about this arms race, generators versus detectors getting better, and there are some arguments that the detector will always win. that is what some people say. >> that was true with photoshop. you had us go back 20 years.>> there is some great work out there. with the deep fakes,
5:27 am
some detectors need to know the software that created the fake. it is more like an antivirus program, so that as soon as new deep fakes come out, you download an update to your detector. but there is ambition to go further: a general deep fake detector where you don't need to know what is coming next. one group built such a system for photoshopped images. detection networks are usually trained on examples of real and fake, and you collect a lot of them. if you have a database of fakes and a new fake comes out that is not part of the database, you might have a hard time detecting it. this group's system was trained on
5:28 am
real images only. if something comes up that does not belong, it detects it. it works pretty well. there are photoshop detectors out there that will show you the manipulated area. another thing i want to mention: senator rubio started out with rumors and the general discussion about fake news. to debunk them, you need fact checkers, and fact checkers might have bias, so people don't want to believe them. if we keep going with this research and come up with the right interface, it is a more objective way to debunk it.
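a toy sketch of that train-on-real-images-only idea: fit a one-class model to features of authentic images and flag anything that falls outside their distribution. the crude raw-pixel features and the stand-in data here are placeholder assumptions; real systems learn far richer features.

```python
# a toy sketch of "train on real images only" detection: fit a one-class
# model to authentic images and flag out-of-distribution inputs as suspect.
# raw-pixel features and random stand-in data are placeholder assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def features(images):
    """images: (n, h, w, 3) uint8 array; flatten to crude feature vectors."""
    return images.reshape(len(images), -1).astype(np.float32) / 255.0

# stand-in for a corpus of known-authentic photos
real_images = np.random.randint(0, 256, (500, 32, 32, 3), dtype=np.uint8)
model = IsolationForest(contamination=0.01).fit(features(real_images))

def looks_authentic(image) -> bool:
    # +1 means "fits the distribution of real images", -1 means anomalous
    return model.predict(features(image[None]))[0] == 1
```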
5:29 am
you can show: here is the original video. a lot of journalism already uses tools like this, reverse image search platforms. you have an image but you don't know where it comes from. you drag it in, run the search, and the results come up: this also appeared here and here, and a slightly modified version appeared here. it is a standard practice, and it is a convincing way to show the general public, fast, that something is objectively false. >> the good news is there is reason to believe this is not hopeless. things are still evolving, but journalists, before they publish, can run material through these systems.
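a small sketch of the fingerprinting idea underlying that kind of reverse image search: a perceptual difference hash that survives small edits, so near-duplicates of an image can be matched against an index. this is illustrative; real search engines use far more sophisticated features. it uses the pillow package.

```python
# a small sketch of perceptual "difference hashing" for near-duplicate image
# matching, the idea behind reverse image search. illustrative only.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    # shrink and grayscale, then record whether each pixel is brighter than
    # its right neighbor: a 64-bit fingerprint stable under small edits
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = img.load()
    bits = 0
    for y in range(size):
        for x in range(size):
            bits = (bits << 1) | (1 if px[x, y] > px[x + 1, y] else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# two images likely share a source (possibly slightly modified)
# if their fingerprints differ in only a few bits
def near_duplicate(path_a: str, path_b: str, threshold: int = 10) -> bool:
    return hamming(dhash(path_a), dhash(path_b)) <= threshold
```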
5:30 am
unfortunately, when we talk in terms of content, we all understand that rumors persist regardless of what has been demonstrated to be false. >> and the google search fix is not a right of reply. if there is a deep fake tape, there is no right to have a response attached to it.
5:31 am
especially for the individual, it is incredibly harmful.>> we have malicious intent or other things.
