Sen. Marco Rubio on Deep Fake Technology | C-SPAN | July 20, 2018, 3:00am-4:04am EDT

The event, held at the Heritage Foundation in Washington, D.C., ran one hour.

>> Good morning. Welcome to the Heritage Foundation. It is my privilege to lead tech policy here, and we are glad you came. What happens when seeing is no longer believing? When public figures are recorded saying and doing things that they never said or did? When companies have fake and altered content placed on their platforms, content with significant and potentially disruptive consequences socially, economically, and even politically? What happens to a society where the foundations of truth are continually eroded by our own lying eyes? This is what we are going to talk about today: the looming challenge of so-called deep fake media, media that portrays things that never happened, or that at least do not correspond with reality. Joining us for this conversation is an amazing group of people. Please allow me to briefly introduce all of our participants, so that we can quickly move through the formalities and into the heart of our discussion. First, we have Senator Marco Rubio, whom I trust most of you know. Senator Rubio was first elected to the U.S. Senate in 2010, and he represents the great state of Florida. Among other committees, Senator Rubio serves on the Senate Select Committee on Intelligence and the Committee on Foreign Relations. Thank you for being here. After his comments, we will hold a panel discussion with our other guests.
For that, we will be joined by Professor Danielle Citron, a professor at the University of Maryland, where she teaches privacy, speech, and civil rights. Her book, Hate Crimes in Cyberspace, tackles the phenomena of cyber civil rights and cyber stalking, and she has another book project that will continue the exploration of privacy in this context. She worked with our next guest to write an outstanding analysis of the deep fake problem, which is the backbone of this event. Her partner in crime is Professor Robert Chesney. He is the James Baker Chair at the University of Texas at Austin, where he is a member of the law faculty as well as director of the Strauss Center, an interdisciplinary research center that includes a focus on the integration of security, technology, policy, and law. He is also one of the cofounders of Lawfare and cohost of a weekly show, the National Security Law Podcast. Finally, we have our incredibly qualified technologist, Mr. Chris --, a senior staff scientist and engineering manager at Google AI. He has been a professor at both New York University and Stanford University, and among his many awards, Chris has an Academy Award for his special effects and visual effects work in movies and entertainment, including things you may have heard of like Star Wars, Star Trek, The Avengers, and others. Please join me in welcoming and thanking our guests. [applause] Senator Rubio.

>> I appreciate the opportunity to be here.
Thank you all for coming. I want to thank the Heritage Foundation; we have hired some people from here. [laughter] I want to thank you all for doing this. It is an interesting issue to talk about. Our political culture is largely reactive: when something bad happens, we react to it. This is a case of trying to get ahead of something, to see what the capabilities are, see what the trends are, put them together, and anticipate how bad actors could utilize technology and technological advances in the years to come. I'm grateful we have this forum to talk about it, because it is the beginning of what I hope will be thinking about what we can do on public policy, but also about making society aware, at every level, from the media to academia to us as individual citizens, of this reality. What we are here to talk about is something called the deep fake. If you say that term, I would say that 99 percent of the American population does not know what it is, even though, frankly, for years we have been watching deep fakes in science fiction movies and the like; these incredible special effects are as great as they have ever been. But never before have we seen this capability become so apparent, or so available right off the shelf. If you go online to certain sites, you will find comedy sites that put up funny videos. They range from bad lip-synch videos, which are not really deep fakes although they are funny, to other things that are designed to look real, where you really could not tell that they are not unless you are, or at least hire, an expert to do the work. Look at the 21st century and the weaponization of information.
There has always been propaganda in the world, and information has always been a powerful tool to use against a competitor or adversary. But what we have never had is the ability to disseminate information so rapidly, so instantaneously, and to have an impact on so many people before you are capable of reacting to it. It wasn't long ago that if you wanted to get word out, you had to pay for it, put it on television or put it on paper, and hope it reached people. Now it can reach millions of people within seconds, and if it isn't true, by the time you knock it down, weeks or months may have passed, or you may never knock it down at all. What does that mean for the 21st century? It means a lot of things, and not just as a political topic. Let's begin with the fact that you are an individual, and you are up for a job, or someone is unhappy with you. And someone who wants to cost you the job or the opportunity, or just wants revenge, finds a way to post a video of you saying or doing something that you never did or said. And it is highly realistic; no one will be able to tell it is not real. Deeply embarrassing, whatever they put up. And you, as an everyday individual, have no way to track down who did it and no way to disprove it, and people say, I saw it with my own eyes. It is one thing to claim you never said or wrote something; it is another thing when someone actually sees and hears you do it and say it. And you have no way as an individual to fight back against it, and even the doubt it leaves behind is problematic. Imagine that applied to a business. You are on the verge of an initial public offering. Some competitor has a reason to knock down your share value on any given day, or they just want to destroy your business altogether. And suddenly they are posting videos about your business, or maybe of the CEO of your company saying something that they never said or did, and the business is damaged. This is a real opportunity for those who want to damage the business community. In the case of politics, imagine for a moment; I thought about this not long ago.
Suppose someone were to send any media outlet, and I do not want to pick on anyone today, but CNN [laughter] or MSNBC or Fox or any of them, a video of me saying something outrageous, like go New England Patriots, beat my enemy; something crazy like that. Or a video of a public figure taking bribes, or making racially insensitive or outrageous comments in what appears to be a private setting: the sort of things that get leaked, videos of someone secretly recording a meeting, like the infamous 49 percent, or whatever; what was the number Mitt Romney used? 47 percent. I knew it was in the 40s. A videotape of someone fundraising. Imagine something like that, except it actually didn't happen. And that information is given to a media outlet, and the media outlet calls you for a reaction, and you tell the media outlet, I never said that, it didn't happen. And they say, we have a video; it is your voice, it is your face, it was you, and we are running with it. And all you can tell them is, I swear that was not me, I was never there, it never occurred; or, maybe I was there, but there were 15 people in the room who will tell you that did not happen. I think that is a very difficult editorial decision for the media to make in our current environment. I think the likely outcome is that they will run the video, and run it with a quotation at the end saying, by the way, we contacted so-and-so and they denied that was them. But the vast majority of people watching the image on television are going to believe it. And if that happens two days before an election, or the night before an election, it could influence the outcome of your race. The capability to do that exists now. And a culture that would perpetuate it, and instinctively want to believe that stuff, exists now, because the nature of our political coverage today is driven by conflict.
Every single morning, starting at six or 7:00 a.m., every cable news outlet in America begins its day by pointing to some outrage that they want you to be fired up about, and they proceed throughout the day to put two people in two little boxes on the screen to fight about it. That is the cycle we now have in media coverage: find an outrage and a couple of commentators to fight over it, then get people in politics to react to it. That's fine; we're a free country with a free press, and they can make any editorial decision they want. But imagine, in a culture like that, manufactured outrage. I assure you, it would spread like wildfire, as it would online. I do not believe any individual, political campaign, or organization in America has the bandwidth or capability to knock down the spread of that false information. And I think, overall, that is a very attractive weapon for someone who seeks to interfere in our politics and create chaos, or to defeat a candidate in a race. Traditionally, we would look at that as something another candidate would use against you, or a money group or one of these political groups. But what about a nation? A nation-state with capabilities that exceed those of any political party, whose intelligence agency decides to weaponize an instrument of that nature and use it to create and sow chaos in a country. Well, I can tell you, I do not just believe, I know for a fact, that the Russian Federation under Vladimir Putin tried to sow instability and chaos in American politics in 2016. Not necessarily, as some might report, for purposes of electing one candidate over another; the primary goal was to ensure that the next president of the United States, whoever won, would be facing a cloud of controversy for weeks and years to come, in order to weaken them and in order to weaken us.
No matter who won the election, we would be dealing with a Russia issue right now. It would be a different one, but we would be dealing with one. And I think Vladimir Putin sits there a year and a half later and says, by and large, it worked: we have a society at each other's throats. It was already happening, but he was able to pour fuel on the fire, make it worse, and weaken us further. And they did that with Twitter and with a couple of other measures that will come to light. But they did not use this. Imagine using this; imagine injecting this into an election. And I want people to understand that these sorts of threats go well beyond an annoyance. I want to take you back for a moment to Florida in the year 2000. We had a race in Florida in the year 2000 that was decided by less than 600 votes in one state. A big state, but one state. The race had an additional complexity: the Republican candidate for president had a brother who was the governor of Florida at the time. Now imagine for a moment that at some point on the day of the election, in some county in the state, 600 Democrats went to vote and were not allowed to vote, because they no longer appeared on the registration rolls for the county. And suppose that information, in this environment, were fed to cable news outlets or to parties and media groups. They would immediately jump all over it as an effort by the Republican-controlled state to deny Democrats the right to vote: 600 Democrats didn't vote because they were not allowed to vote, the candidate won by less than 600 votes, and so this is an illegitimate election. That is the argument that would be made. And the truth is, courts would decide, and in hindsight election officials would prove, that those people got provisional ballots and were ultimately allowed to vote. But all of that would get lost in the broader debate.
You would then have a commander-in-chief of the most powerful armed forces in the world, and I would dare say a significant percentage of the armed forces, who at a minimum had doubts about whether the person who won the election actually won. That is what disruption would mean if someone could get into our electoral system and change registrations. Add to that the ability to influence the outcome by putting out a video of a candidate on the eve of the election, doing or saying something strategically placed, strategically altered in such a way that it drives some narrative that could flip enough votes in the right place to cost someone an election. Put all of that together, and what you have is not just a threat to our elections but a threat to our republic: a constitutional crisis unlike any we have ever faced in the modern history of this country. This all sounds fantastic, it all sounds exaggerated, it all sounds hyperbolic. The capability to do all of this is real. It exists now. The willingness exists now. All that is missing is the execution. And we are not ready for it. We are not ready for it. Not as a people, not as a political branch, not as the media, not as a country. We are not ready for this threat. And maybe it will be Russia; they are the likeliest culprit, but it could be anybody, from a transnational group to cyber hoodlums and vigilantes who think they will take people on. It could be anyone, because one of the ironies of the 21st century is that technology has made it cheaper than ever to be bad. In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, nuclear weapons, and long-range missiles. Today, you just need access to our internet system, the banking system, the electrical grid, and infrastructure. And increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections and throw our country into tremendous crisis, internally and deeply. And so I am grateful you provide us this forum to begin this conversation. You are all very scared now. Good!
[laughter] It is a threat, and one we should be aware of. I do not have every answer. I know awareness is part of it; I know that educating people, including producers and editors, but also political figures and others, about this capability is important. One answer is simply awareness that the threat exists. And from there, we will have to talk about how we can balance the right to privacy, free speech, and all the other things that come with our constitutional protections against our obligation to protect our country, our constitution, and our republic from mayhem, chaos, madness, and instability. It is a 21st-century threat that no one has ever been presented with. We have a lot of work to do, and my hope is that today is the beginning of it. Thank you for this forum. [applause]

>> Thank you.
>> I hope we are not a letdown, because that is a tough act to follow.

>> I would love to say that it gets better from here; it does not. But we do think we can provide helpful context that will enable all of us to think more deeply and, hopefully, more profitably about the challenge. Chris, I want to begin with you from a pure technology standpoint, to help give us context and flesh this out a little bit. Fake videos and altered media have been around for a while. There does seem to be, however, a perception that the challenge of deep fakes is coming into its own. I wonder if you can tell us what is behind that technology?
>> Thank you for inviting me to the panel. As Senator Rubio mentioned, it is nothing new to generate fake faces. It used to be in the hands of visual effects studios, and it was very hard to do; you needed armies of artists and very complicated systems. But what happened recently is that several universities and other entities started publishing systems, some of them called puppeteering systems. That means you take lots of video of somebody, and then you use machine learning to change the lips or change some other parts of the face, and then it looks like the person said something different. Last year, there was an entire session at the big computer graphics conference; there was a big leap forward in how this works. And then you have heard about deep fakes. What does that mean? Deep means deep learning, or deep neural networks. Neural networks are about 50 years old. They used to have a very small number of units and connections, inspired by the brain. Then, about 10 years ago, people figured out that because computers had become so much more powerful and so much more memory was available, you could now build networks, and this is where deep comes from, with hundreds of layers and, in some cases, billions of connections, and they become very powerful; the connections can be learned. This happened in AI, and academics started using it for generating better faces, images of faces. But the visual effects industry is still better at generating them; the mark of a good visual effect is that you cannot detect it was a visual effect, and you have seen lots of movies where you did not know it was fake. Now, those early deep-neural-network-based face generators were not accessible, they were very hard to use, and you could debunk them really easily. I do not want to put down that work; there has been a lot of progress. But then, just recently, in December of last year, although it had been discussed a few months before, somebody posted code on Reddit that can do this: deep fake code. It was a deep network, and if you have some software engineering skills, you can download the code, turn it into an application, collect a bunch of example faces of the person who is in the video and faces of the person you want to swap in, and then you buy a graphics card, a GPU, that costs less than $1,000, let it run on your home computer or laptop for a day, sometimes several days, and it creates a deep fake. More recently, an entire community has evolved out of this, and they are generating more and more deep fake techniques. There is a parallel here: neural networks used to be very good at detecting, this is a face, this is a house, this is a car; with recent advances, they can now also generate an image of a face. That is something completely new that we did not know how to do five years ago. And then someone turned this into one thing called FakeApp. You do not have to have software engineering skills anymore; you just download it on your PC and run it. That changed the game. It has a lot of parallels with what happened 20 years ago when Photoshop came out. People photoshopped images, and at the beginning people believed them; now we are sensitized to it, thinking, this may be a photoshopped image. And with deep fakes, arguably, there is already a lot more awareness: this might be a deep fake; we do not trust them as much anymore.
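To make the pipeline he describes concrete, here is a minimal sketch, in Python with PyTorch, of the shared-encoder, two-decoder autoencoder idea behind that era of face-swap code. Every layer size, name, and training detail below is an illustrative assumption, not the actual FakeApp implementation; a real system also wraps this core in face detection, alignment, and blending steps.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a small latent vector.
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs a face image from the shared latent space.
    def __init__(self, latent=256):
        super().__init__()
        self.fc = nn.Linear(latent, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to draw person A's face
decoder_b = Decoder()  # learns to draw person B's face
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    # Both identities share one encoder, so the latent space captures pose
    # and expression; each decoder learns one identity's appearance.
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The swap: encode a frame of person A, decode with B's decoder, so B's
# face appears with A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for a real face crop
    swapped = decoder_b(encoder(frame_a))

The point of this design is that the two decoders are forced to agree on one latent description of pose and expression, which is what makes the swap possible after a day or more of training on an ordinary consumer GPU.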
>> Just on that, how hard is it? You mentioned that previously you could look at one and say, that is not real. How difficult is it currently to detect a deep fake video? And is it safe to assume that the forensic capability will evolve as quickly as the deep fake capability itself? In other words, will our ability to find these out as fakes always keep pace with the generation of the fakes?
>> That's a good question. Most of the deep fakes out there are actually very easy to detect, even with an untrained eye. What happens a lot is that the face flickers a little bit, or the eye blinks are inconsistent. Then there are all these other cues: when I speak, I have head motions, and my lips are correlated with my head motions; a deep fake does not do that. There is also amazing work, like the Freeman lab at MIT, which has a technique that can amplify subtle color changes in the face; with a real video, you can actually detect, just from those changes, what the pulse rate of the person is. [audio lost] ...comes up with techniques that can detect, this sequence of pixels is wrong, this area of pixels is wrong, and so on. As soon as deep fakes came out, researchers also started building systems that can detect whether something is a deep fake, and the area where the deep fake is. Another very interesting research direction is determining where it originated. So I think you can say it is a cat-and-mouse game, or an arms race; although, depending on which expert you talk to, the detection algorithms are actually ahead of the fake algorithms right now.
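The cues he lists, flicker and missing blinks, can be turned into crude numeric signals. The toy Python functions below are hypothetical illustrations of that idea, not any production detector; the eye_openness values are assumed to come from an external landmark tracker, and every threshold is made up.

import numpy as np

def flicker_score(face_crops: np.ndarray) -> float:
    # face_crops: shape (num_frames, H, W), grayscale face crops from
    # consecutive frames, values in [0, 1]. Natural video changes smoothly;
    # synthesized faces often jitter from frame to frame.
    diffs = np.abs(np.diff(face_crops, axis=0))  # per-pixel change per transition
    per_frame = diffs.mean(axis=(1, 2))
    return float(per_frame.var())  # high variance: unstable, suspicious

def blink_rate(eye_openness: np.ndarray, fps: float, thresh: float = 0.2) -> float:
    # eye_openness: per-frame eye-aspect-ratio-like values (assumed given).
    # Early deep fakes blinked far too rarely, because the training photos
    # almost all showed open eyes.
    closed = eye_openness < thresh
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])  # open-to-closed onsets
    return blinks / (len(eye_openness) / fps)  # blinks per second

rng = np.random.default_rng(0)
print(flicker_score(rng.random((90, 64, 64))))  # jittery stand-in frames
print(blink_rate(rng.random(300), fps=30.0))    # synthetic openness values

Real detectors learn such cues jointly from labeled data rather than hand-coding them, but the intuition is the same: physiological and temporal consistency is hard for a generator to fake.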
>> Bobby, in addition to those types of things, we see several companies coming up with digital solutions like digital watermarks and other persistent metadata embedded in these videos. Won't that solve it?

>> ...For scenarios such as the senator described, that is primarily going to be a function of the major platforms, whether we are talking about what the nightly news carries or what can be circulated on Facebook and Instagram and so on.
If those platforms act as gatekeepers and require content to bear the right watermarks, the hallmarks of provenance confirming its validity, great; but there is no particular reason to think that is going to happen immediately. There are any number of variables, and many entities trying to develop these solutions, and it is unclear which one gets to be the winner; there is going to be a lot of variability. And let's say there is some kind of coordinated action, and everybody decides we are all going to settle on this new standard and use it. If it is cumbersome and acts as a friction point for all of us who are putting up user-generated content, then unless all the platforms adopt it, you might find it to be a bit of a pain, and it would be easier to use some other platform that is easy to use, where what goes on is fun and interesting. So it could be the case that this sort of protection will be built into the platforms, but I am a little skeptical it will happen anytime soon.
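As a rough illustration of the watermark-and-provenance idea being discussed: content could be signed at capture time, and any later copy checked against the signature. The Python sketch below uses a shared-secret HMAC purely for brevity; a deployed scheme would use public-key signatures with device certificates, and the key shown is invented.

import hashlib
import hmac

SECRET_KEY = b"device-signing-key"  # hypothetical key provisioned to a camera

def sign_video(video_bytes: bytes) -> str:
    # The tag binds the exact bytes; any edit or re-encode breaks it.
    return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_video(video_bytes), signature)

original = b"...raw video bytes..."
tag = sign_video(original)
print(verify_video(original, tag))            # True: untouched copy
print(verify_video(original + b"edit", tag))  # False: altered content

Note that this only proves a clip is unaltered since signing; as the panel observes, it helps little unless the platforms that carry the content actually demand and check such marks.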
>> You can imagine the emergence of some type of verification and personal tracking, where if you are someone of public import and one of these videos pops up, you might want to demonstrably show, I wasn't even in the building. When you consider those types of solutions, privacy concerns start to come up. Are you concerned about that?

>> That is part of what motivated us: the idea that we would want accountability for where we are, and that we might unravel our own privacy to protect ourselves. The market desire would be for services that essentially log where we are all the time, as we engage in all sorts of activities, so that we are traced and categorized in all sorts of ways; and opting out would suggest that we are hiding something.
It is significant, and there is a longer-term concern: that we might unravel our own privacy in the long run is incredibly troubling, handing an extraordinary amount of power to the companies and governments that would have access to those reservoirs of data.

>> I want to come back to privacy and the broader implications, but you raise a good point on the personal implications. Given the capability to build one of these videos, all of a sudden some ex-boyfriend or someone else generates one; what are the implications?
You have written extensively about cyber stalking. What happens?

>> This issue grabbed our attention because what we saw were sites devoted to fakes: putting a celebrity's head into pornography, exploiting somebody's image to make it seem like they are having sex, which of course they are not. Think of the peril, especially for people who are not public figures; imagine the kind of damage, even though they have done nothing wrong. Now that we have these apps that anyone can easily download, creating these kinds of harms no longer requires a stealthy photo, or one that someone was coerced into sharing. Now we can literally create videos about anyone, and you can see it being misused in domestic abuse. When these things popped up, the narrative in the comments was ugly: I want to embarrass the living daylights out of her. There were so many comments where people were saying, so-and-so, I cannot wait to use this technology. So you have the deniability, and the damage is profound, and it appears in searches of your name.
It is not that employers necessarily believe the person in the video; it is that it is so much easier and cheaper to hire someone who does not come with the baggage. You are essentially, effectively, put into a video in ways that you would never imagine, and it is incredibly violent and terrifying, but it also has an impact on your economic life. And the problem with those scenarios is that victims lack the resources; pursuing it is almost impossible for the everyday person. So partnerships with platforms matter: as we saw in the summer of 2014, search engines announced they would de-index people's names in connection with nonconsensual pornography. What the victim wants is for it not to be searchable. That does not solve the whole problem, but at least it keeps the employer, the clients, and the friends from seeing it in a search of your name.

>> I am going to take those implications and pull up a little bit, to think in the context of national security for a moment, and frame the question. The senator mentioned Russia, I think for good reason. A couple of points of context for the audience: in the past, the Russian government has used manipulated information, falsified emails, and doctored data to marginalize and constrain political opponents.
Just recently, they were targeting the cell phones of soldiers as a means of conducting information operations and shaping the battlefield. Do you think it is realistic to think a hostile competitor like Russia, or someone else, might leverage this type of capability, for its own domestic consumption but also in the broader international environment?

>> As you say, they come to this from other techniques of information warfare; I won't get into them, but they are past masters and practitioners of it. Imagine, and this will sound far-fetched, the president going into a one-on-one meeting where no one is there to take notes or record it. Things are said, and everyone wonders what was said. Then, lo and behold, there is audio that sounds like the president saying, don't worry about the politics, or whatever fills in your nightmare betrayal of our interests. The ability to go beyond what is already possible, where your own eyes or ears tell you what you saw and heard, makes it much more powerful. So there are many ways to disrupt international relations by making use of this technology. And there are other ways it can be used. As most people here know, the term lawfare originally described the way someone might make strategic use of legal rules in an effort to hamstring an opponent. Think of the ways that insurgents or others already make use of information to make it appear that our forces or allied forces have done this or that, killing civilians or harming the civilian population.
Today you can have actors play a role and impersonate people, and so forth; how much better if you can fabricate the footage itself? You could use it to inflame Palestinian tensions, or perhaps you would like to go domestic: pick an American city where tensions between the police and the local community are running especially high, and just have the chief of police captured on video saying racist things. The potential is off the charts, and while it is true that we can generally, eventually, debunk these things, the truth never quite catches up with the initial lie.

>> It is important to think about what happens within our borders, and Russia again is a realistic and excellent foil. You could imagine how they might use this in developing regions to shape political situations, where it does not land on us specifically or geographically, but it becomes a situation that we are engaged with, and it may significantly influence a decision about what type of support or aid or deployment might be forthcoming in the wake of it. So we are talking about a capability, in this media age, that I do not think it is an overstatement to say could fundamentally shape the environment in which we make policy, both domestic and foreign.

>> It makes me think of a line we have in the paper where we both raise concerns about trust decaying altogether: the geopolitical nightmare of a lack of trust between countries, and then also personally and culturally.
>> I think you hinted at this, but is there another challenge that comes with greater awareness? As we educate people and have this conversation, are we simultaneously creating a problem inside of that?

>> The more successful we are at cultivating skepticism about audio and video, the more space we are creating for the liar's dividend. I cannot decide if I really like that term, but it kind of captures it: the situation where the video and audio are real, and the person has the shamelessness to deny it. This already happens. But how much more room is there to deny video evidence when people have been pounded with the message that audio and video imagery can be manipulated? So it becomes a cry of deep fake news, and it has much more resonance because of our success in getting people to be on guard.

>> We are going to push into this a little bit more. How bad can this get, as you two have thought about it?

>> I think you have got the two sides chained together. On the one hand, everything is believable, and we have the liar's dividend; and I vote for keeping that term, because it was your idea and I like it. But on the other hand, nothing is believable.
So we all get to say there are no truths, and we have done this to each other. At the same time that nothing is believable politically, for individuals the fakes are incredibly damaging, because they are the kind of thing people instinctively believe, and because they reaffirm gender stereotypes: for a woman, it is, she is incredibly available, so I am not going to hire her. So for individuals it is believable, and incredibly costly, while culturally and politically we lose faith in our public discourse. And here we are; to me, that is the nightmare scenario.

>> It will not be for everybody, but you can imagine people purchasing a service from a third party that can provide some kind of reliable alibi. If you want to go full throttle, with video and audio, you have some token on your jacket that is just recording everything. Maybe only a few people would get into it, someone running for office or in a sensitive position like a chief of police; but also, some employers might require their employees to wear one during work hours, because lord knows what they might get up to. So you do run the risk of accelerating the surveillance trend, and the recording of everything is pretty disturbing, but it is a world we could become accustomed to.
>> Imagine a world in which we do not trust anything, where totalitarian leaders get to define what is true, and where we have created a sort of LifeLock for ourselves; maybe that is as dystopian as it gets.

>> That is the worst-case scenario; I could pile on, but there is no need to. How is this technology likely to present itself; what is the most likely evolution of this as we go forward?

>> As I said before, the current fakes are easy to debunk, and what happens next is that people fix certain shortcomings; people are working on making these things more convincing, but we are not there yet. We are engaged with researchers and with other platforms, and there is some optimism out there. I do not want to get into game theory in front of this audience, but there is a lot of discussion along the lines of, generation gets better, detection gets better, and there are some theories out there that the detector will always win. That is what some people say.

>> Is that true with Photoshop?
>> There is great work out there. Some of the techniques need to know the software that created the fake, and then it is more like an antivirus scenario: as soon as new fakes come up, an update is automatically downloaded to the detector. And there are very ambitious people, and I believe it is possible at some point to have a general detector, where you do not need to know what is coming next. One group built a system where networks are trained on examples of collected fakes and real images. If something comes up that is not simply part of that database, you might have a hard time detecting it.
But a detector can also be trained on real images alone, and then, if something comes up that does not belong to the space of those real images, it detects it. So there are detectors out there in this area.
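One way to read the general detector he describes is as anomaly detection: learn the space of real images only, then flag anything that reconstructs poorly. The Python sketch below is that generic recipe under stated assumptions, not the specific system he mentions; in practice the model would be trained to convergence on a large corpus of real faces, with the score threshold tuned on held-out data.

import torch
import torch.nn as nn

# Autoencoder trained on real face crops only; it never sees fakes.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)

def train_on_real(real_batch: torch.Tensor):
    # Learn to reconstruct real images well.
    opt.zero_grad()
    loss = nn.functional.l1_loss(autoencoder(real_batch), real_batch)
    loss.backward()
    opt.step()

def anomaly_score(image: torch.Tensor) -> float:
    # Inputs off the learned "real" manifold reconstruct poorly, so a
    # high score suggests the image may be synthetic.
    with torch.no_grad():
        return nn.functional.l1_loss(autoencoder(image), image).item()

real_faces = torch.rand(8, 3, 64, 64)  # stand-ins for real face crops
for _ in range(100):
    train_on_real(real_faces)
print(anomaly_score(real_faces[:1]))            # low: in distribution
print(anomaly_score(torch.rand(1, 3, 64, 64)))  # higher: off the manifold

The appeal of this framing, per the discussion above, is that the detector needs no examples of fakes at all, which is what would let it generalize to techniques that have not been invented yet.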
Another thing I want to mention: Senator Rubio started with the general discussion about fake news. To debunk fake news you need fact checkers, and people may not believe them. But if we keep going with the research on detectors, and we come up with the right interfaces, and here we talk to psychologists and social scientists, it becomes a more objective way to reason about a video. Take reverse image search: you have an image and you do not know where it comes from, you run the search, and the results come up showing that it appeared here, here, and here. That is standard practice already, and it is a very convincing way to show the general public, very fast, that something is objectively false.
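The reverse image search workflow he describes rests on compact image fingerprints that survive small edits and recompression. Below is the classic average hash in Python as a minimal stand-in; production search engines use far more robust fingerprints, and the indexed source string here is invented for the example.

import numpy as np

def average_hash(gray_image: np.ndarray, hash_size: int = 8) -> int:
    # Downscale to hash_size x hash_size by block averaging, then record
    # which blocks are brighter than the mean as a 64-bit fingerprint.
    h, w = gray_image.shape
    img = gray_image[: h - h % hash_size, : w - w % hash_size]
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    bits = (blocks > blocks.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
original = rng.random((64, 64))
recompressed = original + rng.normal(0, 0.01, original.shape)  # light tampering

# Index frames from known broadcasts, then look up a suspicious frame.
index = {average_hash(original): "first seen: campaign speech, 2016-05-01"}
query = average_hash(recompressed)
for stored, source in index.items():
    if hamming(stored, query) <= 5:  # small distance: likely the same image
        print("match:", source)

Because the hash changes little under light edits, a near match against an earlier, dated appearance is exactly the "this appeared here, here, and here" evidence described above.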
>> From the forensics standpoint there is reason for optimism. To know definitively whether things are real, if it is working well, journalists could run material through that system before they publish it. But when we talk about general user-generated content, the lie is halfway around the world before the truth gets its boots on; Churchill said it better. [laughter]

>> One of the challenges becomes, in one sense, yes, that works for public figures and that kind of thing; but even then, we all understand that rumors persist.
>> If there is a deep fake, or whatever it might be, posted, is there a way to respond, especially for the individual?

>> We have terms of use covering things like malicious intent, but that is a policy question and I do not want to get too much into it; the community is thinking about how to respond faster.

>> And once a fake is figured out, you could have a database so that you can match it going forward; but the fake does get its first shot, unless, as you said, the journalist decides not to publish it.
>> One last question before we turn to Q&A for the audience. We have a good number of folks from the Hill here, so the last question is: why don't we just make these illegal, define the category?

>> Digital manipulation cannot be a sufficient condition to define the category, because of course we manipulate video, to make pictures look good or to make the sound more clear. There are all sorts of manipulations you have to allow for. Can you fix it with an intent requirement? We kind of have a lot of laws already that do that sort of thing. So you would struggle to define a category simply to criminalize, or to regulate, this, and that is not the only way you could address it.
By the way, if you want to actually read the paper, the quickest and easiest way is to search for deep fakes on SSRN. You can also talk about possibilities that would place more pressure on the platforms to police these sorts of things.

>> How many people know what Section 230 is, what we are talking about?

>> A few of you; I won't call on you, I promise. We have something called the Communications Decency Act, a law passed in 1996, which was largely about how to rid the internet of indecency, and much of it was struck down on constitutional grounds.
But what remains includes a section that was meant to encourage self-monitoring, because lawmakers knew they could not possibly police the whole internet; with stalking and harassment, plus speech that is merely offensive, they wanted to encourage self-regulation. There were early cases finding that because a platform had done some filtering and editing, it was liable as a publisher. What those cases suggested was that platforms should not do any filtering, because if they did anything, they would be treated as publishers. In light of that, the representatives drafted the section to say that whether you catch too much or too little in your filtering and monitoring, you will not be held responsible for user-generated content.
When you are overly aggressive, you have to have a good-faith basis. But the part of the law that covers not catching enough is written in such a way that an interactive computer service provider will not be treated as the publisher of someone else's content. It is not limited to good samaritan monitoring, and that is why revenge pornographers get to say, I am immune from liability: I know I solicited photos of women who never said, yes, you can post this, but I enjoy immunity because the law says I am not going to be considered the speaker of someone else's content. Courts could have construed that provision differently. So that leads to two kinds of actors on the internet.
We have what I think of as virtuous actors, like Twitter and Facebook, who do their best, in certain circumstances, to address copyright violations and cyber stalking; but we also have a lot of actors who encourage and solicit illegality, and they get to be immune from liability too. So what we talk about in our paper is the possibility that the immunity should be conditional. We do not want to tear down the immunity; keep it, but premise it upon reasonable practices and responsible handling of illegality on your platform. So if you have a site whose sole reason for being is the illegality it knowingly solicits, absolutely no immunity.
It is actually quite modest. We have been writing about how Congress just passed an exception, and I have to say, when you first started writing to me, if I had thought we would ever come to an agreement about changing Section 230, I would have thought you were joking; but it happened. Congress recently got its act together and passed FOSTA, the Fight Online Sex Trafficking Act, which is at once narrow and broad, and it will have platforms worried, because it criminalizes knowingly facilitating sex trafficking. So what does that mean? Platforms may conclude they should not engage in filtering at all, and that worries me; I think that was a bad move.
So we suggest one approach, but I also really like yours; do you want to talk about the ways we would curtail it?

>> We are stuck on time, and I want to have a little bit of public Q&A, but it is also helpful for those who are new to this issue to understand the technical community's standpoint; this is a deep, worthwhile conversation. From that community's perspective, there is actually surprising unanimity among the platforms and technology companies. They essentially articulate: we are sympathetic; however, this immunity is what has given rise to the internet that we enjoy, and if you constrain it and do a carve-out, you will incentivize us to be more aggressive in our censorship, because we will be seeking to immunize ourselves from legal jeopardy.
There are all kinds of strings worth pulling there, and maybe we will do a separate event, but for right now that is helpful, and I highly recommend downloading the paper. I think we have a little bit of time for two or possibly three questions from the audience.

>> They film ten different endings and only HBO knows which one is authentic. You live in a world in which generation is democratized and detection is concentrated.
In that type of world, isn't it not that bad? In other words, if detection is concentrated in media outlets, doesn't that give them a dimension along which to seize the mantle of credibility, relative to other organizations or entities or platforms that are not capable of verifying information? Not only does that cut down on the problem, but it seems to minimize the liar's dividend, and it describes a world that is not that bad relative to the mom-and-pop populist outfits, because the big outlets regain control over the accuracy of sourcing, and that projects into a community that treats them that way.

>> First of all, I was excited when you brought up Game of Thrones, as I thought you were just going to ask about that in general. [laughter] But that's okay.
So, the market currently does not give me as much hope as I would like that the larger market wants to see competition on accuracy, truth, and objectivity. That said, I certainly like the idea behind some of the things being worked on, like a Rotten Tomatoes for the validity of news; I love seeing that, and I sure hope it takes off. If some degree of rating the validity of information can be brought back into the information space, that would be great. But many people do not buy that there is objectivity, and they do not believe that the most accurate outlets would end up being the highest-scoring ones.
You say, that is the truth; they say, that is your truth, I have got mine.

>> A lot of this is time-contingent. If it is the night before an election, there is too little time, and if the fake goes viral it can change the outcome; the same with an intentional falsehood that is intended to manipulate, say, to cause a riot.

>> We have time for one brief question, real quick.

>> Say a deep fake of a big event goes viral, but then the analysis shows it is false. What will the culture look like after that, when it is shown to be the case that this technology is out there? What does the culture look like after the first big deep fake event?
>> That is crossing the Rubicon: when it happens, there is the realization of what is going on, and the flood will widen the playing field for denying the truth. People confronted with embarrassing audio and video will be able to say, well, you heard about this other thing.

>> I cannot allow us to go over any further, but I am sure our panelists will be able to stick around.
Thank you for taking the time to be here, and please join me in thanking our guests. [applause] [inaudible conversations]