
Hearing on Violent Extremism and Digital Responsibility | C-SPAN | September 18, 2019, 10:49pm-1:02am EDT

next, a hearing on efforts to identify and remove violent and extremist content from the web. representatives from google, facebook, twitter and the anti-defamation league testified before the senate commerce committee. the hearing is 2 hours and 10 minutes.
>> we will come to order, please. the committee gathers today to examine what the technology industry is doing to remove violent and extremist content from their platforms. this is a matter of serious importance to the safety and well-being of our nation's communities. i sincerely hope we can engage in a collaborative discussion about what more can be done within the jurisdiction of this committee to keep our communities safe from those wishing to do us harm. today we will hear from representatives of several of the world's largest social media companies.
over the past two decades, the u.s. has led the world in the development of social media and other services that allow people to connect with one another. open platform providers like google, twitter and facebook, and products like instagram and youtube, have dramatically changed the way we communicate. they have been used positively, allowing like-minded groups to come together and shed light on abuses of power around the world. but no matter how great the service these platforms provide, they can be used for evil at home and abroad. on august 3, 2019, 20 people were killed and more than two dozen were injured in a mass shooting at an el paso shopping center. police have said that they are reasonably confident that the suspect posted a manifesto to a website called 8chan prior to the shooting. the site's moderators removed the user's original post, though users continued sharing copies. following the shooting, president trump called on social media companies to work in partnership with local, state and federal agencies to develop tools that can detect mass shooters before they strike. i certainly hope we talk about that challenge today.
sadly, the el paso shooting is all too representative. on march 15, 51 people were killed in shootings at two mosques in christchurch, new zealand. the perpetrator filmed the attacks using a body camera and live-streamed the footage to facebook, where followers began to re-upload the footage to facebook and other sites. copies of the footage spread quickly. facebook stated that it removed 1.5 million videos of the massacre within 24 hours of the attack, with 1.2 million of those blocked before they could be uploaded. like the el paso shooter, the christchurch shooter also uploaded a manifesto to 8chan. the 2016 shooting at the pulse nightclub in orlando, florida, killed 49 people and injured 53 more.
the orlando shooter was radicalized by isis and other jihadist propaganda from online sources. days after the attack, the f.b.i. stated that investigators were highly confident that the shooter was self-radicalized through the internet. according to an official in the investigation, analysis of the shooter's electronic devices revealed that he had consumed "a hell of a lot of jihadist propaganda," including isis beheading videos. family members of victims of the shooting brought a federal lawsuit against three social media platforms under the anti-terrorism act. a federal circuit court dismissed the lawsuit on grounds that the shooting was not an act of international terrorism. with more than 3.2 billion internet users worldwide, this committee recognizes the challenge facing social media companies and online platforms as they act to identify and remove content threatening violence from their sites. there are questions about whether tracking a user's online activity invades an individual's privacy, denies due process, or violates constitutional rights. removal of threatening content may also impact an online platform's ability to detect warning signs. the first amendment offers strong protections against restricting certain speech, which undeniably adds to the complexity of our task. i hope these witnesses will speak to these challenges and how their companies are navigating them.
in our connected internet society, misinformation, fake news, viral deep fakes, and online conspiracy theories have become the norm. this hearing is an opportunity for witnesses to discuss how their platforms go about identifying content and material that threatens violence and poses a potential and immediate danger to the public. our witnesses will also discuss how their content moderation processes work. this includes addressing how human review or technological means are employed to remove or limit violent content. cooperation with law enforcement officials at the federal, state and local levels is essential to protecting our neighborhoods and communities. we would like to know how the companies are coordinating with law enforcement when violent or threatening content is identified. and finally, i hope witnesses will discuss how congress can assist in ongoing efforts to remove content promoting violence from online platforms, and whether best practices or codes of conduct in this area would help increase safety both online and off. so i look forward to hearing testimony from our witnesses and to engaging in a constructive discussion about potential solutions to a pressing issue. and i'm delighted at this point to recognize my friend and ranking member, senator cantwell. >> thank you, mr. chairman, for holding this hearing. across the country we're seeing a surge of hate. as a result, we need to think much harder about the tools and resources that we have to combat this problem online and offline.
while the first amendment to the constitution protects free speech, speech that incites violence is not protected, and we should review and strengthen laws to make sure we stop the online behavior that does incite violence. in testimony before a senate committee in july, the director of the f.b.i. said that white supremacist violence is on the rise. he said the f.b.i. takes this "extremely seriously," and has made over a hundred domestic terrorism arrests so far this year. in my state over the last several years, we've seen a shooting at the jewish community center in seattle, the shooting of a sikh man in kent, washington, a bombing attempt at the martin luther king jr. day parade in spokane, and over the last year a rise in desecrations of both synagogues and mosques. the rise in hate across the country has also led to multiple deadly shootings, including at the tree of life congregation in pittsburgh, the pulse nightclub in orlando, and the wal-mart in el paso. the shooter at one high school posted images of himself and guns on instagram prior to the attack on his fellow students. the killer in el paso published a white supremacist, anti-immigration manifesto. my colleague just mentioned the live streaming of the christchurch shooting, a horrific incident promoting violence against muslims. these human lives were all cut short by deep hatred and extremism, and we've got to do more about this problem.
this is a particular problem on the dark web, where we see websites that host 24-7, 365 hate rallies. adding technology tools to mainstream websites to stop the spread of these dark websites is a start, but more needs to be done. i believe calling on the department of justice to make sure that we are working across the board, on an international basis, with companies as well to fight this issue is an important thing to do. we don't want to push people off social media platforms only for them to then be on the dark web, where we see less of them. we need to do more at the department of justice to shut down these dark web sites, and social media companies need to work with us to make sure we are doing this. i do want to mention, just last week, as there was discussion in washington about gun initiatives, that the state of washington has passed three gun initiatives by a vote of the people, closing loopholes related to background checks and gun sales and enacting extreme risk protection laws, voted on by a majority in our state and successfully passed. representatives from various companies of all sizes in the technology industry sent a letter supporting passage of bills requiring background checks, so i very much appreciate that, and your support of extreme risk protection laws to keep guns out of the hands of people who a court has determined are dangerous. this morning we look forward to asking you about ways in which we can better fight these issues. i want us to think about ways in which we can all work together to address these issues. i feel that, working together, there are successful tools we can deploy in trying to fight the extremism that exists online. thank you, mr. chairman, for the hearing. >> thank you very much.
we will now hear oral testimony from our four witnesses, and your entire statements will be submitted for the record without objection. we ask you to limit comments at this point to five minutes. ms. bickert, you are recognized. thank you for being here. ms. bickert: thank you, chairman wicker, ranking member cantwell, and distinguished members of the committee. thank you for the opportunity to be here today, and to answer your questions and explain our efforts in these areas. my name is monika bickert, facebook's vice president for global policy management and counterterrorism. i am responsible for our rules around content on facebook and our company's response to terrorists' attempts to use our services.
on behalf of everyone at facebook, i would like to begin by expressing my sympathy and solidarity with the victims, families, communities and everybody affected by the recent, terrible attacks across the country. in the face of such acts, we remain committed to assisting law enforcement and standing with the community against hate and violence. we are thankful to provide a way for those affected by this horrific violence to communicate with loved ones, organize events for people to gather and grieve, raise money to help support communities, and begin to heal. our mission is to give people the power to connect with one another and build community, but we know people need to be safe in order to build that community, and that is why we have rules in place against harmful conduct, including hate speech and inciting violence. our goal is to ensure facebook is a place where people can both express themselves and be safe.
we are not aware of any connection between the recent attacks and our platform, but we certainly recognize we all have a role to play in keeping our community safe. that's why we remove content that encourages real-world harm. this includes content involving violence or incitement, promoting or publicizing crime, coordinating harmful activities, or encouraging suicide or self injury. we don't allow any individuals or organizations who proclaim a violent mission, advocate for violence, or are engaged in violence to have any presence on facebook, even if they are talking about something unrelated. this includes organizations and individuals involved in or advocating for terror activities, domestic and international, and organized hate, including white supremacy, white separatism or white nationalism, or other violence.
we also don't allow any content posted by anyone that praises or supports these individuals or organizations or their actions. when we find content that violates our standards, we remove it promptly. we also disable accounts when we see severe or repeated violations, and we work with law enforcement directly when we believe there's a risk of physical harm or a direct threat to public safety. while there's always room for improvement, we already remove millions of pieces of content every year for violating our policies, and much of that is removed before anyone has reported it to us. our efforts to improve our enforcement of these policies are focused on three areas. first, building new technical solutions that allow us to proactively identify content that violates our policies. second, investing in people who can help us implement these policies. at facebook, we have over 30,000 people across the company working on safety and security efforts. this includes over 350 people whose primary focus is countering hate and terrorism. third, building partnerships with other companies, civil society, researchers and governments, so that together we can come up with shared solutions. we are proud of the work we have done to make facebook a hostile place for people advocating violence, but the work will never be complete. we know bad actors will continue to attempt to skirt detection with more sophisticated efforts, and we are dedicated to continuing to advance our progress. we look forward to working with the committee, regulators, others in the tech industry and civil society to continue this progress. again, i appreciate the opportunity to be here today, and i look forward to your questions. thank you. >> thank you very much. mr. pickles? mr. pickles: chairman wicker, ranking member cantwell, members of the committee, thank you for the opportunity to discuss these important issues.
twitter has publicly committed to improving the collective health, openness and civility of public conversation on our platform. our policies are designed to keep people safe on twitter, and they continuously evolve to reflect the realities of the world we operate in. we are working faster, investing to remove content that distracts from a healthy conversation before it is reported. tackling terrorism and extremism and preventing attacks requires a range of responses, including from social media companies. let me be clear: twitter is incentivized to keep terrorist and violent content off our service, both from a business standpoint and under current legal frameworks. such content doesn't serve our business interest, breaks our rules and is fundamentally contrary to our values. communities in america and around the world have been impacted by mass violence, terrorism and violent extremism with tragic frequency in recent years. these events demand a robust public policy response from every quarter. we acknowledge technology companies have a role to play. however, it is important to recognize content removal alone can't solve these issues. i would like to outline twitter's key policies in this area. first, twitter takes a zero-tolerance approach to terrorist content. individuals may not promote terrorism, engage in terrorist recruitment or engage in terrorist acts. we have suspended over 1.5 million accounts for violations of rules connected to terrorism, and continue to see more than 90% of these accounts suspended through our own proactive measures. in the majority of cases, we take action at the account creation stage, before the account has even tweeted, and the remaining 10% is identified through user reports and partnerships. secondly, we prohibit use of twitter by violent extremist groups, defined in our rules as groups that, in statements on or off the platform, use or promote violence against civilians to further their cause, whatever the ideology.
since the introduction of this policy in 2017, we have taken action on over 186 groups globally, suspending over 2,000 unique accounts. thirdly, twitter doesn't allow hateful conduct on our service. an individual on twitter is not permitted to threaten, promote violence against, or direct attacks at people based on protected characteristics. when these rules are broken, we take action to remove the content, and we will permanently remove those promoting terrorism or violent extremism. fourthly, our rules prohibit the selling, buying or facilitating of transactions in weapons, including firearms, ammunition and explosives, as well as instructions on making weapons, explosive devices or 3d printed weapons. we will take appropriate action on any accounts engaging in this activity, including permanent suspension where appropriate. additionally, we prohibit the promotion of weapons and weapon accessories globally through our paid advertising policies. collaboration with our industry peers and civil society is critically important to addressing the threats of terrorism globally. in june 2017, we launched the global internet forum to counter terrorism. this facilitates, among other things, information sharing, technical cooperation and research collaboration, including with academic institutions. twitter and technology companies have a role to play in addressing violence and ensuring our platforms can't be exploited by those promoting violence. but this cannot be the only public policy response, and removing content alone won't stop those determined to cause harm. quite often, when we remove content from our platforms, it moves those views and ideologies into the dark corners of the internet, where they cannot be challenged and held to account.
as our peer companies improve their efforts and content continues to migrate to less-governed platforms and services, we are committed to learning and improving; every part of the online ecosystem has a part to play. addressing mass violence requires a whole-of-society response, and we welcome the opportunity to work with industry peers, government institutions, legislators, law enforcement, academics and civil society to find the right solutions. >> thank you. mr. slater? mr. slater: chairman wicker, ranking member cantwell, distinguished members of the committee, thank you for the opportunity to appear before you today. my name is derek slater, global director of information policy at google. in that capacity, i lead a team advising the company on public policy frameworks to deal with online content, including hate speech, extremism and terrorism. before i begin, i would like to take a moment on behalf of everyone at google to express our horror at the tragic attacks in el paso, dayton and elsewhere, and to express our condolences.
google services were not involved in these incidents, but we have engaged with the white house, congress and governments around the globe on the steps we are taking to make sure that our platforms are not used to support hate speech or incite violence. in my testimony today, i will focus on three key areas where we are making progress. first, how we work with government and law enforcement. second, our efforts to prohibit the promotion of products causing damage, harm or injury. third, our enforcement of policies around terrorism and hate speech. first, google engages in ongoing dialogue with law enforcement agencies to understand the threat landscape and respond to threats to the safety of our users and the public. when we have a good faith belief that a threat to life or of serious bodily harm has been made on our platform in the united states, the google cybercrime investigation group will report it to the northern california regional intelligence center. that center will quickly get the report into the hands of officers who can respond. the cybercrime investigation group is on call 24/7. we are committed to working with government, civil society and academia. since 2017, we have done this through the global internet forum to counter terrorism, which recently introduced joint incident protocols to respond to emerging or active events. second, we take the threat posed by gun violence in the united states very seriously, and our advertising policies have long prohibited the promotion of weapons, ammunition or similar products causing damage, harm or injury. we also prohibit the promotion of instructions for making guns, explosives or other harmful products, and we employ proactive and reactive measures to ensure our policies are appropriately enforced.
we are constantly improving our enforcement procedures, including enhancing automated systems and updating incident management and manual review procedures. third, on youtube we have rigorous policies and programs to defend against the use of our platform to spread hate or incite violence. over the last three years, we have invested heavily in machines and people to quickly identify and remove content violating our policies. this includes machine learning technologies to enforce our policies at scale; hiring over 10,000 people across google to detect, review and remove content; experts who proactively look for new trends; improved escalation pathways for ngos and governments to let us know about content; and finally, going beyond removal by actively creating programs to promote beneficial counterspeech, like the creators for change program. this broad work has led to tangible results. over 87% of the 9 million videos we removed in the second quarter of 2019 were first flagged by automated systems. more than 80% of those were removed before they received a single view. videos violating our policies generate a fraction of a percent of views on youtube. we are constantly looking for new ways to improve our policies. youtube recently further updated its hate speech policy. the policy specifically prohibits videos alleging a group is superior in order to justify segregation or exclusion based on qualities like age, gender, caste, religion, sexual orientation or veteran status. since then, we've seen a 5x spike in removals and channel terminations under the hate speech policy.
we take safety seriously and value our collaborative relationship with law enforcement and government agencies. we want to be responsible actors and part of the solution. as these issues evolve, we will continue investing in the people and technologies needed to meet the challenge. we look forward to collaborating with the committee as it examines these issues. thank you for your time, and i look forward to your questions. >> thank you very much. mr. selim, your group prefers to be known as adl these days, correct? mr. selim: the anti-defamation league goes by adl for short. >> we appreciate you being with us today, and we are happy to receive your testimony. mr. selim: thank you for the opportunity to be here with distinguished members of the committee. my name is george selim, senior vice president for programs at the adl, the anti-defamation league. for decades, the adl has fought against bigotry and anti-semitism, exposing extremist groups and individuals who express hate and incite violence.
today, the adl is the foremost non-governmental authority on domestic terrorism, hate groups and hate crimes. i have served in several roles in the government's national security apparatus, at the department of justice and the department of homeland security and on the national security council, and i now serve outside government on the front lines of combating anti-semitism and all forms of bigotry at the adl. in my testimony i would like to share with you key data, findings and analysis, and to urge the committee to take action to counter a severe national security threat: online white supremacist extremism that threatens our communities. the alleged el paso shooter posted a manifesto to 8chan prior to the attack. he expressed support for the accused shooter in christchurch, new zealand, who also posted on 8chan. before the massacre in poway, california, the alleged shooter posted a link to his manifesto citing the terrorists in new zealand and the pittsburgh tree of life attack. three killing sprees, three white supremacist manifestoes. one targeted muslims, another targeted jews, and a third targeted immigrant communities. one thing these three killers had in common was 8chan, an online platform that has become the go-to for many bigots and extremists. unfettered access to online platforms, both fringe and mainstream, has significantly driven the scale, speed and effectiveness of these forms of extremist attacks. our research shows domestic extremist violence is trending up, and anti-semitic hate is trending up. fbi and doj data show similar trends. the online environment today amplifies hateful voices worldwide and facilitates coordination, recruitment and propaganda that fuels the extremism that terrorizes our communities, all of our communities.
whether in government, the private sector or civil society, immediate action is needed to prevent the next tragedy that could take innocent lives. adl has worked with the platforms at this table to address hate and its rampant nature online. we have been part of conversations to improve terms of service, content moderation programs and better support for individuals experiencing hate and harassment on those platforms. we appreciate this work greatly, but much more needs to be done. adl has called on the companies at this hearing, and many others, to be far more transparent about the nature of hate on their platforms. we need meaningful transparency to get actionable information to policymakers and stakeholders. but the growth of hate and extremist violence won't be solved by addressing issues online alone. we urge this committee to take immediate action. first, our nation's leaders must clearly and forcefully call out bigotry in all of its forms at every opportunity. law enforcement must make enforcing hate crime laws a top priority. our communities need congress to act in a range of ways, notably to address domestic terrorism and extremism and to create extensive, comprehensive reporting, similar to that required in the domestic terrorism data act. our federal legal system lacks the means to prosecute a white supremacist terrorist as a terrorist. congress should explore whether it is possible to craft a rights-protecting domestic terrorism statute. any statute would need to include specific, careful congressional and civil liberties oversight to ensure the spirit of such protection is faithfully executed.
in addition, the state department should examine whether certain foreign white supremacist groups meet the criteria for designation as foreign terrorist organizations. we look forward to social media companies expanding their terms of service, exploring accountability and governance challenges, aspiring to greater transparency in how you address these issues, and partnering with civil society groups to help in all these efforts. adl stands ready, with the government and private sector, to better address all forms of threats online. this is an all-hands-on-deck moment to protect our communities. i look forward to your questions. thank you. >> thank you, mr. selim. ms. bickert, mr. pickles, mr. slater: on your platforms, how do you define violent content? how do you define extreme content? ms. bickert: thank you, mr. chairman. we will remove any content that celebrates a violent act, or the serious physical injury or death of another person. we will also remove any organization that has proclaimed a violent mission or is engaged in acts of violence. we also don't allow anybody who has engaged in organized hate to have a presence on the site, and we remove hate speech. hate speech we define as an attack on a person based on his or her characteristics, like race, religion, sexual orientation or gender. we list them in our policies. >> it is harder to define "extreme" than "violent," isn't that correct? ms. bickert: we see people use that word in different ways.
what we do: any organization that has proclaimed a violent mission or has documented acts of violence, we remove them. it doesn't matter the reason; we don't allow the violence, period. >> mr. pickles, what is your platform's definition of extreme? mr. pickles: similar to facebook, we agree that extremism itself is subjective. in some cases, people could be extremely active on issues. we have a three-stage test for violent extremist groups. the test is: we identify their stated purpose, publications or actions as extremist; they are currently engaged in violence, or promote violence as a way to further their cause; and they target civilians. we have those three stages, the ideology, the violence and the targeting of civilians, because we believe that framework allows us to protect speech and debate but also remove violent extremists from the platform. we also have a broader framework, covering threats of violence, calls to arms and wishes of violence against other people, that is not dependent on ideology. >> mr. slater, can you add any nuances? mr. slater: broadly similar. we ban designated foreign terrorist organizations from using our platform, as well as hate speech, so along broadly similar lines. >> mr. selim has suggested that your three platforms need to be more transparent. what do you say to that, mr. slater? mr. slater: thank you, chairman. i think that transparency is the bedrock of the work we do, particularly around online content, trying to help people understand what the rules are and how we are enforcing them. it is something we need to get better at. i look forward to working with this committee and mr. selim and others.
in the last year, we have provided on youtube a community guidelines enforcement report, where you can see how many videos we removed in a quarter, for what reasons, and which were flagged by machines versus users, and we break that down by violent extremism, hate speech, health and safety and other issues. we think this is a key area where we look forward to improving. >> mr. selim, before i ask ms. bickert and mr. pickles to respond, perhaps you could help them understand why you frankly don't believe they are quite transparent enough at this point? mr. selim: thank you. to be clear, the point i am making on transparency is to make sure there are more clearly delineated categories between the point that mr. slater was making, in terms of what the machines or algorithms are used to remove or stop from going up in the first place, and what users on any of these platforms flag when they say, we think this is a violation of the terms of service. there are degrees of inconsistency across the platforms at this table, as well as others. to get a holistic picture of what a certain issue might be, what individuals flag versus what algorithms pull down, there are different consistencies in that. so when we ask for transparency, we are looking for a much more balanced approach, across all the platforms. >> mr. pickles, is he touching on a fair point? mr. pickles: absolutely. the balance, particularly for companies investing in technology, of understanding what came down because a person saw it and reported it versus because technology found it, is very important. we publish a breakdown across six policy areas; the number of user reports we receive is about 11 million every year. 60% of the content we remove is because technology found it, not because of a user report. so telling that story in a meaningful way is a challenge.
>> what is the percentage at facebook, ms. bickert? ms. bickert: mr. chairman, when it comes to violent content and terror content, more than 99% of what we remove is flagged by our technical tools. >> by artificial intelligence? ms. bickert: some is artificial intelligence. some is image matching. so for known videos, we use software to reduce the video to a digital fingerprint, and we can stop uploads of that video again. we have worked with adl for years on this, and transparency is key; we would all agree. for the last year and a half we have published not only our detailed guidelines for exactly how we define hate speech and violence, but also reports on exactly how much we are removing, by category, and how much of that, as mr. pickles said, is flagged by technical tools before we get user reports. >> thank you very much.
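to illustrate the image-matching approach ms. bickert describes, here is a minimal sketch of fingerprint-based upload blocking. it is an assumption-laden illustration: the function names are hypothetical, and it uses an exact sha-256 digest, whereas production systems rely on perceptual hashes that survive re-encoding, cropping and other edits.

```python
import hashlib

# hypothetical blocklist of fingerprints for known violating videos;
# real systems populate this from human review and industry sharing.
KNOWN_VIOLATING_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(video_bytes: bytes) -> str:
    """reduce a video file to a fixed-length digital fingerprint."""
    return hashlib.sha256(video_bytes).hexdigest()

def allow_upload(video_bytes: bytes) -> bool:
    """reject an upload whose fingerprint matches a known violating video."""
    return fingerprint(video_bytes) not in KNOWN_VIOLATING_HASHES

# usage: gate every upload before it is published.
# sha256(b"test") happens to match the blocklist entry above, so this is blocked.
if not allow_upload(b"test"):
    print("upload blocked: matches known violating content")
```

an exact hash only catches byte-identical re-uploads, which is why the witnesses also emphasize machine learning: a single re-encode defeats a cryptographic digest, while perceptual hashes tolerate such changes.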
senator cantwell? >> thank you, mr. chairman. mr. selim, i think you mentioned 8chan, but what do you think we need to do to monitor incitement on 8chan and other dark websites? mr. selim: you can really approach this from two categories. there are a number of increased measures, some of which i noted in my written statement submitted to the committee, that these companies and others can take to create a greater degree of transparency and standards, so we can have a really accurate measure of the types of hatred and bigotry that exist in the online environment writ large. as a result of that increased or better data, we can make policies that apply to content moderation, terms of service, et cetera. so really, having good data is a framework for better policies and better application of content moderation programs. >> so you say there is more they can do, the social media companies? mr. selim: yes. there is much more they can do. >> i see in your statement you included auditing and third-party evaluations for that transparency. as i mentioned in my opening, you don't want to basically drive us all to a dark web we have less access to. what more should we be doing, together, to address the hate taking place on these darker websites, too? mr. selim: a number of measures. the first is having our public policy start from a place where we are victim-focused. we know, whether it is pittsburgh, poway, el paso, or any of the other cities members of the panel and committee members have mentioned,
we need efforts that combat extremism and domestic terrorism, beginning with preventing other such tragedies. we need to start from a place with a better understanding of hate crimes, bias-motivated crimes, et cetera. when we start from that place, we can make better policies and programs at the federal, state and local levels of government, and in private industry as well. >> one of the reasons i will be calling on the department of justice is to ask what more we can do: several years ago, interpol, microsoft and others worked on trying to internationally address child pornography, to better police crime scenes online. i would assume that the representatives here today would be supportive, maybe helpful, maybe even financially helpful, in trying to address these crimes that they view today as hate crimes on the dark side of the web. do i have any responses from the companies here? ms. bickert: thank you, senator cantwell. this is something that across the industry we have worked on for the last few years, in a manner very similar to how the industry came together against child exploitation online, to create a sort of no-go zone for terrorist and violent content. as part of that, we train hundreds of smaller companies on best practices and make technology available to them. the reality is, the bigger companies are often able to build tools that stop videos at the time of upload. that is much harder for smaller companies, which is why we provide technology to them. we have 14 companies involved in a sharing consortium, so we can help even small companies stop content at the time of upload.
>> i appreciate, and agree with mr. selim, that there is more you can do on your own side. setting that aside for a moment, what do you think we should do about 8chan and the dark websites? what do you all think we should do? ms. bickert: i can tell you what we do at facebook. we ban any link connecting to 8chan, or anywhere else these manifestoes have appeared. the manifestoes for the el paso and poway shootings were not available through facebook. >> what more do you think government and law enforcement should do, working together, besides what you do? anybody else? mr. pickles: to follow up on mr. selim's point, if criminal activity is happening, a law enforcement response is primary. if people are promoting violence against individuals, a law enforcement intervention should be looked at. if we can strengthen our cooperation with law enforcement, we can make sure information sharing is as strong as it needs to be to support those interventions. >> so you believe we need more law enforcement resources addressing the issue? mr. pickles: it is a question of both resources and, as a paper from george washington university noted last week in looking at the statutory framework around these spaces, there are opportunities to strengthen the statutes. that's a conversation to have. >> i definitely believe you need more law enforcement resources on this issue. i look at the progress we made with interpol and the tech industry on other issues. i think this is something, and i hear that, more resources. thank you all very much. >> thank you. senator fischer? >> thank you, mr. chairman.
in june, senator thune held a subcommittee hearing on persuasive design. as we mentioned then, facebook, youtube and twitter are engineered to capture, track and keep our attention, whether through predictions of the next video to keep us watching or what content to push to the top of our newsfeeds. when social media platforms fail to block extremist content online, this content doesn't just slip through the cracks. it is amplified, and amplified to a wider audience. we saw that during the christchurch shooting. the new zealand terrorist's facebook live broadcast was up for one hour, as confirmed by the wall street journal, before it was removed. it gained thousands of views during that timeframe. ms. bickert, how do you confront the increased risk from how your algorithms boost content while gaps still exist in getting dangerous content off of the platform? you touched on that a little bit in your response to senator wicker, but how are you targeting solutions to address that specific tension? ms. bickert: senator, thank you for the question. it is a real area of focus. there are three things we are doing. probably the most significant is technological improvements, which i will come back to in a second. second, making sure we are staffed to very quickly review reports that come in. the christchurch video, once it was reported to us by law enforcement, we were able to remove within minutes. that response time is critical to stopping the virality. finally, partnerships. we have hundreds of safety and civil society organizations we partner with. if they are seeing something, they can flag it for us through a special channel. back to technology briefly.
with the horrific christchurch video, one of the challenges for us was that our artificial intelligence tools did not spot the violence in the video. what we are doing going forward is working with law enforcement agencies, including in the u.s. and u.k., to gather videos that can be helpful training data for our technical tools. that's one of the many efforts we have underway to improve machine learning technologies, so we can stop the next viral video at the time of upload. >> when you talk about working with law enforcement, you said law enforcement contacted you. is that reciprocal? do you see something show up, and then you in turn try to get it to law enforcement as soon as possible, so individuals can be identified? what is the working relationship there? ms. bickert: absolutely, senator. we have a team, our law enforcement outreach team. any time we identify a critical threat of harm, we reach out proactively to law enforcement agencies. we do that regularly. also, when there is some sort of mass violence incident, we reach out to them even if we have no indication that our service is involved at all. we want to make sure they know how to submit emergency requests to us. we respond around the clock in a very timely fashion, because we know every minute is critical in that situation. i'm a former prosecutor myself, so these things are personal to me. >> i know that the platforms represented today have increased your efforts to take down this harmful content. but as we know, there are still shortfalls that exist in getting that response made in a manner that is not just timely, but truly going to have an effect.
mr. slater, when it comes to liability, do you guys need more skin in the game, so you can ensure better accountability and be able to incentivize some kind of timely solution? mr. slater: thank you, senator, for the question. if you look at the practices we are investing in, certainly from our perspective, we are getting better over time. the current legal framework strikes a reasonable balance. it provides protection from liability that would otherwise go too far and be overbroad, and it is a sword and not just a shield, giving us the legal certainty we need to invest in these technologies to detect, monitor, review and remove this sort of content. in that way, the legal framework continues to work well. >> mr. selim, can you comment on this as well? do you think that there is enough legal motivation for social media platforms to prioritize some kind of solutions out there? that's what this hearing is about, finding solutions so we stop the online hate that i think continues to grow. mr. selim: in thinking through the issues of content moderation, the authority that exists within the current legal frameworks and resides within the companies represented at this table is sufficient for them to take action on issues of content moderation, transparency, reporting, et cetera. so there certainly is a degree of legal authority that affords these companies and others the opportunity to take any number of measures. >> ms. bickert, in your testimony, you say that facebook live will ban a user for 30 days for a first-time violation of its platform policies. is that enough? can users be banned permanently?
would that be something to look at? ms. bickert: senator, thank you for the question. one serious violation will lead to a temporary removal of the ability to use live. however, if we see repeated serious violations, we will take that person's account away. we do that not just with hate and inciting content, but with other problems. >> senator blumenthal? >> thank you, mr. chairman. thank you all for being here today, and thank you for outlining the increased attention and intensity of effort that you are providing to this profoundly significant area. i welcome doing more and better, but i would suggest that even more needs to be done, and it needs to be better, and you have the resources and technological capabilities to do more and better. the question senator fischer asked of you, about incentives, your answer was that they have authority that provides them with opportunities. the question is, really, don't they need more incentives to do more and do it better, to prevent this kind of mass violence that may be spurred by hate speech appearing on the site, or that in fact may actually be announcements of violence to come. i want to highlight that 80% of all perpetrators of mass violence provide clear signals and signs that they are about to kill people.
that is the reason senator graham and i have a bipartisan measure to provide incentives to more states to adopt extreme risk protection order laws that will in fact give law enforcement the information they need to take guns away from people who are dangerous to themselves or others. so that information is critically important to prevent mass violence, but also suicides and domestic violence, and the cues and information and signals often appear on the internet. in fact, this past december in monroe, washington, a clearly troubled young man made a series of anti-semitic rants online, bragging about planning to "shoot up a expletive school" on video while armed with an ar-15 style weapon, and on facebook posted that he was "shooting for 30 jews." fortunately, the adl saw that post. it went to the fbi, and the adl's vigilance prevented another parkland or tree of life attack. craig gutenberg of coral springs, florida met with me yesterday to talk about a similar incident involving a young man in coral springs, who said he would shoot up the high school there. law enforcement was able to forestall that using an extreme risk protection order statute. my question to facebook, twitter and google is, what more can you do to make sure that these kinds of signs and signals involving references to guns,
which may not be hate speech but reference possible violence with guns or use of guns, are made available to law enforcement? ms. bickert, mr. pickles, mr. slater? ms. bickert: thank you, senator blumenthal. one of the biggest things we can do is engage with law enforcement to find out what's working in our relationship and what isn't. that's a dialogue that over the past years has led to us establishing a portal through which they can electronically submit requests for content with legal process, and we can respond very quickly. >> what are you doing proactively? i apologize for interrupting, but my time is limited. proactively, what are you doing with the technology you have to identify the signs and signals that somebody is about to use a gun in a dangerous way, that someone is dangerous to themselves or others and is about to use a gun? ms. bickert: senator, we are using technology to identify any of those early signs, including of gun violence, but also suicide or self injury. >> and you report it to law enforcement? ms. bickert: we do. in 2018, we referred many cases of suicide or self injury, where we detected them using artificial intelligence, to law enforcement, so they were able to intervene and in many cases save lives. mr. pickles: we have a very similar approach. when we have a credible threat of someone at risk to others or themselves, we work with the fbi to make sure they have the information they need. mr. slater: similarly, when we have a good-faith belief of a credible threat, we will proactively refer it to the northern california regional intelligence center, which will get it to the right authorities. >> because my time has expired, i will ask each of you to please give me more details in writing, as a follow-up, on what identification signs you use,
what kinds of technology, and how you think it can be improved, assuming that the congress approves, as i hope it will, the emergency risk protection order statute to provide incentives for more than just the 18 states that have them now, but others to do the same. thank you. >> thank you, senator blumenthal. senator thune? >> i thank all of you for being here today. your participation is appreciated, as this committee continues oversight of the difficult task each of your companies face: preserving openness on your platforms while seeking to manage and thwart the actions of those who use your services to spread extremist and violent content. last congress, we held a hearing looking at terrorist recruitment propaganda online, and discussed the cross-sharing of information between facebook, microsoft, twitter and youtube that allowed each of those companies to identify potential extremism faster and more efficiently. i'd direct this question to all of you and ask, how effective is that shared database? ms. bickert: senator thune, thank you for the question. through the shared database, we have more than 200,000 distinct hashes of terror propaganda. i speak for facebook only, but that has allowed us to remove a lot more than we otherwise would be able to. mr. pickles: i would add, since that hearing, the reassuring thing is that we don't just share hashes; we now share urls. if we see a link to a piece of content like a manifesto, we are able to share that across the industry. furthermore, after christchurch we recognized we needed to improve. we now have real-time communications in a crisis, so industry can talk to each other in real time operationally, sharing not only content but situational awareness.
that partnership also now involves law enforcement, which wasn't there when we had the last hearing. so it is about broadening new programs to develop that further. mr. slater: broadly, i would say, look at how we have improved over time. the systems are not perfect. we always have to evolve to deal with bad actors, but on the whole we are doing a better and better job, in part because of technology sharing and information sharing, removing this sort of content before it has wide exposure or is viewed widely. mr. selim: senator, i would only add that the threat environment we are in has evolved over the last 24 to 36 months, and the tactics and techniques that these platforms and others use, and the evolving nature of the terrorist landscape online, whether it be foreign or domestic, need to keep pace with the environment today.
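as a rough sketch of the industry hash- and url-sharing the witnesses describe (the gifct shared database), the example below pools signals contributed by member companies so each can check new uploads and links against them. the class and method names are hypothetical; the real consortium's systems are not public, and this ignores governance, auditing and appeals entirely.

```python
from dataclasses import dataclass, field

@dataclass
class SharedSignalDatabase:
    """a pooled, cross-company store of content fingerprints and flagged urls."""
    content_hashes: set = field(default_factory=set)  # e.g., the 200,000+ distinct hashes cited at the hearing
    flagged_urls: set = field(default_factory=set)    # e.g., links to manifestos, shared after christchurch

    def contribute(self, hashes=(), urls=()):
        """a member company shares signals so its peers can block the same content."""
        self.content_hashes.update(hashes)
        self.flagged_urls.update(urls)

    def is_known(self, content_hash=None, url=None):
        """any member checks a new upload or link against the pooled signals."""
        return content_hash in self.content_hashes or url in self.flagged_urls

# usage: one company contributes, another checks before allowing a post
db = SharedSignalDatabase()
db.contribute(hashes={"abc123"}, urls={"https://example.com/manifesto"})
assert db.is_known(url="https://example.com/manifesto")
```

the design point the witnesses stress is that detection work done once, by any member, benefits every member, which matters most for small platforms that cannot build their own detection systems.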
>> as a follow-up, are there partnerships to specifically identify mass violence? ms. bickert: one of the things we have done over time is to expand the mandate of the global internet forum to counter terrorism. we relatively recently included mass violence incidents, and are now sharing through our protocols a broader variety of incidents. >> mr. slater, youtube's automated recommendation system has come under criticism for potentially steering users toward increasingly violent content. i led a subcommittee hearing on the use of persuasive technologies in internet platforms, algorithm transparency and algorithmic content selection. i asked google's witness at that hearing several questions for the record about youtube that were not thoroughly answered, and i would say that providing complete answers for the record is essential as we look to combat many of the issues discussed here today. i would like your commitment to provide full responses to any questions you might get for the record. do i have that? mr. slater: to the best of our ability. >> ok. i would like to explore the nexus between persuasive technologies and today's topic. what percentage of youtube video views are a result of youtube automatically suggesting or playing another video after the user finishes watching a video? mr. slater: i don't have a specific statistic, but i can say that the purpose of our recommendation system is to show people videos that they may like, or that are similar to what they watched before. at the same time, we recognize the concern about recommendations of borderline content,
that is, content that is not removed, but brushes right up against the line. we have introduced changes this year to reduce recommendations of those sorts of borderline videos. >> if you could get that number, i assume you have it somewhere, that has to be available, and furnish it for the record. the question specifically is, what is youtube doing to address the risk of some of these features, which as you know can point the user in a direction of increasingly violent content? mr. slater: with the change we made in january to reduce recommendations, it is early days, but it is working well. recommendations of borderline content have been reduced by 50% just since january. as the systems get better, we hope that will improve, and i am happy to discuss that further. >> thank you. >> senator blackburn, followed by senator scott. >> thank you, mr. chairman. i want to thank each of you for being here this morning, and for talking with us. this committee has looked at this issue of the algorithms and their utilization for some time, and we are going to continue to do this. looking at content, the extremist content that is online, is certainly important. we know there are a host of solutions out there, and we need to come to an agreement and understanding of how you are going to use these technologies to really protect our citizens. social media companies are in a sense open public forums. they should be places where people can interact with one another. part of your responsibility in this is to have an effective cop on the beat,
and to be able to see what is happening, because you are looking at it in real time. but what has unfortunately happened many times is that you don't get an objective, consistent view. you get a subjective view. this is problematic, and it leads to confusion by the public that is using the virtual space for entertainment, for their transactional life, for obtaining their news. so indeed, as we look at this issue, we are looking for you to approach it in a consistent and objective manner, and we welcome the opportunity to visit with you today. i have a couple things i wanted to talk with you about. we have all heard about these third-party facilities, where contractors are working long hours looking at grotesque and violent images, and they are doing this day in and day out. talk a little bit about how you transition from that to using modern technologies, and what facebook is going to do in order to address this and minimize harm. you talked about having 30,000 employees working on safety and security, and there are third-party entities working on this. so, let's talk about that impact on the individuals, and talk about the use of technologies to speed up this process and make it more consistent and accurate.
ms. bickert: thank you for the question, senator. making sure we are enforcing our policies is a priority for us, and making sure that our content reviewers are healthy and safe in their jobs is paramount. one of the things we do is make sure we are using technology to make their jobs easier and to limit the amount and types of content they have to see. a couple of examples: with child exploitation videos, with graphic violence, with terror propaganda, we are now able to use technology to review a lot of that content so that people don't have to. >> let me ask you this. sorry to interrupt, but we need to move forward. the reviewers, are they all located in palo alto, or are they scattered around the country or the globe? ms. bickert: of the more than 30,000 people we have working in safety and security, some are engineers or lawyers. the content reviewers, of whom we have more than 15,000, are based around the world. >> all right. go ahead. ms. bickert: for any of them, not only are we using technology to review content, but even when we can't make a decision on the content using technology alone, there are things we can do, like muting the audio or separating a video into still frames, that can make the experience better for the reviewer. >> mark zuckerberg, in a "washington post" op-ed, called for us to define lawful but awful speech. tell me how you think you could define lawful but awful speech and not overreach or infringe on first amendment rights? ms. bickert: one of the things we are looking to is clarity on the actions governments want us to take.
we have our policies that lay out clearly how we define things, but we don't do that in a vacuum. we do that with a lot of input from civil society organizations and academics around the world, but we also like to hear views from government, so we can make sure we are mindful of all the different views. >> i am out of time. mr. pickles, i will submit a question to you for the record. mr. selim, i have one i will send to you. mr. slater, i always have questions for google, so you can depend on me to get one to you. we do hope you all are addressing your prioritization issues, also. with that, mr. chairman, i will yield back. >> thank you very much. senator scott? >> thank you for being here today. i'm glad we are here today to have a meaningful conversation about what's happening in our nation. it is time to face the fact that our society has produced an underclass of primarily violent young men who place no value on human life. they live purposeless lives of anonymity and digital dependency, acting out evil desires sometimes with racial hatred. as you know, when i was governor we had the horrible shooting at the school in parkland. within three weeks, we passed historic legislation, including the protection orders senator blumenthal was talking about. we did that by working with law enforcement, mental health counselors and educators to come up with the right solution. with regard to the shooting at parkland, the killer had a long, long history of violent behavior. in september 2017, the fbi learned someone with the username "nikolas cruz" had posted on a video, "i am going to be a professional school shooter." in addition, he made other threatening comments on various platforms.
11:59 pm
platforms. cruzndividual whose video posted this on reported it to the fbi. unfortunately, the fbi closed the investigation after 16 days without ever contacting nikolas cruz, claiming they were unable to identify the person making the comment. unfortunately, we now have 17 innocent lives lost because of nikolas cruz. mr. slate how is a platform like youtuber, owned by google, not able to track down the ip address and identity of the person who made the comment? when did youtube remove the comment? did youtube report the comment to law-enforcement? if you did report the comment, did you follow-up? what was the process, and any follow-up to see if there was corrective action? mr. slater: >> it was a horrendous event. we strive to be vigilant, to
12:00 am
invest heavily to proactively report when we see an imminent threat. i don't have details on the specific facts you are describing. looking ahead, parkland was a tragedy that did spur us to reach out to law enforcement to talk about how we can do this better. that's part of why we reached out to work with the regional intelligence center, to make sure that when we do have these good faith beliefs, we can go to a one-stop shop to get them to the right law enforcement locally rather than trying to find the right people ourselves. in the last month, there was an incident where pbs was streaming the news hour on youtube, and somebody put a threat in the live chat. we referred it to the regional intelligence center, they referred it to orlando police, who took the person into custody. that's not to say things are
12:01 am
perfect. we look forward to working with you and law enforcement on that. i think we continue improving over time. >> will you give me the information on who was contacted, when, and when it was taken down? i cannot get an answer on what anybody did with regard to this, what youtube did, what the fbi did, nobody talked about it. if you will give me that information. are you comfortable that if another nikolas cruz puts something up, you have the process to contact somebody and there will be a follow-up? >> our processes are getting better all the time. it is an area where it is an evolving challenge because technology evolves, because tactics evolve. i will be happy to follow up and get more information on how they
12:02 am
operate and we work together. >> how can nicolas maduro, who is committing genocide, who is withholding clean water, food, medicine, still have a twitter account with 3.7 million followers? >> you highlight the behavior taken, and the question for us as a public company that provides a public space for dialogue is, is someone breaking rules on our service? we recognize there are world leaders who have twitter accounts in countries where twitter is blocked. we have a view, we hope the dialogue that person being on the platform starts contributes to solving the challenges you outlined.
12:03 am
>> he has been doing it for a long time, it is not getting better in venezuela. >> it is a good illustration of how the role of technology companies fits with other parts of policy responses, and if we remove that account, it will not change that. we need to bear in mind how other levers come into play. >> i disagree. he talks about things and continues to act like he's a world leader, and he is a pariah. it seems to me what you are doing is allowing him to continue doing that. >> his current account hasn't broken a rule. when he breaks a rule, he will be treated the same as every other user. we will take action when necessary. >> we are trying to get to other people. i would be happy to work with the senator from florida on this issue. i think we are not doing enough. the specific case i mentioned,
12:04 am
the rohingya and what happened on facebook, is another example. happy to work on this issue with you. >> thank you. there is a vote. i'm shocked to hear they are going to leave it open until 11:30. that is generally what happens. senator duckworth. >> thank you mr. chairman. while i appreciate the attention to the intersection of extremism and social media, many would agree the hearing is another data point in congressional hand-wringing on gun violence. according to the gun violence archive, since 2019 began, 250 days ago, we have witnessed 318 mass shootings in the u.s., more than one per day.
12:05 am
mass shootings are those where at least four people are shot, excluding the shooter. after 20 children and six adults lost their lives in 2012, many officials, including myself, declared an end to congressional inaction. since that day, our nation has endured 2,226 mass shootings. think about that number. we should be focused on ways to stop gun violence, but today the focus is social media. i'm not going to say there's no connection, but every other country on the planet has social media, video games, online harassment, crime, and mental health issues, but they don't have mass shootings like we do. nothing highlights congress's inability to address the crisis more than seeing 318 mass shootings in 260 days, then
12:06 am
holding a hearing on extremism in social media. this is a chart from the digital marketing institute that, according to their website, highlights the average number of hours social media users spend on platforms like facebook and twitter. in the united states, users are in the middle of the pack in time spent online. do you agree that america's use of social media is not especially unique on a per capita basis? are you aware of specific trends to explain the amount of gun violence in the u.s.? describe it for us, because some of us can't see the details. >> this is how much time, the average number of hours, social media users spend on social media each day via any device. >> the arrow points to the u.s.?
12:07 am
>> the highest is the philippines, the lowest is japan, the u.s. is in the middle. i have a four and a half year old. i have an 18-month-old. she knows how to use the iphone, open youtube kids, and go right to what she wants to watch. u.s. social media usage, would you agree it's in the middle of the pack compared to the rest of the world? >> yes, according to the study, which i'm not familiar with. >> in other words, are you aware of trends on your platform to explain the amount of gun violence in the u.s.? >> our platform reflects that over 80% of our users are outside of the u.s. i think your image speaks for itself. >> you brought up the role that video games can play in online hate and harassment.
12:08 am
i agree any dissemination of hate must be addressed, regardless of the platform used, but if a meaningful connection between video games and gun violence exists, you'd think the widespread use of video games in japan and south korea would reflect a connection, correct? >> i think there's something to be said about the availability of guns in the u.s. >> if you look at the amount of time folks in japan and south korea spend on video games, it is far greater than anywhere else. we are third. if you look at the number of victims of gun violence for every 100,000 people, here's the u.s. we are not the biggest users of video games. would this be accurate? >> i have not read this study, but i have one data point. according to a report looking at extremist-related murders and homicides over the past decade, research shows 73% of extremist-
12:09 am
related murders and homicides were committed with firearms. to the extent you're making a point that extremists with weapons result in violence and homicide, we have the data that backs up that point. >> the world is full of individuals who use social media platforms to disparage others, to cast false equivalencies, and to question facts. the anonymity of online platforms spreads hate. our use of social media, video games and other variables does little to explain the 2,226 mass shootings since sandy hook. the internet has emboldened and empowered hate by allowing individuals to build up online communities and share their ideas. it is our weak gun laws that allow that hate to become lethal. there is a clear and undeniable connection between the number of guns in the u.s. and the number of gun deaths in our communities.
12:10 am
this is the number of guns per 100 people. this is the number of gun-related deaths per 100,000 people. here is the rest of the world, some of whom use more social media than we do, some of whom engage in more videogames than we do. we are saturated in weaponry designed for war, but made available to anyone who attends a local gun show. a shooter can have a 100-round drum. i didn't have a 100-round drum in iraq. yet, you can find them at gun shows. 90% of americans agree congress should expand background checks. 60% agree banning high-capacity ammunition clips is what we need to do. this is not controversial. it is well past time leader mcconnell brings the house-passed bipartisan background checks bill up for a vote. i hope leader mcconnell will allow votes on the keep americans safe act, the disarm hate act, and
12:11 am
the domestic terrorism prevention act. each of these bills will keep our children and neighbors safer. i hope my republican colleagues will join in these bipartisan efforts. i yield back. >> let's do this. if you would reduce those three charts to a size we can handle, they will be admitted into the record at this point in the hearing without objection. senator young. >> thank you. i want to thank all of our panelists for being here. i appreciate your testimony and answering our questions. we need to collaborate in
12:12 am
curbing online extremism, which i understand to be one of multiple causes that we can cite as we all think about the issue of mass casualty events and extremist events more generally. the nation is wrestling with mass violence, extremism, and issues of responsibility, including digital responsibility, for some of these events. in the state of indiana, hoosiers in crown point, indiana, recently experienced how a person can become radicalized over the internet, something i know many of your companies have studied and are working on. in 2016, a crown point man was arrested and convicted for planning a terrorist attack after becoming radicalized by isis over the internet. thankfully, the fbi and the indianapolis joint terrorism task force intervened before any
12:13 am
violent attack occurred. however, that isn't always the case. we have seen this across the country. that's why it's critically important we continue working collaboratively, knowing the products and platforms provide incredible value to consumers and weren't intended for this purpose. it is our responsibility in congress, and definitely your responsibility, to make sure we monitor how the great value you provide can be used in an illicit, improper, dangerous, and nefarious manner. in one minute or less, because i have three minutes left, i would request the representatives from google, facebook, and twitter tell us why americans should be confident that each of your companies is taking these issues seriously, and why americans
12:14 am
should be optimistic about your efforts going forward. google? >> i would start by pointing to the youtube community guidelines enforcement report, which details each quarter the videos that are removed, the reasons why, and how much was flagged first by machines. we are removing violent content with a combination of technology and people. technology gets better at identifying patterns; people deal with the right nuance. we are getting better at taking down the content faster, before people need to see it. there were 9 million videos we removed in the second quarter of this year, most flagged first by machines. 80% were removed before a single view. it is generally better in terms
12:15 am
of removal before wide viewing. we are already seeing advancements in machine learning, not just in this area, but broadly across the board. as machine learning gets more data, it learns from mistakes. those systems will get better. >> why would you be optimistic? >> ideally, the systems will get better. will they be perfect? no, but they will evolve. there is reason for optimism based on the collaboration between all of us today. >> facebook. >> the first thing i will say is facebook won't work as a service if it is not a safe place. this is something we are aware of every day. if we want people to build a community, they have to know they're safe. the incentive is there to make sure we are doing our part. one of the things we have on our team of more than 350 people who
12:16 am
are primarily dedicated to the job of countering terrorism and hate is expertise. i lead this team; my background is more than a decade as a criminal prosecutor. the people i have hired onto the team have backgrounds in law enforcement and in studying terrorism and radicalization. it is something people work on at facebook because this is what they care about. they are not assigned to work on it while at facebook. this is bringing in expertise. like my colleagues, we have taken steps to make what we are doing very transparent. the reports being published show a steady increase in our ability to detect terror, violence, and hate much earlier, when it is uploaded to the site and before anybody reports it to us. more than 99% of the violent videos and terrorist propaganda we
12:17 am
remove from the site we are finding ourselves before anybody reports it to us. >> twitter. >> we can be optimistic. a few years ago, at the peak of the islamic caliphate, people challenged our industry to do more and be better. 90% of the terrorist content that twitter removes is detected through technology. i look at independent academics who talk about that community being decimated on twitter. i look at the collaboration between our companies, which did not exist when i joined twitter. all of those areas have driven better technology, faster response, and a much more aggressive posture toward bad actors. that shows benefits in other areas. we can also take confidence that no one will tell the committee our work is done. every one of us will leave knowing we have more to do and we can never sleep. we have to keep it like this. >> i can spend five days, maybe
12:18 am
five years on this. i only have five minutes. i'm already one minute over, mr. chairman. >> i will go vote. i will not let them close the vote until you have asked your questions. >> thank you for holding this hearing. i want to thank the witnesses for being here to talk about this very real and difficult issue. the rise of extremism online is a serious threat. the internet has proven a valuable tool for extremists, connecting with one another through various forums to spread hate and dangerous ideologies. while we're here to focus on the proliferation of extremism online, which is incredibly important, we must not lose sight of the fact that violent individuals who find communities online to fuel hatred have acted in the name of hate.
12:19 am
we cannot ignore the fact that the absence of sensible, commonsense gun safety measures like background checks is allowing individuals to access dangerous weapons far too easily. we know the majority of americans want and support those measures. i represent the great state of nevada. as we approach, unfortunately, the two-year anniversary of the october shooting in las vegas, the deadliest mass shooting in modern american history, we know coordination with and between law enforcement is more important than ever. the southern nevada counterterrorism center, also known as our fusion center, is an example of a dynamic partnership between 27 different law enforcement agencies to rapidly and accurately respond to terrorist and other threats. with las vegas hosting nearly 15 million tourists and visitors each
12:20 am
year, the center is responsible for preventing countless crimes of terrorism. can you please discuss with us your coordination efforts with law enforcement when violent or threatening content is identified on your platform? what would you need from us as a legislative body to promote and facilitate this partnership to keep our communities safe from another shooting like the one in october? >> that attack was incredibly tragic and our hearts are with those who have suffered. our relationship with law enforcement is an ongoing effort. we have a team that trains to ensure law enforcement understands how they can best work with us. that is something we do proactively.
12:21 am
when there is a mass violence incident, we reach out to law enforcement immediately. even if we are not aware of any connection between our service and the incident, we want them to know they can reach us. we have an online portal where they can submit legal process, including emergency requests, and we can respond quickly. we proactively refer imminent threats to law enforcement whenever we find them. >> i want to echo her sympathies for your constituents who were victims of the horrible tragedy. the lessons we have learned since that event have continued to inform our thinking, not waiting for the ideological intent of it to be known before acting. one of the challenges we have is you may look for an organizational affiliation before you would say it is a terrorist attack. we act first to stop people using
12:22 am
our services. we do cooperate with law enforcement to report credible threats. the companies met with a number of agencies yesterday to discuss how we can further deepen our collaboration. one of the questions is that there is a huge amount of information within the law enforcement community under that umbrella. they might help us understand the threats, the trends, the situational awareness. we are understanding how more information can be shared. >> would you tell us some of the tools you may need to help you better cooperate to protect the communities? >> that was the subject of the meeting yesterday. we had a productive conversation. >> i will echo the same concern and sympathy here, and similarly we proactively cooperate with law enforcement
12:23 am
to refer credible threats and respond to valid requests, including emergency disclosure requests. >> i see my time is up. i will submit a question for the record about combating violent anti-semitism online. we have votes. i appreciate your time and commitment to working on this issue. thank you. >> thank you. let's start with a simple yes or no question. i don't mean this to be a trick question. you can answer either yes or no, or yes or no with a brief caveat if you need. i would like to hear from each of the three of you. do you provide a platform that you regard and present to the public as neutral in the political sense? >> yes. our rules are politically
12:24 am
neutral. >> so you aspire to political neutrality? >> we want to be a service for political ideals across the spectrum. we have our rules and they are enacted without ideology included. >> similarly, we craft our services without regard to political ideology. we are not neutral against terrorism or violence. >> i appreciate you pointing that out. that is not what i am talking about. that leads into the next question i wanted to raise. this is important. the work you're doing in that area is important, and it is important for anyone occupying this space to be conscious of it. you do a service to those who access your services by removing things like pornography and terrorism advocacy, things like that.
12:25 am
there's a lot of debate that surrounds this issue and the legal framework around it. section 230 of the communications decency act has received criticism. it protects a website from being held liable as the publisher of information provided by another information content provider. significantly, section 230's good samaritan provision gives you the promise that you will not be held liable for taking down this type of objectionable content, whether it is something that is constitutionally protected or not. for the same witnesses: each of you represents a private company. each of you is accountable to your consumers within your company. this means that in some sense, you have incentives to provide a safe and enjoyable experience
12:26 am
on your respective platforms. i have a question about section 230, particularly the good samaritan provisions. does it help you in your efforts to swiftly take down things like pornography and terrorist content off your platforms? would it be more difficult without the legal certainty that section 230 provides? >> absolutely. it is critical to our efforts in safety and security. >> i would say it has been critical to the leadership of american industry in the information sector. >> absolutely, yes. >> to that point, imagine a world where this is taken away and those provisions no longer exist. large companies like yours might be able to -- still would be
12:27 am
able to, and probably would, filter out this content between the artificial intelligence capabilities at your disposal and the human resources that you have. i suspect you could and would do your best to perform the same function. what about a startup? a company trying to enter into the space that each of your companies entered into when they were created not very many years ago? what would happen? >> thank you for that question. it reminds me of industry conversations involving smaller companies before we formed the global internet forum to counter terrorism in june of 2017. we were having closed-door sessions with companies large and small to talk about the best ways to combat terrorism online. companies were concerned about liability. section 230 is important to being
12:28 am
able to proactively act on and assess content. it is a fundamental part of maintaining a competitive online ecosystem. without it, the ecosystem is less competitive. >> the u.s. has section 230. it's part of the reason why we have been a leader in economic growth, innovation, and technological development. other countries suffer. study after study has shown that. >> if it were to be taken away -- all three of your companies are not exactly known for being small businesses or businesses with a modest economic impact. you can identify the concern i'm expressing. if we took that away, you might be able to keep up what you need to do, but would it be harder for someone to start a new search engine company, a new tech
12:29 am
platform? somebody starting out in the same position where your company was a couple of decades ago. would that be extraordinarily more difficult? >> i think it would create problems for innovators of all stripes, small and medium-size businesses potentially, getting their arms around that significant change to the fundamental legal framework of the internet. >> my time has expired. >> i want to thank our committee chairman for holding this hearing. it's a vital conversation for us to be having. we need to be taking a hard look at how we address the rising tide of online extremism and its real-world consequences in our country. i have questions for you on this important topic. i wanted to echo some of what my
12:30 am
colleagues have already said, which is there is much more the senate must do to address gun violence, whether or not it is connected to hatred on the internet. more than 200 days ago, the house of representatives passed a bipartisan universal background check bill. this commonsense gun safety measure has an extraordinary level of public support. it deserves a vote on the senate floor. we cannot simply have hearings; we have to act to reduce gun violence. the adl center on extremism has closely studied hate crimes and extremist violence in this country. is it fair to say there has been an alarming increase in bias-motivated crimes, including extremist killings, in the last
12:31 am
several years? >> that is accurate. >> in the case of extremist killings, what role do you feel access to firearms has played? >> as i briefly alluded to earlier, to expand on what i was mentioning, according to a recent report on extremists of all ideological spectrums that committed murders or homicides in the u.s., 73% of those were committed with firearms. >> what impact do you believe the increase of hate crimes has on the minority communities and members that have been targets of these attacks? let me add to that question: one of the unique aspects of a hate crime is it not only victimizes the targeted victims, but
12:32 am
strikes fear among those who share the same characteristics with the victim or the victims. >> in the last 24 months, we saw calendar year 2017 with a 50% increase in anti-semitic incidents across the country. doj's own hate crime data shows an increase in hate crimes. we continue to see these troubling statistics year after year. as part of my submitted written testimony and my oral testimony, i speak to the imperative need for enhancement and enforcement of hate crime laws and protections for victims. >> i am an original cosponsor of senator bob casey's legislation, the disarm hate act, which would bar those convicted of misdemeanor hate crimes from obtaining firearms. do you agree the measure would keep guns out of the hands of individuals who may engage in extremist violence?
12:33 am
>> yes. thank you for your leadership and to all members who have supported the legislation. >> i appreciate the efforts that the social media companies have described to combat online extremism, including efforts to provide transparency to their users and the general public. it is really important to understand how you are addressing problems within your existing services and platforms. i would like to learn more from you about how you are thinking about this issue as you develop and introduce new products. a lot of us feel the approach of rapidly introducing a new product and assessing the consequences later is a problem.
12:34 am
can i ask how you plan to build combating extremism into the next generation of ways in which individuals engage online? >> safety by design is an important part of building new products at our company. one of the things we have built in the past five years is a new product policy team. their responsibility is to make sure they are aware of products and features being built, and to explain to the engineers, who are thinking of the wonderful ways they can be used, all of the abuse scenarios, making sure we have reporting mechanisms and other safety features in place. >> we are in a very adversarial space. bad actors will change behavior. when we make a policy decision, one of the key things is how it
12:35 am
can be used against us. how will people change and circumvent the policy? we build that into learning and share it with smaller companies -- 200 small companies around the world -- sharing that with them to understand the challenges. it is invaluable. >> our trust and safety teams are at the table with product managers and engineers from the conception of an idea to the development and possible release. from the ground up, it is safety by design. >> i want to thank the witnesses. i will be taking over as chair. i will call on myself as the next questioner. i would ask all of you, your companies -- your technology is famous for its algorithms, which seem to have the ability to pinpoint what people want.
12:36 am
you can put an email out, talk about your interest in yellow sweaters, and the next thing you know, you have ads popping up that talk about yellow sweaters. who knows how that happens, but for a lot of us, it happens. it is pretty impressive. if your algorithm technology is so good at pinpointing things like that, particularly as it relates to ads, what are the challenges with regard to directing that kind of technology to help us with what has been talked about on both sides of the aisle, which is that the people who are committing this kind of violence are particularly disaffected young males.
12:37 am
aren't there signs, things you can do with the technology you do so well in other spaces, to provide more warning signs of this kind of violence from these kinds of individuals who already have a profile online? are you working on that? >> thank you for the question. technology plays a huge role in what we're doing to enforce safety policies. in the area of terrorism, extremism, and violence, it's not just the matching software we have to stop organized terror propaganda videos; we are using artificial intelligence and machine learning to get better at identifying new content we have not seen before that may promote violence or incite violence. when there is a credible threat of imminent physical harm, we send that to law
12:38 am
enforcement. the new systems are getting better. >> are you using algorithms and the advanced technology you use in other spaces to identify those? >> there is cross-learning among the companies. >> is it a priority of yours? >> absolutely. >> of all of the companies? >> investing in technology to find terrorist content is absolutely a priority. >> yes. to add to this part of the conversation, as somebody who has researched the data on these issues for nearly two decades, the environment has changed significantly. white supremacist terrorists in the u.s. don't have training camps in the same way that foreign terrorist groups do. their training camp, where they learn and coordinate with one another, is the online space.
12:39 am
the conversation about machine learning, technology, and artificial intelligence -- it continues to disrupt that environment and make it an inhospitable place for individuals that want to promote violent content. >> this is a bigger policy question. all of your companies have the tension of wanting eyeballs on, more clicks, more time on facebook, google, or twitter. i think there are studies showing the amount of young men and women, young girls, who feel a sense of loneliness from their time online. there's indication that among teenagers, suicide rates are increasing for young girls.
12:40 am
we are all dealing with the opioid epidemic. we are looking back wondering how we did that, how we got to this position in the 90's; 72,000 americans died of overdoses last year. we're looking back and asking how this happened. in policymaking, are we going to look back in 20 years asking how we addicted a bunch of young americans to looking at their iphones eight hours a day? 20 years from now, we will see the social, physical, and psychological ramifications, where we may be kicking ourselves and wondering why we allowed that to happen. it worries me. you have tension, because you want more face time, young
12:41 am
teenagers spending seven hours a day staring at their iphones, because that helps revenue. are you worried that 15, 20 years from now we will be in the same spot that we were with opioids, wondering what we did to our kids? you have great power, and there are negative implications in what's happening in society right now. >> as a mother, i take questions of wellness seriously. this is something we look at, and we talk to wellness groups to make sure we're crafting products and policies in the best long-term interest of the people who want to connect. we have seen social media be a tremendous place for support for those thinking of harming
12:42 am
themselves or struggling with opioid addiction, or getting exposed to hateful content. we are exploring and developing ways of linking people up with resources. we're doing that for opiate addiction, for thoughts of self-harm, and for people asking or searching for hateful content. i think this can be a positive thing for overall wellness. >> we have similar programs in place for opioids and people using terms referencing self-harm or suicide. we provide them with a source of support. that's what we have rolled out. we recognize things like digital literacy are issues that we need to invest in to make sure that people using our services have the skills and awareness to use them wisely.
12:43 am
our ceo has committed the company to looking at the health of the conversation, not just using the metrics you referenced, but looking at much broader metrics that measure the health of the conversation, rather than revenue. >> thank you. senator cruz. >> thank you. i will say thank you to my friend from alaska for sharing this deep void and longing in your heart. you will be getting the yellow sweater for christmas. mr. slater, i want to start with you. i want to talk about project dragonfly. in 2018, it was reported google was developing a censored search engine under the alias of project dragonfly.
12:44 am
in response to those concerns, alphabet shareholders requested the company publish a human rights impact assessment by october 30 of this year examining the actual and potential impacts of a censored google search in china. during the shareholder meeting on june 19, the proposal for the assessment was rejected. the board of directors encouraged shareholders to vote against the proposal. alphabet commented that google has been open about its desire to increase its ability to serve users in china and other countries, that it considered a variety of options to offer services in a way consistent with its mission, and gradually expanded offerings to consumers in china. i want to start with clarity. has google ceased any and all development and work on project dragonfly? >> to my knowledge, yes.
12:45 am
>> has google committed to forgoing future projects that may be named differently, but would be focused on developing a censored search engine in china? >> we have nothing to announce at this time. whatever we would do, we would look carefully at things like human rights. we work with the global network initiative to evaluate how our principles, practices, and products comport with human rights and law. >> roughly contemporaneously, google said it didn't want to work with the u.s. department of defense. how do you justify having been willing to work with the chinese government on complex projects, including artificial intelligence projects, while not at the same time
12:46 am
being willing to help the department of defense develop ways to minimize civilian casualties with better ai? how do you reconcile those approaches? >> as we have talked about today, we do partner with law enforcement and the military in certain ways, offering some of our services. we draw responsible lines about where we want to be in business, including limitations on getting into the field of building weapons and so on. we evaluate that over time. >> let me shift to a different topic. this panel has talked about combating extremism and the efforts of social media to do that. many americans, including myself, have a long-standing concern that when big tech says
12:47 am
it is combating extremism, it is often a shield for advancing political censorship. let's talk about how recently twitter extended its pattern of censorship to the level that it took down the twitter account of the senate majority leader, mitch mcconnell. that i found a pretty remarkable thing for twitter to do. it did so because the account had sent out a video of angry protesters outside of senator mcconnell's house, including an organizer of black lives matter in louisville, who is heard in the video saying that the senate majority leader "should have broken his little raggedy wrinkled -- neck."
12:48 am
and someone else who had a voodoo doll of the majority leader. another angry protester said "just stab his heart," and that person did not abbreviate mf. the majority leader sent out these threats of violence and remarkably found his own twitter account taken down. how does twitter explain that? >> it is something we have been asked around the world, the clamor in many political jurisdictions about the safety of people who hold public office. when we saw the video posted by numerous users that identified someone's home and contained some severe threats, out of an abundance of caution, we did remove the video; we removed the tweet from everybody who had posted it. given the essence of a video with
12:49 am
someone's personal home, where the senate majority leader may have been residing at the time, with several violent references, we felt, out of an abundance of caution, we should remove it. we then discussed it further and understood the intent was to call attention to the very threats of violence. we did permit the video to be put on twitter with a warning message saying it is sensitive media. it is a balance we are striking between many different situations where we have been asked the opposite, which is that similar content should be removed because it contains a violent threat. that is something we strive to get right every day. the first instance was about the safety of leader mcconnell and his family. >> you would agree there's a difference between someone posting a video where they are threatening someone else and the target of the threat posting the video? you would agree they are qualitatively different? >> in the situation where you have the person's home visible
12:50 am
in the video, there is still a risk. we were motivated by preventing harm that could have occurred, because the home was visible. in discussing it with the campaign team and his senate office, we appreciated their insight. our motivation was to prevent harm, not the potentially ideological issues you may have alluded to. >> have you rethought your handling of the incident that senator cruz asked about? i will call your attention to your testimony on page 2, which says "we do not allow propaganda or symbols that represent any of
12:51 am
these organizations to be shared on our platform, unless they are being used to condemn or inform." isn't that instructive to your platform? don't you think it was clear from the beginning that senator mcconnell and his campaign had posted the video to condemn and inform? >> this is an issue. we as a company have taken a more aggressive posture after the christchurch attack. we saw people posting excerpts of the manifesto and clips of the video to condemn it. even in those instances, we removed it. in the u.s., images of manifestoes have been posted with large
12:52 am
chunks, even when condemning it. this is something constantly under tension. the case you illustrate highlights that tension. if we are going to err on the side of caution, fewer violent threats and fewer homes visible on our platform is probably a good thing. we have to work harder at taking into account the context you outline. this is the first time i have ever been asked why we didn't leave something up. that in itself is illustrative of the complex situation. >> in terms of the context, it was the owner of the home who chose to inform about what was being said against him.
12:53 am
it was the individual himself who posted this. it seems to be a clear-cut case in that instance that differentiates it from the condemnation of the larger incident of the christchurch violence. i would suggest it shouldn't take very long for twitter to understand. senator sullivan, you are recognized. >> following up on senator cruz's question, whether a company wants to work with the pentagon is something the leadership of individual companies have to decide on, and that is fine. what troubles a number of us is
12:54 am
there's a declaration you're not willing to work with the department of defense on certain issues, yet there's willingness to work with one of our country's potential adversaries, particularly on sensitive technological issues important to the competition between the nations. do you understand where that has caused bipartisan concern? how should we address that? should congress take action on those kinds of situations? i'm not saying everybody has to work for the pentagon, but if you don't want to work to help with the nation's defense, but are working with a country that poses a significant long-term threat to the u.s., do you understand why that causes concern? >> i do appreciate the concern.
12:55 am
we are proudly an american company. we draw responsible lines. we look forward to engaging with the committee and others to do that. >> do you think, if there are instances -- a clear-cut example of not doing anything on the defense of the nation with the u.s. department of defense, but working with the chinese -- there is something clear and obvious to do to prevent that or penalize that? we, the congress? >> it is an important question. we try to have responsible and consistent lines. the details would have to matter. >> let me ask one final question, a follow-up to senator scott's earlier question. on the twitter account, nicolas maduro has not broken any of the rules. what are those rules? at what point would you look to
12:56 am
have somebody who is certainly not treating his citizens well -- what are those rules, and at what point would you look at what they are doing to their own citizens as a way to maybe not provide them the platform you have? >> the rules apply to many views. i can make a copy available. whether it's encouragement of violence -- if the twitter account is used, in some of the ways we have seen around the world, to encourage violence and to organize violence, we would take action on those accounts. >> would twitter allow vladimir putin or xi jinping to have an account? >> if they were acting within our rules. it is slightly different but important: some governments have
12:57 am
sought to manipulate our platform, to spread propaganda and disinformation, through breaking our rules. one of those governments is venezuela. we made a public declaration that for every account we remove from twitter for covertly engaging in information operations that we believe a government is responsible for, we make the archive available to the public and to researchers. we have taken the same step with operations directed from countries including china, iran, and russia. it's not just the single twitter accounts; some governments also seek to manipulate our platform. we would take action to remove them and make it public so people can learn from it. >> if the government takes violence against its own citizens, is that breaking twitter rules? >> that is happening off-line, and the key question for us is what's happening on twitter. >> thank you. >> thank you to our witnesses.
12:58 am
the record will remain open for two weeks. senators are asked to submit any questions for the record. upon receipt, the witnesses are requested to submit their written answers to the committee as soon as possible, but no later than wednesday, october 2, 2019, at the close of business. i thank each and every one of you for appearing today. this hearing is adjourned.
12:59 am
announcer: c-span's washington journal, live every day with news and policy issues that impact you. coming up thursday morning, california congressman tom
1:00 am
mcclintock shares the latest on the committee investigation of president trump. and gregory meeks, democrat of new york, discusses trump administration policy toward iran. then a political scientist -- discusses the need for multiparty democracy in the u.s. be sure to watch washington journal live at 7:00 eastern thursday morning. join the discussion. live thursday, the house returns at 10:00 a.m. eastern for general speeches on c-span. at 12:00, members take up a short-term spending bill. on c-span2 at 8:00 a.m., the environmental protection agency holds a news conference announcing it will revoke a waiver that allows california to set its own fuel standards. when the senate returns at 10:00 a.m., work on executive
1:01 am
nominations, including brian mcguire. on c-span3 at 9:00 a.m., the confirmation hearing for eugene scalia, son of antonin scalia. he has been nominated to serve as the next labor secretary. c-span is back in des moines, iowa, this saturday for 2020 coverage of the polk county event, beginning at 2:00 p.m. eastern, where democratic candidates will take the stage for speeches. watch live on c-span or listen using the free c-span radio app. announcer: next, the house gun violence task force hears from witnesses about the impact of guns in their lives, the effect on communities, and what legislative steps could be taken to curb gun violence. this is
