
tv   Discussion on ISIS and Social Media  CSPAN  May 13, 2016 8:23am-9:31am EDT

8:23 am
[inaudible conversations] >> alleged misconduct in the tsa, held by the house oversight and government reform committee. find full-length video from both hearings online at c-span.org. >> social media profiles are not currently used when conducting federal background investigations. this morning the house oversight and government reform committee looks at why that is and results from pilot tests that consider information from social media sites. that'll be live at 9 a.m. eastern on our companion network, c-span3. >> executives from national youth football, hockey and lacrosse programs will be on capitol hill this morning to talk about concussions. they'll be joined by public health officials and the parents of victims suffering from chronic brain injuries. we'll take you there live at
8:24 am
9:30 a.m. eastern here on c-span2. >> next, a look at how isis recruits supporters on the internet and how the u.s. government combats them online. this is about an hour. [inaudible conversations] >> good afternoon, everyone. we're going to get started. so welcome to this panel, disrupting isis online: the challenges of combating online radicalization. this panel is put on by the advisory committee to the congressional internet caucus, and we're hosted by the congressional internet caucus, and we'd like to thank the co-chairs, congressman goodlatte and congresswoman eshoo and senators john thune and patrick leahy for hosting us here today. the caucus hosts events every few weeks on topics salient to the internet and policy, and we invite you to come out for
8:25 am
events throughout the summer. so today we have several excellent panelists with us. we have emma llanso from the center for democracy and technology, who works on the free expression project. we have rashad hussain from the department of justice, working on countering violent extremism over there, and we also have seamus hughes, deputy director of the program on extremism at george washington university's center for cyber and homeland security. my name is miranda bogen, and i'm a fellow at the internet law and policy foundry and was a fellow at the congressional internet caucus in the past. so let's get started. i'll just give a brief overview of the issue, and then we'll jump right into it and get into what's the real issue here with extremists online, what role platforms like twitter, facebook and google play in this, and what is the right way to approach the issue of dealing with extremist
8:26 am
content online and recruitment for terrorist groups abroad. so as you may have seen, social media platforms like twitter and facebook have generally, especially in their early years, been quite in favor of leaving their platforms as places for free expression. they've been adamant supporters of that, but gradually, especially over the past few years, we've seen that being taken advantage of by groups like al-shabaab in somalia and al-qaeda, and then we have the islamic state beginning to use the platforms even more actively than that, bringing it to a totally different level. and now the platforms are facing pressure on multiple sides -- from the government here, from governments abroad, from their users -- to do something more to take the content out of people's social feeds, you know?
8:27 am
it's not something you want to see every day, but this content is not something that we want spreading around, because it is really effective in recruiting people to go abroad and join these causes. so why don't we turn to seamus, who's really been working on this issue and tracking this phenomenon over time. can you tell us, when did this start? how are the platforms being used? what are the groups doing? >> yeah. i think it started when the internet started, right? in the early ages when we looked at terrorist groups online, it was on the password-protected forums, and as it shifted to twitter and facebook, so did recruitment. so if you look at the number of individuals that have been arrested for isis-related charges in the u.s., it's 85 since march 2014. the average age is 26. so isis recruiters and spotters are going online to where their demographic is, so that tends to be twitter. we've seen a shift actually moving back over to telegram
8:28 am
and other platforms, but they clearly use the online environment in a way that is conducive for them to recruit. think of it in three ways. so they use it for grooming: over the summer at the program on extremism at george washington, we did a six-month study of isis recruits online, mostly focusing on americans but also english-language speakers. so we looked at about a thousand accounts on a daily basis. among those you see them grooming online. so we watched a young woman from the midwest who had questions about her faith, and an isis recruiter realized she was naive and was answering the questions in an innocuous way. and a few weeks later he would slowly introduce the isis ideology into the conversation. the other way is logistical support. an individual like mohamed khan, a 19-year-old kid from chicago -- when he gets picked up at o'hare airport, he and his underage siblings, a 17-year-old and a 16-year-old, are planning to go
8:29 am
join isis. when they arrested him, they went through his stuff and realized he had four numbers. he had received those numbers -- people to call when he reached turkey -- online. it lowers the bar for an individual to meet a radicalized recruiter. and the last is what the fbi calls the devil on the shoulder -- egging people on to do this. you also have to realize that the numbers pale in comparison to any other form of conversation online, you know? you're talking about 44,000 twitter accounts for isis supporters. they're clearly using the online environment. and the last thing i'd say is, it's not like if twitter went away tomorrow we wouldn't have recruits joining, right? the fact that there's a physical space, that there's a so-called caliphate -- that is a driver. twitter, telegram, places like that tend to help facilitate that recruitment, but they're not the reason why people
8:30 am
decide to become radicalized and join groups like this. in the u.s., at least among the people that have been arrested, communities don't radicalize in america. individuals do. we don't have these pockets of radicalization like you would have in some european countries. here, if you're trying to find a like-minded individual, you usually try to find that online. i'll leave it there. >> rashad, maybe you can tell us about how the department of justice and the government are approaching this phenomenon and how you're working to combat it. >> well, it's a threat that we take very seriously. our first priority, of course, at the justice department is to protect the american people from attacks. and what we're seeing isil do online is use some very sophisticated techniques. seamus talked about some of the approaches that they've used. they've also done something different than previous groups in that they have adopted a crowdsourcing model through which
8:31 am
they encourage anyone anywhere to go out and commit attacks against innocent people. so part of the challenge we face in government is we have to be successful 100% of the time. isil is overwhelmingly rejected, but they're reaching out to millions of people around the world -- an audience of 1.6 billion muslims and others -- and even if they are successful in a minuscule number of those cases, then you still have a problem of 20,000, 30,000 foreign fighters. you still have the problem of isis getting followers all around the world. and they're very adept at using different techniques, targeting different audiences in multiple languages. what they try to do is reach out to disaffected youth and offer a sense of purpose, a sense of belonging. they use a combination of strength and warmth to try to lure recruits, a sense of camaraderie. and as twisted as it sounds, they claim to be building something.
8:32 am
so we've all seen the atrocities that they've broadcast around the world, but they've also put out positive messaging. i've mentioned the themes of camaraderie and strength and warmth, and they claim to be building something -- they're calling people to build something which is, in their conception, the caliphate. and so one of the realizations that we have as government is that there are multiple audiences, and we have to be smart about using the right messengers to reach the right audiences. so government isn't always going to be the right messenger to reach the various audiences we're trying to reach. you have the people potentially thinking about joining isil in the short term, then you have the immediate influencers around them -- family, friends, peers -- then you have a set of cultural influencers that
8:33 am
can influence people generally, and you have kind of a mass audience. so government may be more effective in the prevention space, in reaching out to people that haven't already bought into aspects of the propaganda or the ideology, but you really need specific messengers to reach, for example, the specific class of fence-sitters. who are fence-sitters going to listen to? it's a question we think about. perhaps they'll only listen to other extremists, and maybe those are extremists that are not violent -- people that are extreme in their views but can persuade them to come back. that's not a role for the government to play. who is best positioned to reach out to cultural influencers? so what we've tried to do in government is, where possible, message ourselves to the audiences which we think we can reach. and some of the common themes that we've used are to highlight isil's atrocities against particularly muslim communities, where they're
8:34 am
also killing in big numbers; amplifying the people who have defected from isil; and highlighting their losses -- as seamus noted, they actually have territory which they can point to and say, come and help us establish the -- [inaudible] so we point to the losses that they're taking, particularly in iraq and syria. and we've also tried to expose the living conditions in isil's territories, and defectors have done some of that. and perhaps most importantly, we think it's important to work not just within government but with partners to disseminate positive messages that make clear what the rest of us stand for, what the rest of the muslim community stands for, and to highlight positive alternatives. so if someone says, i really have a problem with what's happening in syria under the bashar regime and i want to do something about it, we've got to find other paths for people to take that are constructive rather than destructive. >> so it sounds like we have the dual use of the internet, both as a platform for recruitment but
8:35 am
also for engagement on the other side. and we also see that the platforms are torn: on one hand leaving violent and threatening content up for intelligence purposes, and on the other hand really trying to minimize what they're taking down so that they don't have to be the ones judging what is appropriate content and what is not. maybe, emma, can you tell us about the response that we've seen from the companies and some of the concerns they might be considering when they're asked to comment on how to approach this issue? >> sure, yes. so, obviously, over the past year and a half -- can you hear me now? ah, great. clearly, over the past year and a half we've seen a huge amount of scrutiny on major internet companies, you know, the big social media platforms, about how they are responding to the existence of so-called extremist content online. and it might help to describe
8:36 am
just a little bit the legal framework around speech online, you know? what is it that enables the kind of exchange of information and expression of opinions that we all enjoy? in the u.s. we've got the strong protections of the first amendment for speech, where we have, you know, very high standards for what speech the government can actually say is unlawful. the relevant issues in that context are, you know, is a comment a direct incitement to imminent lawless action or imminent violence, is it a true threat of violence against another individual. but we don't generally have broad prohibitions against hate speech, and there's certainly no definition of extremist content as, you know, a set of unlawful speech. so already we're in an
8:37 am
environment where what exactly we're talking about -- what sort of speech and content -- is unclear. what we've seen a lot of the companies do is apply their terms of service, which are kind of variable across platforms, as ways to remove content that gets reported to them. so internet companies, you know, hosts of our speech online, are generally protected from legal liability for speech that they are not themselves the author of. this is section 230 of the communications act, which ensures that if i, for example, tweet something defamatory about seamus, seamus can sue me, of course, because i'm the one who made the comment, but he can't sue twitter about it. and this law has been incredibly important to, you know, the amazing innovation we've seen with the internet and online
8:38 am
platforms, and also to supporting speech online. all of us depend on a number of different intermediaries being willing to host and transmit our speech; if an isp or a social media provider could face legal liability for your speech, they'd be very unlikely to let you speak. but also under that law, companies are protected from liability for their decisions to remove speech. this is where we see companies developing terms of service where they set out the standards for what kind of speech they'll accept on their platforms and what they'll say is a violation of their rules or standards. and so a lot of the platforms have rules about hate speech -- even though this is very often speech that's totally protected under the law in the u.s., they may still say they don't want to host speech that is denigrating of a particular group or class. most of them have standards
8:39 am
against direct threats or threats of violence. i believe facebook has a standard against dangerous organizations in particular, by which they tend to mean terrorist organizations or organized crime. so we've seen a range of different kinds of terms grow up on the different platforms over the years, and then companies, in response to user flags about speech that appears to violate their terms, will take a look at the content and ask, does this seem to go too far? does this step over the line of what they've already described to be acceptable or not acceptable on their platforms? >> so i meant to ask the panel about this balance of sort of the opportunity of the internet as a platform to spread various different types of speech, positive speech, to keep track of what's going on, and sort of
8:40 am
the desire to control this -- the dangerous speech, the hate speech, what have you. in the research arena, how do you see that playing out? >> sure. so i'm kind of dual-hatted on this one. we have a fellow at the program on extremism, j.m. berger, who looked at english-language accounts over a month-long period to figure out if takedown was effective or not, and here's the takeaway, with a caveat. takedowns were effective in terms of reducing the number of followers that a person had when they came back, particularly on twitter. that's the first part. here's the second part that we should also keep in mind: there's a system for resiliency built into the network. so an individual like terrence mcneil, who was arrested on terrorism-related charges last fall -- when we started watching him, he was lone wolf 7. by the time he was arrested, he was lone wolf 21. every time he came back as 8, 9, 10, there's an
8:41 am
isis echo chamber that essentially gives shout-outs to accounts. they build in resiliency. they say, here is lone wolf 8, he used to be lone wolf 7, everyone follow him. we know we're going to get kicked off for violating terms of service, but we're going to help other people get back on. from a research perspective, you clearly want as much data as you possibly can get. it's clearly a balancing act on whether takedown is the necessary way. i tend to be more on the side of positive counter- and alternative messaging than i am on takedown, although there are some instances where i think takedown's warranted. >> yeah. we've been encouraged by companies enforcing their terms of service. and there are echo chambers out there in the violent extremism world where they're posting violent tweets and beheading videos. there's not a lot of intelligence value necessarily in that echo chamber. now, there may be some
8:42 am
limited cases in which it can be helpful and there is some intelligence value, and that can always be communicated to companies. but for the most part, you know, i agree with seamus' view on it. now, it's important again to remember that overwhelmingly isil is rejected around the world. and there's a reason for that: it's largely because of their own actions. a lot of the atrocities that they're committing, the stories that have been told by people that have been impacted by isil and other groups, the stories of defectors -- all of those are getting out through social media as well. and so, you know, perhaps a thousandth of a percent of the people who are targeted by isil have gone and joined, and that's unacceptably high for all of us because we're trying to prevent any single attack from ever happening. but it's important to remember that these platforms also provide an opportunity to put out not just
8:43 am
countermessaging, but positive messaging that allows the rest of us, including muslim communities, to communicate what we stand for. >> and that's really, i mean, the risk of the overbroad content policy, or particularly the increasing pressure on companies to strengthen their policies so that more content can come down: it is this, you know, potentially vastly overbroad response to what ends up being -- as seamus' research seems to indicate -- a lot of one-on-one communications that end up driving the actual individual to commit an act of violence. and if you're trying to capture one-on-one, highly tailored, direct conversations with a policy that's about taking down all of the speech that's sort of in the general area of discussing isis and terrorism and u.s. foreign policy, you're throwing out a whole lot of baby with very little bath water.
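[a side note on the resiliency pattern seamus describes -- lone wolf 7 coming back as lone wolf 8, 9, 10, with the echo chamber giving shout-outs to each new account: that naming convention is regular enough that researchers can group an account's reincarnations automatically. a minimal sketch in python; the handles and the trailing-number convention here are hypothetical illustrations, not real data or any platform's api:

```python
import re
from collections import defaultdict

def group_reincarnations(handles):
    """Group handles that differ only by a trailing number, so a
    suspended account's successors (e.g. lonewolf7 -> lonewolf21)
    collapse into one timeline keyed by the shared base name."""
    trailing_num = re.compile(r"^(.*?)(\d+)$")
    groups = defaultdict(list)
    for handle in handles:
        match = trailing_num.match(handle)
        if match:
            base, num = match.group(1), int(match.group(2))
            groups[base].append(num)
        else:
            # No trailing number: treat the handle itself as the base.
            groups[handle].append(0)
    # Sorted suffixes read as the order of re-registrations.
    return {base: sorted(nums) for base, nums in groups.items()}

# Hypothetical handles, not real accounts:
print(group_reincarnations(["lonewolf7", "lonewolf8", "lonewolf21", "observer"]))
# -> {'lonewolf': [7, 8, 21], 'observer': [0]}
```

of course, a base-name heuristic like this only catches the crudest renaming convention; linking a suspended account to its successor in practice also relies on the shout-out announcements and follower-graph overlap that seamus mentions.]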
8:44 am
[laughter] >> that's a good segue, because we have had some pressure from the u.s. government to add additional liabilities for the platforms, or at least to compel them to turn over certain information if they come across it, or for government agencies to use certain information in their response. and we've also had more collaborative approaches, with the summits between the administration and silicon valley both here and in california. what is your sense of the right way to approach this, if the overbroad approach is just that? >> well, so there have been some proposals in congress that would try to require internet companies to report apparent terrorist activity to the government if they identify it. and this kind of proposal is pretty concerning.
8:45 am
in the particular bills that have been proposed, there's no real definition of what terrorist activity might be, and what that sort of model would set up is, basically, a huge incentive for all of our communications providers to err on the side of caution in reporting their users to the government as suspected terrorists or as suspected to be involved with terrorist activity. i think the result of that would be a huge amount of overreporting, which is both incredibly concerning for individual civil liberties -- you know, our right to privacy in our own communications -- and also not really generating useful information for law enforcement. so i think it's very much more what rashad had been saying about the need to support the environment where the defectors or the journalists or the advocates who are out there countering the message that isis
8:46 am
presents and providing their own, you know, kind of positive viewpoints and positive ideas -- we need to ensure that there are strong protections for free speech in place so that that can happen. we unfortunately see, in reports by the committee to protect journalists, the way that anti-terrorism laws in egypt and turkey -- you know, countries that are allies in the fight against isis -- are being used to put journalists in jail. and that kind of overbroad approach, which ends up constraining the speech of exactly those people that we need to get different viewpoints and different messages out there, is a real risk. >> there's also kind of an interesting dynamic here, because you can think about the government's amazing capacity for convening. if i call ten service providers
8:47 am
to get them in a room, i can't do that. i was talking to an imam who wanted to do counter-isis videos online. i said, well, sir, what do you want to do? i'm going to grab my phone and record myself talking about how isis is wrong for the following reasons. that's great, sir, but no one's going to watch that. it's going to be ten minutes of you holding your phone. here's a guy who wants to do the messaging, but he has no idea how to tag the videos so they pop up when the next al-awlaki video pops up. the government has the ability to be the convener: we don't actually want to be anywhere near this thing, but here's somebody we know that you may want to talk to about these types of things. >> and that's how we've tried to use our convening role, by bringing together civil society, artists, people that are adept at using social media and the
8:48 am
platforms -- you know, the advertising sector, silicon valley companies. and, you know, after that our job is to sustain communication to some extent but, realizing that government is not the best messenger in this space, our job is also to step back and allow the creative people that know how to put out the best positive messaging and counter-messaging to do their thing. and there is evidence to indicate that we're making steady progress in this area. not only have the social media companies we've had cooperative relationships and discussions with made announcements such as twitter's announcement that they've taken down 125,000 isil-affiliated accounts, but we've also seen polling data indicating that larger and larger percentages of young arab populations are totally ruling out any possibility of joining isil. there's a survey that came out recently that said 80% of 18-24-year-olds in the arab world, in 16 countries that were
8:49 am
surveyed, said that they would never even consider joining it. and if you were to do a poll of the disapproval rating of isil in many of these countries, it's even higher. so a lot of attention is paid -- and deservedly so -- to the small percentage that has bought into that ideology, but it's important to keep in mind that there's a lot of good work being done, largely outside of government, to make sure that those that might be susceptible to isil don't fall prey to their message. >> i think that's a very important point too, because when you look at this, we're talking about a manageable number. the fbi director talks about some 900-1,000 number in all 50 states. from an actual messaging perspective, you can tailor your messaging to those 900-1,000 people. you can do one-on-one interventions online. you're never going to be able to de-radicalize or disengage someone online, but you might be
8:50 am
able to introduce a seed of doubt about killing civilians, and then you can have a real-life or offline conversation about how that person should come back into the fold. >> yeah. and reaching the right target audience is the challenge. now, if the numbers which you've stated and which we talk about on this panel are approximately correct in terms of the number of people in the united states, for example, that might be susceptible to isil's ideology, you don't want to have a messaging campaign, when you're trying to target that group, that sends the message that somehow all muslim youth are vulnerable, or that just because some muslim youth might face discrimination, that means they might be especially susceptible to violent extremism. that's not the case. muslim youth in the united states overwhelmingly are excelling in a number of fields. there's data that indicates that per capita they're at the same or higher education levels and higher income levels
8:51 am
than people of other faiths. and so you don't want to have kind of a one-size-fits-all mass messaging approach to reach the audiences that we've talked about. and if you look at seamus' report on the isil-related arrests, i believe there's a statistic that says 40% of those that are arrested are recent converts to islam. sometimes there's a narrative that there are youth that have grown up alienated, that somehow muslim youth are generally susceptible or vulnerable to isil's recruitment. and, you know, the 40% that are recent converts didn't even grow up, you know, in muslim communities as young muslims. so we have to be careful how we message on this. because, you know, muslim-americans sitting at their dinner table every night are talking about the same issues as all other americans. just because they're muslim, isis is not the number one conversation point at the
8:52 am
dinner table, and, in fact, they're overwhelmingly rejecting the message that isil is putting out there, and that's borne out by all the data that we see. >> so messaging itself is one issue -- what do we say -- but it sounds like targeting is equally important. so is there a role for internet platforms to help in advising how to go about that targeting, or to prioritize certain content algorithmically? are we seeing anything in that direction, or is that, from a speech perspective, equally as problematic as taking down content? >> well, so some of the things that we've seen from a couple of the big social media platforms have been much less about actually affecting the main content, whether it's, you know, the facebook news feed or twitter feed or search results. they've been pretty clear about not wanting to change and start
8:53 am
manipulating those displays of information -- kind of their core product -- because of pressure from governments. and i think that's the right call, right? i mean, that's the kind of overbearing government effect on, you know, our access to information and what views and perspectives are out there that i think would really undermine a lot of the very good counternarratives that we see coming out. what we have seen some companies do is programs that they've had with nonprofits around a number of different kinds of topics, but really focusing in on the question of radicalization and extremism right now, in the kind of advertising space that might appear alongside search results or on your facebook page -- kind of sponsoring different nonprofits so that they can have their message show up as an ad alongside related content.
8:54 am
you know, i think there are still some questions there about, you know, is this company getting too far into trying to promote certain ideas over others. we have this funny relationship with social media platforms where in a lot of ways we really like it when content we care about is displayed to us, and we don't want to see 19 million baby photos if that's not what we're into, but also, when companies are taking a non-neutral or very ideologically motivated position, that can make people feel really uncomfortable. and so i think a key part around all of this is transparency. people are, i think, particularly uncomfortable when it's not clear where the motivation is coming from or how viewpoints are trying to be shaped. so the more we can hear from the companies about what they are doing, the more we can see kind of open public discussions about what government might be considering, what companies are considering,
8:55 am
you know, as opposed to kind of closed-door meetings where we only get leaks of agendas and bits and pieces of anonymous reports in the news -- the more transparent we can be about, you know, how things are being worked out and what influences are there, i think the more comfortable a lot of people will be. >> there was a lot of talk, not so recently but before, when the platforms seemed to be doing a little bit less to combat this, that maybe they were actually helping but didn't really want to talk about it, for two reasons: one being that you don't want to show your cards to the people who are trying to game the system and put that content up, and, two, that cooperating with the government -- especially post the snowden revelations -- was not necessarily desirable for the users. my sense is that we've seen a shift, and users are now actually wanting to see more of that. is that something that you've seen? and do you think that trend of sort of trying to keep the distance will start to evolve
8:56 am
away from that toward more public cooperation, or do you see that continuing? >> i mean, i'd come back to the point about transparency. i think one takeaway we can have from the snowden revelations is that you don't want to surprise people with the scope of what's going on. that creates a really strong backlash, and, you know, it's our right as citizens to know how our government is, you know, affecting our environment for speech, how our government is influencing what access to information we have in public. and so i think having these conversations more publicly is really important -- which is not to say that, you know, we necessarily want really close coordination between governments and companies on this. i mean, very much to the point -- i was really glad to hear you talk about sort of the recognition of when government needs to step back, because the
8:57 am
worst thing would be to undermine the efforts of the people providing alternative viewpoints because those people are, you know, cast as being too close to the u.s. government and so discounted for that reason. >> i understand the sensitivity that you mentioned, miranda. but at the same time it's true, and the social media companies are very clear about the fact, that they don't want to have their platforms being used by terrorists to spread their message. and so there is a lot of basis for cooperation. and we're seeing progress in that area. and i think that the trend is headed in the right direction, as you mentioned. >> so given these sensitivities, and given that the sort of overbroad approaches might not be the right way, what would be helpful from companies, from civil society, from the american people in helping combat this content in the right way, in a smart way? >> i think there's some low-hanging fruit on this.
when we did our report on isis in america, we talked to a number of muslim-american community members, leaders, religious leaders, and they said, listen, i want to do counter-messaging online. i want to talk to a kid that i'm worried about, and i want to bring him back into the fold. but i'm worried that if i do, i'm going to get pulled into secondary screening at airports for engaging with a known or suspected terrorist. i think there is some level of policy or legal guidance that the department of justice or other organizations could provide for what's acceptable and what's not online, so people aren't being brought up on material support charges, which is a very broad charge. i understand when i engage with these individuals that i'm probably going to hit up against stuff, but i understand the risk, and i know the transparency in it. but to ask somebody from middle america who wants to do counter-messaging to understand those nuances without some right and left latitude -- that is something the government could provide relatively easily. >> and i think one contribution that companies can make in all of this, in addition to all of the work that they're already doing, is even more improvement in appeals processes for when people have their content come down or their accounts deactivated. because, as companies focus on trying to enforce their terms consistently, mistakes happen. the scale of content that gets posted and reviewed by companies every day is enormous, and so there are always going to be cases where the 10 or 15 seconds of human review that decides whether an account should come down errs too far on the side of takedown. and you might be losing really important countering voices in that kind of process. so we need to ensure that there are ways for people to appeal, and generally, in the way we look at how content policies are enforced on platforms, to make sure they're looked at not just with an eye to how to keep the most extreme or violent content off a platform, but to make sure that the space for discussion and debate about that content, and about these issues more generally, can still persist. >> and we can look into providing additional guidance, in addition to what's there, for those doing the work of counter-messaging. they shouldn't be in a position where, when they're doing counter-messaging, they have to be concerned about being accused of providing material support.
>> well, one other question i had is that there have been several lawsuits against the platforms for hosting this content, which they're immune to under the law, but can you explain a little bit more -- do you think those cases will go anywhere? do you think they're just people jumping on sort of the topic of the day? >> generally the law is pretty clear that there are strong protections against holding platforms civilly liable for speech their users post. there have been a few cases where people are seeking damages for the death of a loved one that they tried to ultimately tie back to content that had been posted on a social media network. and of course each is a heartbreaking story, and you understand why people seek restitution, but i think we need to be very careful about how broadly we would scope who is the proximate cause of the death of somebody in a terrorist act. trying to sweep online platforms under a very broad idea of general liability for actions that are many steps removed from anything they're directly involved with is ultimately not going to succeed. >> i know the department of justice has sort of played with the idea of going after people who are sharing content itself. is that something you're continuing to pursue, or are you approaching people who maybe are not promoting content directly -- they're not the recruiters, but they're supporting it and sharing it? >> our approach is governed by the first amendment, and there is a lot of speech that is protected speech we may not agree with, but we're not prosecuting those cases. the cases which could be prosecuted are ones in which there's been a specific threat or solicitation of crimes against particular individuals. i think, seamus, you referenced one of the cases from ohio. those are the cases we are talking about. >> we have to wrap it up in a few minutes. given that this is such a live and important issue -- because it is affecting lives on whatever scale it's happening, it is very distressing to the public, and the platforms and everyone are working on it -- what do you think is the most important thing for congress to take away from this issue moving forward, as they're thinking about how to either legislate, hold off on legislating, or ask the companies for help? and maybe on the flip side, for any other parties involved, what do you think is the most important thing we should be doing to continue the trend of individuals rejecting the message that isis is spreading online? >> it is very clear we'll not kill our way out of this problem. we'll not delete our way out of this problem, so we need to continue reaching the right audiences through the right messengers. we have put into place, at the government level and working with civil society, a number of mechanisms by which we can get out the right counter-messaging, the right positive messaging, and then the right positive alternatives for young people who, as i spoke about in the beginning, may be disaffected for whatever reason. they may see something happening on the other side of the world which they view as injustice or atrocity against a whole people, and they say, i cannot sit still.
i have to do something about it. we have to work together to find those mechanisms for that small segment of the population that may be attracted to isil's message. remember to keep in mind that their message is overwhelmingly rejected already, and we don't want, in the name of reaching targeted communities, to use overbroad tactics or messages that could paint entire groups as vulnerable or as a problem, when we have a distinct audience we're trying to reach through some actors, and then, in the preventative space, general audiences that we're trying to reach perhaps through those actors or a different set of actors. >> and i'd say, for congress and everyone, to remember that the u.s. will be watched very closely for our responses to all of this, right? the kind of standard and model that we set can do either a lot of good or a lot of harm. so let's keep it on the side of good: show that there are ways to pursue this fight against isis that don't involve broad-based censorship, don't try to play whack-a-mole with extremist content online, are conscious of and actively trying to avoid the stigmatizing effect on muslim communities, and instead focus on showing how truly supporting our fundamental values of freedom of speech and the right to privacy can actually help us succeed in the fight. i think that ultimately a message of what it means to conduct this fight from a position of democratic ideals will be much more convincing than an approach that, motivated by fear, looks to crack down on more speech and put many more people under scrutiny by the government. >> i think i'm going to be contrarian for the sake of conversation. congress has a large megaphone, and when congress uses that megaphone for naming and shaming, you actually see action. so i don't believe there would have been a summit convened by the white house if it wasn't for congress constantly hammering social media companies to deal with the content. it is almost a forcing function. i think there is a reason why youtube has flagging for terrorist content: because for two years they got beat up on the hill over videos of u.s. soldiers being killed that were posted by a baghdad sniper. so it is a balancing act. i understand the free speech and free expression issues, but congress can play a role in forcing the convening, as uncomfortable as that is. the default of social media companies is very libertarian in these types of things, understandably so, but there is a balancing act between the family members we talked about in the lawsuits and free expression of conversation online. but i'm being contrarian here. >> we have a question from the audience. >> [inaudible] on the importance of counter-messaging and developing this type of positive content -- are there any empirical ways of measuring that? you can see someone exposed to this type of content, but actually translating that to off-line behavior and being able to show that it is actually proving to be deradicalizing -- is there a way to measure that, or is that sort of guessing in the dark? >> i will start. i think it is hard, nearly impossible. it is easier when you focus it down. there is a think tank in the u.k. that, with a small sample size -- 14 people -- did direct one-on-one online interventions to see how disengagement would work, but it was a very small sample size, and it is very labor-intensive to do that type of thing. in terms of broad-based messaging, how do you measure that? very difficult. how do you measure "don't do drugs" or "see something, say something"? it is a very difficult dynamic. >> it is difficult to prove a negative -- absent this messaging, who would have gone down a path of extremism and who wouldn't -- but there is data out there, and we see the types of messages that tend to resonate, that get traction. stories of defectors, for example, stories of family members. there's data indicating that some of the best intervenors are family members, particularly mothers. there is polling data. i mentioned the poll indicating 80% of arab youth 18 to 24 would never consider joining isil. one year prior, when that same poll was taken, the number was 60%. so you do see a trend in some of the polling. we are able to measure what types of messages -- and they're oftentimes not government messages -- are out there picking up traction, but at the end of the day, finding the right metrics has its challenges. that doesn't mean there aren't metrics we can use; we should continue to use and develop the use of data as we engage in what we're doing. i think it is important to make sure that you have empirical research, particularly in the area of interventions, as seamus spoke about, because you get a sense over time of what types of tools and strategies work and what types don't. some of that we've seen from the work that's been done in europe and other places. so as a government we try to draw on some of those studies that have been done by groups, for example, such as exit in germany and others that are operating in the space. we have examples of programs that have worked and examples of programs that haven't. we try to draw from the best and go forward. >> i know you have to run, so, sorry about that -- we'll keep questions going for the other panelists. right here in the front. >> [inaudible] you say the numbers of isis recruits in the united states are so small, and there was a blog post last week that toddlers have killed more americans than isil recruits. given mr. hussain's concern that we don't want to create the illusion that muslim youth are at risk -- because they're not -- when the focus is on violent extremism or isil-inspired violence, doesn't that kind of create that illusion? because you're just looking at one slice of violence in the united states, when overwhelmingly the violence committed in the united states is not inspired by isil. and so why separate it out? because it seems like the indicators toward violence are sort of the same: alienation, frustration, that whole general thing that drives people to violence. >> i would say that's a fantastic point, and i think it's been one of the critiques of the countering violent extremism frame on some of the government's work in this area. there is sort of this back and forth: are we talking about all kinds of violent extremism? are we talking about all of the domestic threats of violence against civilians? or are we really talking about anti-radicalization for people who might be recruits to isis? and i think it's very clear people notice that shifting of target, and i would just encourage the government to be a lot clearer about what the focus is. as you say, if there are actually much more significant threats inside the united states to the safety of our civilian population from people who have really nothing to do with isis, prioritizing a focus on that could be very important. >> i don't think it should be an either/or proposition. when the administration released their strategy, countering violent extremism would focus on all forms of extremism. that is the theory. in practice it is focusing on isis and fighting terrorism -- let's put that out there. you should be worried about the nidal hasans as much as the dylann roofs of the world. at the program on extremism we look at all forms of extremism, and we are looking at this exact issue. next month we'll have a paper that looks at isis supporters online versus white supremacists online. what do they talk about? how is it different? how is it the same? what are the followers like, those types of things. so we can have a conversation on extremism and how we focus on these types of things. i hate to do the numbers of who has been killed, who is more likely -- it becomes an either/or proposition, and i would rather not do that. at the end of the day you are talking about families that lost someone. talking about jihadist-inspired terrorism, the numbers are similar to white supremacist terrorism because of san bernardino a few months back. they are very small relative to the general population of the u.s. >> [inaudible] one is just about volume, specifically with american users or american content.
here, over the past five years, has the isis traffic gone up, gone down, or stayed relatively the same? is there a way to measure that? if there is not a way, do you have observations? and one on take-downs: thoughts around take-down campaigns or take-down efforts from the government, either funding organizations or convening these conversations to talk about how we do take-downs. i'm sorry, let me reverse the question. the take-down question is, when the government is funding organizations to participate in or lead civil society campaigns to take down content, is there guidance on how these campaigns are run? i assume they are well-intentioned about terrorist content, but there are civil liberties and protected speech at stake. on counter-messaging, what are your thoughts around government convening conversations with tech companies and funding organizations to do counter-messaging?
at what point does that become a domestic propaganda campaign to influence religious views and foreign policy views in particular? >> which one do you want? >> i will do the take-downs. >> sure. >> i mean, it is an excellent point that you raise. i think an instructive example is actually some programs going on in the united kingdom and at the european union level. they have these programs called internet referral units, which go one step further than your hypothetical of funding non-profits: it is actually members of the government themselves -- in the united kingdom it is the metropolitan police, the chief law enforcement body -- who have a unit dedicated to going onto social media platforms, identifying content that they want to see come down, figuring out which of the platforms' terms of service it violates, and flagging it to the platforms for their review. it is sort of this way of saying, well, it is the platform that is making the decision whether it stays up or comes down; we're just telling them about content that violates their terms of service. but we think there is a huge concern with that kind of approach. this is a formal government program seeking to have certain content removed from the web, and because companies' terms of service can be much more restrictive than what government can actually go after under the law, it's a way for governments to succeed in getting content taken down that they wouldn't actually be able to go after through a court. so even when you expand that out a step and say governments are funding and incentivizing private parties to do this kind of flagging, you still end up back at this question of government action. and so when you've got government identifying particular kinds of content and particular kinds of speakers and trying to restrict that, even through these somewhat attenuated means, in the u.s. that would raise major first amendment issues. >> let me see if i can take the first one, which was about the increase in isis' use of social media compared to other terrorist organizations. i think it is clear to say that isis has been very adept at using social media, and i say that meaning somewhere around 4,000 new videos a year. they have twitter, they have telegram, various different platforms, and channels come down or come out depending on the day. i'm on 50 telegram isis channels right now. sorry, rashad. there are different entry points for talking to these individuals. think of it like a democratization of recruitment. i'll give you an example. there were three girls from denver, 17 and 16 years old, who jumped on a plane bound for turkey to cross the border to syria. they got picked up in frankfurt and turned around, because their father called up every number in the phone book he could get. they had gone on a tumblr page that laid out every step -- who to go to in turkey, what to say at customs. it lowers the bar for a 17-year-old kid from denver to figure out how to make the next step. the most eager people will always figure that out, but it allows for an ability they didn't have before. it allows for an interactivity they didn't have, similar to social media in general. so i can have a conversation with someone that i wouldn't be able to have five years ago, because i know who she is on twitter and we can go back and forth on dm -- similar to having a conversation with a foreign fighter in fallujah, asking him what i need to know: what do i bring, what do i not bring, who do i talk to. it is concerning from this perspective.
that is where isis is very adept: it makes for an ease of use, for those who would not necessarily get there otherwise, in a way i think is concerning. i will leave it there. >> [inaudible]. >> i would have to look back at my notes. charlie winter at georgia state university does good work looking at propaganda online. you might want to look at his recent reports. i will pull those and get back to you if you give me your card. >> [inaudible] can you talk more about what qualifies as -- what is overreach, what should not be taken down? >> yeah, it's a great question, and i would love to hear from seamus how you framed your research, but the concern that we see is this: it depends on which government officials you're talking to -- are you talking to someone from the u.s., the u.k. or europe -- but you can hear everything from somebody planning a specific attack, which seems very clearly something that would be unlawful, to just sort of general pro-isis propaganda. so we've heard references to videos that are not about inspiring any specific violence but are talking about how great life is in the caliphate, what economic opportunities there are, or other sorts of things -- views that are disagreeable, flat-out wrong or untrue, but not anything that falls under what we traditionally consider unlawful speech. it is much more about building up people's positive feelings about isis. and so you see the conversation kind of sliding back and forth between, well, do we want to stop specific commission of violence, or do we want to try to convince people that they're wrong to think in a certain way? it is the latter -- really trying to convince people that they have the wrong view or the wrong ideas -- that i don't think should be the goal of any of these programs, because i don't think it's going to work. stopping people from committing specific acts of violence is absolutely an appropriate goal, but trying to win people over to think according to a certain set of values or beliefs is, i think, a losing proposition. >> this is also one of the reasons that people have brought up, you know, why do the platforms not put in an algorithm like they do for child pornography, for instance? the answer is, it is so subjective. for every piece of content -- is this propaganda, is this extremist content, is this incitement to violence? -- you have to look at each piece individually. >> technically, when companies filter for child pornography or child abuse imagery, what they're doing is comparing hashes of known images of that material to things that are uploaded to their own servers, so they can see: this file one of our users is trying to upload, does it match something we already know we don't want on our platform? that's a kind of image matching that is very different from the subjective day-to-day assessment of tens of thousands or hundreds of thousands of pieces of content that could run from a direct threat to somebody, to a stupid joke between friends, to a thoughtful discussion about the ideas that isis is putting forward, to an invitation or instructions for how to come to turkey. it is just a huge range of content that gets kind of swept up in the extremist content bucket and, as miranda says, is not an easy algorithmic assessment. >> it depends on how you define the buckets in your research. >> i think it is important, when we talk about isis at least in the american context, to recognize that isis in america is a spectrum. on one end you have a 17-year-old tweeting to 4,000 followers about who isis is, who drives his best friend to the airport to join isis and spends years in jail for material support. that is one side of the spectrum. on the other side, a guy spends 20 years in the u.s., gets u.s. citizenship, and 11 days later goes to syria and becomes a midlevel commander running a battalion of foreign fighters. that is isis in america -- two different extremes, a kid tweeting in his parents' house and a guy running a battalion. when we scoped out the report, we looked at communicated threats depending on the nexus to isis. then, in terms of extremist content, we look at nodes. if someone is tweeting to one follower, sure, i guess they're an isis supporter, but i'm less interested than if they're tweeting to 10 or so nodes, pushing out new and interesting content we hadn't seen before, connecting with them, talking, communicating -- let's talk on dm. that is where i start becoming more interested: the connections, not just the speech. >> a little bit more about the who, rather than the why. >> yeah, yeah. >> any other last questions? wait for the mic. >> have you folks considered mocking or making fun of isil and some of its practices? for example, the men seem to be guys who can't get a date except by kidnapping them. and i would think there would be a lot of room to make fun of some of the things they do. you can't make a lot of fun out of beheading, but -- putting them in context, these do seem to be guys who can't get a date. >> are there alternate approaches to -- >> right. i think this is an important question, right? what kind of counter-messaging will be effective -- >> -- as opposed to countering. >> right, this is where the term counterspeech or counternarrative or counter-messaging really falls apart. what we're talking about is people sharing their views, sharing their ideas, and one thing we've all seen from content on social media is that funny content gets shared a lot more than a nice five-paragraph essay carefully breaking down points. there is definitely room for, a role for, that too -- >> [inaudible]. it seems to me to be a very interesting message.
isil is absurd. >> i would hesitate on what intuitively makes sense to us as to what would be effective for isis recruits. if you actually look at the old data on aqi, videos mocking aqi were less effective than videos talking about their atrocities and killing of civilians. there was a news article last week about head-cam footage of an isis fighter who couldn't shoot straight. that got shared thousands of times, and everybody in the media thought it was great counter-messaging. it didn't get any traction in the echo chamber we were looking at -- >> [inaudible] >> purported english-language isis supporters online. we need to run the data a little more -- we're doing a collection now -- but they tend to care more about defectors, people standing up saying i was wrong, things like that.
they tend to really get angry and want to counteract that. there has been a marked shift in messaging since the administration talked about losses of territory. isis videos have clearly shifted away from giving candy out to kids in raqqa toward we're-winning-battles, here and there. they're keying in on what we're doing and adjusting their messaging accordingly. the last thing i think is quite effective, at least from looking at it in different instances, is when you bring up families and the dangers -- what happens when an individual goes to syria and iraq, and what they do when they leave their families behind. there is a level of effectiveness there. i'm talking to a number of individuals who are true believers on this, but when you bring up family members -- have you talked to your mom lately? what do you think about the fact that you left them behind? -- they tense up in a way i'm not used to seeing from them. i think there is an effective way to do things. this is also to say that radicalization is highly complex and not a linear process, right? humans by their very nature are complex. disengagement or deradicalization will be just as deeply complex and not linear. people float in and out. things that work for one person will not work for another. how do we figure that out and tailor the messaging? that is something rashad will have to figure out. it is a very difficult dynamic. >> any last points on that? okay. thank you all for coming out. we appreciate it, and we have upcoming events in the next few months. keep your eye on the mailing list for the congressional internet caucus and on the website, netcaucus.org. have a great weekend. [applause]
[inaudible conversations].
