
British Committee Hearing on Fake News: Twitter Panel | CSPAN | February 16, 2018, 4:05pm-5:24pm EST

4:05 pm
i'm sorry, we're running slightly late, but hopefully we won't run
4:06 pm
any later than we are. if i could -- the members of the committee are returning but if i could start with some of my questions, you'll be aware that the committee has made repeated requests to twitter for information about the activity of accounts, fake accounts, particularly where they may be connected with russian agencies. it is a similar request made by the u.s. senate intelligence committee. >> thank you, chairman. thank you for having us today. i would like to defer that question to nick, my colleague from the uk, who does have an update to give you.
4:07 pm
>> as he said, and in my letters previously, we have been doing further investigation. i would like to read this, because i don't want to misread the figures. >> is it very short? >> two paragraphs. our broader investigation has identified a very small number of suspected internet research agency-linked accounts: 49 such accounts were active during the campaign, which represents less than 0.005% of the total accounts that tweeted about the referendum. they collectively posted 942 tweets, a very small fraction of the total tweets posted. these tweets cumulatively were retweeted 461 times and liked
4:08 pm
637 times. on average, this represents fewer than ten retweets per account and fewer than 13 likes per account during the campaign, with most receiving two or fewer likes and retweets. these are very low levels of engagement. >> what's the audience reach for those accounts? >> less than two retweets. >> what's the audience for that? we have a set number of accounts. they have people who follow them. what's the audience? >> the engagement metrics we have been using in this investigation help us understand how visible these accounts have been on the platform. as i have highlighted, the levels of engagement are very low, and that directly bears on the viewability of those accounts. if there is very low engagement, that would suggest very low views.
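[to make the per-account arithmetic just read into the record concrete, here is a minimal illustrative sketch in python, using only the aggregate figures stated above. this is an editor's worked example, not anything from twitter.]

# illustrative only: aggregates as read into the record above.
accounts = 49
tweets, retweets, likes = 942, 461, 637
print(round(retweets / accounts, 1))  # 9.4  -> "fewer than ten" retweets per account
print(round(likes / accounts, 1))     # 13.0 -> roughly 13 likes per account
print(round(tweets / accounts, 1))    # 19.2 tweets per account over the campaign

[note that the stated aggregates only match the stated averages if "fewer than ten" refers to retweets and "13" to likes, which is how the figures are rendered above.]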
4:09 pm
>> the number of accounts activated is one piece of information, but the reach of that information is something else, and what is being shared is of interest too. it would be useful to know about the sharing: is it content they created, or links to other sources of information? so thank you for giving us that update. we've clearly got other things to follow up on, and we need more information about that. it would be interesting to know whether you've restricted your searches to certain already identified accounts, or whether you've done a trawl across the whole platform for accounts registered in russia that were active during the campaign. we know with twitter there was evidence of large numbers of suspected bot accounts being taken down after the referendum was over. that's why we're persistently asking for this information. >> and i can touch briefly on
4:10 pm
the point there. we were asked to look at the city university research. one of the challenges we have is that these accounts weren't identified by that research. twitter is an open platform. our api can be used by universities and academics around the world, and it is. unfortunately, that doesn't give you the full picture. in some cases, people have identified accounts as suspected bots which have later turned out to be real people. so one of the things we do is work closely with academics, asking people to bear in mind, when they make assertions about the level of activity on twitter, that there may be cases where those assertions are based on very active twitter users who are real people and not bots. one of the dangers of using activity as a metric to identify bots is that you may misidentify prolific tweeters who are human.
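[to illustrate the pitfall being described -- raw activity alone cannot separate bots from prolific humans -- here is a hypothetical python sketch of the kind of naive threshold rule an outside study might apply. the threshold and accounts are invented for the example; real detection relies on many private signals a platform holds and researchers do not.]

# hypothetical sketch: why tweet volume alone misclassifies prolific humans.
DAILY_TWEET_THRESHOLD = 50  # arbitrary cutoff a naive study might choose

def naive_is_bot(tweets_per_day: float) -> bool:
    # flags any high-volume account; has no access to the private signals
    # (login patterns, device data, phone verification) the platform holds.
    return tweets_per_day > DAILY_TWEET_THRESHOLD

accounts = [
    ("news_junkie_human", 80),   # a real, very active person
    ("scheduled_spam_bot", 75),  # actual automation
    ("casual_user", 3),
]
for name, rate in accounts:
    print(name, "flagged as bot" if naive_is_bot(rate) else "looks human")
# both high-volume accounts get flagged: a false positive on the human.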
4:11 pm
so that's a benefit: researchers can use our platform. but it's a challenge for us, because researchers can't see the defensive mechanisms and the user data we can. >> there's been plenty of analysis done looking at the characteristics of suspicious bot activity. twitter also knows where accounts are being operated from. therefore, you could easily detect the creation of accounts operated in a different country that suddenly start tweeting about something happening in another location. that sort of activity is easy to spot on the site if you're looking for it. i want to ask carlos: what cooperation was given to the u.s. senate investigation? has the evidence of russian-linked activity on twitter just been extrapolated from the work that's been done looking at the facebook pages, or is that separate intelligence you've supplied to the senate?
4:12 pm
>> thank you for that question. you know, we're constantly monitoring our platform for any activity that's happening. the internet research agency in particular -- we came across that information in a number of different ways. starting in 2015, there was a "new york times" article about some of the activity on some of those accounts, and at that time, in june of 2015, we started actioning those. in the course of the follow-up to the election, we got information from an external contractor named q intel, which we shared with other platforms, that provided a seed of information they told us was related to this farm in st. petersburg, the ira. i believe there were about a hundred accounts turned over. bit by bit, following more and more signals, we found accounts
4:13 pm
that were related to it. we've improved the information we've provided to the public. as you mentioned, sir, to the u.s. senate and to the house intel committee. now that number is 3,814 internet research agency accounts that were active. noting that, you know, we started suspending these accounts in 2015, all of them have been suspended. they're not functioning on our platform. and you heard from our peer companies earlier today. we are very good at understanding what's happening on our platform, but sometimes it is important to have that partnership with third parties, with contractors, with civil society, with academics, and with government and law enforcement in particular, to help us figure out what we don't know, what we can't see that's
4:14 pm
not on our platform. we're good at tracking the connections between things on twitter, and sometimes we need partnership on the rest of the picture. >> does twitter believe there are likely to be other farms, other agencies running fake accounts from countries like russia? there's been a focus on one, but some people say that if the level of activity people believe took place was being carried out, it would probably be too much to be done by one agency, and there will probably be others as well. >> i think we have to be humble in how we approach this challenge, rather than say we have a full understanding of what's happening or what happened. we have continued to look. we're constantly looking for efforts to manipulate our platform. we're really good at stopping
4:15 pm
it, especially when it comes to the malicious automation side. but we will not say or stipulate that we will ever get to a full understanding. >> some of the evidence the committee took when we started the oral evidence hearings in westminster related to the referendum in catalonia, and research there suggesting there was not only russian activity but also agencies based in venezuela. is that something twitter has looked at? >> nick pickles, not only our uk lead but also one of the leaders in the company when it comes to information quality, could perhaps address that better than i can. >> this is one of the areas where twitter presents both a challenge and an opportunity. research is done and published. that particular research wasn't published in a journal. there's no underlying data. we've not received a formal communication of the findings. it's very difficult for us to validate those external research
4:16 pm
findings sometimes. what we have is the numbers at an aggregate level. and just to respond briefly to your previous point, chair, on the assertion that it's easy to identify very quickly where an account is operating from on the internet, where someone is based: i was logging into my e-mail earlier on, and as a standard corporate practice we use a virtual private network to communicate securely with our company. that took two clicks. as far as google is concerned, i'm not in d.c. right now, because my virtual private network is connecting somewhere else. so the idea that companies have a simple view of where customers are communicating from doesn't hold: traffic may be routed through data centers, it may be routed through vpns, it may be routed through users deliberately trying to hide their location. i want to caution against the idea that there is absolute certainty there. all of this work is based on a variety of signals, and we make probabilistic decisions based on them, but they are very rarely absolute. >> if i could, just to build on nick's point, which is an important one: the geographic basis of where tweets are coming from, where users are, is not always the strongest signal we use to action accounts for violating our terms of service,
4:17 pm
which we take super seriously. which means even if nick is dialing in from a vpn or a tor browser or other ways to obfuscate where he's coming from, if he breaks any of our rules, we're going to hold him accountable. >> the explanation you've given there, saying it's possible for people to hide where they are, i understand that. but i would imagine if we were talking to your advertising people, they would say it's quite easy to buy advertising on twitter that targets people based on where they live. that would be one of the rudimentary requirements of a brand seeking to advertise on a platform like twitter. >> only about 2% of our users share geolocation data in their tweets. >> that's not what i asked though. >> that's one thing people often assume. someone may identify their country in a biography. you may be able to infer it from the language they use. i think sometimes -- and i'm not saying the research isn't important -- i'm just saying that sometimes the conclusions reached don't
4:18 pm
match what we find as a company. and we see that quite regularly. for example, some of the research on bots will identify real people. some of the research will identify other manipulations on the platform that we were able to detect and prevent. but that's not -- >> so if an advertiser came and said, i want to pay for promoting my messages on twitter, i want to target people in the state of virginia -- we can't do that because that's not the way we're set up, or yes, we can? >> so that's an excellent question, chairman. thank you. we work with our advertising clients to try to get them the best value for the money. we don't have as much information about our users as some of our peer companies. we try to figure out what the analogs are that get an advertiser close to the audience they're trying to reach: followers of cnn or fox news or the bbc. we do have a degree of geolocation within a country, or within a media market within a country, but we don't overstate our precision on that.
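[the answers above treat location as an inference from several weak signals -- ip address, profile biography, language -- rather than a known fact. a hypothetical python sketch of that idea follows; the signals, weights, and example are the editor's invention, not twitter's actual model.]

# hypothetical sketch: combining weak signals into a probabilistic location guess.
# weights are invented; a vpn can make the strongest signal (ip) simply wrong.
SIGNAL_WEIGHTS = {"ip_geo": 0.4, "profile_bio": 0.3, "ui_language": 0.3}

def guess_country(signals):
    """return (country, confidence) from whichever signals are present."""
    votes = {}
    for name, country in signals.items():
        if country is not None:
            votes[country] = votes.get(country, 0.0) + SIGNAL_WEIGHTS[name]
    best = max(votes, key=votes.get)
    return best, votes[best] / sum(votes.values())

# an account tweeting through a vpn: the ip says one thing, the profile another.
print(guess_country({"ip_geo": "US", "profile_bio": "GB", "ui_language": "GB"}))
# roughly ('GB', 0.6) -- a probabilistic call, never an absolute one.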
4:19 pm
but we do provide extremely good value to our advertisers. but again, we are limited by some of those factors. >> but you would sell advertising based on location, even with those caveats. >> it is one of the approaches, but often we find that people who are interested in certain subjects or search for certain issues can sometimes be a better -- >> i understand that. you could sell to an audience based on location. >> we work with our advertisers to try to get them as close as they can to what they're aiming for. >> but yes. >> by country, we do really well, like nick said. about 2% of our users geolocate at any moment, so at any given time there's a degree of uncertainty. >> i just want to be really clear. the question i'm asking: would you sell advertising -- would you sell an audience based on a
4:20 pm
location in a country. >> yes. >> thank you, chair. now, you mentioned the figure has gone up in terms of what you'd call fake accounts, effectively. we've had security briefings from people who have studied this subject since 2014. they're suggesting the figure could be in the tens of thousands in terms of fake accounts. in addition, the phenomenon we see which is probably more damaging is the means by which these fake accounts and their disinformation are amplified, often through accounts which have a certain bona fide texture to them. but if you drill right down, they have all the signatures of falsehood about them.
4:21 pm
now, obviously you don't have the monopolistic position of google. you don't have the money of a facebook. and you seem to have this infestation of these types of accounts. is this too much for you? is this a little bit like the wild west? >> no. i don't believe so, sir. we are a smaller company. we have 3,700, 3,800 employees worldwide. google has more employees in their dublin office than we have in our entire work force. but we have been leaders on many fronts in utilizing machine learning and ai to address some of
4:22 pm
the thorniest issues, precisely the ones you're talking about. i'll give you one example, if i may, which is terrorism: 75% of those accounts are taken down before they tweet once. we have incredible engineers. we have incredible ability to address precisely the issue you're talking about, which is malicious automation. we are currently challenging 6.4 million accounts per week -- not taking them down, but challenging them and saying, you're acting weird, can you verify you're a human being. we measure our progress in weeks. that's a 60% improvement from october -- >> what about the amplification by those accounts? >> that's precisely what we're talking about, which is malicious automation: you see a lot of accounts acting in a coordinated way to push something, or try to push something, artificially. we have gotten really good at stopping that particular effort
4:23 pm
to manipulate our platform. we've protected our trends from that kind of automated interference since 2014. and one of the challenges that we see, and that nick referred to, is that many of the folks who are investigating our platform are doing so through an api we provide freely. there are a lot of things we do on a day-to-day basis to action those accounts, to take down that malicious automation, that the api does not reflect. the way we challenge accounts puts folks in a time-out where they cannot tweet, but they're still visible to the api. the same goes for things like safe search, which is the standard default setting. >> okay. you say you're exploring whether
4:24 pm
to allow users to flag tweets with false information. how soon will we see that? >> one more time, i'm sorry. >> i've had a document given to me about public actions being undertaken by the social media companies. what it says is: flagging tweets that contain false or misleading information. where are we with that? >> sorry, i just want to clarify. >> users -- other users -- flagging tweets from other people, saying we believe this to be false or misleading. >> i'm just curious about the source of that. i'm not aware of that being discussed. >> right, okay. so that's not something you would consider. >> it's broader than that. firstly, it goes to some of the wider questions the committee is asking: what would you do with that information, and what's the response required? secondly, there's the likelihood of users gaming it to try and abusively remove people they disagree with. >> they're already gaming you. >> this is one we're very conscious of. >> you've clearly not got the
4:25 pm
manpower, with 3,700 people. that's clearly the case. i do appreciate that. why don't you allow more of your community to flag up these tweets which contain misinformation? >> i just want to clarify this. we're currently challenging 6.4 million accounts every week for breaking our rules, specifically around automation. that's a huge number of accounts. now, are you asking us to remove content that's untrue? >> no, i'm saying this is an area you could potentially explore -- and i understood you were exploring this particular area -- to allow other users, when they
4:26 pm
see quite clearly a piece of misinformation, much of which is designed to damage political processes within our own country and its stability, they could then flag that as a warning to other users. it's not something you would consider? >> i think through the whole sweep of your pretty incredible hearing today, you're hearing from a lot of different voices who are trying to approach the issue in different ways. where there's a piece of information that's against the law, we can action that quickly. during the 2016 election in the u.s., near the end, there were a series of tweets spreading the idea that you could text to vote or tweet to vote. it's a standard effort to mislead people -- voter suppression, which is as old as electoral politics but has moved into the 21st century. that's against the law in the u.s., and we were able to take it down extremely quickly. and the truth reached eight times more twitter users than the initial falsehood. but when it comes to plainly false information -- you know, the conversation that happens on twitter at any given moment is a wonderful democratic debate. we are very cognizant of the
4:27 pm
things we can control and the things we cannot. we're trying to address malicious automation, the kind of thing that can take a bad idea and spread it quickly. we're trying to -- as monica mentioned in the last panel -- elevate voices that are credible, and then give our users more sense of who's talking, a broader view of who is actually speaking. >> okay. in terms of outside help, if you like, considering your relatively small work force and the need to better understand your users' information: what about what was mentioned by my colleague earlier in terms of academic oversight, in terms of allowing academics to see the information? i don't mean one or two, but a much more open, much more
4:28 pm
transparent means by which we can see effectively what is being done, and what can be done, to the max. >> i think that's absolutely right. we were at a conference in california just last week with academics discussing this subject. we're currently hiring for two roles at our headquarters whose job will be specifically to ensure those conversations are happening. one thing we're very fortunate in is that you've already seen a lot of research academics have produced based on twitter data. our api is open. researchers already access it every day. there's a huge amount of research happening right now on twitter data. so we're absolutely committed to deepening that relationship and learning more. but i think it's important to recognize that twitter data is arguably the biggest source of
4:29 pm
academic data. only last week we expanded the amount of information available, increasing the ability to search for any tweet ever posted rather than a fixed time period. >> based on your statement at the beginning, it seems these academics are probably more effective at finding problematic content than twitter itself. >> we spoke about misinformation -- looking at how misinformation spreads and what kind of content spreads. twitter is 280 characters, so often the content might be a link to a newspaper, a link to a video; some of the information may be off platform. we can absolutely learn how those networks are working. it also informs the solution. one of the big challenges -- and there's a lot of research done on this -- is that some of the solutions proposed to educate users have actually had negative effects. they've created false trust where it wasn't warranted, or created more hyperpartisan feedback loops where people see their own content, worry it will be removed, and so share it faster. there's quite a lot of research going on there. we want to learn from that
4:30 pm
research to inform our approach, not just about the behavior, but to improve the quality of debate. >> if i could add on top of that: last year twitter offboarded rt and devoted the money they had spent on the platform globally towards exactly the kind of research nick talks about. we are unique among the big platforms in the amount of information we're giving freely to the world. we know we can do better, and this is part of the answer. >> thank you. >> julie elliott. >> thank you, chair. you've spoken about these 6.4 million accounts you take down. how many accounts do you have at the moment? >> 330 million. >> so it's a really tiny number, isn't it, in comparison to how many accounts you have. >> your colleagues on the science and technology committee thought it was a rather large number. >> just to clarify: those 6.4 million, we challenge them.
4:31 pm
it's all a question of certainty, right. the more certain we are that you're violating our terms of service, the more aggressive our actions are. sometimes people in a moment of pique or excitement do things that are unusual -- they tweet an unusual number of times. what we'll do is send them a note saying, you're acting a little strange, can you verify you're a human being. if you're really into twitter, you will pass that test. but if you're a bot farm, it's too expensive a thing to do. >> so you've got all of these millions and millions of accounts, and the twitter handles people have often bear no relation to who they are -- who is following you, who is tweeting you. what are you doing to try and identify who people actually are? >> so there are two sides to that.
4:32 pm
one is verification, which is an important tool we've used for a long time. we're taking a hard look at that and trying to figure out ways we can give our users more context about who the credible voices are on twitter. as part of our effort to get ready for the 2018 elections in the u.s., we are working with the major parties to verify a larger number of candidates as a hedge against impersonation. impersonation is against our rules: you can't say, i am an mp from manchester, if you're not. however, we do honor and respect certain voices that have to speak anonymously. it's an important part of democratic debate. it's an important part of satire accounts. and it's important to note that in various places in the world there are freedom fighters, there are christians in china, there are people in the middle east combatting isis and trying to promote counternarratives against radicalization, who if they
4:33 pm
didn't do so, their lives would be in danger. so we try to respect that. it is a challenge, and a complicated one, but again, as i said, we're taking a hard look at how we give people more context about who's speaking while also giving voice to those who are putting their lives in danger to communicate. >> one of the things we are struggling with in the uk at the moment is the very abusive tweets directed at people in public life, particularly women in public life. we've had debates in parliament on this issue in recent weeks and months. every day, as a woman mp, there are abusive tweets coming at me on one thing or another. if i speak on certain issues, you think, right, let's wait for the cyberattack that's coming. you never see who these people are. there might be a random name
4:34 pm
with numbers. there's nothing in their descriptor that says who they are. if you report it to the police, the police can't track down who they are. so it goes on. there's usually a huge amount not just of attacks on what you're saying, but of disinformation as well. these things keep getting retweeted and spiral and spiral. what, as an organization, are you doing to try and stop that kind of thing happening? >> i can maybe pick up on the committee on standards in public life report. i met with the committee twice to discuss their work. i'm a former candidate myself, so this is something i have a very personal interest in, and i have many friends who have been through the electoral process. firstly, when i joined twitter, our safety approach was in a very different place. i joined in 2014, just after the abuse we saw directed at -- and since then, our approach has massively improved.
4:35 pm
we have developed a lot of technology. we've strengthened our policies. we've built the tools into the platform to make reporting much easier: when i joined, i think it took ten clicks to report content; it's down to three or four. we've made it easier for users, and we allow users to report content not just directed at themselves. and to give you one idea of the impact we think we're having -- and i can share the detailed breakdown -- we now take action on roughly ten times as many accounts every day as we did last year. so in one year, we've been able to massively increase that number. you may have also seen that we made the decision in december to expand our policy on violent extremist groups, which led to britain first, among other accounts, being
4:36 pm
removed from our platform, because we felt there was more we could do. something we're also doing, which is relatively new, is the penalty box: when someone crosses the line, how can we change their behavior to stop them crossing the line again? we lock their account and say, this specific tweet has broken our rules, you must delete it before you come back to the platform, and you must also give a phone number. then they can come back on twitter. we're seeing -- i'm just double checking the number -- that of the accounts that go through that process, about 65% only go through it once. so we think we're changing their behavior. that's not to say by any means this is a done issue. safety is still the company's top priority -- jack dorsey, our ceo, has spoken about it being our top priority -- but we think we're making progress, and we're encouraged by the results. >> okay.
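[a hypothetical python sketch of the penalty-box flow described above: lock the account against a specific tweet, require deletion of that tweet plus a phone number, then restore. the names and states are invented for illustration; this is not twitter's implementation.]

# hypothetical sketch of the penalty-box flow described in the testimony.
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    locked: bool = False
    cited_tweets: list = field(default_factory=list)

def lock_for_violation(account, tweet_id):
    # lock the account and record which specific tweet broke the rules.
    account.locked = True
    account.cited_tweets.append(tweet_id)

def restore(account, deleted_tweet, phone_number):
    # restoration requires deleting the cited tweet and supplying a phone number.
    if deleted_tweet in account.cited_tweets and phone_number:
        account.locked = False
        return True
    return False

acct = Account("example_user")
lock_for_violation(acct, "tweet/123")
print(restore(acct, "tweet/123", "+44 20 0000 0000"))  # True: back on the platform

[per the testimony, about 65% of accounts that pass through this flow never re-enter it, which is the behavioral change the design aims at.]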
4:37 pm
thank you. >> i think simon has a quick question. >> on that point, i'm very encouraged by the shift that you've been able to achieve between 2014 and 2017. but the case i want to bring to your attention, because it demonstrates the problem: my former colleague byron davies was subject to a five-week campaign based on an accusation of a criminal act with which neither he nor his wife had ever been involved. despite five weeks of efforts to persuade facebook and twitter to do something about it, he was told that nothing could be done. are you saying on the record now that couldn't happen again? >> no. i think it's important to say the story being circulated was published by a british regulated publication that's a member of ipso. the idea that this was an unfounded allegation that only lived on social media -- >> i'm not interested in what came up in print media. i'm asking you about that clearly, provably untrue statement, which could arguably have cost him his seat. are you offering us a guarantee here that that sort of thing couldn't happen again?
4:38 pm
>> we are not going to remove content based on the fact that it's untrue. the one strength twitter has is that it's a hive of journalists, of citizens, of activists correcting the record and correcting information. if we could have this same conversation about the 350 million -- >> i must be quick because i'm not scheduled. the fact is it qualifies to a great extent as abuse and/or intimidation. and it's untrue. but you're saying, well, no, there's nothing we're going to do about that. >> so context is important. if an account was created for the sole purpose of abusing somebody, that would break our rules. if it was using language based
4:39 pm
on hateful conduct, we'd remove it for that. but the truthfulness of a piece of information is not a ground -- >> so the deliberate use of your platform to distribute absolutely false and defamatory information can continue in election time, according to what you've just said. >> i want to go broader than that. i don't think technology companies should be deciding what is true and what is not true, which is what you're asking us to do. i think that's a very, very important principle we should recognize. during the brexit referendum, we've heard, there were similar claims made on both sides. >> so you are expecting a different level of regulation than -- if that had been a public broadcast or a newspaper, legal action could have been taken. you're saying you should be outside that -- you shouldn't be regulated in the same way as they are. >> as i said, the information did originate with a currently regulated member of ipso, so i question the idea that this is a distinction. we are not going to remove information based on truth, because i don't think that's the role of technology companies. and the idea that we're not regulated is, as you've heard from previous witnesses, not an accurate one. >> so i think, just to be clear,
4:40 pm
for all the bad actors who are listening in: if you set up a false account under an anonymous identity, you can disseminate as many lies as you like and it's not a breach of community guidelines. >> no, i didn't say that. >> no, i'm just trying to understand it. >> i said clearly that we take context into account. if you create an account deliberately to target an individual in a harassing, abusive way, we have rules -- >> what you're talking about there is harassment, abuse. that's clear. what we're talking about here is lies: someone who's deciding to spread lies about someone else. they're not harassing them, not intimidating, not inciting violence, just spreading lies. they're using the anonymity of twitter to do it, and there's basically nothing you will do about it.
4:41 pm
>> the anonymity on our platform is not a shield against breaking our terms of service. >> what i'm trying to get to is that this isn't a breach of the terms of service. >> it's conflating a number of things. i think monica earlier, and juniper, both mentioned that fake news is an overbroad category, and labeling something as not true doesn't necessarily mean the receiver of that information will discount it. what we're focused on is attacking the things we can control: if something's illegal, it's against our terms of service; if it's telling people to tweet to vote, that's against our rules, and we'll take it down. elevating credible voices is also incredibly important -- figuring out, the bbc, the guardian, which voices people trust. >> i just want to finish on this point; i know people want to come in. telling lies on twitter isn't a breach of community guidelines and wouldn't require action to be taken against the account. that's what you're saying, isn't it? >> if that's the only ground?
4:42 pm
we do not have rules based on truth. >> so obviously people can be anonymous to protect their identity, and someone can use a false identity to spread lies about someone else. that could be reported to twitter. these could be demonstrable lies -- not a matter of opinion, but things that are demonstrably untrue -- and that wouldn't in and of itself require you to take any action. >> i don't want to sound like a broken record, but the context would be important. >> i understand what you said. on this particular point: just lies -- not inciting violence, not a campaign of harassment, just telling lies, spreading lies -- is not a breach of community guidelines. in that case, if i made a complaint that someone was spreading lies about me using a false identity, i couldn't take legal action because i don't know who they are. but it wasn't harassment, so you wouldn't do anything about it -- or rather, you would feel you were doing the right thing in not making a judgment. >> yeah, and i think that would be the same position across most big platforms.
4:43 pm
i don't think that's particularly unique. >> that's probably why we're all here. >> just -- i was looking this morning at the quote that a lie can travel halfway around the world before the truth gets its shoes on. >> yeah, yeah -- on twitter. >> actually, it was attributed to mark twain incorrectly; it wasn't actually him that said it. for all the stories of folks spreading misinformation on twitter, there are as many stories of the truth coming out loud and clear. what we do as a company is try to figure out which voices resonate with people, which have credibility, and let the debate flow. and who would you put in the position of being the arbiter of truth? >> well, i think throughout this inquiry on disinformation, fake news, it's quite clear there are aspects of opinion which are up for debate. i appreciate that's difficult. but sometimes there are downright
4:44 pm
lies. to use your misquoted mark twain: if you wanted to spread a lie halfway around the world before the truth got out, twitter would be a pretty good medium to do it. ian lucas. >> could i just be clear about the information a twitter account holder gives you when they open an account. do you hold an address for that person? >> not a physical address, no. >> so what kind of address do you hold? >> we might have an internet protocol address, an ip address. >> so you don't know what jurisdiction that person is in? >> there would be a range of other information we may be given. a phone number might be one. an e-mail address might be
4:45 pm
another. the user may share their location in their profile. there's a range of information that we may have. >> so if i was disseminating information in a political campaign, then i could use twitter regardless of which jurisdiction in the world i was in. >> yes. >> and i could pay for advertisements on twitter to assist the targeting of that campaign. >> i don't want to be rude -- can you just answer the question? >> on ad transparency, yes. we were very happy last year to lead industry announcements when it comes to ad transparency, both for ads that mention candidates and for all advertising. people are going to have to say who is the account behind the advertisement, what is the creative -- the term we use for the video or the advertising image -- and an additional layer of information, including who is paying for the advertisement and the demographic targeting. we didn't have -- >> you don't know where they are, where the source of the tweet is, who they are? >> our transparency center is intended to enable advertisers to disclose where they are and to be in compliance with election law. >> so in both the u.s. and the
4:46 pm
uk, as i understand it, it's illegal to accept donations from outside the jurisdiction. so are you saying that your new rules will enable a candidate or enable the relevant authorities to be clear that a donation does not come from outside the jurisdiction? >> it's our intention to develop a system so the paying entity can disclose that information
4:47 pm
and the entire public can see that. >> so at the moment -- in the general election, in none of those elections was there a system in place to prevent someone from outside the jurisdiction illegally disseminating information in election campaigns. >> that is correct. i will note our investigation revealed a small number: nine advertisers altogether. two of those were the ones we discussed, russia today and rt america, which is a branch of russia today. this is a news organization that advertises on a lot of platforms, and we're the only major platform to kick them off. >> we come back -- >> it wasn't a major --
4:48 pm
>> well, we don't know whether it was or it wasn't. as i said to facebook, you guys have all the information. we don't have any of it. and you won't tell us what it is. >> i want to clarify one point. you used the word donation. i think it was referenced in the previous hearing relating to the exemption. so if someone was to donate to a political campaign and that political campaign then spent that money on advertising, we wouldn't know the origin of the donation, which i think was your question previously. we would know who spent the money and who was our customer, but we wouldn't know the money had originally come from that source. >> it was my colleague, actually, who asked that question. >> apologies. >> what i'm concerned about is someone -- i have some friends who are american politicians. if i illegally purchased
4:49 pm
advertising in the united states and disseminated it through twitter, there would be no way of establishing that i was in the uk. >> and it's our intention, through the advertising transparency center, to address that. it's not just a question of building the technology to allow people to report, but of trying to figure out ways we can, at scale, actually make sure that people are following the rules. it's an important challenge. i'm, again, very proud that twitter was first out of the box on those reforms. >> okay. can i just ask one brief question? do you agree with me that twitter is addictive? >> i think one of the challenges of all technology, of all the tools and things we use in our everyday lives, is that different people react in different ways. some people are able to sit through a family dinner without checking text messages. some people like to check them. some people check their e-mails
4:50 pm
every five minutes. other people don't. i think the research is very interesting, and it's certainly something we're taking on board, but i don't think it's our position to say, because i think it depends on the individual. >> i think it is exactly your position to say. and, you know, young people, children use twitter. i use twitter. i think i'm addicted. my wife thinks i'm addicted. [ laughter ] and one of the reasons why you are so effective is because people are addicted. the analogy is smoking, and that doesn't mean we can't regulate and take appropriate steps to manage it. so my question: do you agree with me that twitter is addictive? >> well, in a personal capacity
4:51 pm
i'm able to go a couple of days without tweeting. >> how many days in a year do you go? >> well, it's public, so you can check. >> you know, it's an incredibly important question. >> and we are working with the secretary of state for health, working with a group on internet safety. these are important questions about how to equip young people to make assessments of information. but in terms of those types of questions, i think that's where academia is well placed, and we have to work with them. >> well, it is addictive. >> i spent a portion of yesterday at an event run by common sense media looking at these questions. they ran a very entertaining campaign that featured comedian
4:52 pm
will ferrell talking about having a social media-free dinner. these are really important issues, and we think about them carefully, especially for younger users. we don't allow folks younger than 13. >> thank you. >> my first question: we heard from google and facebook this morning, and they both said the trust of their users is one of the key elements of their business. is the trust of your users as important to you as it is claimed to be to facebook and google? >> definitely. people come to twitter to figure out what's happening in the world and to talk about what's happening in the world. if they have a bad experience, they'll find somewhere else to go. we are proud and jealous of our 330 million users and want to keep
4:53 pm
them. we know there are other places they can go. we know we have competitors trying to take them away. and part of that is giving them a sense of confidence and safety in what they are seeing. >> the reason i ask that question is your answer to mr. hart, where you seemed to say that it's the wild west out there, an absolute free-for-all: you won't act on what is a lie, you won't act on anything that is out there. and i'm wondering how you can build trust on a platform among users when you yourself say that under no circumstances, other than legal ones, i presume, will you act. >> you know, i think you are familiar with the debate about whether scotland would have access to the pound after the referendum. we are all familiar with the debates around whether britain sends 350 million to the european union. actually, one of the strongest
4:54 pm
characteristics is those people sharing in the debates, sharing information they disagree with, sharing their version of events as they see it. our job is to make sure -- and this is where the committee's work is important -- that those who seek to manipulate our platform are not able to, and that credible voices, journalists, are elevated. but the question -- and i think this is an important one, because it goes to a lot of issues in the industry, and internationally -- is that i don't think people do want technology companies saying, sorry, you've said something untrue, we'll delete your account. it's about elevating credible voices so users can come to their own conclusions. >> you talk about manipulation of the platform. the platform, i think, is generally regarded as being
4:55 pm
manipulated. i mean, how can you argue otherwise? twitter does seem to be the number one platform of choice for those who would seek to disseminate disinformation. and why would twitter be almost identified as the weakest link in the chain for those who seek to do that? >> on that question there has been so much interesting research about fake and junk news. oxford university published a study, and i would challenge your premise on behalf of the vast majority of people on twitter: we were excited to see recent numbers showing the number of people visiting news sites from twitter is significantly increasing. there are a huge number of journalists in parts of the world -- journalists like the guys at
4:56 pm
bellingcat, an organization that didn't exist a few years ago but is doing groundbreaking research. >> i have no doubt that's true, but i would expect that, of a platform of 330 million users, the vast majority would be, a, human, and, b, truthful and honest and decent. yet you are seen as the weakest link in the social media chain for the dissemination of fake news and false information, and i'm wondering why that would be. >> well, actually, that's a very good conversation to have with claire, who is coming on afterwards and who would say the term fake news is unhelpful. we have a lot of politicians who use twitter around the world, and journalists, so a lot of different perspectives are shared. people are also sharing tweets from the moment something happens in a country -- often the first thing we
4:57 pm
see is a photo, a tweet, from someone on the ground realizing what's happening. and our job is to elevate credible information to help users. >> i'm sorry, sir, to further challenge the premise: one of the things nick mentioned earlier that i think is important to come back to is that researchers tend to flock around our data because ours is the only data openly out there. a professor at dublin city university talks about how that over-indexes the challenge onto twitter. we are not denying people are trying to manipulate our platform. we are getting better and better at stopping it. this isn't a challenge that's going to solve itself quickly, and i think the folks you are asking to testify in front of you today provide a good cross section for trying to understand it. but we recognize it is a challenge, and we work every day to make it a better experience.
4:58 pm
and part of that is the kind of progress that we've had: 6.4 million accounts challenged a week -- that's up 60% just from october of last year, and 3x where it was two years ago. we measure our progress in weeks and in months. and just like the other folks who testified earlier today, our bottom line is trying to get this right. >> i accept that. but we have had an awful lot of investigation and discussion, and it's an old rule, but rarely wrong: follow the money. having many millions of fake accounts or bots among those 330 million users allows you to sell advertising and claim a potential reach to advertisers far beyond what you could without the fake accounts. have you become a prisoner of the bots? >> well, firstly, we don't
4:59 pm
include the spam accounts in the metrics that we report to the markets. >> so then can i ask, how many bot accounts are there? >> we can share the information we shared with the stock market this morning: less than 5%. >> 5% of 330 million? >> the 330 million excludes that. and to your point, the exact opposite is true: if people don't have faith in what we are saying, they won't be willing to come to the platform and advertise on the platform, which is why we fight this with all the tools we have. >> and do you accept that for every fake account or bot you identify, someone somewhere on a bot farm is creating thousands more, and you are fighting against this? are you absolutely convinced, and
5:00 pm
can you convince us, that you are doing absolutely everything you can to combat the bots? >> so, two things. firstly, the benefit of twitter for academics is the open api. it also allows people to build apps which are often part of that bot problem; we suspended 155,000 of those applications last year, which were responsible for 2.2 billion tweets. one thing i would say is that being a bot per se is not against our rules. there are plenty of bots around the world -- some of them monitor wikileaks. so an account being automated, the word bot, is not in itself a bad thing. something that was concerning last year: if you saw something you didn't like, you said it was fake news, and that's how you discredited the source. and this year we see that
5:01 pm
happening with real people replying on twitter being dismissed as bots to disparage their viewpoints, and that's not helpful. so i think it's important to distinguish malicious automation from automation as such. >> and in the great scheme of things, of these 6, 7 million you have identified, how many of them check air quality and how many would be deemed malicious? you are not looking at a 50/50 split? >> we are removing a lot of those accounts. one of the challenges we have in technology is that when we remove somebody, they come back. >> if it's on a factory basis, when they just keep coming and coming and coming, how can you be confident, and can you assure your users, that you are doing absolutely
5:02 pm
everything you can to stop this factory production of fake accounts? >> absolutely. we have a dedicated information quality team in the company which brings together people from our news team, from our engineering teams, from our safety teams, to make sure that problem is being addressed. it's also why industry partnership is important. it's a bit like what the committee has heard about the advertising model -- how many steps there are between the user and the advertiser. the same thing works on the internet: lots of different partners. so we as an industry are doing more together. to use your analogy of a farm: where are they getting sim cards, how are they using phone systems to create the accounts. we have to make sure there's a top-to-bottom response across industry, not just at the surface layer. >> just to be clear: on the 5% of a user base of about 330
5:03 pm
million, that would be about, what, 16 million fake accounts -- bot accounts, sorry -- roughly? >> so these are rolling averages, you know, and the 330 million isn't our total user base; it's the people who check in on a monthly basis, right. so we are constantly knocking bad accounts off of our platform. they keep trying to get back, and we are getting better at stopping these reincarnated accounts by looking at signals: where they are dialing in from, the device, to the extent we have it. that has been the key to our success, by the way, in fighting terrorism: more than 90% of those accounts are taken down before anyone else reports them to us, 75% before they can tweet. and the way we do that is getting better and better at using ai, machine learning. >> what was the figure -- how many accounts every day do
5:04 pm
you take down? >> we don't disclose that. we shared numbers with the market this morning and can give you a breakdown on all the takedowns. >> do you know what the takedown number is? >> on the day -- i'm just trying to double check -- 6.4 million accounts challenged every week. >> we just announced new numbers this morning. we can come back to you. >> if that number is per week, you are basically challenging in a year the equivalent of the entire active base of twitter. that would suggest there is a massive problem. >> it also suggests the steps we are taking are showing results -- finding activity and stopping it -- which is important for us. >> yes, i was going to comment on bots, but we seem to have covered that subject. i'm still sitting here in absolute shock about the responsibility you take, or don't take. i mean, we are not talking about 18th century chip paper. we are talking about something
5:05 pm
that reaches into every corner of society. we are talking of the famous twitter storms that take off, and then it becomes the truth, because it's repeated often enough and people start to believe it. i'm still astounded. with this enormous reach that you have, you have enormous power, and with enormous power comes great responsibility. and you seem to want to duck that and say it's not up to us to decide if it's true or not. but i would suggest to you that in fact you have to take that responsibility on board. because a very small percentage shift in people's views, which you can achieve through twitter accounts, can change an election or a referendum -- which it might well have done, and we don't know. so i believe you have to start taking that responsibility. is that something you are ever going to take on board, or something you'll carry on ignoring? >> just to clarify, the responsibility you are talking
5:06 pm
about is removing content based on truthfulness or not? >> yes. >> so, during an election campaign in the u.k., political advertisements are exempt from advertising rules. that would be taking the regulation of u.k. political advertising and giving it to american technology companies, which to me seems a backwards step in terms of the democratic process. >> do you not have a moral compass that says we have to start addressing this? the routes to address it might be difficult, might have to deal with issues of laws, et cetera. but is it not something you even begin to think about? >> actually, to the question earlier: context is essential. we do see people create impersonation accounts, pretending to be another person and trading on that person's reputation.
5:07 pm
we might have an account using hateful language to disparage someone based on their gender. so that responsibility is absolutely one we take very seriously, because that's about the behavior of our users. i think there is an important distinction between the behavior a user exhibits and the content of the ideas, the viewpoints, they hold. and i think that is a big public debate we haven't started to have yet, in terms of where that responsibility lies. you heard from previous witnesses that in germany the debate is now happening about whether they have found the right balance. so i think that debate is an important one, and this inquiry is a very valuable opportunity to have that discussion. and you have great witnesses who will have strong views as well. >> i look forward to you pushing that along. do you feel you might risk a loss of trust? if twitter is seen to be disseminating fake news and not taking it down and
5:08 pm
dealing with it, might you lose trust, and therefore it becomes an existential problem for twitter? you say there are people trying to move into your territory now. is that something you are facing? >> when you look at the problem of what's called fake news and break it down, one of the biggest problems is content that is true in some ways but spun to a viewpoint. the research from oxford, and dartmouth as well, found that the consumption of that content goes along party lines, so there is a partisan community feeding itself on it. and we take seriously our role in helping people see broader perspectives. going to the part about how we manage information: we don't want to
5:09 pm
intervene and push people further into those communities of partisanship. but we are interested in how we can add positive value, to help give people access to broader information, credible information from credible sources. so projects like the trust project, which was mentioned earlier on, are incredibly valuable for how we deal with these problems. but it's a delicate line to tread, because in some countries the government would very much like us to remove political and independent media, and we have to remember it's a global platform. >> perhaps, to a certain extent -- i've mentioned this before to other people -- you are a step behind everything. these might not be intended consequences of your business, but out of them come these extraordinary events, like the twitter storms we were talking
5:10 pm
about. do you not accept that perhaps you are one step behind, and you have to get ahead of this game, otherwise you will lose confidence and trust, and you will ultimately lose business? >> as i said, it's an issue where the company is right now both developing new technology and deploying technology from other purposes that we can reuse. you may have seen we paused our verification process; one of the questions at the heart of pausing that process was how we can leverage it to better inform our users. so yes, we don't just want to fix today's problem. bad actors will change their methods, and we want to make sure we are staying ahead of the problem. >> that is good to hear. thank you. >> i think, mr. pickles, it was you that mentioned britain first in your evidence, whose videos president trump retweeted last year. i'm sure you'll recall that part
5:11 pm
of the evidence in the trial of the man. leaving aside the bot issue, i'm concerned about the flaws in your policing system for your platform, because it was only after the pressure over britain first that you took it down from the platform. so how can your system be described as anything other than inadequate? >> i'm grateful for the opportunity to discuss this, because it was a policy question we discussed a lot internally. the actions of any user in retweeting another user are not factors in our policy framework. we had announced we were reviewing our approach to nonterrorist extremist groups -- our policy prohibits terrorist groups, but there were certainly groups that fell into the nonterrorist category. we then took the better part of three months to carefully understand where the lines of that policy would fall, and not just in the u.k., but in
5:12 pm
india, the u.s., and other countries. so when we then enforced that policy, britain first was one of the organizations affected by it. but it wasn't a decision to remove an account because of another account's actions. >> just coincidence? >> actually, we took accounts down around the world relating to that policy change. because, as i say, as we change our rules, groups change their behavior to try and stay on the right side of them. so we felt that was an opportunity to take action on accounts that we felt shouldn't be on the platform. >> so i shouldn't read anything more into it other than it's a coincidence of timing? >> yeah. >> thank you. >> i think there's a quick question from rebecca pow. >> thank you very much. like my colleague, i was absolutely staggered to hear you say: we are not the arbiters of truth; we do not have rules on truth.
5:13 pm
and to me that gets to the nub of what we are all talking about today: that you, as a platform, are openly able to spread disinformation, which is tweeted and retweeted and shared. what worries me is what this is doing to our children. oughtn't you to be taking some responsibility for it? and if you can't, and aren't able to, and your policing system isn't up to it, certainly some organization or body is going to have to be put in place to make sure that the next generation is safe. >> if i can just echo some of those themes -- that is an excellent question, ma'am, thank you. i don't know if in the u.k. you have the same experience of conspiracy theorists, right -- folks who, whatever you do, you can't convince them they are wrong about a particular issue. we have seen our peer companies try different things that have actually been counterproductive,
5:14 pm
where fact-checked articles get more attention than they would otherwise. we feel the best way to have a healthy platform is to be very good at what we are good at and to be humble about what we are not. when it comes to maliciously using automated tools to try to manipulate our platform, that's something we can stop; it's against our rules. when it comes to elevating credible voices, i'll give you an example from the boston bombing. it was terrifying, and people were locked in their houses, and a story went out that the suspect had been captured, and it wasn't true. one of the things we do as part of our philanthropic mission is work with first responders across the globe so that they can get the truth out more quickly, and the boston police department said no, we don't have a suspect in
5:15 pm
custody, stay in your homes. we feel that is the more appropriate role for our platform. >> thanks for sharing that, but it seems completely hypocritical as well: on the one hand you say we don't arbitrate the truth, and on the other hand you are running trust courses and things. it seems like a muddle. one small thing i would put to you: should you at least have a much better rating system, so when people see tweets they know whether it's fact or fiction, so they can at least have some sort of gauge about how to rate this in their minds? i mean, how on earth are children going to manage to deal with this? >> two questions. i'm not familiar with the training you mentioned. >> no, you said you run a trust program. >> oh, sorry, there is a third-party program called the trust project, which a number of media organizations are using. >> yes, i'm asking you about a
5:16 pm
rating system, which would include some sort of register of whatever they are seeing, whether it's fact, fiction, half fact, you know, half fiction, or something like that. is that possible? >> so, it's an interesting question in terms of how do you verify a source. do you tell someone, for example, this account is new versus this account has been around for a long time, this account is satire versus this account is not satire? because one of the things researchers have flagged as a danger of those labels is that they don't work if the person's bias is the subject of the story. so even if you put a label on a story, say one claiming immigration is too high, people jump to the subject of the story rather than reading the label. so that's why the trust project and the academic research we will be funding and partnering with are coming up with solutions to that problem, because absolutely we think there is more information we can give to our users.
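The source signals mr. pickles lists here, account age and whether an account declares itself satire, amount to provenance metadata that could be rendered as a label next to a tweet. A minimal sketch follows; the field names and rendering rule are entirely hypothetical and make no claim about twitter's real data model.

from dataclasses import dataclass
from datetime import date

@dataclass
class SourceLabel:
    # hypothetical provenance fields of the kind discussed above;
    # not a real twitter data structure
    created: date
    self_declared_satire: bool
    verified_identity: bool

    def badge(self) -> str:
        # render the fields as a short human-readable label
        parts = []
        age_days = (date.today() - self.created).days
        parts.append("new account" if age_days < 30 else "active since %d" % self.created.year)
        if self.self_declared_satire:
            parts.append("satire")
        if self.verified_identity:
            parts.append("verified")
        return " / ".join(parts)

# example: a long-standing, verified, non-satire account
print(SourceLabel(date(2012, 5, 1), False, True).badge())
# -> active since 2012 / verified

Note that no schema by itself answers the caveat raised in the testimony: researchers have found such labels can be ignored when the reader's own bias is the subject of the story.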
5:17 pm
the question of what information is useful is an incredibly difficult one right now. our verification review is looking at these questions as well. >> i'll put it to you that more academics would like to join with us in coming up with these solutions. >> thank you. >> mr. pickles, i'm still concerned about your categoric statement about the case of conservative candidates in an election where lies are demonstrably told, but you don't see yourselves as an arbiter in any way. a lot of people don't have thick skins like politicians do. lies that are disseminated can lead people to commit suicide, for instance. and we have, locally to me, an anonymous malicious twitter account that spreads innuendo
5:18 pm
about me and other people. that doesn't affect me, because i never look at it, but it affects other people and their mental health. i think we know who is behind it, and i take the view don't look at it then, because this person is a sad, miserable, pathetic excuse of a human being. but i do actually have some concern about the effect it has on other people who do look at it. so if i came to you, say with lawyers, and asked you for details of the person, whether you knew them or not, the ip address of the person who established that account, what details would you give me? >> so, firstly. >> and i'm not president assad, i'm an ordinary member of parliament doing a decent job, i think. >> obviously, i appreciate that data would be covered by data protection law, and so we would require legal process to allow us to disclose the information. the police, for example, seek that legal process on a daily
5:19 pm
basis, and we assist in thousands of investigations every year where we do provide information. but we would require legal process. >> would i have to go to court to get an order? >> yes. >> rather than you cooperating. what if i can't afford to go to court to get an order? >> well, i think you raise a very valuable question about access to justice. you know, parliament recently passed a law about defamation that bears on that question, and i think if there is concern over the accessibility of the justice system, then parliament is best placed to address that. >> you can see, in the mind of right-thinking people, if you are warned that something is a lie, and if you are warned that actually it's having severe, possibly life-disturbing effects on people who cannot afford to
5:20 pm
go to law, you could possibly understand that some people might take the view that you are complicit and therefore should face liability? >> there is an existing legal framework, which parliament amended to improve, and that legal process involves a third party, not companies, deciding on the veracity or otherwise of the conversation at hand. >> final question. i've had cases of people coming to me very disturbed that their identity has been stolen and they are being impersonated on twitter, and to my knowledge they have not got anywhere with you in making complaints. what's your policy if i were to come to you and say someone is impersonating me, i would like you to remove that account? >> we can send you the full text. we don't allow impersonation
5:21 pm
accounts, but we do allow satire. >> i'm not familiar with the case. >> we are happy to follow up on it. >> we heard that twitter challenges over 6 million accounts. >> sorry? >> we heard earlier on that twitter challenges over 6 million accounts a week. >> yes, sir. >> how does a company that employs fewer than 4,000 people challenge 6 million accounts a week? >> that's an excellent question. the kind of numbers to keep in mind are that there are 500 million tweets a day; that's 350,000 tweets a minute. so the key for us is to use machine learning. we acquired an incredible british company called magic pony that has brought not only
5:22 pm
technology but incredibly talented engineers and data scientists who help us train the machines to make consistent decisions about what is and isn't automated. some of the things we look at are near-instantaneous retweets: if you tweet something and a fraction of a second later i retweet it, that's not human behavior; it takes a moment to open a tweet and decide whether i want to retweet it. a certain regularity of tweets is also not human, although sometimes in a moment of excitement a person will tweet a large number of tweets in a given time period. what we'll do, based on the certainty that we have, again using our machine learning, is challenge accounts. so you are right, it's not people all of the time; we have a combination of human review and machines that help us police our platform. >> that's great. thank you very much.
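As a rough check on the scale cited: 500 million tweets a day works out to about 347,000 a minute (500,000,000 / 1,440), which is why challenges cannot rest on human review alone. The sketch below illustrates the two behavioral signals described in the testimony, near-instantaneous retweets and unnaturally regular tweet timing. The thresholds, weights, and function names are illustrative assumptions, not twitter's actual system.

from statistics import pstdev

def retweet_latency_flags(retweet_delays_s, threshold_s=1.0):
    # count retweets issued faster than a human could plausibly
    # open and read the original tweet
    return sum(1 for d in retweet_delays_s if d < threshold_s)

def timing_regularity(intertweet_gaps_s):
    # near-zero spread in the gaps between an account's own tweets
    # suggests a scheduler, not a person; None if too few samples
    if len(intertweet_gaps_s) < 2:
        return None
    return pstdev(intertweet_gaps_s)

def challenge_score(retweet_delays_s, intertweet_gaps_s):
    # combine the two signals into a rough 0..1 score; the 0.6/0.4
    # weights and 2-second spread cutoff are arbitrary assumptions
    score = 0.0
    if retweet_delays_s:
        score += 0.6 * retweet_latency_flags(retweet_delays_s) / len(retweet_delays_s)
    spread = timing_regularity(intertweet_gaps_s)
    if spread is not None and spread < 2.0:
        score += 0.4
    return score

# example: an account that retweets within half a second and posts
# almost exactly once a minute scores high enough to be challenged
delays = [0.2, 0.3, 0.1, 0.4]    # seconds from a tweet to this account's retweet
gaps = [60.0, 60.1, 59.9, 60.0]  # seconds between this account's own tweets
if challenge_score(delays, gaps) > 0.5:
    print("challenge account (e.g., present a captcha)")

Consistent with the "challenge" step described in the testimony, a high score would trigger a verification hurdle such as a captcha rather than an outright suspension, since prolific human users can resemble bots on any single signal.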
5:23 pm
>> tomorrow night on c-span, former florida governor and presidential candidate jeb bush talks about school choice and education savings accounts. you can catch his speech at 8:00 p.m. saturday on c-span. sunday night on q&a, michael fabey on china and the pacific. >> it's hard for westerners to get an idea what that means to the asian culture, especially a country that's as big and as proud as china. they lost a lot of face and came away with a never-again mentality. so they started, after that, with this mindset: we will build up our navy and missile defense forces so we never lose face again. >> sunday
