
Cato Institute 2018 Surveillance Conference   CSPAN   December 14, 2018 1:16pm-5:36pm EST

1:16 pm
the interest of time, isn't it true that any incentive works against the best interest of the client? >> well, i don't think so, because if what you're doing is saying to a broker, hey, if you now have $100 million, for example, and you grow it to $200 million, you should make more money, i think that's okay. that's the way an investment advisory firm works: if you're managing a pension plan for someone and you get another pension plan to manage, you may get paid more. i think that's okay. >> we are going to leave this here; you can find the rest online at c-span.org. we take you now to the cato institute conference on surveillance and privacy in washington, d.c. up next is a flash talk, a short presentation by a single speaker: a representative from mozilla on rating the creep factor of networked appliances. >> tracking what people do in a
1:17 pm
way that becomes increasingly difficult to avoid as sensors saturate space. that's a description of the world we are creating for the sake of convenience, and there are benefits to be had from smart devices that can anticipate our needs and take advantage of sharing information over networks. but it also creates serious privacy risks, and so before our panel discussion i would like to bring up heather west to talk about the many cases in which privacy is not included. >> thanks, julian. my life is basically a science
1:18 pm
fiction novel: i talk to the air of my house and the lights turn on, and it's wonderful. but we know that people aren't reading privacy policies, i don't necessarily, and we aren't thinking about that when we do our holiday buying. so one of the things we did was look at the privacy implications of popular connected devices, and we put out a report called privacy not included. you can search for that online and get to the buyers guide. in trying to reach people we knew that we had to simplify, we had to think about things that would be useful and impactful, and really help educate about these sensors so people are making smart decisions. we don't want to tell somebody to buy something or not buy something, we want to let them make that choice. but to back up just a little bit,
1:19 pm
mozilla is a software company; we made firefox. that is one of the reasons that we started looking at iot. this is the second year that we have released the guide, and apologies that this shot is grainy, but you can take a look and see what we evaluated: what we thought was interesting, what we thought was important, what was more or less fine, and what we thought was a little bit creepy. in looking at all of these products, we developed something called the minimum security standards, specifically 5 things every connected device ought to do, because it has data flows and it is a part of your life. the 5 things that we evaluated
1:20 pm
each of the products on are: does the product use encryption? pretty important. does the company have the ability to provide automatic security updates? does the product require a non-default password? for example, a baby monitor that has a hard-coded password of 1, 2, 3 is not ideal. we want these companies to have a vulnerability management program, so that if someone comes to them and says, hey, i figured out how to hack your thing, they can push a security update. and they must have a privacy policy. this is a relatively low bar, we think, and as such these are minimum security standards. we worked with consumers international and the internet society to develop them and apply the criteria.
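as a rough illustration of how an all-or-nothing checklist like this might be applied, here is a minimal sketch; the product fields and function names are hypothetical, not mozilla's actual review tooling.

```python
# minimal sketch of the five minimum security standards applied as a
# pass/fail checklist. field and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    uses_encryption: bool           # data encrypted in transit/at rest
    automatic_updates: bool         # vendor can push security updates
    requires_unique_password: bool  # no hard-coded default like "123"
    vulnerability_program: bool     # a way to report and fix flaws
    has_privacy_policy: bool

def meets_minimum_standards(p: Product) -> bool:
    """a product must satisfy all five criteria to pass."""
    return all([
        p.uses_encryption,
        p.automatic_updates,
        p.requires_unique_password,
        p.vulnerability_program,
        p.has_privacy_policy,
    ])

baby_monitor = Product("baby monitor", True, False, False, False, True)
print(meets_minimum_standards(baby_monitor))  # False: fails 3 of 5
```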
1:21 pm
we reviewed 70 products across 6 categories, and 32 of them met the minimum security standards, about half. we also know that the users looking at the guide just want the really quick and easy version of this, so we looked at privacy policies, we did technical testing, and we tried to simplify everything down. this rating is from the nintendo switch, which did pretty well, which is great because there's one sitting in my living room. but we knew that we could simplify a little bit more and make it more interactive and interesting to people. we invented the creep-o-meter.
1:22 pm
what we did was create a way for consumers to rate how they felt about a particular product. one thing we know is that context matters: having a particular feature in a nintendo switch is different than remote unlock on my door. one of them is a physical security risk; an amazon ring, for example, has a lot more power over my life than a switch that i can just turn off. but the creep-o-meter, people loved it. we got somewhere north of 50,000 interactions, each of which is a rating, and people can also say whether they are likely to buy the product. the likely-to-buy percentage and the not-creepy percentage are different, which i think is actually interesting and useful
1:23 pm
to compare. it has turned into the iconic thing that we are thinking about in terms of how to educate around this stuff. and people are actually reaching out to us and talking to us about the guide and the ratings. for example, there was a guy in san diego who works for the fire department, where they had been recommending a smart doorbell with a camera, and he wanted to know whether the product was safe and whether they should be recommending it to people. it wasn't one that we evaluated, and we just don't have the resources to evaluate everything; of course, 70 devices is just a drop in the bucket. but that guy from the fire department started thinking about devices
1:24 pm
differently. that's exactly what we want to happen, and this is a good start. we started hearing from the press, who wanted to know about these products, so it's become a resource that's useful. i think we are going to keep working with folks; some companies have reached out and said, hey, we think you have it a little bit wrong, and we are happy to chat and make sure that we actually get it right. we don't want to be maligning a product that isn't bad. notably, some products did really, really well. the nintendo switch has a 72% not-creepy rating. there's a drone that did very well. there's a baby monitor that did really badly. there's a connected coffee maker that did really well, which is actually really interesting. but one of my favorite parts of the
1:25 pm
product pages is the what-if-something-goes-wrong section. it's not that we expect anything to go wrong, but what if it does? what if your door unlocks itself? what if someone snoops on the games on your switch? there are different kinds of things that we don't think about in terms of what those scenarios look like, and it's so contextual, and preferences are so personal, that we think that is the right way to do it. one of the goals here, besides educating people who are going to buy these for themselves or for friends and family, is to show companies that people care, and that even if most buyers guides are based on price or features, privacy is something that people will think about. and we did actually give badges to all of the products that met the minimum
1:26 pm
security standards. you can go and look, and it's actually relatively easy. a harry potter coding kit did really well, which made me happy; i had never heard of it. really, the fact that 32 or 33 of them actually passed is nice, but it's also worrisome and interesting that half didn't. making those considerations part of the discussion and part of the product development life cycle has to be the next step when we are talking about connected devices. and this is the best -- no, that's not the best. oh, yes, there was a connected teddy bear that did not do well. the switch did do well. the baby monitor did terribly.
1:27 pm
that's the baby monitor with the hard-coded 1, 2, 3 password, the kind you heard about in the first segment today about baby monitors getting hacked. all right, this slide has 100% more emojis than expected. this is how, in terms of our engagement, mozilla thinks about the topic and helps people, including with the creep-o-meter. the advocacy team is half-threatening to do a valentine's day edition for connected sex toys, because they are out there. we are taking these lessons and kind of feeding them back into
1:28 pm
privacy work and trying to think about those next steps. we will start thinking about artificial intelligence and machine learning in these products; of these 70, about 20 had embedded ai. people actually started asking us, why didn't you evaluate alexa? alexa is not a product, alexa is an ai. we are thinking about how to expand this work, put in more emojis, maybe, and think about how to really understand products. take a look; i'm happy to chat about any of this and answer questions. i think the goal now is to talk about the implications of the internet of things and connected devices in the context of surveillance, not necessarily corporate surveillance but government surveillance, et cetera.
1:29 pm
[applause] >> hi, everyone. i do not have as many emojis to present, sorry. my name is matthew, director of cato's project on emerging technologies, and it's my pleasure to direct this panel, where we will discuss the implications of the internet of things. i want to introduce the two speakers whom you may not know. the first is professor andrew ferguson, who teaches and writes in the areas of criminal law, criminal procedure, and evidence at the university of the district of columbia david a. clarke school of law. he is a national expert on policing and the fourth amendment.
1:30 pm
i think heather's presentation might have prompted some of you to cross items off your holiday list, but i can recommend andrew's book, the rise of big data policing. andrew was nice enough to come to cato, i think it was last year or earlier this year, to talk through his ideas with me and some of my colleagues, and i wholeheartedly recommend the book. his legal commentary has been featured in numerous media outlets, including cnn, the new york times, the economist, and many others. the other speaker we have with us here is hannah, a senior technologist at the center for democracy and technology. while she brings technical expertise across cdt projects, she's focused on student privacy: maintaining the privacy of data while reaping the benefits of the technology. she received her ph.d. from brown and
1:31 pm
designed and built technology used in schools. while preparing for this panel i was going through an article that andrew wrote, and i feel like i'm your publicist here. he wrote an article on the fourth amendment which included a great quote from a new york times piece that discussed the kind of world that we might be living in: you can imagine one sock e-mailing the other to say it's behind the dryer, or your car knowing when you're acting up and setting up a service appointment. these all seem like, i suppose, great applications of technology, but i'm reminded that earlier this year there was a story about the fitness app strava, which released a map that included
1:32 pm
3 trillion individual gps points and revealed the locations of some u.s. military bases in syria as well as a french military base. so with that, i will move to the front of the stage to ask the panel the first question. i will start with andrew. domestically, what does it mean for surveillance when we are considering these networked devices, what are the threats out there, and, i suppose, what can we do about it? >> i study criminal law and criminal procedure and how technology impacts policing. we are in a world of sensor surveillance where police have the potential to obtain a lot
1:33 pm
more information about all of us. it used to be that if you were a police officer you had to sit in a hot car and drink cold coffee and watch a suspect go about their daily business and follow them on the path of crime. today you can sit and watch as they leave their smart home in their smart car with their smart fitbit on, telling you their heart rate and everything else they are doing, and follow them using data trails in a whole new way. it opens questions about whether there are any protections, legal or constitutional, that can stop that. and even if you say there might be some fourth amendment protections around the government just doing this sort of pure surveillance without a case, what happens when you get a warrant and you can actually go in and find all of the detail that the smart home is collecting: when you go to bed, when you stay up doing fun things on saturday nights, or when you are
1:34 pm
drinking too much. i mean, we are opening up a world of law enforcement that's going to change the relationship between government and people, and we are at the early stage of the internet of things, which is in large part brought on by us. that wonderful list just shows how pervasive it is, and it's only going to grow. we are building these things into smart cars, which are literally computers. you'll be driving a computer that records where you went, how fast you went, when you were speeding and when you weren't, which is valuable for law enforcement not just as a surveillance tactic but as an investigative tactic. where we are now is asking hard questions about whether we shouldn't be pushing back, whether we shouldn't look at what is happening and what will happen in the future and start writing laws that will push back on it, maybe hoping courts will interpret the fourth
1:35 pm
amendment in a more protective way, and just educating. this is what's wonderful about the project: educating all of us that this is the world that we are building for ourselves, and that it's changing the balance of power because of these very smart devices. and this is the issue: your heart stent can tell when you're using drugs. not that you are, but when you are using drugs, that's something that we might have to have a conversation about. can we make sure that that's something we can expect to keep private? not that you should be using drugs, maybe you do or you don't, but it's something we should have a national conversation about, because technologies are changing the information available. >> heather discussed a project that helps identify some of the issues, and andrew is keeping a close eye on the law, but when it comes to producers, is there anything they can do to carry the heavy lifting of informing people when it comes to the privacy issues
1:36 pm
associated with the internet of things? >> producers of iot products? >> manufacturers, producers of the products. >> i certainly think there's plenty they can do from an information standpoint. i will piggyback on a lot of heather's talk, because she brought up good issues and a good example of this: those baseline security steps are very baseline, and the fact that over half of the products failed them is pretty concerning. so from an education standpoint you can imagine, a, that they should meet those standards, and b, that they should explain why it's important that they met them. the password is a great example of this. there's been a proliferation of attacks on internet-connected devices, and one of the ways attackers get in is default passwords. you have situations where you can't change the password, but even in situations where you can, the setup of the iot device doesn't prompt that, doesn't explain that this is the thing you want to do, or the risks that you're exposed to if you don't.
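a minimal sketch of the setup-flow fix being described, assuming a hypothetical device: first boot refuses to proceed until the factory default password is replaced. none of these names belong to any real vendor's firmware.

```python
# minimal sketch: a first-boot setup flow that refuses to continue
# until the factory-default password has been replaced. names here
# are hypothetical, not any real vendor's firmware API.
FACTORY_DEFAULT = "123"

def choose_password(prompt: str = "choose a new device password: ") -> str:
    return input(prompt).strip()

def first_boot_setup() -> str:
    password = FACTORY_DEFAULT
    while password == FACTORY_DEFAULT or len(password) < 8:
        print("the default password must be changed before this device")
        print("will join your network (8 characters minimum).")
        password = choose_password()
    return password  # in real firmware this would be salted and hashed

if __name__ == "__main__":
    first_boot_setup()
```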
1:37 pm
and so, you know, i think sort of explaining that this thing, this iot product that you're bringing into your home, is in fact a computer and carries with it all the risks that a computer does. i think consumers are growing increasingly aware of the cybersecurity risks of phones, but we don't necessarily think about connected devices in quite the same way, and emphasizing the computerness of these products would be a great step in the right direction. >> heather, i was wondering about how to gauge the creepiness. someone might say, i bought the devices, i get a ton of benefit from them, and i like that my smartphone knows a lot about me. what's the response to that? and as a follow-up, are there any producers of the products that you've been analyzing that do a good job of making this information
1:38 pm
available? >> i think that it is still very difficult for people to really understand the potential implications, but i think there are a few really, really good examples. fitbit, which i have and love, it's on my body most of the time, was talking about what if the heart rate monitor could detect heart problems. oh, no, we just became a medical device. that's an unanticipated use of the sensor. one of the things companies need to do is change the expectation that you don't want to bother the user by saying, set a password, and that you don't want to talk about the scary things. you never want to talk about the scary things, right? the connected teddy bear, i don't know exactly which teddy bear that is. we did a lot of work on cloud
1:39 pm
pets, which was a connected teddy bear, and we found vulnerabilities in it. we tried to reach out to the company: hey, guys, can you fix it? no response. their website was more or less shuttered. it turned out they had gone bankrupt, but no one knew. it was marketed especially to military families, parents who are overseas and want to record messages to their kids. fantastic, but the fact that somebody can access those messages via bluetooth is not great. people want the information, and the fact that people are interested is a useful thing. i think we are at an interesting point in this dialogue where people care about this in a way that, well, they are realizing these are little computers, and i do like that framing.
1:40 pm
my house is full of little computers, and they make my life a lot better, but they need to be something that we think consciously about, and that's the way we will get people to educate themselves and to demand that kind of transparency from manufacturers. >> we've spoken a bit about the, i suppose, criminal potential to hack, but in the law enforcement context, what's the actual protection around these devices? the fourth amendment says the right of the people to be secure in their persons, houses, papers, and effects, and couldn't the argument be made that, well, these are protected? or am i being somewhat naive, andrew? >> a couple of issues. there's constitutional protection and statutory protection: wiretapping, for instance, if you want to turn that smart teddy bear into a spy device, that would hopefully be illegal without some type of warrant.
1:41 pm
you know, the complication is that the fourth amendment was not envisioning smartphones, smart cars, smart objects that will talk to you and others, and yet the principles do apply. you have smart devices in your home, and the law created in the fourth amendment context about things that happen in your home probably can apply to those smart devices. it's harder when you go out in public, but we've had some supreme court cases that recognized aggregated location data, also under the fourth amendment, adapting it to this age. so what we will be seeing is courts struggling with this idea that this constitutional protection of expectations of privacy and security is something that should apply when your teddy bear starts recording you
1:42 pm
but might have to be modified in a new way. and what i think we will see is that the courts so far, this is optimistic in otherwise dystopian day have been responding relatively case. the carpenter case that came out this year, whether police can obtain your cell locational data from a cell company, third-party provider and the court chief roberts said that they did need warrant to get it and what that means is a greater protection of all of the information from the teddy bear company from the nintendo switch company, google, all the places that are holding your data, it means that there must be a arguably a warrant requirement before law enforcement can get that. just pause, in some ways a pure victory. it's true that law enforcement needs warrant. there's something. once they have the warrant we really have opened up a new
1:43 pm
world for investigation. all these interesting cases are coming out where you have iot devices as witnesses in homicides, and the anecdotes come out that way. you have a case where a guy's alibi was contradicted by his wife's fitbit: it turned out he allegedly killed her a while earlier than he claimed, and the fitbit revealed the lie. you had a case of arson where it wasn't a fitbit but a pacemaker, and you are seeing evidence that's going to be incredibly valuable for law enforcement. one last story involves alexa: a case in new hampshire where police wanted to get the device that, after an alleged murder, might have captured things like how do you get rid of a body, how do you clean up blood. they didn't know what was there, and
1:44 pm
they won and subpoenaed amazon for the information; now there's a court battle over whether they should get it. you see the power of these data trails that reveal so much about us. >> hannah, what's a producer to do when, for people working in these companies and watching the conference, the gathering of this data actually makes the product better? it's an important feature of a lot of these products that they learn about us, learn our habits. is there a way to navigate the convenience and privacy problems here from the manufacturer's point of view? >> that's a great question, and i think the answer is a little disappointing. well, first of all, i do think there's a general thing producers could do, which is to be much more careful about what you consider essential to the functioning of your product, and that's
1:45 pm
something historically technology companies haven't really been careful with, simply because there is this sort of sense that the more data you have, the more potential you have for discovery, right? i think that sort of sense of potential value maybe has had its day, and we need to move on from that into actually being able to very clearly point to what data you're collecting, what exactly your use cases are for it, given the use case how long it needs to live, and then how you dispose of it when you're done. and this isn't limited to tech companies; it's pretty common to maintain a lot of information because you have the idea that in the future you would be able to learn something from it or improve products. i think that that can be true, but the calculus needs to shift away from potential future value: if you
1:46 pm
can't recognize value from it pretty quickly, maybe you don't need it. and to further that, a lot of the value that comes from product improvements more broadly you can get with anonymized data, collecting data for the future and treating it in such a way that you can't tie it back to an individual person. but that doesn't help you with products that you want to personalize, right? in that case you really do need the data that's tied to a person, and then i think to some degree the answer there is, you know, we need legal protections around it, because there is value to this data; it does allow you to do useful things. alexa doesn't work without you talking to her all the time, and you have created that data. i think to some degree a lot of this does need to happen on the legal side.
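as a rough sketch of that discipline, under a hypothetical schema: each field a product collects is declared with a use case and a lifetime, and anything undeclared or expired is dropped.

```python
# minimal sketch of declared use cases and retention lifetimes:
# every collected field names why it exists and how long it lives,
# and anything past its lifetime is purged. the schema is hypothetical.
import time

RETENTION_POLICY = {
    # field            (use case,               lifetime in seconds)
    "voice_command":   ("fulfil the request",   60 * 60 * 24),       # 1 day
    "crash_log":       ("fix product defects",  60 * 60 * 24 * 30),  # 30 days
}

def purge_expired(records):
    """keep only records still inside their declared lifetime."""
    now = time.time()
    kept = []
    for field, value, collected_at in records:
        if field not in RETENTION_POLICY:
            continue  # undeclared data is never stored at all
        _, lifetime = RETENTION_POLICY[field]
        if now - collected_at < lifetime:
            kept.append((field, value, collected_at))
    return kept
```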
1:47 pm
and i do think that companies can just improve their security practices so that, you know, this data isn't ending up with third parties where the legal standard is maybe more limited. >> right. that point on improving security reminded me of heather's password of 1, 2, 3. why would someone set a password to that? >> because they are just not thinking about it. one of the things that was very interesting to me as we started poking around in this iot space, and to be frank, i'm not worried about the tech companies; they kind of know that this stuff is important. i'm worried about the baby monitor company that did use 1, 2, 3. when you have industries that are slapping a sensor and a connection onto their product, they often don't have the expertise, or don't think about the fact that they have changed the game by creating a connected device. i think that legal protections
1:48 pm
will be such a good idea, and it would be amazing to create that certainty and clear expectation. but if i'm a coder and i am putting together the code for this connection and i need a password, i will put in 1, 2, 3 and fix it later, and later never comes. a lot of the time that's because there's not a clear process or clear review and auditing inside a company. i think it is unfortunate, but we are seeing that a lot of companies have to mess up pretty badly at least once before they put that kind of data hygiene in place and really start thinking about it. we have started doing trainings on something we call data practice, which is the way you handle data, and part of that is actually the security of the data, because if it's not secured, the game is over. >> andrew, did you want to --
1:49 pm
>> two points on those insightful comments. one is that the people who are not in the room when we are creating these products are the lawyers, and all companies are data companies now. all companies think they can add value by collecting data, and they need to have the experts in place to talk about the risks. that's certainly true in the policing space as well: police tend to find information when they can, they are not necessarily tech experts, and they are going to be making the same mistakes but with greater consequences to liberty. the other piece of that is, in an area where you would hope maybe the market might have an answer, i think by doing things like mozilla is doing you are pushing people on competitive advantage: we are a better company, you should buy our baby monitor because we understand the security risks.
1:50 pm
you're actually creating better incentives for companies to do the right thing. part of the problem is the people who are rushing into the iot space thinking everything can be connected, we will figure it out, and data is gold. there need to be cautions put in, because it may be the case that the companies that are more protective of your data and take security seriously will be in a better place economically and in the market to succeed. so i think what you guys are doing is really valuable, because you are pushing people to think about that as an economic value. >> well, just to feed on that a little bit: when we figured out cloudpets, when there was no way they were going to update the software in the teddy bear, we went to the marketplaces where they were being sold, it turned out by third parties, and amazon doesn't list it anymore, target doesn't list it anymore, wal-mart took it down pretty quickly, and that hopefully provides an economic
1:51 pm
incentive in a real clear way. >> yeah, and in addition to these retailers delisting it, one of the great facts about iot is that they are interactive devices. they spend a lot of time talking to each other, and there's an increasing number of them connected to hubs, google home, alexa, and so those hubs are actually a good place to deal with a lot of this too. if you can get alexa to say, well, i'm not going to talk to you if you haven't changed your default password, or something like that, you can encourage companies who maybe have a little more say to care, and that can push in the places devices interact. >> the power of the network. i think we need to start thinking conceptually about how this teddy bear, this networked space, can be protected. and one of the dangers in security: you update regularly for security patches. that's what is happening when you get an ios update that comes at
1:52 pm
the wrong time of day. you will not do that with your smart fridge. you will not buy a new smart fridge every two years; you will buy one every 10 years, and companies will go bankrupt. what happens to that data in bankruptcy? what's the company's value in bankruptcy? it's the data. you will be opening up a lot of private information in places that you didn't expect, when grandma bought a smart teddy bear because it sounded like a great idea. >> alexa and google home are the most popular, certainly the most notable, of iot devices, and a question for the panel: what's the actual data that law enforcement would have access to in an investigation into one of these? is it audio? is it a plain-text transcript? what's the actual degree of intimacy of the data? are they always on? does anyone have
1:53 pm
any ideas on -- >> i have four google homes in my house. >> okay. [laughter] >> which is quite overkill, but incredibly, incredibly useful, and that's the one i'm familiar with in terms of the protections and preferences. one thing: they did meet the minimum security standards, by the way. i don't know what their creepy factor is, but they talk to you and listen. what google has done is put assurances online: if you look at your preferences, you can delete your search results, which is essentially what the device is doing unless you're commanding something in your home like the thermostat. andrew will have a better legal answer, but there's a lot of data there and it is intimate data. there have been some really hilarious interactions with this smart thing that all of my other stuff is connected to.
1:54 pm
it will answer anything you ask. how do we spice up our sex life? well, i found these results on the web for you, and it started reading. no, shut up, this is the worst idea ever. but that's not actually that weird of an interaction when you're thinking about it as just part of your home. >> i just want to weigh in. i believe this is the case, though i'm not certain, for alexa: it isn't recording, it's constantly listening but discarding the data until you say okay google or hey alexa, and at that point the data starts being recorded in a permanent way and gets backed up to the cloud.
1:55 pm
so it's unlikely to have audio recordings of your conversations unless you say something that either is the wake word or sounds enough like the wake word that it starts listening to you. after you say the wake word, it will record whatever you choose to say.
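a rough sketch of the wake-word gating hannah describes, with hypothetical function names rather than amazon's or google's actual code: audio sits in a rolling in-memory buffer and is discarded unless a (possibly false-positive) wake-word detection flips the device into recording.

```python
# minimal sketch of wake-word gating: the device listens continuously
# but only retains and uploads audio captured after the wake word is
# detected. all names here are hypothetical, not any vendor's code.
import collections

BUFFER_CHUNKS = 32  # rolling window of recent audio, kept only in memory

def sounds_like_wake_word(chunk: bytes) -> bool:
    # placeholder for an on-device keyword-spotting model;
    # imperfect matching is why false activations happen
    return b"hey device" in chunk

def listen(mic_chunks, upload):
    buffer = collections.deque(maxlen=BUFFER_CHUNKS)
    recording = False
    for chunk in mic_chunks:
        if recording:
            upload(chunk)          # retained and sent to the cloud
        elif sounds_like_wake_word(chunk):
            recording = True       # everything after this is kept
        else:
            buffer.append(chunk)   # overwritten; never leaves the device
```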
1:56 pm
>> and just remember, that's today. i think you're right about where it is now, but there are companies that are trying to push other kinds of insights. even as i was sitting here listening, i got my phone out: there's an article in the guardian today about a british ai company that wants to figure out moods from coughing, whether you have a cold, and surface information about cold medicine or other things, the sense that maybe you can find out people are depressed or there is abuse going on in the home. so this is not happening now, but the technology that is embedded in our homes could allow it if you wanted to, and that may be a case where there could be a race to the bottom as opposed to the top. there could be a recognition that a lot of what has happened is that if you pay for a more expensive product, you have more security and privacy, but if you want things free, the cost of that free is your data, right? so you can imagine, instead of paying whatever an alexa costs, maybe you take the free device, but the cost is that the device will listen to you and see what you want and what you like and what you do. that's an amazing consumer advertising platform. we haven't seen that quite yet, because the world has been dominated by big companies that are conscious of pushback from consumers, but you can imagine that the more you give away free, the more people say, hey, i can get the equivalent of a free alexa and it only costs my data, maybe i will take the trade-off, and then we are in a different space. >> what's the legal standard required for access to this information from law enforcement's perspective? so if you're a detective, you go into a house, and you see a body
1:57 pm
on the floor, an obvious violent death, but there's a google home in the living room. what do you do, politely ask? do you need a subpoena, a warrant? what's the actual standard required for law enforcement to access that information? >> so, you know, it's somewhat open. right now, out of an abundance of caution, usually law enforcement will get a warrant. in the new hampshire case there was a murder of two women, and there was a person who was the suspect because this home also had a biometric camera to get in, and he was id'd from that. there was an alexa in the house, and police went to a judge and said, hey, there are two dead people, we know this person was there, and we think the alexa might have clues about what happened during the murder. we don't know exactly what happened, but we think we have probable cause that this device will have information. and the judge signed that warrant asking amazon to turn it
1:58 pm
over, and amazon resisted a little bit, and i think they might have because without the wake word you don't actually know whether or not it collected anything. on the other hand, if you go back to old-fashioned, sort of pre-digital probable cause, if you murder someone in a house, police are going to get a warrant to search the house and find everything in it. because it's digital and in the amazon cloud, it might be a different probable cause standard; that's just an open question, it's not resolved yet. i do think right now law enforcement has recognized that in big cases, where they don't want to risk losing the evidence, they'll go after a warrant and get a judge to sign off on probable cause that there's evidence of a crime in this device, whether that's clearly true or not. i'm not sure it's actually true in the new hampshire case, but it's clearly what they are doing. whether they need a warrant or can ask for a subpoena or court order, you know, with amazon pushing back, is one of the things that
1:59 pm
the professor was talking about. law enforcement hasn't been in close cooperation with the companies; it's not a competitive edge to hand information from your homes to law enforcement. so there has actually been resistance from the companies to giving up this information. >> i just wanted to add onto that. i think there's a lot of motivation for resistance, but one of them is exactly what you're saying: it's unclear to me that probable cause to search the house should necessarily be probable cause to search the alexa, because is there any indication that the wake word was actually said? so i think some of the concern around sharing this information is concern that the people asking for it don't have enough awareness of what the actual capabilities of the device are, what the bounds
2:00 pm
of the information it might have are. so there's concern that the reach is overbroad simply because there's a lack of education on the law enforcement side; they're not certain of what they're actually asking for, and that gives people pause in terms of sharing the information. >> i think that's really true. if you're a new hampshire homicide detective going to a new hampshire judge, probably neither one of you is a tech expert. they probably weren't aware of the device's limits, and maybe amazon is pushing back: do you know what you're asking for, do you have the predicates to be able to do it? it's about educating people in all aspects of this world before you ask for information. >> i'm reminded of the presentation we had earlier that discussed the australian encryption legislation, and while listening to heather's presentation i thought, well,
2:01 pm
couldn't we end up in a situation where, if these devices become more and more secure, we will have law enforcement more and more demanding that they have access? have any of you heard of an iot back door push from law enforcement yet, or if not, is it something that we should fear? >> i suspect a back door; that's how i would do it. we worked a lot on this, we worked on the australia bill, and it's indicative to me in some ways of a lack of education. i think that we can do a lot better as tech companies to go in and say, this is how it works, here is where we can help you, here is where we can't. when mozilla gets orders for data, it's, that's cute, we don't have it. but that's not always the case, and we have to work with law enforcement, but also help them understand
2:02 pm
where these tools can be responsibly used to make their lives easier, because there are so many cases where they couldn't get the iphone broken into, for example, and it worked out okay. there was a case out of new york, i believe, where they tried to get an order to get apple to unlock the phone, and they were gearing up for this court battle, and then they just asked the suspect, who unlocked the phone. done deal. we should think through ways to do this rather than weakening the security of consumer devices. at some point i should count how many i have in my home, probably 20. the fact is that backdooring a device would actually significantly change the whole industry and my acceptance of it.
2:03 pm
>> yeah, i was going to follow up on that. we have been talking a lot about law enforcement, but, and i know i probably don't need to say this to this room, you really can't make a back door without weakening encryption, and suddenly you're opening a door into our homes in a very, very serious way. so, you know, just to say that that would be a huge step. >> there was a reason that encryption was the first of the five security standards; it's really important. >> and particularly because the networked nature of iot means that if you can get into one of these devices, it substantially lowers the barrier to getting into everything else in the same house that sits on the same network. they talk to each other a lot, and if you break into the terribly designed teddy bear, it's not hard to get into the wireless router. so weakening any particular
2:04 pm
piece in this very complex architecture of the internet of things is weakening the entire thing. >> yeah, and i think unfortunately we will have more cases like the ones andrew has discussed, where they will demand access to information like biometrics, because people think those make them safer. if you can have a front door that lets certain people in, that's safe, that's something i want, but with government access to it, i suppose that does raise questions. is there evidence, not just in the united states but thinking more globally, of the kind of surveillance we should maybe be worried about down the road? are there any countries ahead of us in terms of iot surveillance or surveillance of personal technologies? >> a lot of people -- a lot of countries -- [laughter]
2:05 pm
>> i do think, i mean, again, i'm going to point to dystopian science fiction. there's a lot of discussion of what that future looks like, and a lot of it is playing out, and it's information control. you know, if you're in on the devices, you have this incredible ability to track everything, and that's a really appealing idea for a more oppressive regime than the united states. >> my latest article, fascinating to read, only 70 to 80 pages, details why smart cities are unconstitutional. as we build smart cities, which is an iot-connected world, they will monitor you as you walk down the sidewalk, have the apartment read your face to allow you in, have smart cars which are tracking devices everywhere. they will give you benefits; they'll know how to tax your trash because they will only tax you for the waste that you produce.
2:06 pm
all of this is built on iot infrastructure, really building the city as a platform. if the goal is to create a platform, to be the google or facebook of the world, and you build the city like google is doing, or trying to do, in toronto, canada, where there are talks and they are moving forward with it, they are actually going to get an incredibly powerful sense of how you live in urban space: where you go, where you shop, what you buy. they don't really need to know you personally, but they'll be able to collect all of this, right? and so i do think that we are moving toward this future. i mean, in dc and other places, not necessarily that, but the ability when you walk into a whole foods here in dc: they know who you are because you have an amazon prime subscription, and they know that you buy, you know, blueberries and strawberries
2:07 pm
every sunday, and they know your name and income and everything else. they are tracking a lot of information; they see who shows up on your amazon ring. we are building ourselves into this world where consumer surveillance is growing in power, and there will be a question of, okay, why is it okay for amazon to have that and not law enforcement investigating a murder, because we have a murder victim? once we have built the architecture of surveillance, it's hard to say law enforcement shouldn't have it for a legitimate purpose. >> i think this goes back to a question you asked earlier about what companies can do about this, and i think the more we get into this pervasiveness, the more companies will have to think about ways to implement these things so they can work without needing to know things. what i mean by that, for instance: you
2:08 pm
could design a facial recognition system where you have a biometric doorbell, but the doorbell doesn't know who people are. it has a map of a face, right, and the map of the face is written in a particular computer kind of way, such that a human can't look at it and reconstruct who that is. then all the doorbell is doing is: you walk up and stand in front of it, and it either says yes or no, that face matches the profile of a face i have, and that face is allowed inside. so you can think about designing these devices to do their job and not know more than that. but that's difficult, it's a hard thing to do. >> did you want to add anything? >> i agree. [laughter]
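a minimal sketch of the template-matching design hannah describes, assuming a hypothetical on-device face-embedding model: the doorbell stores only numeric face templates it cannot invert into a picture, and it answers yes or no.

```python
# minimal sketch of the privacy-preserving doorbell described above:
# the device stores only numeric face templates (embeddings), not
# photos, and answers a yes/no match question. the embedding model
# is assumed, not any vendor's actual system.
import math

Template = list[float]  # output of some on-device face-embedding model

def cosine_similarity(a: Template, b: Template) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def is_allowed(visitor: Template, enrolled: list[Template],
               threshold: float = 0.9) -> bool:
    """yes/no answer; the device never learns or reveals who matched."""
    return any(cosine_similarity(visitor, t) >= threshold for t in enrolled)
```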
2:09 pm
>> okay. i wanted to ask andrew a question about that statement. wouldn't someone perhaps make the argument that, well, in a world where the internet of things has proliferated and you're going around using your face to go shopping and all the rest of it, haven't you signaled to the world that you don't believe you have an expectation of privacy in that behavior? are you really allowed to push back against law enforcement when they ask about data that relates to your public activity out on the sidewalk? >> a reasonable expectation of privacy test in a world of sensor surveillance doesn't make a whole lot of sense, and it may mean that we need to rethink our standards in terms of how we think about the fourth amendment. that test comes from 1967 and a telephone booth, the kind where you put in a nickel at one end, which doesn't exist in this world.
2:10 pm
so we may need to adapt, struggling with the fact that maybe the precedents we built don't quite fit this new world. but i think you're right, that's exactly the argument: how on earth can you claim an expectation of privacy when everything you're doing is essentially being revealed to a third party? why on earth would we protect it? the court in carpenter said the fact that you're giving locational data to a cell phone provider isn't enough to waive protection; there's a claim that aggregated long-term surveillance matters for fourth amendment purposes, and there's some protection. i think you're right, that is the claim, that's the way the argument will go, and hopefully there will be creative lawyers pushing back and saying, no, that shouldn't be the default, otherwise there's no fourth amendment. >> and i think i'm really glad the judges are starting to say, wait a second, this doesn't make
2:11 pm
sense. carpenter, we were excited about the carpenter decision, but when we are talking about expectations in the nonlegal sense, and i'm not a lawyer, the measurement for me is: was the user surprised? i suspect the dude who killed his wife while she was wearing a fitbit was surprised that the fitbit was used in this way. it's pretty valid to go and say, hey, you have data, question mark, question mark, can we look? when we are talking about much different kinds of things, a traffic ticket or something, i don't know. when your e-zpass says you were on x, y road and you were lying to them about something. we love devices, and we don't think about how they could be used, and i'm not going to call it abused, but they can be abused too, and that surprise and
2:12 pm
expectation matter. you know, at tech companies we do a lot of user research. we sit down with users: what do you expect from this? and sometimes it's really, really interesting, if you sit them down and say, what do you think this does, what are your expectations here? that's not a legal standard, but it might be interesting to bring into some of those arguments. >> well, i was at the argument and excited when the decision came down, but maybe i can just throw some cold water on some of the optimism, because if i recall, the majority in carpenter says, look, here is a list of all the surveillance that is perfectly fine; it's a narrow decision that only deals with cell phone location information. it seems like it's going to be a while until the current court seriously reconsiders something
2:13 pm
that we just discussed, like the smart city. are there reasons to be more optimistic? >> my non-lawyer version is: as the people who use these devices on a regular basis start making the laws, start making the decisions. >> i'm not with cato on that; i'm more cautiously optimistic, because the court reached out to get that result. if you read it carefully, it's not as compelling an argument as you might make; there are a lot of gaps and a lot of questions left open for future litigation, but it was a moment to say, we are not just going to fall back on our past precedent in a digital age, we will think differently. my takeaway from carpenter is that any time you are talking about
2:14 pm
aggregated locational data, smart cars tracking where you go, fitbit data, powerful locational data, maybe not the teddy bear because that's in your home, i'm optimistic that carpenter signals a way forward where some majority of the court will be conscious of it. they could have just said, we have precedent, we are done, carpenter loses. instead, they recognized a new privacy interest. >> hannah, i have a question: in a world where we have manufacturers and producers who are aware of privacy concerns and implement certain policies, do we run a risk of having to constantly opt in to give data all of the time, every time we want to use a device? some might say that, for the private sector making those devices, the policies related to
2:15 pm
privacy might make certain products unusable, or is that an unwarranted fear? >> will it require infrastructure for opting in? >> also that the people using the devices will not want to use them because they are constantly being asked about privacy or to opt in to certain data releases. >> i personally think there's a way to do that well. there's a lot of really wonderful research in terms of user interface design, and product design in general, on how you make it clear what's going on just based on the design of the product. i think we've gotten ourselves into a position where we think of consent as, we popped up a warning screen and text, you can use
2:16 pm
this forever, i don't know the legal words, but that's not the only way it has to be done, right? you know, we have little icons: when you're waiting for a website to load, there's a little spinning wheel; when you upload something, there's an arrow pointing up that gives you a visual indicator that something is happening, that data is leaving one place and going to another. i think that when you start to design your products with the intent of pulling back the curtain a little bit, you can build a lot of this information into the design of the product, and users learn. for instance, a great example of users learning is that everybody knows the save icon means save, even when we are talking about
2:17 pm
a generation that has never seen a floppy disk, which is what the icon is. users can learn this from context and from interacting with products regularly. maybe i'm being too optimistic now, but i personally do believe that it is possible to design products in a way that conveys a lot of this information without it being a huge and incredible added burden. >> i do want to eventually turn it over to the audience for q&a, but i wanted to ask a maybe pessimistic question: what evidence is there that many people care about privacy when it comes to this sort of stuff? we had the snowden revelations, but there wasn't a massive adoption of devices and systems that would keep people safe. are we worried up here on the panel that maybe there are hundreds of millions of people who will say, actually, i like the convenience of these things, i'm not a target, i'm not worried, i'm glad
2:18 pm
that mozilla puts out ratings, but i'm happy to buy the cheap monitor because who cares about me? is that something that keeps you up at night? >> i think it's a very valid stance if you have taken the time to actually look at this and say, i'm fine with the baby monitor, it's half the price of the good one. you're an educated consumer, and that's a good outcome. i want everyone to make their own decisions, and when you place preferences into context, people do care. now, i hate privacy surveys that just ask, do you care about privacy; it's an abstract word and value. but there's a study that i really liked, a qualitative study from pew that gave 5 different scenarios and asked about privacy preferences and expectations. one of them was a smart thermostat and one of them was social media
2:19 pm
advertising; i forget what the other ones were, but the interesting thing to come out of the study is how strong people's preferences were in each context, and the reactions were very different. what i took from it is that users look at the products and they want to be respected and treated as a human being, as a person, and when that happens, and the product is useful at the same time, they are okay with it. you know, i trust google, google has my data, great. this is a very optimistic answer to your pessimistic question, but i think that people know that the tech is there, and i think maybe the snowden revelations were kind of the wonky version of the cambridge analytica scandal. people are paying attention to facebook mishaps and at least asking the questions and
2:20 pm
realizing what they don't know. >> i agree with everything you said except for one minor line in there, which is that people know that the tech is there; i'm not sure that's always true. >> you're right, it is not always true at all. >> and i just say that because i think this is a place where the tech companies, the tech world, have sort of failed users to some degree, in the sense that the snowden revelations happened, and what were people's options, other than figuring out signal and trying to get your entire social network onto it just to ask your brother to get milk at the grocery store? the weight of the problem is so diffuse, and implementing the solutions was for a long time quite difficult. a lot of the products, like tor: it took a long time for tor to
2:21 pm
be a usable product. they are moving in that direction, and i think that's great: the tech companies and the technologists in the world who do the great work, mozilla is a great example, firefox is a wonderful product. so there is this push happening to make it so that you don't have to care a ton; you can care a little bit and find products that suit your needs and aren't a huge headache to use. i think as long as people keep caring enough that small differences will drive adoption and better products, then we can keep moving in that direction. >> i think people care about privacy. i think people cared about snowden, people cared about cambridge analytica, and the new york times headline on monday was about third-party apps taking locational data, being able to see where you go to work, and people cared; it created a buzz and
2:22 pm
scandal and education. i think the more we understand, the more people will say, this might be a problem that we need to control, that it's about data control: we didn't know you were going to sell to this person. maybe there will be more of a conversation about control, and i don't want to get into gdpr, but in some ways you are seeing an experiment in europe that, from a personal standpoint, has ups and downs but offers some sense that there is another path. people do care about this for the most part, and i think the fact that we are having this conversation today is part of the reason to show that some people care, and maybe if we educate more people, more people would care. >> well, on that note i'm happy to open it up to the audience here for q&a. a few points: please wait to be called on, and announce your name and affiliation. i would remind everyone that this is the question-and-answer
2:23 pm
session, not the thesis, statement, monologue, or biography session, and with that i will begin with the gentleman here in the front. >> hello, alex howard. 3 years ago i was working at the huffington post. connected toothbrushes: we will give you a 15% break if you share data with us. now, 3 years on, look around: lots more connected devices. if you put your car on our network, maybe we give you a dynamic break on premiums. and it's not hard to anticipate all of the insurers asking for whatever the activity is to be connected back. what are the security concerns about that? i saw someone worrying about their car rebooting itself again and again.
2:24 pm
talk about the blue screen of death. and what are the legal concerns? you already talked about data used against you in a court of law, and how laws will be used, but what framework should we be introducing into a world where you can see the insurers have a lot of skin in the game, they're very powerful, the tech companies want to cater to this, and consumers who are being good about brushing their teeth get a cheaper deal? how do we prevent the internet in my bathroom? >> i didn't know connected toothbrushes were a thing. >> it was a father's day gift, don't judge, i have not connected it. [laughter] >> it's interesting, because we need different frameworks to think about it. what you're talking about is really eligibility determinations and scoring: you brush your teeth, you probably take care of yourself better, et cetera, et
2:25 pm
cetera, and we have frameworks to talk about and think about that, but i don't think they are quite on point. we are talking insurance eligibility when you're looking at stuff like the fair credit reporting act, and then suddenly you have the connected toothbrush, and where does that fit? open question, i think. >> it's not just a toothbrush. the equivalent of a fitbit to get benefits on health insurance and life insurance, and you get a better deal because they know how healthy you are, and you can see how incredibly powerful that is to them. and if you must have a wireless electric toothbrush, you are cutting out a whole lot of people who can't afford an electric toothbrush. you're changing the game and
2:26 pm
creating risk factors. car insurance is fascinating, a weird world where you are stereotyped because young boys are idiots and drive terribly, and it takes a while to get out of that. yet if you are a responsible young boy, a 16-year-old, you might want to benefit from the fact that you shouldn't have to pay the premium for your idiot friend, and you can see the temptation to have this data instead of a proxy for insurance: you're a safe driver, you benefit; if you're not a safe driver, you don't. i think we will see more and more of that, not just with driver risk scoring but with health risk scoring, who gets insurance and who is opted out of insurance. i think you're right to raise it, because my sense is that the legal framework in terms of consumer regulatory protections is not that rich. it'll be affected by the industry, which will lobby for these things and show why they're a good thing, and the people, the consumers, need
2:27 pm
someone to push back: i might not get insurance, i have to wear a fitbit, all these problems of being discriminated against. we should have the conversation now, before we get too far along, because what is happening in tech spaces is that they are rushing in, pushing products without thinking through the risks, and then we are sort of playing catch-up on problems and discrimination and bias that we haven't thought through. if we are thinking about it now, we can have conversations and better regulatory resistance. >> yeah, and as far as the risks go, there are a couple of things that concern me. the biggest one is that the more you pass data around, the greater the attack surface. the other thing that concerns me: we talk about use of technology in context, and that both means that users have to understand the context it's used in, but it's also worth noting that technology is often
2:28 pm
developed with a context in mind, and if you change that context, you have to be really aware of how the technology should be expected to behave differently. if you created a connected toothbrush that was supposed to show you how you brush your teeth, suddenly when it's tied to insurance you care more about it: you care a lot more when it's buggy, and you care about how much it gets wrong. i didn't brush very long, but i brushed carefully and very hard, right? all of a sudden it raises the stakes of any failure in the technology, and that's concerning to me. >> so i want to take a question at the back on my left, way, way at the back, yes, the lady
2:29 pm
at the back there, keep your hand up. microphone coming. >> hi, i'm nancy, urban institute. i'm curious how you think, as we talk through this movement in technology, we can keep the constitution and the legal issues in front of us in a way that continues to protect people's privacy and civil liberties. >> read a lot more articles. >> are there any movements in the legal community to really look at this carefully and begin to think about it and convene folks around it, kind of like the civil rights movement? you know what i'm saying, we are at a new age of technology. any movement in that regard? >> i do think that data policing and surveillance are civil rights issues, because they impact
2:30 pm
communities differently, and where we have seen surveillance is in communities of color. it's slightly different from the iot space, because in a weird way the iot space will cost more; you're talking about people who are surveilling themselves. who needs a connected toothbrush? not necessarily the people who would be targeted by police. the tech industry deserves this criticism, but there's definitely folks trying to be
2:31 pm
smarter and better, and there are folks on the hill trying to say, wait a second, we used to be leaders on privacy legislation, what happened? how did we let this happen? so there are murmurs of, what if we regulate this in a baseline way, and that would be useful. cdt put out a legislative draft. there is actually attention on it, and i do think -- i'm optimistic, again. >> i will plug that: they put out a draft of baseline privacy legislation. trying to deal with this in a piecemeal way means you're being expected to consider the same question and get it right 17 times in a row, instead of taking one big push to say, here's the thing we care about, the rights we want to maintain, let's do this once and do it right so we don't have to
2:32 pm
keep playing whack-a-mole. >> the gentleman in the front. >> artificial intelligence is not magic, it's just replacing a person with a machine. so alexa replaces a personal assistant, and if you hired a personal assistant, that person would know everything about you. what kind of protection do you have from the police questioning that person, and is there any reason that protection from questioning alexa should be any different? >> i'm just going to note one thing, which is that people forget and machines tend not to. i think it would be good if they did more often, but -- >> i think it's a good point. there are probably some contractual ndas you might make. maybe you don't trust your
2:33 pm
lawyer completely, they might turn on you, or they might themselves be charged with a crime, fraud, that could be there. but there's the third-party doctrine: when you reveal a secret to a third party, the supreme court says you have no expectation of privacy. is alexa a person? i think it's somewhat distinctive, but the argument will be used by law enforcement to push back and say this is no different: you ask alexa how to get rid of a dead body, you ask your assistant how to get rid of a dead body, you're on the hook for both. that will have to be played out. >> something we have to think about is the things that differentiate that person from alexa or google or these services, but i don't think of them nearly the same way. i know not to ask my personal
2:34 pm
assistant how to get rid of a body. there's a cultural norm and expectation that we just don't have with siri. it's weird. >> i will also say that technology is getting better all the time, so maybe this won't be a problem, but machines aren't good at nuance. you probably wouldn't ask your personal assistant to get rid of a body, but you could sigh, i could just kill her right now, and understanding that that's not an actual threat of violence is not something machines are always going to get right. >> yes, i want to get the woman at the back. >> good afternoon. my name is mega taylor and i hail from d.c. thank you for your panel.
2:35 pm
this is the one i'm here for specifically. someone referred to, i believe, the fourth amendment, and i do not feel safe in my own home. that's just the bottom line. it's really creepy if i say something and three seconds later it pops up on my screen or there's a commercial for it; for me that is extremely creepy. not only that, but i have called myself on my own phone at least three times, and we're talking landline and cellphone. so i'm asking, what protections does a consumer have? i also feel like i'm at the mercy of it, right? i live in 2018. there's a certain way the world flows: either you catch up or get left behind. i don't want a smart home. i don't have a smart home. i don't want a google home or alexa, but i'm continuously interacting with a world that does. my first question is, at the point where you have actually sent your foia out to see if you're being watched and they won't confirm or deny, what recourse does a civilian
2:36 pm
have when they know, because sense and intuition is better than anything, that there's something going on? how do i figure out how to put my finger on it, or can you commit to being my friend to help me work through the process? >> i'll commit to friendship before -- [laughter] >> i think that one of the undercurrents of this entire conversation is that there is an imbalance of power. consumers don't have a huge amount of power, but i like the point you bring up for people to think about: the choices i make in my home affect other people. i do have a google home in the guest bedroom i use as an office, because it plays music, and i unplug it when people stay with me, because they don't know it's there and they will be creeped out when it just starts talking to them. which has happened. there's a broadcast feature
2:37 pm
where you can talk to the hub in the other room, super creepy. that's apparently where my creepy line is. but figuring out how to -- that's why we're starting with education. if you don't know what it does, you don't know it's there, you don't know that you have choices, and you don't know how those choices impact other people. that is step one, and it's nowhere near the end goal. be a friend. i like that phrase. >> i think you speak for everyone who is struggling with what the balances and protections are that exist out there. most of us don't know. we think maybe the fourth amendment protects it; it really isn't going to protect you in the consumer space. it might protect you from the government. maybe there should be laws, but we don't know what they are. we're all in the same position of struggling through this new era of technology as the devices
2:38 pm
proliferate around us. there will not be an opt-out. you will not be able to buy a car that isn't smarter than you in the future. everything is going to be -- when you're at the dealer you'll sign a document that gives them your location data, and you can't get out of it because you want the car and you didn't know this would be part of it. you've just sold a whole bunch of things you didn't know you'd be selling, and everybody is in this disempowered position. one response is you convince companies to be in the space of being more secure, privacy-protective, whatever, and you opt in by buying that technology; or you ask the government to step in as your proxy, to deal with that imbalance and have some checks on it. depending on your political leanings you might favor one way or the other, but something needs to happen, because right now the companies themselves don't see that need. there hasn't been, until
2:39 pm
you did it, a comparison chart where you knew which teddy bear to buy. that didn't exist. everyone was saying, i don't know which one to buy, which one is going to spy on me or talk to the nsa? i don't know. now there are groups doing that, and i think we need more of that: more of these conversations, more education, and more consumer choice, to be able to put a thumb on the scale for the ones that are more secure and more protective. but it probably will not solve the problem of calling yourself on your phone. >> we are in the final lap. keep the questions brief so we can get to everyone. i'll take this gentleman right here. >> i have a question about the border. the police need a warrant to look at my cellphone, but when i come into the country, customs
2:40 pm
can treat it like an address book, and the courts have upheld this. is there any prospect of changing this? >> there are lawsuits. i don't know if it's the eff, but the aclu in massachusetts has filed lawsuits trying to say you need more than just the fact that someone entered the country to get everything on their cellphone and laptop. it's incredible -- not just in terms of the personal data you might have on there, or who you work for, but even from a competitive perspective: if you work for a company in another country and you come to the u.s., it means the u.s. border guards can just go through the data on your laptop. it doesn't make sense, but it is the law. there have been lawsuits that have tried to push back, saying maybe you need some kind of warrant, but we certainly haven't gotten there yet. maybe we will in the future. i think it's a real vulnerability, and it is going to impact lives,
2:41 pm
the fact that you are giving up the security of the people you represent and work with. it's terrible. >> i'll take an opportunity to plug: i was at a senate committee hearing with someone from the aclu and georgetown talking about this particular issue. there are senate and house members who do have legislation that would eliminate the so-called border exception to the fourth amendment by imposing a warrant requirement, but that's not the case at the moment. senators ron wyden and rand paul are interested in this issue. i will go to the back again, the gentleman back there. >> hi, thank you. jason from the global network initiative. two quick questions. you haven't talked about when devices may be sending data back to other countries, vis-a-vis which you
2:42 pm
don't have any legal protections, and i'm just wondering if there is any thinking your organizations or you individually are doing on that issue. then the other one, going back to the lady's question in terms of how to protect ourselves: i wonder if any of you have heard about product development or work going on in standards development processes to allow consumers to protect themselves. there are ways you can configure your wireless router, which is the nexus between all of your in-home devices and the rest of the world, so that even if the devices you buy aren't particularly well designed or don't have privacy built in, your wireless router can protect you from them. or if you're an individual walking around in the world and you're interfacing with devices, your devices with other devices, is there a way -- is there technology you can activate on your devices to basically signal to other
2:43 pm
devices in the environment that you don't consent to having your data transmitted or processed by them? >> so in the final two minutes or so, can three of you answer those two questions? >> there are very interesting, hard-to-explain, not consumer-friendly technical ways to do that; if you spend some time you can look them up, i don't remember what they're called -- like separating out the networks. but one thing -- i can't believe i haven't plugged this yet -- look up mozilla's project things, which we are starting to play with and work with manufacturers on, and the hope is to move the whole industry in the right direction. >> network segmentation. >> yes. >> network segmentation is making sure all the devices on your wireless router aren't just sharing the same air space and allowed to talk to each other in unrestricted ways.
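a rough sketch of that segmentation idea, in python, for a linux-based router: the interface names (wlan_iot, br-lan) are made up for illustration, and iptables rules are only one of several ways to do it.

    # sketch: keep iot devices from talking to the trusted lan on a
    # linux router. interface names are hypothetical; run as root.
    import subprocess

    IOT_IF = "wlan_iot"  # assumed interface the iot devices join
    LAN_IF = "br-lan"    # assumed bridge for trusted lan devices

    def block(src, dst):
        # insert a FORWARD rule dropping traffic from src to dst
        subprocess.run(
            ["iptables", "-I", "FORWARD", "-i", src, "-o", dst, "-j", "DROP"],
            check=True)

    block(IOT_IF, LAN_IF)  # iot devices can't reach the lan
    block(LAN_IF, IOT_IF)  # and the lan can't reach the iot segment
    # iot devices can still reach the internet via the wan interface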
2:44 pm
>> on data in other countries, there is a concern about that. >> people are starting to think about it, especially in the european context and the german context. i don't know that there's anything outside of the european concept of adequacy that i can speak to in other contexts, but there are countries where you can have some pretty sketchy practices that nobody cares about, and that's not something manufacturers are disclosing. why would they? we're sending your data to this terrible, terrible place, let us tell you about it -- great marketing. >> well, we're now handing off to some other speakers -- before that, please join me in thanking our panel. thank you. [applause] >> it's a rare pleasure to -- i am
2:45 pm
insufficiently paranoid. before i introduce our afternoon block of flash talks i want to address two messages to the folks watching at home. first, in general, as regards the conference, you can join the conversation about this on social media, on twitter, using the #catospycon. and second: alexa, play the cato daily podcast. our final block of flash talks is going to focus on transparency. one of the ways we have been able to become informed about the extent of government surveillance is not just through leaks from folks like edward snowden but via transparency reports that major platforms are increasingly producing, to try to provide a measure of visibility to the public -- not, obviously, into the specifics of who the
2:46 pm
government is looking for information on, but into the scale and what authorities are being used for that purpose. so we'll have two talks from major platforms about their transparency reports, of course, and we'll start with facebook's head of strategic initiatives and response, alexandra galloway. >> hello. i'm alex galloway, and one thing our team is in charge of at facebook is our transparency hub, formerly known as the government request report, though we have been expanding it significantly. i'll start with high-level points on why we do the transparency report. as julian just said, originally this was focused on government requests, and the aim of the report was to hold governments accountable for the information they request: to provide insight into how often
2:47 pm
we provide data to governments, the rigorous legal process we have in place to review the requests, and the scale of how many users the government is requesting information on. it has evolved; we try to add more information every year. now it includes a little bit more to help the world understand how to measure the work companies like ours do to keep users safe, both physically and in terms of their data, and to hold ourselves publicly accountable for the actions we take in these spaces. i won't go into all the pieces here, but this is a timeline of all the things we have been adding since we launched our first report in 2013. each half-year we release a report and try to add a little more information, based on feedback from people like you in the room about what is important. i'll briefly focus on the government-related parts of the report, but first go over at a high level what the five reports we currently publish are.
2:48 pm
the newest one is the community standards enforcement report. we also do our government requests for data, content restrictions based on local law, intellectual property infringement, and internet disruptions. in our community standards enforcement report, we cover what actions we're taking to remove content from the platform for violations of community standards such as graphic violence, terrorist propaganda, and child nudity and sexual exploitation, and we're seeking to answer five key questions: how prevalent were the violations, by which we mean how often you are likely to see these violations; how much content did we take action on; how much violating content did we find before users reported it, meaning how much we proactively detect; how quickly we take action on violations; and how often we correct our mistakes. currently we publish data on the first three questions. the last two are still in development, but we have them on our site because we want to
2:49 pm
hold ourselves accountable to the commitment to publish this information eventually. next, content restrictions based on local law: this is content that we geoblock or restrict in certain areas based on local laws, even if it doesn't violate our community standards. the example is holocaust denial: we will not take it off the platform completely, but we will block it where it is illegal. the main questions we're trying to answer for people are how often governments ask us for data, how many accounts are covered by those requests, how often we comply with these requests, and how often we're disclosing based on national security orders. intellectual property infringement covers copyright and trademark infringement, and internet disruptions is how often people are blocked from using facebook
2:50 pm
and our services in places where they're otherwise available. so, our government requests: for the last report we covered data from the first six months of 2018, and for the first time we had over 100,000 requests in a half. we complied by producing at least some data in 74% of the instances. the highest number of requests by far comes from the united states, with over 40,000 requests, followed by india, the united kingdom, germany, france and brazil. together, the united states and the other top five countries represent almost 80% of the total requests we receive worldwide. our global requests for the first half of 2018 were up 26%, which is pretty much on trend with the growth we have been seeing. this growth is happening because facebook is growing, with more users all the time, and also because law enforcement is getting more tech savvy, coming to us more often and understanding how to
2:51 pm
request data from us. we see the upward trend in all regions: asia-pacific is up 30 percent, latin america up 10%, north american countries just over 20%, and in europe, the middle east and africa we saw requests up 20%. there are two types of requests we get from governments, and we started breaking these out because it's important to understand the different reasons we disclose. emergency requests are when law enforcement submits requests to us without legal process, and we may voluntarily disclose if we have a good faith reason to believe the matter involves the risk of serious bodily injury or death. emergency requests are a small fraction of what we receive. the rest of the requests come with legal process, which means we comply with them based only on applicable law and our terms of service -- things like search warrants or subpoenas. the united states makes up a
2:52 pm
third of all of our emergency requests, followed by the united kingdom, canada, india, israel, and mexico. so the list of highest requesters for emergency requests is actually different than for legal process requests. on content restrictions based on local law, trends over time are a little misleading, because sometimes we see really high spikes driven by a certain piece of content that we restricted multiple times. for example, the spike you see from july to december 2015 was the result of one piece of content being restricted repeatedly. if you take out those incidents, government requests to take down content based on local law drop by over half. internet disruptions: this is when facebook products or services are blocked in certain countries. trends over time aren't very helpful here because these are
2:53 pm
very specific to the political climate in individual countries, which fluctuates. in the first half of 2018 we saw eight countries blocking facebook services, with 48 disruptions in total. 83% of those were in india, which restricted access to facebook products and services for just over 11 weeks in total, during which people couldn't access them. interestingly, although india blocked our services more times, in other countries they were blocked for longer: in cameroon, ethiopia and iran they were blocked over 25 weeks, followed by chad, where they were blocked for 13 weeks, and the rest under one week each. >> so, in terms of what is next, we're continuing to grow the transparency report and to try to figure out what information is meaningful. going forward,
2:54 pm
we're working to stabilize our metrics so we can report on every single category of harmful content; right now we only report seven different types. we'll also look at how many mistakes we make and how fast we get to things. metrics are a challenge because we want, first of all, to use the right metrics, so they're neutral and people can understand and use the information without it being colored. you'll see in the transparency report that we go to great efforts to really just present the data, without a narrative around our conclusions or what we think -- it's specifically to get that raw data out there. we're also developing infrastructure to move to quarterly reporting; previously we have done this by half-year. this is a priority of mark zuckerberg's, to keep this on track, because it is just as important as our other priorities. we're also going to start doing
2:55 pm
calls with facebook leadership accompanying every transparency release, with policymakers and academics and the press, so they can ask questions and hear directly from us on our take on this. we're also trying to expand third-party engagement, a lot of it with the academic community and policymakers, so we can pull back the curtain and be more collaborative around the policymaking for the appeals process, in addition to expanding research projects -- finding data that is useful to people who are looking hard at these issues, sharing that data with them, so they can give us feedback on how to do our work better. so, i really look forward to working with the people in this room going forward and talking to you after this. [applause] >> thanks, alex. just for, i think, contrast or
2:56 pm
comparison, it's illuminating to look at something like this from the perspective of a couple of different companies. i didn't want anyone to think we were showing favoritism for facebook, so i'd like to ask david lieber of google to talk about their transparency report. >> thank you, julian. yes, so we, like facebook, now have a number of different transparency reports, a total of 12, that span a spectrum of content including privacy and security, content removal, and then a third category that is just a catch-all but includes things like our transparency report around political advertising, which is relatively new. we published our first transparency report covering government requests for user data, and we did it for a couple
2:57 pm
of reasons. the first was just to inform the public about the nature, volume and scope of the government requests for user data that we received. but the second reason was to really inform the broader debate around government surveillance. that's what i want to focus my lightning talk on today: three discrete areas where i hope, i think, we made a difference, or we shed some light on some of the numbers behind the relevant debates. those three areas are foreign intelligence surveillance act demands, national security letters, and emergency requests. this just gives you a glimpse, a bird's eye view, of the number of government requests we receive worldwide over the past nine or ten years since we started publishing our transparency report. in and of themselves the numbers aren't necessarily going to provide you with a ton
2:58 pm
of useful insight, other than to echo what alexandra said: we're seeing an increasing number of demands, and as you also see, the number of users and accounts impacted by those demands has gone up significantly. it's difficult to extrapolate from the numbers what is behind them. user growth is one explanation, but in terms of explaining the growth in the number of users and accounts impacted there are a lot of different reasons, and some of it is speculation as to why over one particular reporting period you might see a significant spike versus other reporting periods. i spoke at the cato surveillance conference in 2013, and i think it's important to go back, because we tend to lose sight of some of the gains we have made in transparency here. when i spoke in october of 2013, i believe, in this room, we
2:59 pm
were in a position where we could not disclose, could not even acknowledge, that we received foreign intelligence surveillance act demands from the united states government. full stop. so when i talked about it here i had to be very careful to speak about demands that we might receive, assuming we received any. that's the predicament we were in at the time: constrained in our ability to speak accurately and truthfully about the aggregate national security demands we received. even one month later, when our director for law enforcement and information security, rick, testified before the senate judiciary committee on behalf of the world's largest internet company, advocating support of legislation that would enable us to speak truthfully about these types of demands, there was an interesting exchange between senator leahy and my colleague rick, in which senator leahy
3:00 pm
asked him whether he was there arguing for the ability to say more about the demands we received, and my colleague said yes; and he said, yet you can't acknowledge right now that you actually get these demands, and he said, that's also true, senator. we're in a different place now, in part because of the u.s.a. freedom act enacted in 2015. the u.s.a. freedom act gave companies like google and facebook the ability to say more about the foreign intelligence surveillance act demands we receive and provide more insight into the nature of those demands. what you're seeing here, for example, is just some reporting around some recent periods where we received fisa noncontent requests, and in this realm there's not a lot that is terribly interesting for people. you can see in a particular range how many demands we received and how many users and
3:01 pm
accounts were impacted, and you can see that's relatively consistent for noncontent. it's a little more interesting when you're looking at content requests. the u.s.a. freedom act enables us to speak with a little bit of granularity about the requests we receive, both for content and for noncontent, and to report in bands -- you can report in different bands, different ranges, depending on how granular the types of disclosures you're making are. we report in bands of 500. you can see the numbers are consistent here, at least in terms of the number of demands made, but very different in terms of the numbers of users and accounts that have been impacted.
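to make the banded reporting concrete, here is a minimal python sketch, assuming bands of 500 that start at zero; the function name is invented for illustration.

    # sketch: report an exact count only as the band containing it,
    # assuming bands of 500 aligned at zero (0-499, 500-999, ...).
    def disclosure_band(count, width=500):
        low = (count // width) * width   # round down to the band floor
        return f"{low}-{low + width - 1}"

    # a provider that received 731 demands may only say "500-999"
    print(disclosure_band(731))  # -> 500-999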
3:02 pm
some of you may know that the u.s.a. freedom act, believe it or not, is scheduled to expire toward the end of next year. it's an appropriate time for congress to revisit some of the constraints placed on providers in terms of disclosing information about the demands we receive. and to be clear, the department of justice has been a very good partner in enabling us to be more transparent here, and in 2013, in the aftermath of the snowden revelations, it was understandable that there were some concerns about the national security implications of providers for the first time making the types of disclosures we have now been making for a number of years. but flash forward to the end of 2018: we're reporting in bands of 500, and query whether there is a national security harm, if you will, if a company like google discloses 500 to 999 on the one hand versus 731 on the other -- and by the way, that number comes off the top of my head; i'm not looking to be wearing orange pinstripes anytime soon. so it is now an appropriate time
3:03 pm
to consider whether we should be able to speak with a little more granularity and shed a little more light on the demands we receive. some folks who are familiar with fisa can understand the differences between content and noncontent, but reporting by authority -- some folks here may be familiar with section 215 or section 702 -- breaking down requests that way would be useful and help folks understand exactly where the focus of our government is in terms of seeking data from companies like google. let me turn now to national security letters. this has also been an area where i think the government has been a good partner in enabling us to say more about the demands we receive. this isn't, by the way, just transparency for its own sake. the u.s.a. freedom act also requires the u.s.
3:04 pm
department of justice to look on a more regular and periodic basis at the nondisclosure orders that typically accompany national security letters, and to ask whether there is a need for providers to be gagged, for lack of a better word, on an ongoing and indefinite basis. to its credit, the department of justice has taken this responsibility seriously. they have looked at national security letters issued to providers like google in the past and said, there's no longer a need for us to have this nondisclosure obligation placed on you, we're releasing you from it, and you may publish the fact that you received the letter and the text of the national security letter itself. when i say it's not just transparency for transparency's sake: if you look at the national security letters that companies like google and facebook and others have received and have published the text of, what you're seeing
3:05 pm
is that the national security letters are asking for certain types of information that are authorized under the statute. nsls -- i should have said at the start that national security letters are the equivalent of subpoenas -- enable the department of justice to get a limited subset of data, the type of information you would provide when, for example, you are signing up for a google account: name, address, maybe a credit card number, some other basic information. but what we have also been able to learn, i think, as a result of the disclosures is that the department of justice has also invited providers to provide another set of information called electronic communication transaction records, or ectrs. there's been an ongoing debate over whether the underlying national security letter statute authorizes the department of justice to obtain ectrs with
3:06 pm
a national security letter, and there's been a legislative proposal to require providers to provide a broad universe of information, the ectrs themselves. that's a source of ongoing debate, and we have some real concerns. but the point i'd make is that but for the ability to publish these national security letters themselves, it would be opaque to the public that these records, these electronic communication transaction records, have been requested. so transparency will help inform the debate to the extent this arises. one other area: emergency requests. alexandra talked about those. the domestic law in the united states here, the stored communications act, authorizes providers like google, but does not require us, to disclose information where there's a serious risk of bodily injury or
3:07 pm
death, on an emergency basis and without legal process. we testified before congress in 2013, separately, in support of the e-mail privacy act, or what became the e-mail privacy act, the ongoing piece of legislation that everybody thinks should pass but never does -- hopefully it will in 2019. at that hearing, a law enforcement official testified and made the assertion that at least some providers have a policy of categorically refusing to disclose data on an emergency basis, and that some providers demand they receive compulsory process or search warrants. for us that wasn't the case, but at the time we weren't disclosing that data; we weren't providing any insight into the emergency requests that we received. and the implication, at least for
3:08 pm
some who saw the testimony, was that the biggest providers were the ones that were not providing data on a voluntary basis in emergency situations. for us, and i know for a number of other providers, this is the most serious type of request we receive. one reason we're staffed 24/7/365 is that if there is that type of risk, we want to provide the data; we don't want to see anybody seriously harmed or killed as a result of us not being able to provide data in a situation where time is of the essence. so we started to disclose that data. what you're seeing is the latest transparency report, which shows quite clearly that when we get emergency requests, we disclose data the vast majority of the time. we saw other providers follow suit and publish the same data, and i think what we saw as a result is that none of
3:09 pm
the major service providers are requiring warrants in situations where there is an emergency. that has helped to inform the debate, because there have been proposals -- driven in part by the belief that we were not disclosing this data voluntarily -- to require service providers, without a warrant, to disclose information on a mere indication of an emergency, without a concomitant showing that the emergency exists. this is an ongoing issue in the policy realm around the e-mail privacy act, about whether companies should be required to disclose this data when law enforcement agencies come to us and say there's an emergency. i just wanted to shed a little light on the policy implications behind the information that we publish, and i think we'll continue to try to innovate and to recognize situations where we
3:10 pm
think more data can help to inform the public policy debates. thank you. [applause] >> transparency on a voluntary basis like that is a good thing, and provides a lot of useful information, but sometimes it is not enough. sometimes information has to be pried from the unwilling paw of the government, and in those cases it's the freedom of information act that is often the most effective tool. to discuss that i'd like to invite up jesse franzblau from open the government.
3:11 pm
>> thank you. so i will definitely talk about the kind of deep prying that we're involved in, using the freedom of information act to get information regarding surveillance programs, and some tremendous groups that have used foia to unlock information. a little bit about my organization: open the government is a coalition of over 100 public interest organizations, an inclusive, nonpartisan coalition that works to strengthen our democracy and empower the public by advancing policies that create more open, accountable and responsive government. we do a lot of work within the coalition using the freedom of information act, one of the most powerful tools we have as public interest advocates to get information from government agencies. there are a lot of great organizations within the coalition
3:12 pm
and outside of it using foia to get information about surveillance programs. we published a guide on best practices for how to use foia successfully to get information related to issues that we care a lot about, and several of its cases look at how foia is being used to get information about surveillance programs, at the state, federal and national levels. i'll talk about those cases today, particularly some of the foia work being done to expose records that have helped inform the public about surveillance programs that raise concern over surveillance of first amendment protected activity, and secret information sharing programs that raise concerns about fourth amendment circumvention. the first case i'll look at is one that the organizations color of
3:13 pm
change and the center for constitutional rights have been involved in for several years, trying to get information about surveillance of protest movements, particularly monitoring of movement for black lives organizing. what they have uncovered through this work is information showing quite extensive monitoring by homeland security and the fbi of movement for black lives organizing, including e-mails and field reports shared and circulated with law enforcement agencies across the country. they've also surfaced information pointing to documents about what is taking place in terms of surveillance of protest organizing: some documents they have gotten refer to a document called the race paper, which obviously raises concern about fbi programs that may be carried out in a racially biased
3:14 pm
manner. so this has been an important case. another somewhat similar one: human rights watch has done a lot of foia work to get information about how section 702 of fisa, which was mentioned earlier, and executive order 12333 have been used. these laws that govern large-scale surveillance programs have been used in different ways that raise fourth amendment constitutional concerns, and what some of those revelations have shown is that u.s. authorities regularly hide the sources of evidence in criminal cases, which is a big concern, and the documents human rights watch has been able to get show this is happening a lot more systematically than was previously publicly known. so this is a really important
3:15 pm
case for foia work. a more recent one: the freedom of the press foundation and the knight first amendment institute at columbia exposed the justice department's secret rules for targeting journalists with secret fisa court orders. this has been a really important exposure, an important piece of information to get out into the public realm, as it confirms long-held suspicions that the government has used secret fisa court orders to conduct surveillance of journalists. following that important revelation, there are still efforts to try to get more information about what this signifies in terms of how often these fisa orders were used to monitor journalists, and this is one of these tip-of-the-iceberg types of pieces of information that has come out
3:16 pm
through foia and has been really important. another case: the project on government oversight was able to obtain e-mails showing amazon pushing its face recognition technology on agencies, these ones particularly related to amazon pitching the department of homeland security on rekognition -- spelled with a k -- which was talked about here earlier, a pretty controversial face recognition technology that has become used more and more by law enforcement and federal agencies. this piece was important for understanding what is happening behind closed doors, and it also came at a time when several amazon employees were anonymously objecting to amazon pushing this type of technology, providing it to federal
3:17 pm
agencies, particularly at dhs, where there are a lot of privacy and civil liberties and civil rights concerns with the way this technology is being used. the next case looks at how organizations are providing platforms to give the public tools to use the freedom of information act at the federal level and also state open records laws to get records about how local surveillance is happening. the electronic frontier foundation has provided some tremendous platforms for people to see how invasive surveillance technology is being used at the local level, and these tools are giving the public the ability to create the requests themselves and to dig into what is happening in their
3:18 pm
local communities. the last example that i'll talk about is one where open the government, my organization, was able to obtain some documents using the d.c. open records law. this is an example of how state and local open records laws can be used in a functioning and really effective way to get information about things federal agencies are also involved in, where the federal agencies might be less willing to release information. in this case, we filed information requests related to the security planning that was happening in the leadup to the 2017 presidential inauguration, and we got some documents back that showed there was monitoring of protest groups in the leadup to inauguration day, where
3:19 pm
there were large-scale protests being planned. the documents point to this, but they're very heavily redacted -- almost entirely blacked out, with only small snippets as to what was happening in terms of intelligence monitoring taking place beforehand. and even though the d.c. mayor ordered the d.c. police to release more information about this case, to remove their redactions and search for more records based on an appeal we filed, to date the d.c. police have not responded with the information, and have indicated that the federal secret service is asking them to withhold the information at its request. so this is an interesting case that is still unfolding, and we'll let you know how it unfolds. in the guide we were able to come up with some
3:20 pm
recommendations based on all of these examples, these best-case examples of how foia work has successfully led to disclosure of important information that has been really critical in advocacy campaigns and raising public awareness. so we came up with some fundamental tips and recommendations on what makes up successful foia work. doing the background research is one: understanding what else is happening out there in the landscape, who else is filing similar requests, what successes they have had or obstacles they have run into, and how to craft your own request accordingly, also so you're not duplicating effort and bogging down foia offices. describing the specific records: getting in as much detail as you can to identify the specific documents you're looking for. also, using other means like mandatory declassification review: if you're able to
3:21 pm
identify a specific enough document, that is a pathway to get information, particularly national security related information. appealing a denial by an agency is often an avenue to get more information after they have initially withheld information in response to your original foia request. and then, on foia collaboration: a lot of success happens when groups are working in unison, organizations teaming up with journalists or technologists, grassroots organizations, litigators -- there's a real synergy that happens when they team up to file requests and follow those through to get the information they want. so learning what other groups are doing and learning the landscape is an important
3:22 pm
element of collaborating. planning litigation strategies early: before you file a request, talking with litigators to see what else has been litigated, so if you think it's going to be rejected you're not wasting your time going forward with a case you're not going to be able to litigate, and if you are, planning accordingly to prepare for that fight. working with journalists is a big one, to get the story out and also to get ideas from journalists about what is newsworthy; and then journalists who are filing requests can get information from foia experts about what they should be targeting. partnering up in that way can be an effective way to increase access to information broadly. promoting foia reforms: you see again and again different loopholes that block access to information, so promoting
3:23 pm
legislative changes that can actually fix this, which is often a long-term strategy but a pretty critical one. and using state and local open records laws: we have seen that across the board state laws vary quite a bit, but some laws in different states are pretty effective at getting information that a federal agency might not be releasing to you. also, looking at archives overseas: there are cases where groups have been partnering with organizations in other countries, such as mexico, using their information laws to get documents about u.s. foreign policy in that country and what is happening with u.s. government relations, which, again, is often a way to get information that the u.s. government might not be willing to release but that, through using laws overseas, you might have some success with. we have seen some of those cases. so, there are a lot of foia resources we have, a lot of groups in our coalition
3:24 pm
and outside it; just a few are listed up here. but we're also trying to continue to document successful foia work and what it entails. if you have stories about the successes or the challenges you have had in your foia work, let us know. the foia guide is also available, as are resources from the reporters committee for freedom of the press and the foia wiki site. so let us know what worked, what has been successful, and where you have had luck getting information through your foia work. thank you. [applause] >> the debate around encryption makes me think of an old routine by a british comedian, talking about invading russia, but it applies just as well, trust me,
3:25 pm
to debates about cryptography and backdoors: everyone gets different uniforms and heads for the russian border, it's the same idea, and they run away realizing it's not going to work. i think something similar happens here. there's a kind of constant call for the very smart folks in silicon valley to figure this out, and it turns out to be a version of an old idea that is still pretty bad. we're having another cycle of that lately. i'd like to invite the esteemed cryptographer, matt green, of johns hopkins university, to walk us through the new proposals being floated for exceptional access to encrypted communications, and the problems with them as well.
3:26 pm
>> so this is going to be a slightly different talk, a technical talk, but only very, very slightly technical, so hopefully painless. thank you for that introduction. just in case anybody has not been following the last few years of debate over encryption, i want to give a very brief background. ever since the smartphone came along, or a couple of years after that, we have been having a very interesting national debate between law enforcement agencies and silicon valley about whether end-to-end encryption is something that should be allowed unfettered, or whether there should be some way for law enforcement to essentially eavesdrop, to add back doors to encrypted communications. this is a continuation of a long debate going back to the 1990s, but the numbers have changed. in the 1990s, when we encrypted things, we had tools like pgp, and
3:27 pm
i'm guessing nobody here uses pgp. maybe a few. but how many folks in the audience use whatsapp? and how many people use apple imessage or facetime? smaller numbers here, but overall, worldwide, we're talking about several billion people. so these are very, very significant differences, and this issue has gotten more interesting. now, the result of this, starting in 2013-2014, was the beginning of something that the fbi, at least here in the u.s., called the going dark debate; some people call it going spotty -- the british prefer going spotty. basically it's the question of whether law enforcement should have access to these communications, and while we all know a little bit about the background, there are two things i want to highlight. the first is that the debate actually covers very different technologies. most people are familiar with the debate as far as it's kind
3:28 pm
of the biggest place where it exploded into the world, which was the apple versus fbi case, where we had basically an argument about an encrypted phone that potentially had information on it. this phone was encrypted, and the fbi could not get into it without the passcode; apple, unfortunately for the fbi, designed the phone in such a way that it could not be accessed without the passcode, or without apple collaborating and basically building tools to help the fbi decrypt it, which apple opposed. we had a big war between the two organizations, which ended when the fbi was able to hack the phone by themselves using assistance from a contractor. but this debate has not always been about phones. in practice there are two different sides to this debate, and technically they're different. we have device encryption, which is basically default on all
3:29 pm
phones. we also have a much bigger, more complicated area of text messages and group phone calls and video calls. all facetime calls are encrypted, and they support group encrypted phone calls of up to 32 parties. whatsapp and signal and other apps offer phone calling, and those are end-to-end encrypted. these are very different problems, and the difference is super boring and super important. the reason is this: when i encrypt my phone, i'm encrypting to myself. when i send you an encrypted text message or make a phone call, i'm encrypting to you, and the problem -- one of the oldest problems in encryption and computer security -- the problem with encrypting to you has nothing to do with encryption, nothing to do with the actual algorithms we use to do this. the problem has to do with how i know you are you. we can define this as the key distribution problem if you want to get boring, but it's really a
3:30 pm
usable security problem. it's a question of how i know that when i encrypt to this big number which claims to be your key, it actually belongs to you and not somebody else who is pretending to be you. how do i know, when i have a group call with five people, that there aren't actually six people on it, and that the sixth person should not be on the call? two or three weeks ago an interesting public proposal was put forward by two members of gchq. if you're not familiar with gchq, they are the u.k.'s version of the nsa. the nsa focuses on one thing, which is collecting intelligence abroad, whereas gchq also does technical work
3:31 pm
for the equivalent of the fbi in the uk. the two people who were involved in this public proposal are also interesting: you don't see anybody in the u.s. intelligence community giving public speeches about how we should break encryption. in fact, it's been hard to get any information out of the u.s. government about how they want their proposals implemented, but the gchq folks have ideas, and they're going on a charm offensive across the country trying to convince people their ideas are strong. two other things i want to mention: one of the authors is a technical director at the national cyber security centre, which is part of gchq; the other, whose picture i was not able to find anywhere on the internet, is the head of cryptanalysis for gchq, and if you saw the imitation game, this is the person who
3:32 pm
inherited alan turing's job, the person who breaks all the crypto. so these are heavy hitters in the crypto world, not people you would expect to be telling you how to do policy analysis. this is a very brief talk, and the proposal is extremely vague and extremely simple. the basic idea -- i've heard this proposed as part of talks, and they laid it out in a lawfare article -- is to target a part of the system called the identity management system. when we talk about people making encrypted phone calls, we tend to think about one person talking to another person; maybe there's a group call, but these are the only parties, and they're end-to-end encrypted. this is not what happens in these systems. there's always a third party, the provider, that manages the process of making sure these two individuals can get together
3:33 pm
and talk. when you open up an apple imessage conversation, you press new conversation and type in the person's name, and what happens invisibly is that your phone sends a message to apple's servers that says, this phone number -- give me the key, and you get the key back from apple. is it the right key? maybe; it almost certainly is, if you trust apple, but maybe it's not. and you don't just get one key. if that person has 20 devices -- i have kids, and we have a lot of devices attached to my account -- you're getting the whole collection of keys, one for every device that person has. so you could get a huge pile of these, and you don't know which ones are real, and if there's an extra key, you're not going to know about it, because apple does not expose this information to the user.
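to make that concrete, here is a minimal python sketch of the lookup just described, with invented names throughout (KeyDirectory, fetch_device_keys) rather than apple's actual protocol. the point is that the client encrypts to every key the server hands back, so one extra key is indistinguishable from a legitimate new device.

    # sketch of an identity-management lookup; invented names, not
    # apple's actual protocol.
    class KeyDirectory:
        """server-side map from a user id to that user's device keys."""
        def __init__(self):
            self.devices = {}  # user id -> list of device public keys

        def fetch_device_keys(self, user_id):
            # the client cannot tell whether this list is honest: a
            # coerced provider could quietly append one extra key here
            return self.devices.get(user_id, [])

    def send_message(directory, recipient, plaintext, encrypt):
        # encrypt one copy of the message to every device key the
        # directory claims belongs to the recipient
        return [encrypt(key, plaintext)
                for key in directory.fetch_device_keys(recipient)]

    # toy usage with a do-nothing cipher, just to exercise the flow
    d = KeyDirectory()
    d.devices["alice"] = ["phone-key", "laptop-key"]  # alice's devices
    d.devices["alice"].append("ghost-key")            # silently added
    print(len(send_message(d, "alice", b"hi", lambda k, m: (k, m))))  # 3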
3:34 pm
the beautiful thing about these systems is that you type in a name and you start texting. the keys come down to you, and the same thing has to happen for the other party so they can verify the information and send messages back to you. this part of the system -- in apple's world it has a whole technical name, but we generally call it the identity management system. if you trust it to work, it's a beautiful system; if you don't trust it, bad things can happen. i want to show you another aspect of this. imagine we are dealing with group calling. the interesting thing in apple's case is that group calling and group chat are what's happening every time you make a call, because like i said, if i have 20 devices attached to my account, then even if i'm doing a one-on-one chat, i'm still doing a group chat across all my devices. and if you use group chats or group sms, the same thing is happening for three people, and hopefully, if your system works well, you'll know who is part of your group imessage or your group facetime, because the phone
3:35 pm
will tell you who has joined the call, and it's apple's job to make sure that happens -- it's apple's job to make sure the application tells you who is in the group chat. all of those things are features we rely on every time. which brings us to the proposal. the proposal is basically, somehow, to break the trust relationship between apple and its users: to ask apple to add extra individuals into the calls. in the case of apple imessage, that would be the equivalent of adding a new device onto your account. if we're talking about whatsapp or signal, that would be the equivalent of adding a fourth participant into a three-way call. it's a very simple thing to do. it has significant implications, though, and one of them is that we are already building systems that are supposed to
3:36 pm
defend against this. if you're doing a group chat with two or three other people and another person joins your call -- this can be done; it's possible for the whatsapp servers to say, have this additional person join -- your phone is supposed to be truthful and tell you it's happening. a warning message comes up, or you can press a button and see who's in the call. in order for gchq's proposal to work, a number of those mechanisms would have to be disabled. your phone would have to suppress the warnings that tell you a new person joined, and it would have to suppress at least one person from the list, so law enforcement could be there. how do you put in that new code? that code has to exist on the provider's side, in the app. you can't do it at the point where you're targeting a criminal; you have to put this code into every person's device, so that there is a way to send and add a user invisibly.
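again a minimal sketch, with invented names and not any real messenger's code, of the membership warning clients keep today and the kind of suppression hook the proposal would effectively require.

    # sketch of client-side roster tracking; invented names, not any
    # real messenger's code.
    class GroupChat:
        def __init__(self, members):
            self.known_members = set(members)

        def on_roster_update(self, roster, hidden=frozenset()):
            # today: warn the user about anyone the server added
            for member in set(roster) - self.known_members:
                if member not in hidden:
                    print(f"warning: {member} joined the conversation")
            self.known_members = set(roster)

    # the ghost proposal amounts to shipping a nonempty `hidden` set to
    # every user's device: the extra participant still gets the message
    # keys but never appears in the list or triggers the warning.
    chat = GroupChat(["alice", "bob"])
    chat.on_roster_update(["alice", "bob", "ghost"], hidden={"ghost"})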
3:37 pm
this means you are essentially telling apple, whatsapp and everybody else to remove security features from their phones, and to do it across their entire customer base. i don't see apple and facebook doing this voluntarily, so this is going to require some sort of legal measure, and there is such a legal measure. i'm not an expert in uk law, but there is this thing called a technical capability notice, which gchq can use to force limited changes to infrastructure, telecom infrastructure, so there is at least a hypothetical way to do this using these technical capability notices. australia passed a law which allows even broader capabilities, which might include this as well. so there is a technique for doing this.
3:38 pm
the second problem is that what we're doing is finding a hole in the security, and we're trying to drive through that hole, and i respect gchq for at least coming up with a technical proposal. the problem here is that once you identify a hole in a security system, there's an instinct for the people who designed that system to fix the hole, and once you make a presentation that says we want to drive through this hole, you don't even have to drive through it: saying you plan to do it is notice for apple and company to start fixing it. how do you prevent companies from repairing something they're going to see as a flaw? it's difficult to say. one of two things happens. one is that companies like apple and facebook shut the hole and this proposal disappears. unfortunately, the other thing that could happen is gchq could come in and say: you can't fix it, you can't
3:39 pm
deploy any fix, whether intentionally designed to stop this attack or not, that would prevent us from exploiting this vulnerability; the system must remain permanently vulnerable, because we rely on exploiting this feature. this has the weird property of essentially making gchq and various law enforcement agencies into the technical system designers for a huge class of communication systems. there would essentially be a freeze on the improvements that allow us to protect the way these systems work, and this is a big deal, because key distribution doesn't just affect consumers sending text messages. these are two of the areas, key distribution and security design, that are the unsolved problems in our field. they affect government communications and every other communications system, so if this is allowed to go through, we could essentially set the development of these systems back by a decade or more. so this is a problem
3:40 pm
where the only way this proposal could work is essentially to stop the development of these systems. it's a very simple proposal. the last thing i want to say about this is that despite the fact that i'm criticizing this proposal, i'm extremely happy with gchq for giving us something concrete to talk about, because through this entire debate of the last several years, the overall approach to this problem has been to yell at industry and accuse academia of not being able to come up with ways to assist law enforcement, when our argument has been that adding these surveillance capabilities weakens the security of the system, and there's no way to get rid of that trade-off. having a concrete proposal on the board allows us to start doing that analysis, and we can put concrete results behind our claims that this is not secure. so that's it, thank you very much. [applause]
3:41 pm
>> [inaudible] >> it's entirely possible, because systems like apple's could be exploited right now, and it would be disastrous if we found out. it would be terrible for apple's business and for a lot of people's confidence, and i'm hopeful. i assume that there have been attempts to make this happen, but they haven't been publicized. there are rumors the us government has tried to do this through secret legal process, but none of them have ever panned out, and one hopes that people are now taking steps to harden these things so that will never happen.
3:42 pm
>> i will note that one of the things i noticed in the snowden disclosures is that you can look back and find complaints about certain platforms: how close they are to getting access, or how we will have difficulty getting images, or we're having trouble with this. it's interesting who they don't complain about. it doesn't prove anything, but it's a pattern to look for. i want to give everyone 15 minutes to stretch their legs and use the restroom, and then i will encourage you strongly to return for our final panel, on surveillance and journalism. if you are watching from the office, let yourself out. at 5:30, we will lead a tour group to a fantastic
3:43 pm
exhibition, sites unseen, so now would be the time to try to hit up cato for that. i'll see everyone back in 15 minutes. >> [inaudible] >> [inaudible]
3:44 pm
>> as i mentioned in my introductory remarks, there are so many fascinating issues surrounding surveillance and privacy technologies that if we were to cover them all, as you saw, this would be a conference that would last approximately two weeks, and even i have a limited capacity to focus on these for that long. so for the last couple of years we have been inviting activists and experts to present short flash talks that focus tightly on a single subject and present work or
3:45 pm
analysis that they've been doing, in a way that allows us to get a sense of the range of hard questions we face as citizens and policymakers. our morning block of flash talks covers issues from facial recognition to social media surveillance to the global war on encryption. i'll quickly introduce the speakers; if you want the full biographies, look for the conference page on cato.org, where you'll find links on the speakers' names to more extensive biographies. we will begin with a talk on legislation recently passed in australia that seeks to mandate law enforcement access to encrypted software and encrypted messaging tools. it's the first of its kind, but it may well be a model for
3:46 pm
elsewhere, and for that i want to invite, from new america, sharon bradford franklin. >> thank you. i'm sharon bradford franklin, of new america's open technology institute, and if you had told me a year ago that i would be here today talking to you about australia, i would have thought you were joking, but i'm glad to have the opportunity to speak with you today about the law just passed earlier this month in australia and how it could allow the united states to look down under for an encryption backdoor. got to get the clicker working. there we go. so, for those of you who may not already be
3:47 pm
familiar with the long-standing encryption debate, it is a battle often framed as pitting privacy against security. for years, the us justice department and the federal bureau of investigation have been arguing that they are going dark due to the increasing use of encryption. they complain that they can no longer access many electronic communications, even when they have a valid court order. many companies now have encryption by default in their products and services, and they simply do not have access to their users' encrypted communications. the justice department and fbi want to require that tech companies guarantee that the government has exceptional access, or what they now call the use of so-called responsible encryption, so that they will always be able to access these encrypted messages. otherwise, they say, they are hampered in their ability to keep americans safe from
3:48 pm
terrorists and other criminals. but security researchers, tech companies and privacy advocates have pointed out that an encryption backdoor for the government is a backdoor that could be exploited by others. there is no way to guarantee that only the us government would be able to use any such tool. rather, this amounts to deliberately building vulnerabilities into products and services, and undermining security for all: it would harm everyone's cyber security, and it would create new risks that we will all be victims of criminal activity. in addition, as we explored in a half-day forum that oti hosted, encryption protects economic security and the personal safety and freedom of journalists and of individuals in vulnerable communities, including victims of domestic violence. this debate, which has been going on for years, has now gone global, with a quick flareup down under in australia. this past august, the australian government
3:49 pm
released what it called an exposure draft of its telecommunications and other legislation amendment (assistance and access) bill 2018. unlike the u.s. congress, which takes months and months, or more likely years, before it passes anything, the australian parliament managed to wrap up its consideration in a mere four months. following a public comment period, a slightly modified version of the bill was introduced in parliament and referred to the parliamentary joint committee on intelligence and security, or pjcis, which opened a new public comment period. my colleagues and i at the open technology institute organized an international coalition of civil society organizations, companies and trade associations, and we filed three rounds of public comments on the bill outlining our concerns. the committee held a series of hearings, and then, just at the beginning of last week, the pjcis issued a
3:50 pm
report recommending passage of the bill with certain amendments incorporated. early in the morning just last thursday, december 6, parliament released an updated version of the bill including 173 amendments that no one had ever seen before, but by the end of the day, the australian parliament had passed the bill into law. so what does the australian law do? as one australian commentator put it, the combined stupidity and cowardice of the coalition and labor mean that any it products, hardware or software, made in australia will now be too risky to use for anyone concerned about cyber security. we're focusing here on schedule one of the australian law, which is the part designed to undermine the safeguards of encryption. there are other sections of the law that pose additional privacy threats, such as those relating to
3:51 pm
government hacking, but we're focusing on schedule one. the law includes what appears to be an encouraging statement that purports to prohibit the government from demanding the creation of an encryption backdoor, and i have it on the slide here. section 317zg says the government may not request or require communications providers to implement or build a systemic weakness or vulnerability, and also that the government must not prevent communications providers from rectifying a systemic weakness or vulnerability. however, the bill grants new authorities that undermine this promise. specifically, the law creates three new and powerful tools for the australian government: technical assistance requests, or tars; technical assistance notices, or tans; and technical
3:52 pm
capability notices, or tcns. the requests are supposed to be voluntary, whereas the notices are mandatory, and the difference between the tan and the tcn depends on which government official is authorized to issue the notice. all of these authorize the australian government to request or demand any of the, quote, listed acts or things. that's a long list in the bill, and it includes things like removing one or more forms of electronic protection that are or were applied by, or on behalf of, the provider, and it also includes modifying, or facilitating the modification of, any of the characteristics of a service provided by the designated communications provider. in short, these are powers to demand that companies weaken the security features of their products. for example, the australian government could now make the
3:53 pm
same demand of apple that the fbi made in the 2016 san bernardino shooter case: that it build a new operating system to circumvent iphone security features. as apple explained in the san bernardino case, building the requested software would have made the technique widely available, threatening the cyber security of other users. as we know, in the lawsuit here in the us, the united states government argued that under the somewhat obscure all writs act, which dates back to 1789, it was permitted to make this demand of apple, but apple, supported by other companies and privacy advocates, argued that the demand was unconstitutional. the justice department ultimately withdrew its demand before the court could resolve the question, because the fbi was able to pay an outside vendor to hack into the phone. but in australia, they now have the specific authority to make these demands.
3:54 pm
another worrisome scenario is that australia may seek to use this authority in the same way the united kingdom is looking to use its new powers. >> just last month, ian levy and crispin robinson of gchq, which is essentially the uk's nsa, put out a proposal on lawfare. under this proposal, tech companies would be asked or required to add gchq as a silent participant to an encrypted chat, and the company would suppress the notification to the user. they argue that you don't even have to touch encryption to add gchq as a ghost user inside the encrypted chat. >> there are several other threats posed by the new australian law's approach to encryption. in our coalition comments, in addition to explaining the breadth of the powers created by the bill, we addressed three other key concerns.
3:55 pm
first, the law lacks any requirement of prior independent review or adequate oversight. many features of australia's new law, such as its authorization of technical capability notices, were modeled on the uk's investigatory powers act of 2016. the uk law also raises digital security and human rights concerns, but the uk act does require that judicial commissioners review and approve proposed technical capability notices before they may be issued. although we still have questions about the adequacy and independence of this review under the uk law, australia's authorities pose even greater threats to security and individual rights, because there is no provision requiring any type of prior review, let alone independent review. in addition, australia has no bill of rights. so while
3:56 pm
there are procedures by which tech companies may challenge a government request or order, these challenges will be more difficult: companies will not have the same legal arguments available to them, based on protecting individual rights, as they would in countries like the uk and the us. second, while the law requires statistical transparency reporting by the government and permits statistical transparency reporting by tech companies, it also includes nondisclosure requirements whenever the government issues a request or notice to a tech company. violation of these nondisclosure requirements is a criminal offense punishable by five years in prison, and there are no limits on the duration of these gag orders, such as we have here in the us for national security letters, for when the reason for the confidentiality no longer exists. third, the definition of covered designated communications provider is
3:57 pm
overbroad. it includes anyone who provides an electronic service that has one or more end-users in australia. this means any company doing business in australia, or anyone providing electronic services in australia, is subject to government demands that they weaken the security features of their products. this is bad for australia, but what does it mean for us here in the united states? australia's legislation appears to be part of a coordinated effort by the five eyes alliance. for those of you who may not be familiar with that term, the five eyes is an intelligence alliance comprising australia, canada, new zealand, the united kingdom and the united states that dates back to world war ii. since 2013, the five nations have convened the five country ministerial, an annual meeting on strategy and information sharing on law enforcement and national security. for the past two years, the five nations have focused on strategies and policies to
3:58 pm
weaken encryption. just this past august, august 2018, the five countries released a statement of principles on access to evidence and encryption, and that statement warns that if these governments continue to, quote, encounter impediments in their efforts to access encrypted communications, they may pursue legislative mandates for encryption backdoors. the same month that statement came out, australia released the exposure draft of its encryption bill. >> so now, australia's law provides the united states and other governments with a backdoor to an encryption backdoor. australia now has the authority to compel providers to create encryption backdoors, and once providers are forced to build weaknesses into their products, other governments can exploit those weaknesses. i've already mentioned the example of apple and the fbi. now, if australia issued a technical capability notice to compel apple to build a new operating system to
3:59 pm
circumvent iphone security features, guess what? that is exactly what the fbi demanded in the san bernardino case, and once apple complied and built the system, it could no longer argue that it lacked the capacity to turn over data to the government in similar cases. >> i am also, for my sins, a former journalist, so i have taken it upon myself to be the moderator of our final panel of the surveillance conference. i am very pleased to have with me, to talk about the effects of surveillance on journalism and journalists' responses, spencer ackerman, who is now national security correspondent at the daily beast. he previously worked at wired and was part of the pulitzer prize-winning team at the guardian that reported on the snowden disclosures.
4:00 pm
to his left, your right, is olivia martin, a digital security trainer at the freedom of the press foundation; rob mahoney, who is the deputy executive director of the committee to protect journalists; and jack gillum, who is a data-focused reporter for propublica and has worked for the washington post and the associated press. he is that rarest of rare unicorns, a political journalist with a serious technology background of some sort. and i want to begin by noting that, i think partly because of the snowden revelations, but also because we are seeing a pattern of increased government willingness here in the united states to target reporters in leak investigations, and in part because changing technology has made it easier to collect large volumes of data,
4:01 pm
there is, i think, a growing awareness of the need for communications security to protect sources, but also a growing realization of government willingness to investigate journalists and make them part of an investigation. earlier this year, we learned that ali watkins, a reporter at the new york times, was targeted as part of a leak investigation into her partner of some years, who had worked on the senate intelligence committee staff as a security professional and was suspected of leaking information to reporters. as part of the investigation, prosecutors ended up obtaining several years' worth of her email, going back to her time in college. and we know that under the previous administration, not
4:02 pm
all troubling things start with donald trump, though at times it can seem that way in reporting, the justice department looked at the phone records of a large news organization, the associated press, in search of one particular leaker, but in the process, i think, exposed in a disturbing way the entire organization's, or at least a big chunk of the organization's, pattern of newsgathering communication. to try to get a handle on this and how reporters are responding, i think it helps to get a kind of 30,000-foot view from around the world, and the committee to protect journalists does excellent work trying to track threats of all kinds against reporters around the world, in particular by governments. so i was hoping rob could provide some context for us, talking about what you're seeing now, here and elsewhere,
4:03 pm
in terms of journalists as targets of government surveillance. >> it's a growing problem, and i'd like to take the global view, because we've got journalists here who can talk a little bit more about the united states. but first of all, let me say that outside of a very few sophisticated journalists, most journalists are incredibly lax about their own digital security and their information, and they take tremendous risks that can expose their sources. one of the things we want to do is make journalists aware of just what risks they're taking. we have cases of journalists who have been tracked and jailed because they were being surveilled, and we are looking with alarm at the spyware that is now out there, pegasus in particular.
4:04 pm
we've been looking at it being used against journalists in mexico. one of the journalists who was killed there, javier valdez, worked for an outfit called riodoce, and the day after he was killed, his colleagues received text messages with links that would have downloaded the spyware. whether there was a connection or not, i don't know. we also had, in the jamal khashoggi case, the fact that he was in touch with a saudi dissident in canada whose communications equipment had been infected. so there are two examples of journalists who were killed and who we believe were being surveilled. this is a very, very serious problem. we come out with an annual census of attacks on the press, and this year alone, 51 journalists have been killed; more importantly, 30 of those were murdered, and we find in a lot of cases of murdered
4:05 pm
journalists that they were tracked or surveilled or in other ways monitored, their movements broadcast, so we are looking at ways of partnering with people to raise awareness. it is a problem, and i'm only talking about state actors; we don't know what nonstate actors are using. but we think that the citizen lab at the university of toronto has done a great service in the last year in exposing this israeli-originated spyware. >> for those who are not familiar, can you say a word about it? >> it's malware that can get onto your devices: onto your phone, onto your computer. it spreads through infected text messages or through email, usually through a link that looks legitimate, or through what looks like a real website, or through messages that people are sending to
4:06 pm
journalists, messages that are specifically targeted to journalists. you think it's from a source, you think it's an invitation, and you download this malware and you don't see it; you don't know that it's there. researchers believe this particular piece of software has been found in 45 countries, and what i'm saying is that most people don't know that it's out there. they don't know how to mitigate it. they don't know how to prevent themselves from being exposed, because once it's on your device, you don't know. it can record you and access all your contacts. it knows where you are. if you're the chinese government or the saudi arabian government, this is powerful, because if you want to target journalists, you can very quickly draw a map of all their networks and who their sources are, and come down on them, and
4:07 pm
these are risks that journalists take in the middle east today, and they have very little awareness of it. >> that's a disturbing backdrop, and the consequences are even more disturbing. so i'm curious whether, for working reporters, we are seeing a trend in a good direction on that front. spencer, you were on the national security beat well before the snowden disclosures, so you were even then conscious of the possibility of fairly sophisticated adversaries that might have an interest in your communications. but how do you get a sense of where the profession as a whole is on this? and in particular, if
4:08 pm
you take the snowden case in particular, as a sort of extreme case where there's certainty that the most sophisticated adversary is targeting you, how do you start thinking about that and changing your practices as a result? >> it's good, when thinking about security vulnerabilities, to reason backward from what you're trying to do. it's always a good approach for expanding your risk awareness. you naturally want to protect the integrity of your communications and, overwhelmingly, to protect the identity and location and so forth of your source. from there, i don't want to get into specifics, that doesn't seem to be a great idea, but a really good rule of thumb is the way we came to operate. i've joked with you many times about this, that
4:09 pm
when we were doing this sort of reporting at the guardian, when they had the snowden information, we were operating more like an intelligence agency. we kept tight compartmentalization around whom we shared information with and how. we kept a lot of it out of the awareness of the general newsroom. we had different internal systems to attempt to minimize exposure, and we were always trying to operate with the presumption in mind that someone was trying to compromise what it was we were doing. in general, then, i think in no small measure because the snowden leak became such a worldwide story, it had the salutary effect of increasing risk awareness on two tracks: on the part of journalists
4:10 pm
on one track, and sources or potential sources on the other, where basically more and more people started being, for lack of a better term, optimistically paranoid. you just sort of took it as a foundational aspect of your digital hygiene: how you went about things, taking reasonable but cumbersome steps, as circumstances required, to mitigate risk. it's been encouraging to see that journalists outside of national security, outside of national news, outside of politics, are thinking more in depth about the degree to which they have to worry about not just disclosing sources but exposing themselves, not just communications and messages but a sort of holistic sense of how you operate online. jack and i were joking the other day that ideally, people covering agriculture
4:11 pm
would be, and i'm sure they are, though i don't know anyone covering agriculture, attentive to these sorts of risks. most importantly, there is that second track, where the people you communicate with are hyper-aware of not just risk but risk mitigation. >> another thing that brings out one of the problems: you talk about journalists becoming aware, and one of the things that strikes me is the extent to which, not everywhere, and it's getting better, but it's still apt to be the case that security is something individual journalists are left to think about, in a way that leaves it open for one reporter and the national security reporter beside them to have different practices, in a way that you wouldn't expect. the newsroom does not, as a rule, say: okay, reporters, you
4:12 pm
should find a good computer, set it up, install word processing software and figure out what email client you want to use; there is a network policy around providing that capability to the reporter. we're seeing some of that, but what is your sense of whether this is something reporters are very often left to muddle through themselves? >> i know you can speak to that as well, feel free to jump in, i'm not just asking the first person i talked to. >> go ahead. >> the realities of doing pressurized reporting are such that, yes, the answer is we're all just going to have to muddle through. i've never worked at a place where every holistic concern i'd like to see about journalistic security practices would be
4:13 pm
thoroughly institutionalized. the good thing is, you can leverage interest among your colleagues. a colleague at the guardian and i would say: if you'd like to learn more about how to go about securing your information, just meet us in the conference room at such and such a time, and we're going to basically spend five minutes over lunch talking about some practical tips, in particular for journalists who frequently travel across international borders, as guardian journalists tended to do. but a really good idea is also to turn to people like olivia, who train newsrooms, the daily beast in particular. >> security is hard, and it's hard on many levels. we can take it on a psychosocial level:
4:14 pm
journalists are overworked, they are tired, and they don't necessarily have the time to learn a full new skill set. and then you can take it on the technical level. it's not just that technology is hard, or abstract and alienating to people who are used to working directly with sources on a face-to-face level, but also that there is no panacea. there is no single magic formula that i can bring into the daily beast or propublica, spend a business day with everyone, and then leave and feel great that everyone has exactly the tools they need to take on any sort of project in any geographical location with any source. that just doesn't exist. so you see a real
4:15 pm
knowledge gap, where on your day-to-day your security knowledge might just be at a baseline, and then you take on a new project. you're working with a highly sensitive source, and if you don't have prior knowledge of the more sophisticated source protection techniques, then you face the first contact problem, where your source has reached out to you, and perhaps reached out to you on their employer's device, so they have already shot themselves in the foot. so in order to inculcate this culture of security, it's got to be something that permeates all the verticals of the news organization. and unfortunately, the onus is then on journalists to
4:16 pm
be available everywhere and be very public about the means by which a source can contact them in a sensitive manner. you see really incredible pages, like the one on the new york times site, where there are many different methods by which you can contact a journalist there, and the protection recommendations are already laid out. this is a sort of landscape that we're all responding to as it happens, and we have to do a better job of being proactive about mitigating this first contact conundrum. that's why at freedom of the press foundation, our trainers take a bespoke approach to dealing with every prospective client. before we even talk about the logistics of when we're going to show up at your news organization or work
4:17 pm
one-on-one with a given freelancer, we have a discussion about each individual's psychosocial and technical hindrances and skills, so that we can craft a digital security curriculum that is going to be adept at preventing these really cataclysmic problems. >> you're reminding me of a conversation i've had with journalist friends on a couple of occasions, along the lines of: yes, i'm on signal. and they'll say, oh yes, i only use it for sources, or for sensitive communications with sources. and i say: oh no, that's not a great idea, because you're already volunteering data. if that's the only thing you're using encryption for, then you're giving away data
4:18 pm
about which of your communications are the sensitive ones. >> right. you can use signal, which is an end-to-end encrypted messaging app that also supports encrypted calls. end-to-end encrypted means that the only people who can view a message, or hear a call, in its actual plaintext form are the endpoints, the sender and the intended recipient of each of these messages, point a and point b. so this is a really wonderful tool, and whatsapp works in the same manner, and i'm sure we have some whatsapp users in the room even if we don't have signal users. but there are actually a bunch of ways that you can still shoot yourself in the foot and expose your sources.
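for the mechanics behind that claim, here is a minimal sketch of the endpoint-only property using the pynacl library (an assumption on our part; signal's actual protocol layers a double ratchet, identity verification and more on top of primitives like these):

```python
# minimal sketch of end-to-end encryption with pynacl (pip install pynacl).
# illustrates the endpoint-only property; not signal's actual protocol.
from nacl.public import PrivateKey, Box

# each endpoint generates its own keypair; private keys never leave the device
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# alice encrypts using her private key and bob's public key
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at the usual place")

# the relaying server sees only random-looking bytes
assert b"usual place" not in ciphertext

# only bob's private key (plus alice's public key) can open it
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at the usual place'
```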
4:19 pm
but actually using these tools at a pro level matters. you can't just make the assumption that this is end-to-end encrypted, so i'm good. that's not the case, and this also brings up the pegasus model: if someone who is trying to surveil you has taken over your phone, the endpoint, where everything is sitting in plaintext, then you have actually given up your end-to-end encrypted communications. so there's a lot of actual expert-level knowledge in how you use these tools. for example, turning on disappearing messages, so your phone actually forces the deletion of messages after a certain amount of time: you're enforcing your own data retention policy.
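a disappearing-message timer is really just a retention policy the client enforces for you. a toy sketch of the idea, with a hypothetical message store rather than signal's implementation:

```python
# toy sketch of a disappearing-messages retention policy; a hypothetical
# client-side store, not signal's implementation.
import time

RETENTION_SECONDS = 7 * 24 * 3600  # say messages live for one week

messages: list[tuple[float, str]] = []  # (received_at, text)

def receive(text: str) -> None:
    messages.append((time.time(), text))

def purge_expired(now: float) -> None:
    """drop anything older than the retention window, so a seized or
    compromised device simply has less history to give up."""
    cutoff = now - RETENTION_SECONDS
    messages[:] = [(t, m) for (t, m) in messages if t >= cutoff]

receive("older tip")
purge_expired(now=time.time() + RETENTION_SECONDS + 1)
print(messages)  # [] ... the old message is gone
```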
4:20 pm
that's really useful, but not all journalists know about it, and if you're not incorporating it into your day-to-day, it can be difficult to use these tools and their granular security processes when appropriate. >> you mentioned the first contact problem. one of the tools that has been widely adopted is securedrop, and it's something the daily beast and buzzfeed use. you're on the development side rather than the user side; do you want to explain where that fits as a tool? you've got someone who wants to come forward, but they don't already have a relationship with a reporter at the paper, so you'd like to establish a secure means of communication, but you need a way to get in contact that first time, at least in the case where you've got someone coming in cold. how does securedrop address
4:21 pm
that? >> i'd love to introduce securedrop and then have some of its users talk about how it works in a journalistic sense. securedrop is an open source tool that acts essentially as a communications platform between a journalist or a news organization and a potential source. it promises anonymity in a way that something like signal cannot: sources connect over the tor network, and it uses various levels of encryption to do this. and, interestingly enough, we first started seeing news organizations adopt securedrop shortly after, or around the time of, the snowden
4:22 pm
revelations. at the time, the developers at freedom of the press foundation and the rest of our team had to convince news organizations to adopt securedrop, because people were nervous; there was this chilling effect from the pervasiveness of the nsa and our lack of agency over it. so we had to convince some of the early adopters. but after the 2016 election, my god, our phone was ringing off the hook. there was a new appetite to start adopting what is to some extent the industry standard among whistleblowing platforms. and so we now have it deployed in 75 news organizations around the world, and this has been in a matter of, you know, six years?
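as context for how sources reach those instances: securedrop's source interface is served as a tor onion service, so neither the hostname nor the visit appears in ordinary dns or network logs. a minimal sketch of fetching an onion address through a local tor client; it assumes a tor daemon listening on port 9050 and the requests[socks] extra installed, and the onion address is a made-up placeholder. real sources should follow securedrop.org's guidance and use tor browser.

```python
# minimal sketch: reaching an onion service through a local tor client.
# assumes `pip install requests[socks]` and tor running on 127.0.0.1:9050.
# the onion address below is a made-up placeholder, not a real instance.
import requests

proxies = {
    # socks5h (not socks5) makes the hostname resolve inside tor,
    # so the lookup never touches the public dns your network sees
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get(
    "http://examplesecuredropplaceholder.onion/",
    proxies=proxies,
    timeout=60,
)
print(resp.status_code)
```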
4:23 pm
>> so who actually makes use of securedrop? has anyone on the user side, obviously not in a way that gives away security details, had experiences with it? >> can you hear me okay? i've experienced a couple of different levels of this. downstairs, i was joking that i was the one who set up the securedrop server when i was at the associated press. at one point i had all these computer parts on my boss's desk in the bureau, and people were going: what are you doing?
4:24 pm
america like somebody in a government office who saw something horrible or some sort of malfeasance and that's how they use community so i think was a very eye-opening way to realize this is just not for the post snowden world where you have information they want to give you a washington reporter. there's a lot of people who want to get information and i wanted to step back and say and maybe throw some it departments under the bus with respect to the organization by work for. journalism at least when i went to journalism school and istarted about 11 or 12 years ago , every news organizations, i think after snowden changed a little bit but particularly like a metro newspaper and a local city, and really think about this sort of security problem. it can remotely come on their radar. compare that with the financial crisis among daily newspapers, you're going to
4:25 pm
get an it department that has a shoestring budget, that is going to have the lowest-quality dell laptop, or not even dell, whatever pc you can get for $200, that is going to be outdated by three major versions of windows and doesn't have great updates. true story; it happened once. so they're not going to think, maybe we should have source protection. they're like: here's the thing where you can type your story and publish it in the paper. that is the way we were taught as reporters. source protection never entered my, even that phrase never entered my journalistic lexicon. i think the snowden revelations opened that up, and now tools like signal are easier, it's an easy, free download, and that helps, but getting into the mindset is half the battle: understanding maybe not what to do, but certainly what not to do. >> as an example of security practice,
4:26 pm
one thing i noticed in a fair number of very early implementations of securedrop was that instead of having the address be newspaper.com/securedrop, they would have securedrop.newspaper.com, as a way to sort of demarcate it. the problem with that, of course, is that if you're making an encrypted connection to a website and connecting from your workplace, then if you just connect to newspaper.com, it may not be clear what page you went to; but if you connect to securedrop.newspaper.com, there's a record in the dns, the phone-book-lookup functionality of the internet, showing that someone in the workplace was looking for the secure contact site, not just visiting the newspaper.
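to make that leak concrete: the hostname lookup happens in the clear before any tls handshake begins, so the full subdomain is visible to whoever runs or watches the network. a quick sketch with python's standard library, using placeholder hostnames:

```python
# sketch: each lookup below emits a plaintext dns query before any
# encryption starts, so a workplace network sees the full hostname.
import socket

# visiting the main site: an observer learns only "newspaper.com"
socket.getaddrinfo("newspaper.com", 443)

# visiting a dedicated subdomain: an observer learns the visitor was
# specifically after the secure tip line, which defeats the point
socket.getaddrinfo("securedrop.newspaper.com", 443)

# a path on the same host (newspaper.com/securedrop) travels inside the
# encrypted https request instead, so dns reveals nothing extra
```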
4:27 pm
and to some extent, that defeats the purpose of having an anonymous connection, if you think people might be monitoring it. is that something you anticipated or noticed, or found yourselves trying to talk people down from? >> i can answer that in a multitude of ways. first i will say that if you're interested in the landing page requirements for being listed on our public directory of securedrop instances, visit securedrop.org to see our public documentation, where we describe this problem exactly as you're describing it. this is a problem of metadata: you can triangulate information to identify a source without even having to
4:28 pm
talk to them, without even having to take over their computer or seize it physically and analyze it. you can know when a tip came in. let's say i'm someone who is analyzing the network traffic of a source who is visiting the securedrop of a newsorganization.com: with just that information, you can implicate a source. this is a new problem, a modern problem in source protection. so to be listed in our public directory as an endorsed securedrop instance, because we don't own any of them, these are all independent instances, you have to fulfill the requirements: a certain level
4:29 pm
of security has to be on the actual page that will direct your sources, because we are not only interested in protecting the news organization, we're interested in protecting the source. we take some of that on in our public documentation. >> we have a responsibility, because we are not talking about tech, we're talking about people. and it's interesting what you said: well, years ago, no one knew anyone was coming after you. i can assure you that if you are russian or chinese or vietnamese or working in the middle east, you would know, because you would know your colleagues, your friends, your family were being watched because of what you were writing. what you have outside of the united states is that they track the reporters, and when the reporters get too close, they
4:30 pm
threaten the reporters, or have them killed. or they might go after the sources: where there is legal protection for journalists, as in western europe, they go after the sources, not the reporters. that's why source protection is no longer something the reporter alone can guarantee. the obama administration used the espionage act more times than any administration before it. that is the avenue that so-called liberal western governments will use to shut down journalistic investigations, to block the exit of information from the government, or to lock in the secrets that they hold. so the threat is different depending on where you are in the world. >> that's the end run around the first amendment in the united states. >> ..
4:31 pm
the old understanding was that you, as a reporter, were under an obligation: you do not give up the source. and your giving up the source was, as a practical matter, almost the only way they would figure out who it was. now, you know, with technology, they don't need you to; they know you did not call them from home. and i think to some extent what is changing with the mindset is that the very serious sense of professional ethics and duty around not disclosing your sources has led to the sense that you have a certain obligation to assist. and one of the problems there is, a lot of sources are not snowden.
4:32 pm
they can't say: that's not a risk worth taking if i'm not confident i can do this securely. so somehow, i guess, as reporters and then from an institutional perspective, you've got to not just educate journalists but enable them to educate the source, and ideally pretty quickly. >> i have the luxury, in reporting on national security, that i don't really have to do that so much, so much as i have to make sure that both of us are using secure methods that we feel comfortable using and that work for us. there is never going to be anything that tops, you know, in-person contact and the tried and true pieces-of-paper filing system. on the first
4:33 pm
contact problem, one of the things that has made me somewhat more confident is that a lot of the demand-side security awareness has expanded, both in sophistication and in the number of people practicing it, just for, you know, general purposes. i don't know if there's a rigorous way of estimating this, but most tips that you get are from crazy people, particularly unsolicited tips. so it feels, in an almost perverse way, healthy for democracy that now even some of the crazies are submitting through secure channels. it seems more and more people are doing that. you know, in general, the expansion of mobile encryption
4:34 pm
tools and communication methods has also done a good deal of work in getting people habituated to using secure communications tools, where you can make contact and then have rigorous discussions about your methods of interaction and such, and then take the time to go beyond that and figure out how to go about securing the transfer of documents. in general, i take the point that we need to move to a mode of affirmatively making sure that sources are aware, and you can do that the more you tailor it to them. it is different from when you're talking to people inside the government, people who,
4:35 pm
you know, haven't been in a situation like this before, but it is definitely light-years ahead of where we were five years ago with encryption tools. >> so jack, you're not really on a national security beat, which is a different experience. do you find yourself trying to, i guess, convince sources that maybe they shouldn't be talking to you the way they are? >> funny you mention that, because a lot of times i think sources don't realize that they are a source. [laughter] often they think only of classic sources like ed snowden, and that is a pretty unique and rare event, an incredible one, where you have dozens of pages of highly classified material that create a transcendent series of stories. most often, in my experience, it is somebody who works at a
4:36 pm
place and is disgruntled, or they're essentially a whistleblower and they've been driven to share a document. they can be with a private company. right now i'm helping to rebuild, or expand upon, a team where we cover tech and civil rights issues. it is not just social media companies; we're looking at algorithms, big data, how police use surveillance technology and who is subject to it. we sought to span a wide range: it could be people in local government offices troubled by something, or people at facebook. i think what happens is that when there is that first contact problem, maybe they accidentally shot themselves in the foot by, you know, sending a message on linkedin or friending me on facebook or what have you. it is like: okay, we are going to start undoing that, and we will not communicate this way again. i frequently have had to tell people: please don't add me on
4:37 pm
linkedin, and don't follow me on twitter. it's the easy things, right? and then it was not uncommon for me or my coworkers to have a tip come in from a different mailbox, like a burner account, and to have them call me on a phone they're not used to. signal makes it easy, because you can say: hey, download this. it's easier; i like to say it's because it works on both android and iphone. so i say: this is easy, super easy, whatever. and that is a lot better than saying: this is a secure way, you won't get caught. i think signal makes it easier, but also, if they don't know better, i politely say along the way: hey, can you unfollow me on twitter? thanks. >> can i just take that back a step? it's not that you don't want to make sources scared, it's that you don't want to give them false confidence. a good rule of thumb is to operate
4:38 pm
like, you know, drug dealer rules are in effect. there was one drug dealer in baltimore who had an amazing series of precautions; for the purposes of this discussion, the point is that you are not going to get to absolute security, and you're certainly not going to relay practical information by, you know, taking that too literally. but the more you approach it, on both the source end and the journalistic end, with an attitude of risk mitigation, the more it will open up to you. you definitely don't want to, and i guess this would be a good way to put it, affirmatively lull people into thinking they're safe. you just sort of have to start out from a
4:39 pm
perspective that your communications, you know, are insecure from your end, and then see what steps you can take to mitigate the risks. >> i'll tell you, i would actually buy a book about the drug kingpins; if you are around to write the book, you've done pretty well. we were talking earlier about how the prospect of being hanged in the morning concentrates the mind wonderfully. that is the extreme end, and there are often very irreversible consequences for journalists in other parts of the world, many of whom are forced to figure this out quickly, and who find out whether it worked very fast. you talked about trying to spread
4:40 pm
knowledge to other parts of the world, where there maybe isn't the same knowledge of some of the technical tools journalists in the u.s. are using. i am wondering, one, what are the things you're trying to get adopted abroad, where there's more elevated risk? and also, are there practices in those places where they are, in a sense, ahead, because of the level of surveillance, that we can import back? >> without going into the detail of what you tell someone to make sure they are not tracked by the security services, i think you start from a very low threshold. one of the great wins that we had at a workshop was getting people to put a passcode on their phones. that is, these were people working under otherwise restrictive jurisdictions, and they didn't bother to take what we would call basic digital hygiene measures.
4:41 pm
it's like brushing your teeth, and they don't do it. they are not aware of some of the things that can happen. >> we got people to do that. we plead with them not to take their electronic devices across borders, because at that point you really are vulnerable to actual physical intrusion. when your devices are taken off you; we get reports, especially from journalists and non-us citizens coming to the u.s. from the middle east and from latin america, that at the border stop, border agents assert the right to take devices, and you don't know what is put back onto them. so we are trying to make people aware that this is a great risk to you and your contacts. >> so you're starting from a very low threshold. >> especially in countries where it isn't just the technology, and i'm
4:42 pm
thinking of countries like many latin american countries, pakistan, india. you open yourself up not only to surveillance, but also to impersonation down the line: someone takes over your twitter account, and it's very embarrassing to have things put out in your name if you have a reputation in a society where reputation is all-important. >> or it can be used to lure someone out. so these are some of the real awareness issues, and these are human behavior modifications. when i talk about getting people to adopt them, these are not temporary fixes; we want them to implement them for good. it has to be that prospect of hanging in the morning concentrating the mind. well, you look at chinese journalists, and journalists in hong kong and vietnam: they were incredibly
4:43 pm
savvy. they knew that if they put one foot wrong, they would get others arrested. they use old-fashioned tactics. they change their routines; they don't get into the habit of going the same way to work; they do not meet people in public places. they do not use the regular kinds of communications equipment journalists use; they use typewriters sometimes, very regressive ways of working. that is the world surrounding them. here, i am not saying a journalist in washington needs those tactics, but that is the temper of things now. >> and i still use typewriters. >> moving on. >> one of the things that we
4:44 pm
mentioned talking over coffee earlier is that we've been focusing on communications security, but you point out that there are a lot of other aspects, a lot of other attack surfaces for someone trying to figure out what a news organization is up to, other than the direct communication channel. so let me extract from you some of your complaints about what news organizations should be paying more attention to. we live in a society where it's hard to avoid these things, but i was thinking during this conversation: let's say i have a source meeting here in washington. they get on the metro, they walk down the street, and we meet at a restaurant; let's say we go dutch and each pay for our own separate meals. there is a series of cameras in the metro that got you on facial recognition.
4:45 pm
you used a metro card that's tied to your credit card. d.c. has a couple of surveillance cameras on the street. the restaurant now has a record of you and your source swiping credit cards at the same time, at the same establishment. then you go back, and every journalist loves to get their stuff reimbursed, so you use a third-party app like expensify, and it goes into the cloud with a third-party vendor, which is outside the control of the news organization. so i don't know how to get around all of that necessarily, though i have my own thoughts from another discussion, but i think it is important to be cognizant of how metadata follows you around, like a cloud, everywhere. we need to do our best when people meet up. maybe i drive to a source's
4:46 pm
house, or we simply go meet at an ice cream shop or a coffee shop down the street, and i will leave my cell phone at home, and i make sure someone i trust knows where i am. all these things. again, it's not foolproof, and i don't want to tell a source that this is a foolproof method of security, but i have to at least try, because these are not abstractions; these are people's lives at stake. even if it's not their physical lives, the threat is to their livelihood. forget about keeping a security clearance; how about working for a local company that has a very strong nda with rules about talking to reporters, and you're out the door? there goes the mortgage; there goes being able to feed your family. this is deadly serious stuff, even if you don't cover the most high-level national security beat. i think we have an obligation to at least try to protect our
4:47 pm
sources the best we can, even if we don't cover national security. >> can i just add one thing? that was an excellent overview. i'm embarrassing myself here, because i don't honestly know, for a company like uber or lyft, what they have to comply with, what their internal policies are, or what kind of transparency reporting they provide. but we should think of those companies not as transportation companies; i think travis kalanick made a point of saying that uber is a data company. that is what we are talking about. we need to think about them and expect not just different patterns of behavior, but expect those companies to do the things that companies
4:48 pm
like facebook and google do, which is impressive: publish information about how they go about honoring, or under some circumstances not honoring, law enforcement requests. ridesharing is, like the increasingly networked home devices an earlier panel discussed, a set of additional actors, things through which we can accidentally use or disclose information about sources. and what a tremendously rich data set: not just the interaction between your phone and your habits of travel, but the patterns that develop and the
4:49 pm
continuity that could be apparent. at the very least, you could have someone targeted by law enforcement through that; we have to assume that. i would think it would be a very foolish police department, a very foolish federal law enforcement agency, and a very foolish intelligence agency that did not see this as an incredibly valuable thing to go after. >> in terms of legal authorities, uber and lyft, for legal purposes, are i think pretty clearly service providers, with the app as the service and the driver providing the ride, so all of the legal process that can be served on facebook and google can be served just as easily on uber, in some cases arguably without some of the same protections. >> i don't want to be cynical, but i do think one of the things we've learned from the
4:50 pm
snowden documents is the degree to which intelligence agencies' legal guardians, the general counsel of an agency, will push the envelope as far as it possibly can and come up with arguments for why this or that data protection law does not apply to them, or how far they can go outside it. and these are agencies that will find a way to justify what they want to do. >> and these services are creating resources that didn't exist before. >> yes. twenty years ago, if you wanted to know physically where every washington post reporter was going, you would have needed a lot of people to do physical surveillance. ten years ago, maybe you had to get a whole lot of different court orders for cell phone records. but now you have, say, the washington post's corporate
4:51 pm
uber account, and i have a list of everywhere everyone wanted to go and everything they wanted to expense. beyond that, beside communications security, is that something you build into training? is that something you focus on? or is it more that, as i thought, there is so much more in the communications space, which may well make sense, but are those other aspects things that you focus on? >> i think what everyone is pinpointing is that your digital security as a journalist is just one component of a holistic security whole. i am a digital security trainer. i'm not a physical security trainer. i am not, you know, a psychologist who focuses on resilience. but there are these interconnected and inextricably linked aspects of your security that one should keep in mind as
4:52 pm
a practicing journalist. you have your psychosocial health, you have your digital security, you have your physical security, and now you have to have a lot of legal know-how as well. there is this knowledge that is required, really, to do your job in the most responsible way: to protect yourself, to protect the larger organization, to protect your sources. and so, something that we do in trainings is always to start off with a risk assessment exercise, because the idea is that every single individual has a different risk level depending on where they are and what project they're doing. this is true for every individual as the day goes by and as a career progresses or digresses. and so, in order to actually
4:53 pm
work through this best decision-making process, the idea is that you have to go through these decisions in an informed manner. and so, there is a sort of disconnect here, where we recognize the need for information and we might understand the decision-making framework, but it's really hard, and i would love for y'all to speak to this, it's really hard to stay abreast, or keep abreast, of the four buckets i mentioned: the legal, the digital, the physical and the psychological. you know, my focus is on digital, and i'm a professional, and it is hard for me even to stay just one step behind. you know? >> just to look at it, i mean, this is
4:54 pm
a spectrum. not all journalists have to take extreme measures to cover their trails and their tracks, because they're not as at risk. but if you are involved in investigative journalism, you are at risk. we have had three investigative journalists killed in the last 15 months, all in europe, and they were reporting on the nexus between political corruption and organized crime. those people were shot or blown up because they were getting close to something that someone wanted kept quiet. so there is a profile. but did they take the precautions? in some cases, no. and then you go from there to the local reporter covering the local school board. there might be a leak, but it's a completely different risk profile,
4:55 pm
and what technical people want to do when they train us is give you the whole package, and when you give us, as human beings, the whole package, what happens is we don't do any of it. psychology 101. >> you're talking about a totally different problem there. think about communications security as a question of protecting the content of your communications, or protecting, in some cases, who the endpoints of the communication are. you're talking about a case where really the threat is that you cannot let an adversary know that you are approaching a particular story at all, which is a more wicked problem, it seems. give me some thoughts about the additional challenges of doing investigative work without making that
4:56 pm
clear. >> there are several approaches. the consortium model of international investigative journalism has many advantages, because reporters can come at the same problem from different angles and you will get little pieces of the puzzle. it has also had the result that when one of these journalists is taken out, either physically killed or imprisoned, the others can continue to work. the story doesn't die with the reporter. >> and you can think of this as a mixer, in a way: you're trying to obscure where a communication is coming from, or came from, by having essentially a lot of communication in a way that makes it hard to figure out. i want to turn to another
4:57 pm
question, but before that, i am sure you each run trainings where you grab reporters and say: if you get nothing else, remember this one thing. and maybe the answer is, as you're saying, there isn't one thing. but if there was one thing you could install in your colleagues' brains, nominate a candidate. >> keep a computer off-line throughout the process of reporting. keep as much information as you can relating to a sensitive story off-line. >> actually, the lion's share of the trainings we do, i would say, in terms of the time that we spend in aggregate, the lion's share of that goes to simply talking about account security.
4:58 pm
so using two-factor authentication methods and, specifically, using a password manager, which is a tool that sort of takes care of memorizing and generating these, you know, robust passwords unique to every account. and what this does is, it is, you know, one low-hanging fruit that applies to everyone in the organization. it is relatively easy to adopt; the time and financial intensity in terms of resources is limited. and it also does a really good job of warding off many of the sorts of attacks that have caused some of the worst damage to organizations or individual journalists: these phishing attacks made to either harvest these passwords and usernames or to
4:59 pm
infect the devices with malware. that is awareness training. it is something that we always do, and it doesn't matter if we are training on secure drop or doing a one-on-one, that is something we talk about. >> i would say big brother really is watching. >> the most fundamental thing, the drug dealer rules: just being convinced that there is an adversary. i would say the internet is forever. there are journalists i know who put information in cloud-based services; who knows where it is stored and for how long, even if it is deleted. just be very careful, and go out in the sunshine, maybe talk to someone in person.
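to make that low-hanging fruit concrete, here is a minimal sketch, in python with only the standard library, of the two things just described: a password-manager-style generator for robust passwords unique to every account, and the rfc 6238 time-based one-time codes behind most two-factor authenticator apps. the base32 secret shown is a placeholder, not a real credential.

import base64, hashlib, hmac, secrets, string, struct, time

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    # cryptographically secure randomness: a unique, robust password
    # per account, so one phished login burns only one site
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    # RFC 6238 time-based one-time password, the kind of code most
    # authenticator apps generate for two-factor login
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(generate_password())
print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret
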
5:00 pm
>> i want to open this up to the audience and let folks pose some questions to our speakers before we repair for a beverage and perhaps -- ... >> how do you balance out the increasing need to provide source anonymity with being able to assess source motives and authenticity, and how can you check against something like a hostile government
5:01 pm
trying to spread malicious propaganda and trying to spread disinformation, things like that, when you are increasingly needing to rely on anonymous methods for sources? >> good question, i'll take it first. that first point of contact is very difficult. beyond that, i just don't deal with people unless i know who they are. it's even overwhelmingly the case that you won't identify someone in print, but anonymity from the beginning of the story just won't fly; i won't do that. >> i was going to say, when it comes to documents, if there's somebody that gives us something, it would be no different than if they delivered it via the mail anonymously. we want to avoid the bush national guard memo problem. most of journalism is like
5:02 pm
getting a tip and verifying it as true. so we've got a great smoking-gun document, and the work begins when we see it through. so i just use the old-fashioned rules of journalism in that case, and i agree with them: when it comes to an actual human being, you need to meet me halfway, and i need to know who you are to evaluate the motives. but with things like secure drop, where they say here's a leaked document, the old rules don't apply. >> there are these editorial decisions that go into vetting and verifying sources and verifying the authenticity of documents, but also, one of my favorite trainings to conduct is on metadata analysis and redaction. the metadata can be meaningful for a journalist doing background on a source. say that
5:03 pm
they have received a document via secure drop, so they have an electronic version of this document that they can analyze, and that can be helpful on background. and then there is the responsibility of the journalist or the institution to redact the metadata so you're not also implicating your source when you go to publish. and there are a bunch of, obviously i love teaching this, there are a bunch of novel ways to also obscure the original metadata of the document. there is never going to be a perfect solution, let me tell you, but there are also technical means by which you can add to the authenticity of a source. >> obliquely gesturing at the fact that, not that you would necessarily, but reality winner, who is currently in jail for leaking
5:04 pm
information about a state elections board being hacked to a press organization: the documents released apparently had metadata in them that made it, not that it wouldn't have mattered otherwise, but made it faster and easier for them to identify who had leaked the data. that's a difficult challenge, because you have to sort of figure out what the adversary can use in a document you've gotten. is this something you think of in terms of source protection? we're going to release a document, and what steps are we taking to make sure our source is not unwittingly outed by that? is that something you have considered? >> the best thing we do is, often when we get a document, even if we're considering publishing it, we will scan it or send it through the copy machine to rescan it and
5:05 pm
obliterate any metadata. i can't speak for the reporters, but the reality winner thing rattled my cage, and from that point onward i was hesitant about putting any documents online. i think it's important, we have to be as transparent with readers as we can, but it's not just the printer dots we're referring to, where each printer has its own microscopic way of identifying itself. who's to say a police chief trying to smoke out a source doesn't drop two words into a sentence on a page? that's the honeypot, and they figure out who it was. in that case, we will not quote from it; we will paraphrase it. it depends, everything is a different situation, but it stays with me, that's for sure. >> it's important to balance the editorial and technical concerns. what you're pointing out is essential.
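as an illustration of one such technical means -- a sketch, not the panelists' actual workflow -- stripping a scanned image's embedded metadata in python with the pillow imaging library looks roughly like this; the filenames are placeholders.

from PIL import Image

# rewriting only the pixel data drops embedded EXIF tags such as
# timestamps, device identifiers, and GPS coordinates that could
# implicate a source (pip install Pillow).
original = Image.open("leaked_scan.jpg")         # hypothetical input
clean = Image.new(original.mode, original.size)
clean.putdata(list(original.getdata()))          # pixels only, no tags
clean.save("leaked_scan_clean.jpg")

note that this addresses embedded tags only; printer tracking dots are part of the image itself, which is why rescanning or paraphrasing still matters.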
5:06 pm
>> [inaudible] a couple technical questions for you. [inaudible] >> well, you will be happy to know that if you go to freedom.press/training, we have exactly what you're pointing out, which is digital security training on an institutional level. but trainings don't scale. i can't work with every freelancer, every lawyer, so what we like to do is publish original, vetted technical guides for a general audience. so we have a vpn guide,
5:07 pm
and, because i won't recommend a specific vpn -- it is a bit of a nuanced process to select one for your own uses -- what it will do is kind of outline that decision-making framework for you, and we have recommendations there that adhere to our technical and other sorts of requirements for making a recommendation. and in terms of encrypted communications, i think we have some really great folks on this panel who mentioned they are using signal, which is an open source tool that's available on the google play store for android and in the app store for iphone. it's also a desktop app. there are other nuanced concerns there, but -- [inaudible] okay, yeah.
5:08 pm
so again, this is going to be a really annoying digital security answer: it depends. it's going to depend on your risk assessment. we have a guide that we published last week, if you'd like to familiarize yourself, where we talk in general about how to think about the appropriate tool to encrypt external storage media or a partition on your computer locally. that's going to depend; i'd have to ask you what operating system you're using, what the purpose is. but you can also contact us if you have specific questions. >> i want to comment there, because something important comes out of that, which is that there are resources available. but for freelancers,
5:09 pm
they are really vulnerable. if you're working for a well-funded news outlet, you have IT people, security people, you have a lot of help. but if you're a freelancer and you don't have anything, go to freedom of the press, the committee to protect journalists. they will point you at places where you can get that information as a freelancer without a great budget to draw on. >> cryptographers like to say it's always a bad idea to roll your own. there are lots of open source, well-vetted libraries, and if you're a company trying to develop an encryption solution and you are in-house building your own algorithms, you're making a serious mistake. and i think the obvious extension is, it's probably dangerous to think you can navigate how to implement something well just on your own -- so it's about finding resources to help with those things.
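to make the "don't roll your own" advice concrete, a minimal sketch of encrypting a file with a vetted open source library -- python's widely used cryptography package -- rather than a homemade cipher; the filename is a placeholder.

from cryptography.fernet import Fernet

# Fernet (pip install cryptography) bundles the cipher, integrity
# checking, and key handling, so none of it is improvised in-house.
key = Fernet.generate_key()           # store this somewhere safe
f = Fernet(key)

with open("notes.txt", "rb") as fh:   # hypothetical sensitive file
    token = f.encrypt(fh.read())      # authenticated ciphertext

with open("notes.txt.enc", "wb") as fh:
    fh.write(token)

# f.decrypt(token) recovers the plaintext; a tampered ciphertext
# raises InvalidToken instead of silently returning garbage.
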
5:10 pm
knowing that those resources exist is important. someone in the back? >> hello, thank you for your good work, first and foremost. i ask as a former un security man: are any of you familiar with the movie -- i didn't want to reference it, but it's the whistleblower? the information that came out of it, there is more to the work than the digital footprint. say she found out that dyncorp and there
5:11 pm
were other un higher-ups involved. it was an amazing investigation that led to major outcomes, but it didn't lead to any outcomes for civil society. so what do you all feel, or could you share, about the responsibility of journalists? because for people like me, i live in a vacuum, and a lot of the information that i know is just not known to the general public, information that could be very beneficial in coming to court or finding some kind of common ground. so what do you all think, especially those of you with major outlets: what is your individual responsibility, as well as your organizational one, in terms of making information that may not be in the forefront known, or known better, in terms of things like the whistleblower, where she found actual trafficking and things that created complicity between an organization and the un? >> i feel like that's her job
5:12 pm
too. maybe i'm misunderstanding your question. >> to tie into this, are there cases where you have questions about whether you can report something out in a way that does not endanger your source, and that influences your judgment about whether it's something that can be reported safely? >> without getting into too many specifics, by definition you're not going to describe it, but i have left information out of a story to the point where it substantially changed the story, made it a story about a different thing that i had to do additional work to pursue, because i had reason to believe that my initial story was so specific about a particular person, with enough
5:13 pm
detail, even without a great deal of identification, that the position someone was in would identify them, someone who would be at reasonable risk of being physically harmed because of it. i ended up just not doing that story. i didn't think it was worth doing, even though i considered, and still consider, there to be a clear public interest in knowing this. but i didn't know how to get past that problem, and i'm not willing to put someone in physical danger, so i just ended up not doing it, very reluctantly, not seeing a way around it. and i think most journalists would probably make that decision.
5:14 pm
there's probably not going to be a story that's worth someone's life. i'm sure all of us can hypothetically think of thought experiments where that's not true, but when you actually have to face a likely circumstance of someone being harmed because of what you've done, the decision just becomes easier. >> i think it's a case where technology can no longer guarantee the anonymity and protection of a source. you used to be able to do that. mark felt remained anonymous as deep throat for generations. i don't think you can do that now. as journalists, we have the moral responsibility of giving sources information, not misleading them, and making them aware that if they are giving us information, there's a good chance they will be unmasked. >> sounds like an even harder
5:15 pm
job. i think we are pretty close to the end of the panel, and we have time for one more question. i will grab our guest and let the panel answer while i do that; i'll be right back. >> howard, recovering journalist. 2016 was difficult not just for protecting sources but for figuring out who the sources were and what their motivation was. the question i have is, how has your approach changed to your job and to vetting sources and the reasons they might be coming to you with information, whether it's from a government source or from a
5:16 pm
party or anyone else, as to whether it's newsworthy? how are you changing your approach to interacting with those sources, given what we know about how intelligence services might be monitoring your communications? >> speaking for myself, not a lot has changed. the job isn't the receipt of information, it's the vetting of information, and assessing motives, inquiring about motives and so forth, and doing your due diligence to verify. that's just going to be there. it helps to have, again, the depth of understanding that there is deliberate misinformation out there. we started out from the snowden trove, considering: what if this is a giant instance of potential misinformation?
5:17 pm
i don't see much of a difference in terms of time. the difference is awareness, but the way the job operates, whatever the technological changes or technological vulnerabilities, the job's functionality is the same. >> i'd like you to join me in thanking our panelists. [applause] and before we repair, perfect, i want to turn the podium over for a moment to ms. benson. you are welcome to join us for a drink in our atrium, but also, if you are interested in joining us, there's a fantastic exhibition on at the smithsonian american
5:18 pm
art museum, and i want to give her a chance to give you a free preview of that so you can decide if it's something you want to see under her guidance. >> last time i was here, i was a young lawyer working first amendment issues. how this came about, from my perspective, is that two days ago one of your conference organizers said there's an exhibit at the smithsonian which is almost precisely on the subject matter of this conference. your conference is on surveillance, and if we hadn't already given our exhibit a more artful title, we could have called it surveillance. so the thought was, okay, maybe you guys would like to come over and see the exhibit, but in fairness, you ought to have a chance to decide whether it's of interest to you, so i threw together some powerpoints, which i swore when i stopped working i would never do again. and that's not the way you forward it.
5:19 pm
what did i do wrong? okay. hello, okay. so what we have at the smithsonian is an exhibit by trevor paglen; some of you may know of him. he's an artist and a photographer; he has all the right credentials, from chicago. he's a scientist with a phd in geography, which is very important when you guys are talking about location and geolocation and echolocation; all of that comes into play. and he's also an activist. and what i've done is given you -- he talks about secrecy as a series of constraints, and he's not trying to disclose secrets.
5:20 pm
he makes it very clear, this is a difference between him and some of the other folks, but what he's trying to do is look at the whole structure of, the whole infrastructure of, secrecy and surveillance. so he's trying to look at the door, not behind it. okay, i didn't push hard enough. i thought i would give you a feel for where he's coming from with two quotes, and then give you a few pictures. on secrecy and surveillance, i don't know if you've read any of his articles, but he writes about the rise of the terror state and the consequent decline of our civil institutions. and the result, he says, is that when something unexpected happens, the government will respond with all of the powers of the terror state. but he also talks about the internet which, sorry, i'm standing near the mike, he talks about
5:21 pm
the internet and all of the potential it has to be a great boon to sharing and communications, but also the greatest threat to civil liberties and potentially the greatest tool for totalitarianism. and i heard the tail end of the last presentation, which talked about what they can figure out from metadata and all that sort of stuff. so now i'm going to show you some of his pictures. the picture selection is not necessarily his best, but what i was able to get at the last minute yesterday, and you'll see the format is somewhat different because i stole them from different places on the internet. i don't know why i'm doing, this takes a long time. as i said, he is a geographer, and he also pays homage to
5:22 pm
some of the photographers of the last century, and one of them is timothy o'sullivan, who was part of the first exploration surveying team that was sent out by the war department after the civil war. so this is a photograph, taken almost 150 years ago, of pyramid lake, and it was done for surveillance. paglen talks about the 19th century photographers as basically being the equivalent of the photo reconnaissance of the 20th and 21st centuries. so take a good look at that photograph. you'll see the focus is on the land and the water; the sky is not all that important, in fact it's hard to see the sky. then take a look when this
5:23 pm
comes up, another one, also of pyramid lake. however, this one was done about a decade ago, and there are two big differences. there's a third difference, which i'll have to point out, but there are two other big differences that are relevant. one is that the focus has changed from the earth to the overhead. and the other, you would see it much better on the photograph itself, but if you look way up, up there, what looks like an error or a scratch or something on the photograph is in fact the track of a satellite. and those of you who are scientists know it's not that
5:24 pm
easy, necessarily, to find and photograph secret satellites. there's a roster of known satellites, and then it takes a lot of work to figure out where the unknown satellites are, and to figure out how to photograph them. and so there's a lot of very clever photography, much of which is also quite beautiful. there are several of this kind, seeing the trails and the tracks of geosynchronous and low-orbit and a variety of other satellites. okay. he's an artist, and many of his photographs are really, i think, very beautiful, and this is an example of one. this again is to identify spacecraft. these are all, i think,
5:25 pm
from the ballistic missile defense organization. i'm told there are two airplanes in the picture, but i'll be damned if i can find them. i suppose that's what surveillance is all about. one of the things he talks about, and this i think would be very resonant here, is the ubiquitousness of surveillance, governmental and commercial. and so he looks often at how the surveillance instruments see things, what they see, because he's also interested in machines making images for other machines. and this one is one of, what does the thing look like, what is it, if you humanize the camera, what does it look like. and you see a very lovely photograph of a sky
5:26 pm
over the american southwest. if you look when you come to the museum, you will also see that right there is the drone. it's hard to see, but it's there. it's a predator. and again, he's looking at the ubiquitousness of the surveillance system in the united states. after some of the disclosures of the nsa surveillance and other programs, he started thinking seriously, more seriously, about the internet and the role of the internet. and so he started looking at, i went too fast. okay. he started looking at the
5:27 pm
internet, and this is one of, he started looking at where the chokepoints of the internet are. and we have three; i think i'll show you two. this is the one i use because i grew up on long island. that's mastic beach, and it turns out that about where that little lifeguard station is is where the major cables come ashore from europe. and in part, again, i think what he's showing is the mundaneness, in some ways, of how to begin to look at secrecy and security. and then he's showing you the same thing overlaid on a standard nautical chart, along with some things like that which you see there: nsa guidance on how to figure out where the best place to tap into a particular cable
5:28 pm
might be. and there was another one, which somehow seems to have gone missing, but it doesn't really matter. let me go back and see what i get. i see, they just came out of order. this is another one of those landing sites, this one in hawaii, and again, also showing his background as a landscape photographer, on that one and on both of those, the same thing. and in recent years he's become much more focused on the internet. so i thought i would show, as the last of the photographs before you make your decision whether to walk a few blocks south to the museum with us, this: it is a box which will not be a
5:29 pm
surprise to any of you, but it amazes a number of americans, particularly middle-aged americans. it's an anonymizer, using tor technology, pretty similar to what my company used 10 years ago when we brought out anonymizer. basically, it defeats metadata analysis. kids love to come up, take out their machines, log on to this and see that they're not on the normal network; they're on a network that in fact can't be tracked.
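for the curious, a rough sketch of what being "not on the normal network" means in practice: routing a request through a local tor client's socks proxy, assuming python's requests library with socks support and a tor client listening on its default port 9050.

import requests

# route traffic through a local Tor client (pip install requests[socks]).
# socks5h makes DNS resolution happen inside Tor as well.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# the destination sees a Tor exit node's address rather than yours,
# which is what defeats this kind of metadata analysis.
resp = requests.get("https://check.torproject.org/",
                    proxies=TOR_PROXY, timeout=30)
print("Congratulations" in resp.text)  # True if the request used Tor
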
5:30 pm
he's also, there's a section of the exhibit which i haven't talked about, where he starts to look at artificial intelligence, and then some wonderful photographs of what the machines see as they learn, gradually, and how that changes, and how metaphysical or nightmarish those become. and those are, from a photographic point of view, some of the most interesting, i think. i hope i've given you enough to be able to decide whether you want to walk down a few blocks and join us for a walk through the exhibit. >> thank you so much. i hope all of you are able to turn out for a quick drink in the atrium, and i hope those of you who are interested will join ms. benson for a tour of the museum. >> and i would like to thank everyone watching at home for tuning in to the conference. thank you all again, and i'll see you next year. [applause]
5:31 pm
>> [inaudible] >> c-span's washington journal, live every day with news and policy issues that impact you. coming up saturday, we will discuss the latest on aca open enrollment and how healthcare played out in the midterm elections with samuel excellent of the hill, and then a look at student debt and college employment; joining us, game fall of the institute for college access and success. and the center for public integrity's terry levine discusses campaign finance
5:32 pm
laws as they relate to the so-called hush money payments michael cohen made during the campaign and questions of alleged illegal foreign donations to the president's inaugural committee. also, e&e news reporter ariel wittenberg looks at the trump administration's proposal to alter clean water act protections for wetlands and waterways. be sure to watch washington journal, live at seven eastern saturday morning. join the discussion. >> just a few minutes ago, president trump released a statement through twitter saying: i am pleased to announce that mick mulvaney, director of the office of management and budget, will be named acting white house chief of staff, replacing john kelly, who has served our country with distinction. mick has done an outstanding job in the administration. i look forward to working
5:33 pm
with him in this new capacity to make america great again. john will be staying until the end of the year. he is a great patriot, and i want to thank him for his service. >> coming up this weekend on book tv: saturday at 8 pm, highlights from former first lady michelle obama's tour across the country promoting her best-selling autobiography, becoming, in which she reflects on her life and time in the white house. >> the notion that a little girl from the south side of chicago, who at the time was named michelle obama, married to barack obama, was going to dive deep into the midwest, into iowa, going door-to-door into people's homes, and they were opening up their homes and welcoming me around their kitchen tables, and what connected us was our story. >> on sunday at 9 pm eastern on "after words," citizens united president david bossie and trump campaign
5:34 pm
manager corey lewandowski discuss their book, trump's enemies: how the deep state is undermining the presidency, interviewed by journalist sharyl attkisson. >> i don't want to be a conspiracy theorist, but we refer to many of these people as the november ninth club, meaning they became fans of president trump the day after he got elected. they didn't support him during his campaign and didn't vote for him on election day, but they found an opportunity to join an administration which was young and inexperienced, to further their own agenda. as part of becoming president, he listened a lot to republican leaders in washington and took advice from folks that i don't know he would take advice from today. during the transition and in the first year or two of his administration, the learning curve was there, just like it is for every single president of the united states. there are no classes, there's no degree on being president, and it's a learning curve. watch this weekend on c-span2.
5:35 pm
>> coming up saturday, a discussion on the role of independent counsel. we hear from ken starr, former independent counsel on the whitewater and lewinsky investigations. the event, hosted by the freedom forum institute, is live at 2 pm eastern on c-span. >> sunday night on q&a: the american nazi party had 20,000 supporters who came to a rally at madison square garden, and as that footage showed, in the middle of new york, storekeepers were giving the nazi salute next to a picture of george washington. probably it was for george washington's birthday. there was an active american fascist movement in the 20s and 30s, earlier than people think, but it was associated with america first. >> university of london professor sarah churchwell looks at the history of the terms america first and the american dream in her book, behold america.
