Mark Zuckerberg says it might be right for Facebook to let people pay to not see ads, but that it would feel wrong to charge users for extra privacy controls. That’s just one of the fascinating philosophical views the CEO shared during the first of the public talks he’s promised as part of his 2019 personal challenge.
Talking to Harvard Law and computer science professor Jonathan Zittrain on the campus of the university he dropped out of, Zuckerberg managed to escape the 100-minute conversation with just a few gaffes. At one point he said “we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations”. Zittrain swiftly reminded him that’s exactly what Facebook Portal is, and Zuckerberg tried to deflect by saying Portal’s recordings would be encrypted.
Later Zuckerberg mentioned “the ads, in a lot of places are not even that different from the organic content in terms of the quality of what people are being able to see”, which is a pretty sad and derisive assessment of the personal photos and status updates people share. And when he suggested crowdsourced fact-checking, Zittrain chimed in that this could become an avenue for “astroturfing”, where mobs of users provide purposefully biased information to promote their interests, like a political group’s supporters voting that their opponents’ facts are lies. While sometimes avoiding hard stances on questions, Zuckerberg was otherwise relatively logical and coherent.
Policy And Cooperating With Governments
The CEO touched on his borderline content policy, which quietly demotes posts that come close to breaking Facebook’s rules against nudity, hate speech, and the like; such posts are otherwise the most sensational and get the most distribution, but don’t make people feel good. Zuckerberg noted some progress here, saying “a lot of the things that we’ve done in the last year were focused on that problem and it really improves the quality of the service and people appreciate that.”
This aligns with Zuckerberg contemplating Facebook’s role as a “data fiduciary” where rather than necessarily giving in to users’ urges or prioritizing its short-term share price, the company tries to do what’s in the best long-term interest of its communities. “There’s a hard balance here which is — I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want” he said. Essentially, people might tap on clickbait even if it doesn’t make them feel good.
On working with governments, Zuckerberg explained how incentives weren’t always aligned, like when law enforcement is monitoring someone accidentally dropping clues about their crimes and collaborators. The government and society might benefit from that continued surveillance but Facebook might want to immediately suspend the account if it found out. “But as you build up the relationships and trust, you can get to that kind of a relationship where they can also flag for you, ‘Hey, this is where we’re at’”, implying Facebook might purposefully allow that person to keep incriminating themselves to assist the authorities.
But disagreements between governments can flare up. Zuckerberg noted that “we’ve had employees thrown in jail because we have gotten court orders that we have to turn over data that we wouldn’t probably anyway, but we can’t because it’s encrypted.” That’s likely a reference to the 2016 arrest of Facebook’s VP for Latin America Diego Dzodan over WhatsApp’s encryption preventing the company from providing evidence for a drug case.
The tradeoffs of encryption and decentralization were a central theme. He discussed how while many people fear how encryption could mask illegal or offensive activity, Facebook doesn’t have to peek at someone’s actual content to determine they’re violating policy. “One of the — I guess, somewhat surprising to me — findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content” Zuckerberg said.
With Facebook rapidly building out a blockchain team to potentially launch a cryptocurrency for fee-less payments or an identity layer for decentralized applications, Zittrain asked about the potential for letting users control which other apps they give their profile information to without Facebook as an intermediary.
Zuckerberg stressed that at Facebook’s scale, moving to a less efficient distributed architecture would be extremely “computationally intense”, though it might eventually be possible. Instead, he said “One of the things that I’ve been thinking about a lot is a use of blockchain that I am potentially interested in– although I haven’t figured out a way to make this work out, is around authentication and bringing– and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.” This might be attractive to developers who would know Facebook couldn’t cut them off from their users.
The problem is that if a developer was abusing users, Zuckerberg fears that “in a fully distributed system there would be no one who could cut off the developers’ access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?”
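The “fully distributed” authentication Zuckerberg describes ultimately means a user proving control of a key they published themselves, with no company in the middle able to revoke access. As a toy illustration only (this is a hypothetical sketch, not anything Facebook described), here is a minimal hash-based one-time signature (a Lamport scheme) using just Python’s standard library; the `keygen`, `sign`, and `verify` names are illustrative:

```python
import hashlib
import secrets

def keygen():
    # Private key: for each of the 256 bits of a SHA-256 message digest,
    # two random 32-byte secrets (one to reveal if the bit is 0, one if it is 1).
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hash of each secret. This is the part a user could
    # publish anywhere; no central registry is required.
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message):
    # The 256 bits of the message's SHA-256 digest, most significant bit first.
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message):
    # Reveal one secret per digest bit; a Lamport key pair must never sign twice.
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pk, message, signature):
    # Anyone holding the public key can check the signature; there is no
    # intermediary who could cut off a developer's ability to verify users.
    return all(hashlib.sha256(signature[i]).digest() == pk[i][bit]
               for i, bit in enumerate(_bits(message)))
```

A login flow would have a service send a random challenge, the user sign it, and the service verify against the user’s published key. Lamport keys are strictly one-time, so a real system would use a standard scheme such as Ed25519 plus key rotation and revocation, which is exactly where the consent and abuse questions Zuckerberg raises come back in.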
No “Pay For Privacy”
But perhaps most novel and urgent were Zuckerberg’s comments on the secondary questions raised by whether Facebook should let people pay to remove ads. “You start getting into a principle question which is ‘are we going to let people pay to have different controls on data use than other people?’ And my answer to that is a hard no.” Facebook has promised to always operate a free version so everyone can have a voice. Yet some, including myself, have suggested that a premium ad-free subscription to Facebook could help wean it off maximizing data collection and engagement, though it might break Facebook’s revenue machine by pulling the most affluent and desired users out of the ad targeting pool.
“What I’m saying is on the data use, I don’t believe that that’s something that people should buy. I think the data principles that we have need to be uniformly available to everyone. That to me is a really important principle” Zuckerberg expanded. “It’s, like, maybe you could have a conversation about whether you should be able to pay and not see ads. That doesn’t feel like a moral question to me. But the question of whether you can pay to have different privacy controls feels wrong.”
Back in May 2018, Zuckerberg announced Facebook would build a Clear History button that deletes all the web browsing data the social network has collected about you, but that data’s deep integration into the company’s systems has delayed the launch. Research suggests users don’t want the inconvenience of getting logged out of all their Facebook Connected services, though they would like to hide certain data from the company.
“Clear history is a prerequisite, I think, for being able to do anything like subscriptions. Because, like, partially what someone would want to do if they were going to really actually pay for a not ad supported version where their data wasn’t being used in a system like that, you would want to have a control so that Facebook didn’t have access or wasn’t using that data or associating it with your account. And as a principled matter, we are not going to just offer a control like that to people who pay.”
Of all the apologies, promises, and predictions Zuckerberg has made recently, this pledge might instill the most confidence. While some might think of Zuckerberg as a data tyrant out to absorb and exploit as much of our personal info as possible, there are at least lines he’s not willing to cross. Facebook could try to charge you for privacy, but it won’t. And given Facebook’s dominance in social networking and messaging plus Zuckerberg’s voting control of the company, a greedier man could make the internet much worse.
TRANSCRIPT – MARK ZUCKERBERG AT HARVARD / FIRST PERSONAL CHALLENGE 2019
Jonathan Zittrain: Very good. So, thank you, Mark, for coming to talk to me and to our students from the Techtopia program and from my “Internet and Society” course at Harvard Law School. We’re really pleased to have a chance to talk about any number of issues and we should just dive right in. So, privacy, autonomy, and information fiduciaries.
Mark Zuckerberg: All right!
Jonathan Zittrain: Love to talk about that.
Mark Zuckerberg: Yeah! I read your piece in The New York Times.
Jonathan Zittrain: The one with the headline that said, “Mark Zuckerberg can fix this mess”?
Mark Zuckerberg: Yeah.
Jonathan Zittrain: Yeah.
Mark Zuckerberg: Although that was last year.
Jonathan Zittrain: That’s true! Are you suggesting it’s all fixed?
Mark Zuckerberg: No. No.
Jonathan Zittrain: Okay, good. So–
Jonathan Zittrain: I’m suggesting that I’m curious whether you still think that we can fix this mess?
Mark Zuckerberg: Ah! <laughter>
Mark Zuckerberg: I hope– <laughter>
Jonathan Zittrain: “Hope springs eternal”–
Mark Zuckerberg: Yeah, there you go.
Jonathan Zittrain: –is my motto. So, all right, let me give a quick characterization of this idea that the coinage and the scaffolding for it is from my colleague, Jack Balkin, at Yale. And the two of us have been developing it out further. There are a standard number of privacy questions with which you might have some familiarity, having to do with people conveying information that they know they’re conveying or they’re not so sure they are, but “mouse droppings” as we used to call them when they run in the rafters of the Internet and leave traces. And then the standard way of talking about that is you want to make sure that that stuff doesn’t go where you don’t want it to go. And we call that “informational privacy”. We don’t want people to know stuff that we want maybe our friends only to know. And on a place like Facebook, you’re supposed to be able to tweak your settings and say, “Give them to this and not to that.” But there’s also ways in which stuff that we share with consent could still sort of be used against us and it feels like, “Well, you consented,” may not end the discussion. And the analogy that my colleague Jack brought to bear was one of a doctor and a patient or a lawyer and a client or– sometimes in America, but not always– a financial advisor and a client that says that those professionals have certain expertise, they get trusted with all sorts of sensitive information from their clients and patients and, so, they have an extra duty to act in the interests of those clients even if their own interests conflict. And, so, maybe just one quick hypo to get us started. I wrote a piece in 2014, that maybe you read, that was a hypothetical about elections in which it said, “Just hypothetically, imagine that Facebook had a view about which candidate should win and they reminded people likely to vote for the favored candidate that it was Election Day,” and to others they simply sent a cat photo. Would that be wrong? 
And I find– I have no idea if it’s illegal; it does seem wrong to me and it might be that the fiduciary approach captures what makes it wrong.
Mark Zuckerberg: All right. So, I think we could probably spend the whole next hour just talking about that! <laughter>
Mark Zuckerberg: So, I read your op-ed and I also read Balkin’s blogpost on information fiduciaries. And I’ve had a conversation with him, too.
Jonathan Zittrain: Great.
Mark Zuckerberg: And the– at first blush, kind of reading through this, my reaction is there’s a lot here that makes sense. Right? The idea of us having a fiduciary relationship with the people who use our services is kind of intuitively– it’s how we think about how we’re building what we’re building. So, reading through this, it’s like, all right, you know, a lot of people seem to have this mistaken notion that when we’re putting together news feed and doing ranking that we have a team of people who are focused on maximizing the time that people spend, but that’s not the goal that we give them. We tell people on the team, “Produce the service–” that we think is going to be the highest quality that– we try to ground it in kind of getting people to come in and tell us, right, of the content that we could potentially show what is going to be– they tell us what they want to see, then we build models that kind of– that can predict that, and build that service.
Jonathan Zittrain: And, by the way, was that always the case or–
Mark Zuckerberg: No.
Jonathan Zittrain: –was that a place you got to through some course adjustments?
Mark Zuckerberg: Through course adjustments. I mean, you start off using simpler signals like what people are clicking on in feed, but then you pretty quickly learn, “Hey, that gets you to local optimum,” right? Where if you’re focusing on what people click on and predicting what people click on, then you select for click bait. Right? So, pretty quickly you realize from real feedback, from real people, that’s not actually what people want. You’re not going to build the best service by doing that. So, you bring in people and actually have these panels of– we call it “getting to ground truth”– of you show people all the candidates for what can be shown to them and you have people say, “What’s the most meaningful thing that I wish that this system were showing us?” So, all this is kind of a way of saying that our own self image of ourselves and what we’re doing is that we’re acting as fiduciaries and trying to build the best services for people. Where I think that this ends up getting interesting is then the question of who gets to decide in the legal sense or the policy sense of what’s in people’s best interest? Right? So, we come in every day and think, “Hey, we’re building a service where we’re ranking newsfeed trying to show people the most relevant content with an assumption that’s backed by data; that, in general, people want us to show them the most relevant content.” But, at some level, you could ask the question which is “Who gets to decide that ranking newsfeed or showing relevant ads?” or any of the other things that we choose to work on are actually in people’s interest. And we’re doing the best that we can to try to build the services [ph?] that we think are the best. At the end of the day, a lot of this is grounded in “People choose to use it.” Right? Because, clearly, they’re getting some value from it. But then there are all these questions like you say about, you have– about where people can effectively give consent and not.
Jonathan Zittrain: Yes.
Mark Zuckerberg: So, I think that there’s a lot of interesting questions in this to unpack about how you’d implement a model like that. But, at a high level I think, you know, one of the things that I think about in terms of we’re running this big company; it’s important in society that people trust the institutions of society. Clearly, I think we’re in a position now where people rightly have a lot of questions about big internet companies, Facebook in particular, and I do think getting to a point where there’s the right regulation and rules in place just provides a kind of societal guardrail framework where people can have confidence that, okay, these companies are operating within a framework that we’ve all agreed. That’s better than them just doing whatever they want. And I think that that would give people confidence. So, figuring out what that framework is, I think, is a really important thing. And I’m sure we’ll talk about that as it relates–
Jonathan Zittrain: Yes.
Mark Zuckerberg: –to a lot of the content areas today. But getting to that question of how do you– “Who determines what’s in people’s best interest, if not people themselves?”
Jonathan Zittrain: Yes.
Mark Zuckerberg: –is a really interesting question.
Jonathan Zittrain: Yes, so, we should surely talk about that. So, on our agenda is the “Who decides?” question.
Mark Zuckerberg: All right.
Jonathan Zittrain: Other agenda items include– just as you say, the fiduciary framework sounds nice to you– doctors, patients, Facebook users. And I hear you saying that’s pretty much where you’re wanting to end up anyway. There are some interesting questions about what people want, versus what they want to want.
Mark Zuckerberg: Yeah.
Jonathan Zittrain: People will say “On January 1st, what I want–” New Year’s resolution– “is a gym membership.” And then on January 2nd, they don’t want to go to the gym. They want to want to go to the gym, but they never quite make it. And then, of course, a business model of pay for the whole year ahead of time and they know you’ll never turn up develops around that. And I guess a specific area to delve into for a moment on that might be on the advertising side of things, maybe the dichotomy between personalization and does it ever going into exploitation? Now, there might be stuff– I know Facebook, for example, bans payday loans as best it can.
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: That’s just a substantive area that it’s like, “All right, we don’t want to do that.”
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: But when we think about good personalization so that Facebook knows I have a dog and not a cat, and a targeter can then offer me dog food and not cat food. How about, if not now, a future day in which an advertising platform can offer to an ad targeter some sense of “I just lost my pet, I’m really upset, I’m ready to make some snap decisions that I might regret later, but when I make them–“
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: “–I’m going to make them.” So, this is the perfect time to tee up
Mark Zuckerberg: Yeah.
Jonathan Zittrain: –a Cubic Zirconia or whatever the thing is that– .
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: That seems to me a fiduciary approach would say, ideally– how we get there I don’t know, but ideally we wouldn’t permit that kind of approach to somebody using the information we’ve gleaned from them to know they’re in a tough spot–
Mark Zuckerberg: Yeah.
Jonathan Zittrain: –and then to exploit them. But I don’t know. I don’t know how you would think about something like that. Could you write an algorithm to detect something like that?
Mark Zuckerberg: Well, I think one of the key principles is that we’re trying to run this company for the long term. And I think that people think that a lot of things that– if you were just trying to optimize the profits for next quarter or something like that, you might want to do things that people might like in the near term, but over the long term will come to resent. But if you actually care about building a community and achieving this mission and building the company for the long term, I think you’re just much more aligned than people often think companies are. And it gets back to the idea before, where I think our self image is largely acting as– in this kind of fiduciary relationship as you’re saying– and across– we could probably go through a lot of different examples. I mean, we don’t want to show people content that they’re going to click on and engage with, but then feel like they wasted their time afterwards. Where we don’t want to show them things that they’re going to make a decision based off of that and then regret later. I mean, there’s a hard balance here which is– I mean if you’re talking about what people want to want versus what they want– you know, often people’s revealed preferences of what they actually do shows a deeper sense of what they want than what they think they want to want. So, I think there’s a question between when something is exploitative versus when something is real, but isn’t what you would say that you want.
Jonathan Zittrain: Yes.
Mark Zuckerberg: And that’s a really hard thing to get at.
Jonathan Zittrain: Yes.
Mark Zuckerberg: But on a lot of these cases my experience of running the company is that you start off building a system, you have relatively unsophisticated signals to start, and you build up increasingly complex models over time that try to take into account more of what people care about. And there are all these examples that we can go through. I think probably newsfeed and ads are probably the two most complex ranking examples–
Jonathan Zittrain: Yes.
Mark Zuckerberg: –that we have. But it’s– like we were talking about a second ago, when we started off with the systems, I mean, just start with newsfeeds– but you could do this on ads, too– you know, the most naïve signals, right, are what people click on or what people “Like”. But then you just very quickly realize that that doesn’t– it approximates something, but it’s a very crude approximation of the ground truth of what people actually care about. So, what you really want to get to is as much as possible getting real people to look at the real candidates for content and tell you in a multi-dimensional way what matters to them and try to build systems that model that. And then you want to be kind of conservative on preventing downside. So, your example of the payday loans– and when we’ve talked about this in the past, your– you’ve put the question to me of “How do you know when a payday loan is going to be exploitative?” right? “If you’re targeting someone who is in a bad situation?” And our answer is, “Well, we don’t really know when it’s going to be exploitative, but we think that the whole category potentially has a massive risk of that, so we just ban it–
Jonathan Zittrain: Right. Which makes it an easy case.
Mark Zuckerberg: Yes. And I think that the harder cases are when there’s significant upside and significant downside and you want to weigh both of them. So, I mean, for example, once we started putting together a really big effort on preventing election interference, one of the initial ideas that came up was “Why don’t we just ban all ads that relate to anything that is political?” And then you pretty quickly get into, all right, well, what’s a political ad? The classic legal definition is things that are around elections and candidates, but that’s not actually what Russia and other folks were primarily doing. Right? It’s– you know, a lot of the issues that we’ve seen are around issue ads, right, and basically sowing division on what are social issues. So, all right, I don’t think you’re going to get in the way of people’s speech and ability to promote and do advocacy on issues that they care about. So, then the question is “All right, well, so, then what’s the right balance?” of how do you make sure that you’re providing the right level of controls, that people who aren’t supposed to be participating in these debates aren’t or that at least you’re providing the right transparency. But I think we’ve veered a little bit from the original question–
Jonathan Zittrain: Yes.
Mark Zuckerberg: –but the– but, yeah. So, let’s get back to where you were
Jonathan Zittrain: Well, here’s– and this is a way of maybe moving it forward, which is: A platform as complete as Facebook is these days offers lots of opportunities to shape what people see and possibly to help them with those nudges, that it’s time to go to the gym or to keep them from falling into the depredations of the payday loan. And it is a question of: so long as the platform can do it, does it now have an ethical obligation to do it, to help people achieve the good life?
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: And I worry that it is too great a burden for any company to bear to have to figure out, say, if not the perfect, the most reasonable newsfeed for every one of the– how many? Two and a half billion active users? Something like that.
Mark Zuckerberg: Yeah. On that order.
Jonathan Zittrain: All the time and there might be some ways that start a little bit to get into the engineering of the thing that would say, “Okay, with all hindsight, are there ways to architect this so that the stakes aren’t as high, aren’t as focused on just, “Gosh, is Facebook doing this right?” It’s as if there was only one newspaper in the whole world or one or two, and it’s like, “Well, then what The New York Times chooses to put on its home page, if it were the only newspaper, would have outsize importance.”
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: So, just as a technical matter, a number of the students in this room had a chance to hear from Tim Berners-Lee, inventor of the World Wide Web, and he has a new idea for something called “Solid”. I don’t know if you’ve heard of Solid. It’s a protocol more than it is a product. So, there’s no car to move off the lot today. But its idea is allowing people to have the data that they generate as they motor around the web end up in their own kind of data locker. Now, for somebody like Tim, it might mean literally in a locker under his desk and he could wake up in the middle of the night and see where his data is. For others, it might mean a rack somewhere, guarded perhaps by a fiduciary who’s looking out for them, the way that we put money in a bank and then we can sleep at night knowing the bankers are– this is maybe not the best analogy in 2019, but watching.
Mark Zuckerberg: We’ll get there.
Jonathan Zittrain: We’ll get there. But Solid says if you did that, people would then– or their helpful proxies– be able to say, “All right, Facebook is coming along. It wants the following data from me and including that data that it has generated about me as I use it, but stored back in my locker and it kind of has to come back to my well to draw water each time. And that way if I want to switch to Schmacebook or something, it’s still in my well and I can just immediately grant permission to Schmacebook to see it and I don’t have to do a kind of data slurp and then re-upload it. It’s a fully distributed way of thinking about data. And I’m curious from an engineering perspective does this seem doable with something of the size and the number of spinning wheels that Facebook has and does it seem like a
Mark Zuckerberg: Yeah–
Jonathan Zittrain: –and I’m curious your reaction to an idea like that.
Mark Zuckerberg: So, I think it’s quite interesting. Certainly, the level of computation that Facebook is doing and all the services that we’re building is really intense to do in a distributed way. I mean, I think as a basic model I think we’re building out the data center capacity over the next five years and our plan for what we think we need to do that we think is on the order of all of what AWS and Google Cloud are doing for supporting all of their customers. So, okay, so, this is like a relatively computationally intense thing.
Over time you assume you’ll get more compute. So, decentralized things which are less efficient computationally will be harder– sorry, they’re harder to do computation on, but eventually maybe you have the compute resources to do that. I think the more interesting questions there are not feasibility in the near term, but are the philosophical questions of the goodness of a system like that.
So, one question if you want to– so, we can get into decentralization, one of the things that I’ve been thinking about a lot is a use of blockchain that I am potentially interested in– although I haven’t figured out a way to make this work out, is around authentication and bringing– and basically granting access to your information and to different services. So, basically, replacing the notion of what we have with Facebook Connect with something that’s fully distributed.
Jonathan Zittrain: “Do you want to login with your Facebook account?” is the status quo.
Mark Zuckerberg: Basically, you take your information, you store it on some decentralized system and you have the choice of whether to login to different places and you’re not going through an intermediary, which is kind of like what you’re suggesting here–
Jonathan Zittrain: Yes.
Mark Zuckerberg: –in a sense. Okay, now, there’s a lot of things that I think would be quite attractive about that. You know, for developers one of the things that is really troubling about working with our system, or Google’s system for that matter, or having your services through Apple’s app store, is that you don’t want to have an intermediary between serving your– the people who are using your service and you, right, where someone can just say, “Hey, we as a developer have to follow your policy and if we don’t, then you can cut off access to the people we’re serving.” That’s kind of a difficult and troubling position to be in. I think developers–
Jonathan Zittrain: –you’re referring to a recent incident.
Mark Zuckerberg: No, well, I was– well, sure <laughter>
Mark Zuckerberg: But I think it underscores the– I think every developer probably feels this: People are using any app store but also login with Facebook, with Google; any of these services, you want a direct relationship with the people you serve.
Jonathan Zittrain: Yes.
Mark Zuckerberg: Now, okay, but let’s look at the flip side. So, what we saw in the last couple of years with Cambridge Analytica, was basically an example where people chose to take data that they– some of it was their data, some of it was data that they had seen from their friends, right? Because if you want to do things like making it so alternative services can build a competing newsfeed, then you need to be able to make it so that people can bring the data that they see you [ph?] within the system. Okay, they– basically, people chose to give their data to a developer who’s affiliated with Cambridge University, which is a really respected institution, and then that developer turned around and sold the data to the firm Cambridge Analytica, which is in violation of our policies. So, we cut off the developers’ access. And, of course, in a fully distributed system there would be no one who could cut off the developers’ access. So, the question is if you have a fully distributed system, it dramatically empowers individuals on the one hand, but it really raises the stakes and it gets to your questions around, well, what are the boundaries on consent and how people can really actually effectively know that they’re giving consent to an institution?
In some ways it’s a lot easier to regulate and hold accountable large companies like Facebook or Google, because they’re more visible, they’re more transparent than the long tail of services that people would choose to then go interact with directly. So, I think that this is a really interesting social question. To some degree I think this idea of going in the direction of blockchain authentication is less gated on the technology and capacity to do that. I think if you were doing fully decentralized Facebook, that would take massive computation, but I’m sure we could do fully decentralized authentication if we wanted to. I think the real question is do you really want that?
Jonathan Zittrain: Yes.
Mark Zuckerberg: Right? And I think you’d have more cases where, yes, people would be able to not have an intermediary, but you’d also have more cases of abuse and the recourse would be much harder.
Jonathan Zittrain: Yes. What I hear you saying is people as they go about their business online are generating data about themselves that’s quite valuable, if not to themselves, to others who might interact with them. And the more they are empowered, possibly through a distributed system, to decide where that data goes, with whom they want to share it, the more they could be exposed to exploitation. This is a genuine dilemma–
Mark Zuckerberg: Yeah, yeah.
Jonathan Zittrain: –because I’m a huge fan of decentralization.
Mark Zuckerberg: Yeah, yeah.
Jonathan Zittrain: But I also see the problem. And maybe one answer is there’s some data that’s just so toxic there’s no vessel we should put it in; it might eat a hole through it or something, metaphorically speaking. But, then again, innocuous data can so quickly be assembled into something scary. So, I don’t know if the next election–
Mark Zuckerberg: Yeah. I mean, I think in general we’re talking about data being assembled at large scale into something that means something different from what the individual data points mean.
Jonathan Zittrain: Yes.
Mark Zuckerberg: And I think that’s the whole challenge here. But I philosophically agree with you that– I mean, I do think about the work that we’re doing as a decentralizing force in the world, right? A lot of the reason why I think people of my generation got into technology is because we believe that technology gives individuals power and isn’t massively centralizing. Now, we’ve built a bunch of big companies in the process, but I think what has largely happened is that individuals today have more voice, more ability to affiliate with who they want and stay connected with people, more ability to form communities in ways that they couldn’t before, and I think that’s massively empowering to individuals, and that’s philosophically kind of the side that I tend to be on. So, that’s why I’m thinking about going back to decentralized or blockchain authentication. That’s why I’m kind of bouncing around how you could potentially make this work, because my orientation is to try to go in that direction.
Jonathan Zittrain: Yes.
Mark Zuckerberg: An example where I think we’re generally a lot closer to going in that direction is encryption. I mean, one of the really big debates today is basically what are the boundaries on where you would want a messaging service to be encrypted. And there are all these benefits from a privacy and security perspective, but, on the other hand, one of the big issues that we’re grappling with is content governance, and where the line is between free expression– and, I suppose, privacy– on one side, and safety on the other, as people do really bad things, right, some of the time. And I think people rightfully have an expectation of us that we’re going to do everything we can to stop terrorists from recruiting people, or people from exploiting children, or doing different things. And moving in the direction of making these systems more encrypted certainly reduces some of the signals that we would have access to in order to do some of that really important work.
But here we are, right? We’re sitting in this position where we’re running WhatsApp, which is the largest end-to-end encrypted messaging service in the world, and we’re running Messenger, which is another one of the largest messaging systems in the world, where encryption is an option but isn’t the default. I don’t think long term it really makes sense to be running different systems with very different policies on this. I think this is sort of a philosophical question where you want to figure out where you want to be on it. And, so, my question for you– now,
I’ll talk about how I’m thinking about this– is, all right, if you were in my position and you got to flip a switch– which is probably too glib, because there’s a lot of work that goes into this– and go in one direction for both of those services, how would you think about that?
Jonathan Zittrain: Well, the question you’re putting on the table, which is a hard one, is “Is it okay”– and let’s just take the simple case– “for two people to communicate with each other in a way that makes it difficult for any third party to casually listen in?” Is that okay? And I think that the way we normally answer that question is kind of a form of what you might call status-quo-ism, which is not satisfying. It’s whatever has been the case is—
Mark Zuckerberg: Yeah, yeah.
Jonathan Zittrain: –whatever has been the case is what should stay the case.
Mark Zuckerberg: Yeah.
Jonathan Zittrain: And, so, for WhatsApp, it’s like right now WhatsApp, as I understand it, you could correct me if I’m wrong, is pretty hard to get into if–
Mark Zuckerberg: It’s fully end-to-end encrypted.
Jonathan Zittrain: Right. So, if Facebook gets handed a subpoena or a warrant or something from name-your-favorite-country–
Mark Zuckerberg: Yeah.
Jonathan Zittrain: –and you’re just like, “Thank you for playing. We have nothing to–” <overlapping conversation>
Mark Zuckerberg: Oh, yeah, we’ve had employees thrown in jail because we have gotten court orders that we have to turn over data– that we probably wouldn’t anyway, but we can’t, because it’s encrypted.
Jonathan Zittrain: Yes. And then, on the other hand– and this is not as clean as it could be in theory– Messenger is sometimes encrypted, sometimes not. If it doesn’t happen to have been encrypted by the users, then that subpoena could work, and, more than that, there could start to be some automated systems– either on Facebook’s own initiative or under pressure from governments in the general case, not a specific warrant– to say, “Hey, if the following phrases appear, if there’s some telltale that says ‘this is somebody going after a kid for exploitation,’ it should be forwarded up.” If that’s already happening, and we can produce x-number of people who have been identified and a number of crimes averted that way, who wants to be the person to be like, “Lock it down! We don’t want any more of that!” But I guess, to put myself now to your question: when I look out over years rather than just weeks or months, the ability to casually peek at any conversation going on between two people, or among a small group of people, or even to have a machine do it for you– so you can just set your alert list, crudely speaking, and get stuff back– it’s always trite to call something Orwellian, but it makes Orwell look like a piker. I mean, it seems like a classic case where the next sentence would be “What could possibly go wrong?”
Jonathan Zittrain: And we can fill that in! And it does mean, though, I think, that we have to confront the fact that if we choose to allow that kind of communication, then there are going to be crimes unsolved that could’ve been solved. There are going to be crimes not prevented that could have been prevented. And the only thing that blunts it a little is that it is not really all or nothing. The modern surveillance states of note in the world have a lot of arrows in their quivers. Just being able to darken your door and demand surveillance of a certain kind might be the first thing they would go to, but they’ve got a Plan B, and a Plan C, and a Plan D. And I guess it really gets to: what’s your threat model? If you think everybody is kind of a threat– think about the battles over copyright 15 years ago: everybody is a potential infringer; all they have to do is fire up Napster– then you’re wanting some massive technical infrastructure to prevent the bad thing. If what you’re thinking instead is that there are a few really bad apples, and they tend– when they congregate online or otherwise with one another– to identify themselves, then we might just have to send somebody near their house to listen with a cup at the window, metaphorically speaking. That’s a different threat model, and you might not need it.
Mark Zuckerberg: Yeah.
Jonathan Zittrain: Is that getting to an answer to your question?
Mark Zuckerberg: Yeah, and I think I generally agree. I mean, I’ve already said publicly that my inclination is to move these services in the direction of being all encrypted, at least the private communication version. I basically think, if you want to kind of talk in metaphors, messaging is like people’s living room, right? And we definitely don’t want a society where there’s a camera in everyone’s living room watching the content of those conversations.
Jonathan Zittrain: Even as we’re now– I mean, it is 2019; people are happily putting cameras in their living rooms.
Mark Zuckerberg: That’s their choice, but I guess they’re putting cameras in their living rooms, well, for a number of reasons, but–
Jonathan Zittrain: And Facebook has a camera that can go into your living room– <laughter>
Mark Zuckerberg: That is, I guess–
Jonathan Zittrain: I just want to be clear.
Mark Zuckerberg: Yeah, although that would be encrypted in this world.
Jonathan Zittrain: Encrypted between you and Facebook!
Mark Zuckerberg: No, no, no. I think– but it also–
Jonathan Zittrain: Doesn’t it have like a little Alexa functionality, too?
Mark Zuckerberg: Well, Portal works over Messenger. So, if we go towards encryption on Messenger, then that’ll be fully encrypted, which I think, frankly, is probably what people want.
Jonathan Zittrain: Yeah.
Mark Zuckerberg: The other model, besides the living room, is the town square, and that, I think, just has different social norms and different policies that should be at play around it. But I do think that these things are very different, right? You may end up in a world where the town square is a fully decentralized or fully encrypted thing, but it’s not clear what value there is in encrypting something that’s public content anyway, or very broad.
Jonathan Zittrain: But, now, you were put to it pretty hard in that as I understand it there’s now a change to how WhatsApp works, that there’s only five forwards permitted.
Mark Zuckerberg: Yeah, so, this is a really interesting point, right? So, when people talk about how encryption will darken some of the signals that we’ll be able to use, you know, both for potentially providing better services and for preventing harm. One of the– I guess, somewhat surprising to me, findings of the last couple of years of working on content governance and enforcement is that it often is much more effective to identify fake accounts and bad actors upstream of them doing something bad by patterns of activity rather than looking at the content.
Jonathan Zittrain: So-called meta data.
Mark Zuckerberg: Sure.
Jonathan Zittrain: “I don’t know what they’re saying, but here’s who they’re calling” kind of thing.
Mark Zuckerberg: Yeah, or just like they– this account doesn’t seem to really act like a person, right?
And I guess as AI gets more advanced and you build these adversarial networks– or generative adversarial networks– you’ll get to a place where you have AI that can probably more effectively–
Jonathan Zittrain: Go undercover. Mimic– act like another person– <overlapping conversation>
Mark Zuckerberg: –for a while.
Mark Zuckerberg: Yeah. But, at the same time, you’ll be building up contrary AI on the other side that is better at identifying AIs that are doing that. But this has certainly been the most effective tactic across a lot of the areas where we’ve needed to focus on preventing harm. The ability to identify fake accounts– under any category of issue that you’re talking about, a huge amount of the issues downstream come from fake accounts, or from people who are clearly acting in some malicious or not-normal way. You can identify a lot of that without necessarily even looking at the content itself. And if you have to look at a piece of content, then in some cases you’re already late, because the content exists and the activity has already happened. So, that’s one of the things that makes me feel like encryption for these messaging services is really the right direction to go, because it’s a very pro-privacy and pro-security move to give people that control and assurance, and I’m relatively confident that, even though you are losing some tools on the finding-harmful-content side of the ledger, I don’t think at the end of the day that those are going to end up being the most important tools–
Jonathan Zittrain: Yes.
Mark Zuckerberg: –for finding most of the–
Jonathan Zittrain: But now connect it up quickly to the five forwards thing.
Mark Zuckerberg: Oh, yeah, sure. So, that gets down to: if you’re not operating on a piece of content directly, you need to operate on patterns of behavior in the network. And what we basically found was that there weren’t that many good uses for people forwarding things more than five times, except to basically spam or blast stuff out. It was being disproportionately abused. So, you end up thinking about different tactics when you’re not operating on content specifically; you end up thinking about patterns of usage more.
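The forward cap Zuckerberg describes is an enforcement decision made purely on message metadata, never on the encrypted body. A minimal sketch of that pattern in Python; the field names and client-side check are illustrative assumptions, not WhatsApp's actual protocol, and only the five-forward limit itself comes from the conversation:

```python
# Hypothetical sketch: enforce a forward cap using only a metadata counter.
# The encrypted body is never inspected; the decision uses metadata alone.

FORWARD_LIMIT = 5  # WhatsApp's publicly reported cap on forwards

def try_forward(message: dict) -> bool:
    """Allow a forward only while the message's forward count is under the cap."""
    if message.get("forward_count", 0) >= FORWARD_LIMIT:
        return False  # refuse: repeated forwarding looks like bulk/broadcast behavior
    message["forward_count"] = message.get("forward_count", 0) + 1
    return True

msg = {"body": "<encrypted payload>", "forward_count": 0}
results = [try_forward(msg) for _ in range(7)]  # first five succeed, the rest are refused
```

The point mirrors the transcript: once content is opaque to the platform, enforcement shifts to patterns of usage, here reduced to a simple per-message counter.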
Jonathan Zittrain: Well, spam, I get and that– I’m always in favor of things that reduce spam. However, you could also say the second category was just to spread content. You could have the classic, I don’t know, like Les Mis, or Paul Revere’s ride, or Arab Spring-esque in the romanticized vision of it: “Gosh, this is a way for people to do a tree,” and pass along a message that “you can’t stop the signal,” to use a Joss Whedon reference. You really want to get the word out. This would obviously stop that, too.
Mark Zuckerberg: Yeah, and then I think the question is you’re just weighing whether you want this private communication tool where the vast majority of the use– and the reason why it was designed– is just one-on-one. There’s a large amount of groups that people communicate in, but it’s a pretty small edge case of people operating this with, like– you have a lot of different groups, and you’re trying to organize something, and you’re almost hacking public-content-type or public-sharing-type utility into an encrypted space. And, again, there I think you start getting into “Is this the living room or is this the town square?” And when people start trying to use tools that are designed for one thing to get around what I think the social norms are for the town square, that’s when I think you probably start to have some issues. This is not– we’re not done addressing these issues. There’s a lot more to think through on this–
Jonathan Zittrain: Yeah.
Mark Zuckerberg: –but that’s the general shape of the problem that at least I perceive from the work that we’re doing.
Jonathan Zittrain: Well, without any particular segue, let’s talk about fake news.
Jonathan Zittrain: So, insert your favorite segue here. There’s some choice or at least some decision that gets made to figure out what’s going to be next in my newsfeed when I scroll up a little more.
Mark Zuckerberg: Mm-hm.
Jonathan Zittrain: And in the last conversation bit, we were talking about how much we’re looking at content versus telltales and metadata, things that surround the content.
Mark Zuckerberg: Yeah.
Jonathan Zittrain: For knowing about what that next thing in the newsfeed should be, is it a valid desirable material consideration, do you think, for a platform like Facebook to say is the thing we are about to present true, whatever true means?
Mark Zuckerberg: Well, yes, because, again, getting at trying to serve people: people tell us that they don’t want fake content. I mean, I don’t know anyone who wants fake content. I think the whole issue is, again, who gets to decide. Broadly speaking, I don’t know any individual who would sit there and say, “Yes, please show me things that you know are false and that are fake.” People want good-quality content and information. That said, I don’t really think that people want us to be deciding what is true for them, and people disagree on what is true. And, like, truth is– I mean, there are different levels. When someone is telling a story, maybe the meta arc is talking about something that is true, but the facts that were used in it are wrong in some nuanced way– but, like, it speaks to some deeper experience. Well, was that true or not? And do people want that disqualified from being shown to them? I think different people are going to come to different places on this.
Now, I’ve been very sensitive on this, which– like, we really want to make sure that we’re showing people high-quality content and information. We know that people don’t want false information. So we’re building quite advanced systems to make sure that we’re emphasizing and showing stuff that is going to be high quality. But the big question is where you get the signal on what the quality is. So the kind of initial v1 of this was working with third-party fact checkers.
Right, I believe very strongly that people do not want Facebook to be– and that we should not be– the arbiters of truth, deciding what is correct for everyone in society. I think people already generally think that we have too much power in deciding what content is good. I tend to share that concern, and we should talk separately about some of the governance work we’re doing to try to bring more independent oversight into that.
Jonathan Zittrain: Yes.
Mark Zuckerberg: But let’s put that in a box for now and just say that with those concerns in mind, I’m definitely not looking to try to take on a lot more in terms of also deciding in addition to enforcing all the content policies, also deciding what is true for everyone in the world. Okay, so v.1 of that is we’re going to work with–
Jonathan Zittrain: Truth experts.
Mark Zuckerberg: We’re working with fact checkers.
Jonathan Zittrain: Yeah.
Mark Zuckerberg: And they’re experts; basically, there’s a whole field of how you go and assess certain content. They’re accredited. People can disagree with the leaning of some of these organizations.
Jonathan Zittrain: <laughter> Who accredits fact checkers?
Mark Zuckerberg: <laughs> The Poynter Institute for Journalism.
Jonathan Zittrain: I should apply for my certification.
Mark Zuckerberg: You may.
Jonathan Zittrain: Okay, good.
Mark Zuckerberg: You’d probably get it, but you have to– You’d have to go through the process.
Mark Zuckerberg: The issue there is there aren’t enough of them, right? There’s obviously a lot of information shared every day, and there just aren’t a lot of fact checkers. So then the question is, okay, that is probably–
Jonathan Zittrain: But the portion– You’re saying the food is good, it’s just the portions are small. But the food is good.
Mark Zuckerberg: I think in general, but so you build systems, which is what we’ve done especially leading up to elections where I think are some of the most fraught times around this where people really are aggressively trying to spread misinformation.
Jonathan Zittrain: Yes.
Mark Zuckerberg: You build systems that prioritize content that seems like it’s going viral, because you want to reduce the prevalence of how widespread the stuff gets– so that way the fact checkers have tools to prioritize what they need to go look at. But it’s still getting to a relatively small percent of the content. So I think the real thing that we want to get to over time is more of a crowdsourced model, where it’s not that people are trusting some basic set of experts who are accredited but are in some kind of lofty institution somewhere else. It’s like– if you get enough data points from within the community of people reasonably looking at something and assessing it over time, then the question is whether you can compound that together into something that is a strong enough signal that we can then use it.
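The crowdsourced model Zuckerberg gestures at amounts to compounding many independent assessments into one quality signal, and only trusting the aggregate once enough data points accumulate. A toy illustration in Python; the rating scale and minimum-rater threshold are assumptions for the sketch, not any real Facebook system:

```python
def crowd_quality_signal(ratings, min_raters=50):
    """Compound individual credibility ratings (each in [0, 1]) into one signal.

    Returns None until enough independent data points have accumulated,
    since a thin sample is too easy for a motivated mob to astroturf.
    """
    if len(ratings) < min_raters:
        return None  # not yet a strong enough signal to act on
    return sum(ratings) / len(ratings)

# With only two raters, there's no usable signal yet.
assert crowd_quality_signal([0.9, 0.8]) is None
```

A simple average like this is exactly what Zittrain's astroturfing worry targets: a real system would also have to weight raters by track record or independence rather than counting every vote equally.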
Jonathan Zittrain: Kind of like the old-school Slashdot moderation system–
Mark Zuckerberg: Yeah.
Jonathan Zittrain: With only the worry that if the stakes get high enough, somebody wants to Astroturf that.
Mark Zuckerberg: Yes.
Jonathan Zittrain: I’d be–
Mark Zuckerberg: There are a lot of questions here, which is why I’m not sitting here and announcing a new program.
Mark Zuckerberg: But what I’m saying is this is, like,–
Jonathan Zittrain: Yeah,
Mark Zuckerberg: This is the general direction that I think we should be thinking about when we have– and I think that there’s a lot of questions and–
Jonathan Zittrain: Yes.
Mark Zuckerberg: And we’d like to run some tests in this area to see whether this can help out. Which would be upholding the principles which are that we want to stop–
Jonathan Zittrain: Yes.
Mark Zuckerberg: The spread of misinformation.
Jonathan Zittrain: Yes.
Mark Zuckerberg: Knowing that no one wants misinformation. And the other principle, which is that–
A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.
The investigation of Facebook data-gathering practices began in March 2016.
The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.
The decision does not yet have legal force, however, and Facebook has said it’s appealing. The BBC reports that the company has a month to challenge the decision before it comes into force in Germany.
In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt asserts that consent to data processing must be voluntary, so cannot be made a precondition of using Facebook’s service.
The company must therefore “adapt its terms of service and data processing accordingly”, it warns.
“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.
“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.
“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”
Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.
“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.
“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”
“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added.
Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.
One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.
Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)
The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent it “must be substantially restricted”, suggesting a number of different criteria are feasible — such as restrictions including on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.
Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.
While there’s lots to concern Facebook in this decision — the company, it recently emerged, has plans to unify the technical infrastructure of its messaging platforms — it isn’t all bad. Or, rather, it could have been worse.
The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”
Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (as indeed, there is a current challenge to so called ‘forced consent‘ under Europe’s GDPR); or indeed under competition law.
“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.
It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.
On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.
In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”
On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)
Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent of daily active users and more than 80 per cent of monthly active users.
It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.
Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.
The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.
So, as ever, Facebook is underlining that its EU regulator of choice is the Irish Data Protection Commission.
“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.
“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”
The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company claiming now that it needs to pool user data across services to identify abusive behavior online; and disable accounts linked to terrorism; child exploitation; and election interference.
So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems — many of which have scaled to become major societal issues in recent years at least in part as a consequence of the size and scale of Facebook’s social empire — as arguments for defending the size and operational sprawl of its business. Go figure.
In a statement provided to us last month ahead of the ruling, Facebook said: “Since 2016, we have been in regular contact with the Bundeskartellamt and have responded to their requests. As we outlined publicly in 2017, we disagree with their views and the conflation of data protection laws and antitrust laws, and will continue to defend our position.”
An investigation into the WhatsApp-Facebook data-sharing by the UK’s data watchdog was only closed last year after Facebook committed not to link user data across the two services until it could do so in a way that complies with the GDPR. Although the company does still share data for business intelligence and security purposes — which has drawn continued scrutiny from the French data watchdog.
On the links between privacy and competition law, the EU’s data protection supervisor, Giovanni Buttarelli, also told us last fall that the bloc is looking to evolve its regulatory regime to respond to the rise of digital monopolies — suggesting joint enforcement and increased co-operation between privacy and competition regulators will be a key part of the change.
There has never been a better time to start, join or fund a startup as a student.
Young founders who want to start companies while still in school have a growing number of resources built just for them, and students who want to learn how to build companies can apply to an increasing number of fast-track programs that let them gain valuable early-stage operating experience. The energy around student entrepreneurship today is incredible. I’ve been immersed in this community as an investor and adviser for some time now, and, to say the least, I’m continually blown away by what the next generation of innovators is dreaming up (from Analytical Space’s global data relay service for satellites to Brooklinen’s reinvention of the luxury bed).
First, let’s look at student founders and why they’re important. Student entrepreneurs have long been an important foundation of the startup ecosystem. Many students wrestle with how best to learn while in school — some students learn best through lectures, while more entrepreneurial students like author Julian Docks find it best to leave the classroom altogether and build a business instead.
Indeed, some of our most iconic founders are Microsoft’s Bill Gates and Facebook’s Mark Zuckerberg, both student entrepreneurs who launched their startups at Harvard and then dropped out to build their companies into major tech giants. A sample of the current generation of marquee companies founded on college campuses include Snap at Stanford ($29B valuation at IPO), Warby Parker at Wharton (~$2B valuation), Rent The Runway at HBS (~$1B valuation), and Brex at Stanford (~$1B valuation).
Some of today’s most celebrated tech leaders built their first ventures while in school — even if some student startups fail, the critical first-time founder experience is an invaluable education in how to build great companies. Perhaps the best example of this that I could find is Drew Houston at Dropbox (~$9B valuation at IPO), who previously founded an edtech startup at MIT that, in his words, provided a “great introduction to the wild world of starting companies.”
Student founders are everywhere, but the highest concentration of venture-backed student founders can be found at just 5 universities. Based on venture fund portfolio data from the last six years, Harvard, Stanford, MIT, UPenn, and UC Berkeley have produced the highest number of student-founded companies that went on to raise $1 million or more in seed capital. Some prospective students will even enroll in a university specifically for its reputation of churning out great entrepreneurs. This is not to say that great companies are not being built out of other universities, nor does it mean students can’t find resources outside a select number of schools. As you can see later in this essay, there are a number of new ways students all around the country can tap into the startup ecosystem. For further reading, PitchBook produces an excellent report each year that tracks where all entrepreneurs earned their undergraduate degrees.
Student founders have a number of new media resources to turn to. Email newsletters focused on student entrepreneurship, like Justine and Olivia Moore’s Accelerated and Kyle Robertson’s StartU, offer new channels for young founders to reach large audiences. Justine and Olivia, the minds behind Accelerated, have a lot of street cred — they launched Stanford’s on-campus incubator Cardinal Ventures before landing as investors at CRV.
StartU goes above and beyond to be a resource to the founders it profiles, helping to connect them with investors (it’s active at 12 universities), and runs a top-notch podcast hosted by Editor-in-Chief Johnny Hammond. My bet is that traditional media will point a larger spotlight at student entrepreneurship going forward.
New pools of capital are also available that are specifically for student founders. There are four categories that I call special attention to:
- University-affiliated accelerator programs
- University-affiliated angel networks
- Professional venture funds investing at specific universities
- Professional venture funds investing through student scouts
While it is difficult to estimate exactly how much capital has been deployed by each, there is no denying that there has been an explosion in the number of programs that address the pre-seed phase. A sample of the programs available at the Top 5 universities listed above is in the graphic below — listing every resource at every university would be difficult as there are so many.
One alumni-centric fund to highlight is the Alumni Ventures Group, which pools LP capital from alumni at specific universities, then launches individual venture funds that invest in founders connected to those universities (e.g. students, alumni, professors, etc.). Through this model, they’ve deployed more than $200M per year! Another highlight has been student scout programs — which vary in the degree of autonomy and capital invested — but essentially empower students to identify and fund high-potential student-founded companies for their parent venture funds. On campuses with a large concentration of student founders, it is not uncommon to find student scouts from as many as 12 different venture funds actively sourcing deals (as is made clear from David Tao’s analysis at UC Berkeley).
In my opinion, the two institutions that have the most expansive line of sight into the student entrepreneurship landscape are First Round’s Dorm Room Fund and General Catalyst’s Rough Draft Ventures. Since 2012, these two funds have operated a nationwide network of student scouts that have invested $20K–$25K checks into companies founded by student entrepreneurs at 40+ universities. “Scout” is a loose term and doesn’t do it justice — the student investors at these two funds are almost entirely autonomous, have built their own platform services to support portfolio companies, and have launched programs to incubate companies built by female founders and founders of color. Another student-run fund worth noting that has reach beyond a single region is Contrary Capital, which raised $2.2M last year. They do a particularly great job of reaching founders at a diverse set of schools — their network of student scouts is active at 45 universities and has spoken with 3,000 founders per year since getting started. Contrary is also testing out what they describe as a “YC for university-based founders” — in their first cohort, 100% of companies raised a pre-seed round after Contrary’s demo day. An even more recently launched organization is The MBA Fund, which caters to founders from the business schools at Harvard, Wharton, and Stanford. While super exciting, Contrary and The MBA Fund launched only recently and manage portfolios that are not yet large enough for analysis.
Over the last few months, I’ve collected and cross-referenced publicly available data from both Dorm Room Fund and Rough Draft Ventures to assess the state of student entrepreneurship in the United States. Companies were pulled from each fund’s portfolio page, then checked against Crunchbase for amount raised, accelerator participation, and other metrics. If you’d like to sift through the data yourself, feel free to ping me — my email can be found at the end of this article. To be clear, this does not represent the full scope of investment activity at either fund — many companies in the portfolios of both funds remain confidential and unlisted for good reasons (e.g. startups working in stealth). In addition, data for early stage companies is notoriously variable in quality, even with Crunchbase. You should read these insights as directional only, given the debatable confidence interval. Still, the data is interesting and gives good indicators of the health of student entrepreneurship today.
Dorm Room Fund and Rough Draft Ventures have invested in 230+ student-founded companies that have gone on to raise nearly $1 billion in follow-on capital. These funds have invested in a diverse range of companies, from govtech (e.g. mark43, raised $77M+ and FiscalNote, raised $50M+) to space tech (e.g. Capella Space, raised ~$34M). Several portfolio companies have had successful exits, such as crypto startup Distributed Systems (acquired by Coinbase) and social networking startup tbh (acquired by Facebook). While it is too early to evaluate the success of these funds on a returns basis (both were launched just 6 years ago), we can get a sense of success by evaluating the rates by which portfolio companies raise additional capital. Taken together, 34% of DRF and RDV companies in our data set have raised $1 million or more in seed capital. For a rough comparison, CB Insights cites that 40% of YC companies and 48% of Techstars companies successfully raise follow-on capital (defined as anything above $750K). Certainly within the ballpark!
Dorm Room Fund and Rough Draft Ventures companies in our data set have an 11–12% rate of survivorship to Series A. As a benchmark, a previous partner at Y Combinator shared that 20% of their accelerator companies raise Series A capital (YC declined to share the official figure, but it’s likely a stat that is increasing given their new Series A support programs. For further reading, check out YC’s reflection on what they’ve learned about helping their companies raise Series A funding). In any case, DRF and RDV’s numbers should be taken with a grain of salt, as the average age of their portfolio companies is very low and raising Series A rounds generally takes time. Ultimately, it is clear that DRF and RDV are active in the earlier (and riskier) phases of the startup journey.
Dorm Room Fund and Rough Draft Ventures send 18–25% of their portfolio companies to Y Combinator or Techstars. Given YC’s 1.5% acceptance rate as reported in Fortune, this is quite significant! Internally, these two funds offer founders an opportunity to participate in mock interviews with YC and Techstars alumni, as well as tap into their communities for peer support (e.g. advice on pitch decks and application content). As a result, Dorm Room Fund and Rough Draft Ventures regularly send cohorts of founders to these prestigious accelerator programs. Based on our data set, 17–20% of DRF and RDV companies that attend one of these accelerators end up raising Series A venture financing.
Dorm Room Fund and Rough Draft Ventures don’t invest in the same companies. When we take a deeper look at one specific ecosystem where these two funds have been equally active over the last several years — Boston — we actually see that the degree of investment overlap for companies that have raised $1M+ seed rounds sits at 26%. This suggests that these funds are either a) seeing different dealflow or b) have widely different investment decision-making.
Dorm Room Fund and Rough Draft Ventures should not just be measured on a returns basis today, as it’s too early. I hypothesize that DRF and RDV are actually encouraging more entrepreneurial activity in the ecosystem (more students decide to start companies while in school) as well as improving long-term founder outcomes amongst students they touch (portfolio founders build bigger and more successful companies later in their careers). As more students start companies, there’s likely a positive feedback loop where there’s increasing peer pressure to start a company or lean on friends for founder support (e.g. feedback, advice, etc). Both of these subjects warrant additional study, but it’s likely too early to conduct these analyses today.
Dorm Room Fund and Rough Draft Ventures have impressive alumni that you will want to track. 1 in 4 alumni partners are founders, and 29% of these founder alumni have raised $1M+ seed rounds for their companies. These include Anjney Midha’s augmented reality startup Ubiquity6 (raised $37M+), Shubham Goel’s investor-focused CRM startup Affinity (raised $13M+), Bruno Faviero’s AI security software startup Synapse (raised $6M+), Amanda Bradford’s dating app The League (raised $2M+), and Dillon Chen’s blockchain startup Commonwealth Labs (raised $1.7M). It makes sense to me that alumni from these communities who decide to start companies have an advantage over their peers — they know what good companies look like and they can tap into powerful networks of young talent / experienced investors.
Beyond Dorm Room Fund and Rough Draft Ventures, some venture capital firms focus on incubation for student-founded startups. Credit should first be given to Lightspeed for producing the amazing Summer Fellows bootcamp experience for promising student founders — after all, Pinterest was built there! Jeremy Liew gives a good overview of the program through his sit-down interview with Afterbox’s Zack Banack. Based on a study they conducted last year, 40% of Lightspeed Summer Fellows alumni are currently active founders. Pear Ventures also has an impressive summer incubator program where 85% of its companies successfully complete a fundraise. Index Ventures is the latest to build an incubator program for student founders, and even accepts founders who want to work on an idea part-time while completing a summer internship.
Let’s now look at students who want to join a startup before founding one. Venture funds have historically looked to tap students for talent, and are expanding the engagement lifecycle. The longest running programs include Kleiner Perkins’ KP Fellows and True Ventures’ TEC Fellows, which focus on placing the next generation’s most promising product managers, engineers, and designers into the portfolio companies of their parent venture funds.
There’s also the secretive Greylock X, a referral-based hand-picked group of the best student engineers in Silicon Valley (among their impressive alumni are founders like Yasyf Mohamedali and Joe Kahn, the folks behind First Round-backed Karuna Health). As these programs have matured, these firms have recognized the long-run value of engaging the alumni of their programs.
More and more alumni are “coming back” to the parent funds as entrepreneurs, like KP Fellow Dylan Field of Figma (which is now hosting a KP Fellow itself, closing the loop!). Based on their latest data, 10% of KP Fellows alumni are founders — that’s a lot given that their community has grown to 500! This helps explain why Kleiner Perkins has created a structured path to $100K in seed funding for companies founded by KP Fellow alumni. It looks like venture funds are beginning to invest in student programs as part of their larger platform strategy, which can have a real impact over the long term (for further reading, see this analysis of platform strategy outcomes by USV’s Bethany Crystal).
Venture funds are doubling down on student talent engagement — in just the last 18 months, 4 funds have launched student programs. It’s encouraging to see new funds follow in the footsteps of First Round, General Catalyst, Kleiner Perkins, Greylock, and Lightspeed. In 2017, Accel launched their Accel Scholars program to engage top talent at UC Berkeley and Stanford. In 2018, we saw 8VC Fellows, NEA Next, and Floodgate Insiders all launch, targeting elite universities outside of Silicon Valley. Y Combinator implemented Early Decision, which allows student founders to apply one batch early to help with academic scheduling. Most recently, at the start of 2019, First Round launched the Graduate Fund (staffed by Dorm Room Fund alumni) to invest in founders who are recent graduates or young alumni.
Given more time, I’d love to study the rates by which student founders start another company following investments from student scout funds, as well as whether or not they’re more successful in those ventures. In any case, this is an escalation in the number of venture funds that have started to get serious about engaging students — both for talent and dealflow.
Student entrepreneurship 2.0 is here. There are more structured paths to success for students interested in starting or joining a startup. Founders have more opportunities to garner press, seek advice, raise capital, and more. Venture funds are increasingly leveraging students to help improve the three F’s — finding, funding, and fixing. In my view, it is becoming increasingly important for venture funds to gain mindshare amongst the next generation of founders and operators early, while they are still in school.
I can’t wait to see what’s next for student entrepreneurship in 2019. If you’re interested in digging in deeper (I’m human — I’m sure I haven’t covered everything related to student entrepreneurship here) or learning more about how you can start or join a startup while still in school, shoot me a note at firstname.lastname@example.org. A massive thanks to Phin Barnes, Rei Wang, Chauncey Hamilton, Peter Boyce, Natalie Bartlett, Denali Tietjen, Eric Tarczynski, Will Robbins, Jasmine Kriston, Alicia Lau, Johnny Hammond, Bruno Faviero, Athena Kan, Shohini Gupta, Alex Immerman, Albert Dong, Phillip Hua-Bon-Hoa, and Trevor Sookraj for your incredible encouragement, support, and insight during the writing of this essay.
Facebook’s lead data protection regulator in Europe has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms.
In a statement posted to its website late last week the Irish Data Protection Commission writes: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.”
Last week the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products, couching it as Facebook founder Mark Zuckerberg asserting control over acquisitions whose founders have since left the building.
WhatsApp’s founders left Facebook earlier, with Brian Acton departing in late 2017 and Jan Koum sticking it out until spring 2018. The pair reportedly clashed with Facebook execs over user privacy and differences over how to monetize the end-to-end encrypted platform.
Acton later said Facebook had coached him to tell European regulators assessing whether to approve the 2014 merger that it would be “really difficult” for the company to combine WhatsApp and Facebook user data.
In the event, Facebook went on to link accounts across the two platforms just two years after the acquisition closed. It was later hit with a $122M penalty from the European Commission for providing “incorrect or misleading” information at the time of the merger. Though Facebook claimed it had made unintentional “errors” in the 2014 filing.
A further couple of years on and Facebook has now graduated to seeking full platform unification of separate messaging products.
“We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private,” a spokesperson told us when we asked for a response to the NYT report. “We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks.”
“As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work,” the spokesperson added, confirming the substance of the NYT report.
There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention.
Facebook was forced to suspend marketing-related data flows in Europe. Though it has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes — leading the French data watchdog to issue a formal notice at the end of 2017 warning that the latter transfers also lack a legal basis.
A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes.
Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR).
Facebook only avoided a fine from the UK regulator because it froze data flows after the regulatory intervention. But the company clearly remains on watch — and any fresh moves to further integrate the platforms would trigger instant scrutiny, evidenced by the shot across the bows from the DPC in Ireland (Facebook’s international HQ is based in the country).
The 2016 WhatsApp-Facebook privacy u-turn also occurred prior to Europe’s GDPR coming into force. And the updated privacy framework includes a regime of substantially larger maximum fines for any violations.
Under the regulation watchdogs also have the power to ban companies from processing data. Which, in the case of a revenue-rich data-mining giant like Facebook, could be a far more potent disincentive than even a billion dollar fine.
We’ve reached out to Facebook for comment on the Irish DPC’s statement and will update this report with any response.
Here’s the full statement from the Irish watchdog:
While we understand that Facebook’s proposal to integrate the Facebook, WhatsApp and Instagram platforms is at a very early conceptual stage of development, the Irish DPC has asked Facebook Ireland for an urgent briefing on what is being proposed. The Irish DPC will be very closely scrutinising Facebook’s plans as they develop, particularly insofar as they involve the sharing and merging of personal data between different Facebook companies. Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal. It must be emphasised that ultimately the proposed integration can only occur in the EU if it is capable of meeting all of the requirements of the GDPR.
Facebook may be hoping that extending end-to-end encryption to Instagram as part of its planned integration effort, per the NYT report, could offer a technical route to stop any privacy regulators’ hammers from falling.
Though use of e2e encryption still does not shield metadata from being harvested. And metadata offers a rich source of inferences about individuals which, under EU law, would certainly constitute personal data. So even with robust encryption across Instagram, Facebook and WhatsApp, the unified messaging platforms could still collectively leak plenty of personal data to their data-mining parent.
Facebook’s apps are also not open source. So even WhatsApp, which uses the respected Signal Protocol for its e2e encryption, remains under its control — with no ability for external audits to verify exactly what happens to data inside the app (such as checking what data gets sent back to Facebook). Users still have to trust Facebook’s implementation but regulators might demand actual proof of bona fide messaging privacy.
Nonetheless, the push by Facebook to integrate separate messaging products onto a single unified platform could be a defensive strategy — intended to throw dust in the face of antitrust regulators as political scrutiny of its market position and power continues to crank up. Though it would certainly be an aggressive defence to more tightly knit separate platforms together.
But if the risk Facebook is trying to shrink is being forced, by competition regulators, to sell off one or two of its messaging platforms it may feel it has nothing to lose by making it technically harder to break its business apart.
At the time of the acquisitions of Instagram and WhatsApp Facebook promised autonomy to their founders. Zuckerberg has since changed his view, according to the NYT — believing integrating all three will increase the utility of each and thus provide a disincentive for users to abandon each service.
It may also be a hedge against any one of the three messaging platforms declining in popularity — furnishing the business with internal levers it can pull to try to artificially juice activity on a less popular app by encouraging cross-platform usage.
And given the staggering size of Facebook’s messaging empire, which sprawls to more than 2.5BN humans globally, user resistance to such centralized manipulation — having their buttons pushed to increase cross-platform engagement across Facebook’s business — may be futile without regulatory intervention.
Facebook founder Mark Zuckerberg may yet regret underestimating a UK parliamentary committee that’s been investigating the democracy-denting impact of online disinformation for the best part of this year — and whose repeat requests for facetime he’s just as repeatedly snubbed.
In the latest high gear change, reported in yesterday’s Observer, the committee has used parliamentary powers to seize a cache of documents pertaining to a US lawsuit to further its attempt to hold Facebook to account for misuse of user data.
Facebook’s oversight — or rather lack of it — where user data is concerned has been a major focus for the committee, as its enquiry into disinformation and data misuse has unfolded and scaled over the course of this year, ballooning in scope and visibility since the Cambridge Analytica story blew up into a global scandal this April.
The internal documents now in the committee’s possession are alleged to contain significant revelations about decisions made by Facebook senior management vis-a-vis data and privacy controls — including confidential emails between senior executives and correspondence with Zuckerberg himself.
This has been a key line of enquiry for parliamentarians. And an equally frustrating one — with committee members accusing Facebook of being deliberately misleading and concealing key details from it.
The seized files pertain to a US lawsuit that predates mainstream publicity around political misuse of Facebook data, with the suit filed in 2015, by a US startup called Six4Three, after Facebook removed developer access to friend data. (As we’ve previously reported Facebook was actually being warned about data risks related to its app permissions as far back as 2011 — yet it didn’t fully shut down the friends data API until May 2015.)
The core complaint is an allegation that Facebook enticed developers to create apps for its platform by implying they would get long-term access to user data in return. The claim is that, by later cutting off data access, Facebook effectively defrauded developers.
Since lodging the complaint, the plaintiffs have seized on the Cambridge Analytica saga to try to bolster their case.
And in a legal motion filed in May Six4Three’s lawyers claimed evidence they had uncovered demonstrated that “the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones”.
The startup used legal powers to obtain the cache of documents — which remain under seal on order of a California court. But the UK parliament used its own powers to swoop in and seize the files from the founder of Six4Three during a business trip to London when he came under the jurisdiction of UK law, compelling him to hand them over.
According to the Observer, parliament sent a serjeant at arms to the founder’s hotel — giving him a final warning and a two-hour deadline to comply with its order.
“When the software firm founder failed to do so, it’s understood he was escorted to parliament. He was told he risked fines and even imprisonment if he didn’t hand over the documents,” it adds, apparently revealing how Facebook lost control over some more data (albeit, its own this time).
In comments to the newspaper yesterday, DCMS committee chair Damian Collins said: “We are in uncharted territory. This is an unprecedented move but it’s an unprecedented situation. We’ve failed to get answers from Facebook and we believe the documents contain information of very high public interest.”
Collins later tweeted the Observer’s report on the seizure, teasing “more next week” — likely a reference to the grand committee hearing in parliament already scheduled for November 27.
But it could also be a hint the committee intends to reveal and/or make use of information locked up in the documents, as it puts questions to Facebook’s VP of policy solutions…