So if you could situate me back at the moment when this starts brewing for you in terms of having this task, is it in Switzerland or where are we?
What's going on?
So I started at Facebook a little over four years ago and when I joined I was working as the chief of staff for our COO, Sheryl [Sandberg].
And around the time of the [2016] election I was looking for my next opportunity, my next role.
And I saw the reaction to the election and the role that Facebook and misinformation had played.
I saw the way it was being talked about externally and I saw and felt the way that we were all dealing with it internally.
And a couple months later I was accompanying Sheryl to the World Economic Forum and we met with policymakers and publishers and other tech companies.
And what I was so struck by was the fact that everyone agreed that this was a huge problem.
But there was no real consensus on the solutions.
And when I say no consensus on the solutions, I don't mean on foreign interference or on fake accounts.
There's a lot of areas where there's lots of clear consensus and we just need to continue to do better to execute.
But I mean no clear consensus on the problem of what do you do if someone shares something on social media that isn't true.
And how do you reconcile what you do with a value of freedom of expression?
And what I was really struck with, coming back from the World Economic Forum, was how many incredibly smart people from around the world were worried about this and how much there was a need for getting clearer solutions and clearer strategies in place.
… What was the thinking inside of the company at that point about misinformation, and had there been any thinking about it?
There'd been a lot of thinking about types of abuse related to elections – about hacking, about malware.
Misinformation really wasn't on our radar the way that those other threats were before the 2016 elections, in the same way that I don't think they were on the radar of a lot of other companies, organizations, governments.
And so we were really shocked and struck by it and, frankly, a little shaken.
And you know, speaking personally, I joined Facebook because I care so much about the mission and because I believe so much in the good that can happen when people do come together.
And it was personally really hard to live through that period.
During the election, over the course of the 2016 election, there was a lot of news about misinformation.
I mean, there was famously: "The pope endorses Trump."
Do you remember that?
Absolutely.
I wasn't working on these issues at the time.
But absolutely, I do remember it.
So how is that not prompting more thinking even then?
It sounds like you're saying after the election happens you're thinking about misinformation all of a sudden.
But what about during the election?
Was there any kind of sense of like: Oh, my goodness, Facebook is getting polluted with misinformation.
Someone should do something about this.
There certainly was and there were people who were thinking about it.
You know, I wasn't yet in the role that I'm in now, so I wasn't the one directly thinking about it.
But in the lead-up to the election, as you're saying, there was increasingly a sense of some of these examples of hoaxes or false information spreading.
What I don't think there was a real awareness of, internally or externally, was the scope of the problem and the right course of action.
And that's where I think we've made a lot of progress since in saying, you know: How do we understand what the problem here is?
How do we define the problem?
How do we actually tackle it in a responsible way?
And how, over time, do we actually measure success?
How will we know if we've been more effective with the upcoming elections than we were with the last elections?
I mean, I've got to say it's a little bit surprising to hear what a surprise it was, in part because I'm a journalist.
And it was well known that Facebook was becoming one of the main distributors of information that American voters or voters all around the world were depending on as an information source.
So how could it be surprising that if you're becoming the world's information source that there may be a problem with misinformation?
There was certainly awareness that there could be problems related to news or quality of news.
There were a few one-off examples of misinformation and I think what matters here is the period that you're talking about.
Right?
So in the months leading up there was certainly more of a conversation about misinformation.
A year ahead of time, when we were thinking about what the biggest threats related to the elections were going to be and where we needed to invest to ensure we were doing all that we could, I don't think that misinformation was one of the top three concerns.
You know, I think there was a lot of concern about hacking and there was a lot of work done there.
There was a lot of concern about other issues where we made progress and invested.
And I think on the misinformation concern, the concern increased as the election approached.
And I think we all recognized afterwards that of all of the threats that we were considering, whether at Facebook or across other companies or civil society, we focused a lot on threats that weren't misinformation and underinvested in this one.
I mean, at the time of the election was there a way for a Facebook user to be able to discern between what was coming from a credible news source and a noncredible news source?
There is always a way for them to see who the news source is.
Now how they determine which news sources they believe are credible or not, you know, is an ongoing challenge and a challenge because different people would say that different sources are credible.
So we've increased since then a lot of the work that we're doing around transparency to give people more information about the sources that they're seeing – whether that's the admins behind those pages or whether it's additional information about the domain or publisher that's actually behind the article that's being distributed.
But again, I guess if Facebook had been a main information source for people online of news and information, was it easily discernible for a user at that time to know what could've been a hoax or what could've been a fake news source and what was actually a legitimate news source?
I think that the real question is: How do people, when they're seeing information, whether it's on Facebook or off of Facebook, how do they determine the credibility of that information?
But why are they determining it?
Why wouldn't Facebook determine it?
Well, the way that people are mostly using Facebook is to share with their friends and family.
So say you and I were friends on Facebook and you posted an article.
If we're friends and I trust you, then I might be more likely to trust something that you've shared because I'm not thinking about my trust in that publisher.
I'm thinking about my trust in you and our friendship.
And so what you're sharing with me on Facebook, just like what you would tell me if we were sitting down having lunch together, might be more meaningful to me.
And what we have done since the 2016 elections is invest more in making sure that people, before they share and then once they've shared and seen things in their News Feed, have more context around the actual sources behind that information.
What was your learning curve like in terms of what experience had you had as you came into this role dealing with a very significant problem?
What was your learning curve like at first?
I think the first real challenge, and to a certain extent it's still a challenge, is: What do we even mean when we're saying fake news or misinformation?
Because you're saying, you know, how can people tell if information is from a legitimate or not legitimate news source.
I bet if you were to make a list of sources that you think are legitimate and another person were to make a list of sources they think are legitimate there wouldn't be perfect overlap. Right?
Now there'd be some things that we'd all agree on.
If a source is, you know, a financially-motivated scammer, we'd all agree that that's a problem and that's, you know, something that we've made progress on.
If a source was actually, you know, foreign agents operating as if they were a publisher, obviously a problem, obviously something that we need to solve, obviously something we made progress on.
But if it's a news publisher, people are going to have different ideas about what they trust and not trust.
And for a given publisher what we see a lot of is some information is accurate, some information is sensationalized, and some information is in some cases fully fabricated or erroneous and false.
And so the lack of clear definitions around this problem is part of what makes it really, really hard to tackle.
And so I mean, what was the scramble like internally here?
Is it Sheryl just says: Here, Tessa, go handle this.
Like bring me in a little bit to what this moment was like for you personally and, you know, kind of what the tenor of the company was like at that point in time.
There'd been teams working on abuse for a long time, but for the most part the people focused on abuse were the teams specifically tasked with it.
And I think what really shifted was that we all recognized that this was a challenge that we had to face and solve together.
And so instead of it being a conversation that was happening among, you know, the teams that had been tasked with it, it became a conversation that we were all having and all wondering what role we as individuals had to play in the solutions.
And as teams were scaling up and new teams were either being created or growing, I talked to Sheryl and talked to other mentors about how I could get involved in helping to turn what was a problem we all agreed we had to tackle into clear solutions and strategies.
And I started by moving into the News Feed team to work as a product manager on the team specifically tasked with misinformation, which we define as information that's demonstrably false.
OK, and what did you learn there?
What happens there?
I mean, so they were already studying this problem?
When I joined, the team was already working on this problem.
And still, I mean, coming back to the challenge of definitions like, you know, we first had to figure out how are we going to define this broad problem space.
And so what we did is we thought about all the different teams at Facebook that were working on abuse and where we had gaps.
And we structured the different problems into three big categories.
So we talked about problematic actors like fake accounts, problematic behavior like coordinated inauthentic activity, and problematic content like misinformation or content that's false.
And what we said was OK, most of the time these problems don't manifest in silos.
They normally happen with some combination of bad actors, bad behavior, bad content.
But how can we build up teams who are experts in these areas and then create the structures that they need to be able to collaborate effectively in order to solve these problems?
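As an aside, here is a minimal sketch of that actors/behaviors/content framing expressed as a routing structure, so that a case spanning several categories reaches every specialist team. The category and team names are hypothetical assumptions for illustration, not Facebook's actual system.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class AbuseCategory(Enum):
    PROBLEMATIC_ACTOR = auto()     # e.g., fake accounts
    PROBLEMATIC_BEHAVIOR = auto()  # e.g., coordinated inauthentic activity
    PROBLEMATIC_CONTENT = auto()   # e.g., demonstrably false information

@dataclass
class AbuseCase:
    """A single case, which can span several categories at once."""
    case_id: str
    categories: set[AbuseCategory] = field(default_factory=set)

def route_case(case: AbuseCase) -> list[str]:
    """Return the specialist teams that should collaborate on this case.

    Team names are made up for this sketch.
    """
    routing = {
        AbuseCategory.PROBLEMATIC_ACTOR: "fake-accounts team",
        AbuseCategory.PROBLEMATIC_BEHAVIOR: "inauthentic-behavior team",
        AbuseCategory.PROBLEMATIC_CONTENT: "misinformation team",
    }
    return [routing[c] for c in case.categories]

# A case that mixes a fake account with false content goes to both teams.
case = AbuseCase("case-001", {AbuseCategory.PROBLEMATIC_ACTOR,
                              AbuseCategory.PROBLEMATIC_CONTENT})
print(route_case(case))
```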
So what we started with on the misinformation side was measuring, you know, how much of the misinformation is coming from what types of tactics, what types of actors or behaviors.
And what we saw was that a lot of it was financially motivated.
And the way that that was working was financially-motivated scammers would create articles on websites and then share the links on Facebook.
And their goal was to get people on Facebook to click on those links, because once they clicked, they would land on the websites where these scammers could monetize with ads.
And because we saw how much it was coming from financially-motivated scammers we said: OK, what is the full process by which that happens and how can we go after the actors and the behaviors and the tactics that they're using in order to reduce it?
And that's an area where we've made a lot of progress, but we also knew that there were a lot of other parts of the misinformation problem.
You know, there's misinformation that spreads in photos and videos that's not in links.
There's misinformation that we know is about politics, which has been a lot of the conversation.
But there's also misinformation about health and other topics.
And so while we continue to prioritize where we can have the most impact, we were also studying what the other challenges and areas were that we needed to be getting ahead of.
And I think what I've learned since then, you know, I came into this job asking myself: How long is it going to take us to solve this?
And the answer is this isn't a problem that you solve.
It's a problem that you contain, because there will always be people who are motivated to share misinformation, whether their motivations are financial or whether their motivations are political and ideological.
And misinformation existed long before social media, and it will exist well into the next generations of social media.
And what we have to do, what we have a responsibility to our community to do well, is to do everything we can to make it as hard as possible for that misinformation to thrive on Facebook, which it had been able to do in part because of the way that our platform worked.
In what respect?
I mean, that's interesting to me.
Like what were you discovering about how the platform worked that enabled misinformation to spread so easily?
Say you and I are friends and I share a link. Right?
And it's really, really sensational but you trust me and so you think: Look, if Tessa's sharing this then there's something here.
You might decide, because it's so sensational, that you definitely want to click on it and find out what happened.
Or you might say that you want to share it with your community because you're so shocked by it.
And if you click on it or if you share it, that was a signal to us that it was interesting and engaging.
And when people come to News Feed, we want them to connect with the information shared by their friends and family that's most meaningful and relevant to them.
And engaging.
And engaging because oftentimes engagement is correlated to what they find meaningful. Right?
If you read a full article it might be more meaningful to you than if you just scroll past it.
But if we look at those signals alone, those signals of how someone interacts with information in their News Feed, we could miss that sometimes the reason they were sharing it was that it was shocking and not true, or the reason they were clicking on it was that they were so surprised by what it was claiming.
And what we've learned is that there can be a gap between how people actually behave on Facebook and what they tell you after the fact about whether the information was actually meaningful to them.
If you look at clickbait for example, take one specific example.
People are really likely to click on clickbait headlines because clickbait headlines are designed to make people more likely to click on them.
But when we ask people what their main complaints were about News Feed, the number one complaint that we heard was clickbait.
So if we look at the engagement data alone, the algorithm on News Feed will prioritize information people are clicking on.
But once we understand that that's not actually creating a good experience for people and it's not what they want, we recognize that there was a discrepancy that we needed to correct.
And the same is true of misinformation and other problem areas.
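To illustrate the discrepancy she describes between engagement and meaningfulness, here is a toy ranking sketch with invented weights and signal names (not News Feed's actual algorithm): scoring on engagement alone puts a sensational hoax first, while blending in a quality signal such as a clickbait probability or a fact-checker rating pushes it back down.

```python
def engagement_only_score(post: dict) -> float:
    """Rank purely on how people interact with the post."""
    return 1.0 * post["clicks"] + 2.0 * post["shares"] + 0.5 * post["comments"]

def adjusted_score(post: dict) -> float:
    """Blend engagement with a quality signal (e.g., survey-based
    'was this meaningful?' feedback or a clickbait/false-rating signal).
    Weights here are arbitrary, for illustration only."""
    quality_penalty = post["clickbait_prob"] + post["rated_false"] * 2.0
    return engagement_only_score(post) * (1.0 - min(quality_penalty, 0.9))

posts = [
    {"id": "sensational-hoax", "clicks": 900, "shares": 400, "comments": 50,
     "clickbait_prob": 0.8, "rated_false": 1},
    {"id": "ordinary-update", "clicks": 200, "shares": 30, "comments": 40,
     "clickbait_prob": 0.1, "rated_false": 0},
]

for score_fn in (engagement_only_score, adjusted_score):
    ranked = sorted(posts, key=score_fn, reverse=True)
    print(score_fn.__name__, [p["id"] for p in ranked])
```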
… I've heard warnings about the problem with an engagement-driven algorithm going back to like 2011, about it incentivizing or basically rewarding misinformation on the platform – that rumors could go viral, that all sorts of really incendiary content or polarizing content could go viral.
Is this really the first time this is being studied internally – the fact that an engagement-driven algorithm is essentially spreading misinformation or rumors in some cases?
We had launches related to clickbait and other associated issues long before the election.
I was just using it as an example of where, if you look at people's engagement directly, there is a discrepancy, which I think really proves your point that we did look at that data.
And we did understand that looking at engagement alone wasn't sufficient to understand the true value and meaning that people were getting out of their feed experience.
Now at the same time …
But you were realizing this when?
The work on clickbait started well before the election.
I was – it's an easier anecdote because I don't want to use an analogy where I say that you're likely to click on misinformation.
It's easier to use an analogy where I say that you're likely to click on clickbait.
Well, why wouldn't you say that?
About you.
About if you were talking with some like …
OK. Oh, I see. OK. All right.
But people are very likely to click on misinformation, right, because it's shocking or sensational.
And yet the experience that they have when they read it and realize that it's not true is a bad experience, not the experience that they expect from Facebook, and not the experience that we want to provide.
But I think the bigger issue here is what we're doing now and how we're going to be able to tackle this.
And there's a tension that I think we have to acknowledge, which is we believe strongly in free expression.
And we believe strongly in the benefits of bringing people together.
But we also recognize the role that we have to play in ensuring that our platform isn't abused by people who are, frankly, trying to abuse those values – the value of free expression or the value of having people come together.
And that's really hard to reckon with personally, and it has been hard for us to deal with.
And so we have to figure out how we can take steps to fight that abuse, to stay ahead of it, because we know that the tactics that are being used are constantly going to evolve, while also staying true to the values that we have.
… You know, I think the election was this milestone that, you know, a lot of people think about, right, because there was a period where a lot of the conversation was about misinformation, was about Facebook, was about how social media in general was being used [and] its relationship with democracy.
And I do acknowledge that that was a moment in time that was a wake-up for us, or for me at least.
I hadn't been working closely on these issues and so I hadn't heard the external conversation.
You know, it had started to increase maybe in the lead-up to the election.
But the actual conversation around that period was a really pivotal moment for me in coming to understand these issues and the responsibility that we had.
But because these issues are so complicated and because this company has been around now for more than a decade, it's not as if there was one moment where we started thinking about safety, security and abuse.
We've been thinking about those issues for a long time, but I think it's changed recently.
I think it's changed because we've seen that our influence has grown a lot from where it was, you know, a decade ago.
We've seen that the complexity of the challenges that we are now faced with has really increased.
And we've acknowledged that at first we were — and I was — too idealistic about our mission and too idealistic about the role that we were having in the world.
Now I still believe firmly in that mission and I still believe firmly that we have a positive role to play to bring people closer together.
But I am a lot more aware now than I was two years ago of how that mission has been abused and can be abused.
And it's scary, when you feel deeply committed to something, to see how things can go wrong and to hear from people who you love – from my family, from my friends – [that] the view that they had of Facebook three years ago is really different than the view that they have now.
And I recognize that, and I hear in them their disappointment in us, their frustration and their confusion about why we've been too slow to act on some of these things.
And that affects me.
But what it really does is it makes me aware of the problems that we have and it makes me feel a deep sense of resolve to try to address them.
And I don't think that we have it all figured out.
I think, first of all, that there are going to be challenges that maybe, two years from now, you'll say were obvious to you sitting here; challenges that we might not have invested in enough.
And so we need to learn from the mistakes we've made in the past and make sure that we're staying ahead of those threats going forward.
And you know, I don't think that that means there's going to be some point in time, some magical meeting that we all sit in and say: We have to take this seriously.
It's an evolution of watching the threats evolve and waking up to our own responsibilities and trying to navigate them in a way that doesn't have unintended consequences.
Because I think the other thing goes back to what I said originally about recognizing that at first we were too idealistic about things.
We're a lot more aware now of the unintended consequences of our actions.
And in some cases that means being more aware of the unintended consequences of the actions that we're taking to try to fight these problems.
So what sorts of things in the fight against misinformation are you concerned that you could overstep?
Or what would it be that Facebook does that's overboard?
I think there's a real concern in misinformation, because of how broadly defined it is, that you start to limit speech.
And if you think about conversations that you have with your friends, or even conversations that you have as a journalist, you'll often try to express [an] opinion or make points using different pieces of facts or information that might be slightly taken out of context.
And there's really a large spectrum between that, which we all do, and just fabricating entirely false information.
And the challenge for us isn't in the totally fabricated misinformation that, you know, someone's dead or some event never happened.
But it is in the area of, you know: What do you do if someone takes a photo of a refugee and suggests that it was taken today in order to mobilize people to care about refugees?
Now what they're doing is trying to motivate people around an important issue.
And in some ways, that's what's been done for years in how we communicate and share with each other.
But you also have to recognize that they're taking something out of context in a way that could be very misleading to people.
And so I think on some of these issues, we're aware of the fact that while we do everything we can to try to fight and reduce the amount of misinformation, we know that people who are using it as a tactic will get more and more sophisticated and bring it closer and closer to speech, or bring it into formats or smaller groups.
And we need to understand how we can stay ahead of this fight without limiting people's ability to connect and share in their communities.
What about this election coming up?
I mean what's your biggest concern when it comes to misinformation?
What's the biggest threat on Facebook at the moment when it comes to midterm elections?
You know, I think one of the threats that we have right now is that we've made a lot of progress on links, on misinformation that's spread in links.
And we did that because it was the most prevalent area and it was the area that was used for the financially-motivated scammers, so we wanted to disrupt it.
But misinformation isn't contained to links.
So the work that we've done, particularly the fact-checking work, but also the work to better understand things like clickbait and sensationalism: I think we're behind on applying that work in some of the areas of photos and videos.
And that's something that we've now been spending a lot of time on and that we need to make more progress on going into the elections.
So what is the work exactly on photos and videos that needs to be done?
Are you talking about actually combing through photos and videos on Facebook and trying to figure out what's misinformation?
Well, if the same claim, if the same false claim that so-and-so is dead or so-and-so did X, if that's spreading in a link, we're getting better now at being able to detect it, to reduce its distribution and, when fact-checkers read it, to provide more context around it.
But that same claim could also spread in a photo or in a video.
And right now our work with fact-checkers is focused on links.
And so if that were to spread in a photo, we wouldn't have the systems in place at the moment to detect it as potential misinformation, have fact-checkers review and rate it, and then reduce it and show more context around it.
But we've seen that as we've made progress on links, there's more and more misinformation that migrates to other formats.
And so we know that and we knew that at the time, and that's why we started the work on photos and videos.
But at any given moment, you're farther ahead in some areas than others.
And so at the moment, I think we're in pretty good shape on links and have more work to do on photos and videos.
And your boss, Sheryl, testified at Congress recently and was asked about "deepfakes," these videos that are completely fabricated – things that people actually didn't say but they've been made to say in fabricated videos.
Is that on your radar for this election coming up?
It's absolutely on our radar.
It's something that we are working on both in terms of technology and in terms of human review.
So on the technology side, being able to identify that a video is manipulated.
And on the human review side, making sure that we have the systems in place, whether it's our own community reviewers looking for community standards violations or the fact-checkers that we work with, to be able to review videos.
We're working on all of that.
Now at the same time that we have to think about getting ahead of future threats, we also need to make sure that the conversations that we're having internally and the work that we're doing are about the actual issues that we're seeing and not necessarily just driven by the conversations happening externally.
So the degree to which deepfakes are a problem on Facebook today, versus just photos that are taken out of context or photos that are used with different text captions, I'm a lot more worried at the moment about those problems because they're more prevalent.
A deepfake is still pretty hard to create and we're not seeing them as much.
Now while we invest in getting ahead of them so that we don't find ourselves behind, we need to make sure that we're also focused on the things that we're seeing on the platform now.
And how do you even measure whether you're doing a good job or not?
This is one of the most challenging things because it goes back to how do you even define the problem.
And when I first took on this job, one of the first things I did was spend a lot of time with academics and experts who work on misinformation to try to get myself up to speed as quickly as possible.
And what I was struck by was just the lack of consensus on the definition of misinformation and the way to measure it.
And so what we've done is we've said that we're not going to solve this problem alone.
We're not going to solve the measurement and definition of this problem, let alone the actual way that we fight it.
And so on the measurement and definition, we have been working with a group of academics to make a lot more data available to them in a privacy-protective way.
So that they'll actually be able to get a data set of all of the links shared on Facebook since the beginning of 2017, including the links that fact-checkers have rated false.
And we're working with them and asking them to help come up with some of the different ways to measure the volume and effects of misinformation.
Now we have ways of doing this ourselves, but we want to make sure that we have measurements that are externally defensible and credible so that we can evaluate our progress over time and see: is there actually less misinformation on Facebook now than there was X number of months ago?
And if not, why not?
And if so, what were the steps that we took that effectively reduced that amount?
And part of what we have to understand too is, when we're measuring the amount, are we talking about the number of links, the number of people who clicked on the links, the number of people who saw the links, the number of unique individuals?
There's a whole conversation there.
We need to be much more robust about how we measure and evaluate the impact of misinformation.
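To make that measurement question concrete, here is a toy sketch over a hypothetical share log. The field names and numbers are invented, and this is not the research dataset described above; it only shows how the same handful of fact-checked-false links yields very different figures depending on whether you count distinct links, views, clicks, or unique people exposed.

```python
from collections import namedtuple

# One row per time a person saw a shared link (hypothetical log schema).
Impression = namedtuple("Impression", "user_id url clicked rated_false")

log = [
    Impression("u1", "example.com/hoax", True,  True),
    Impression("u2", "example.com/hoax", False, True),
    Impression("u2", "example.com/hoax", True,  True),   # same person, second view
    Impression("u3", "example.com/real-news", True, False),
]

false_rows = [r for r in log if r.rated_false]
metrics = {
    "distinct_false_links": len({r.url for r in false_rows}),
    "false_link_views": len(false_rows),
    "false_link_clicks": sum(r.clicked for r in false_rows),
    "unique_people_exposed": len({r.user_id for r in false_rows}),
}
print(metrics)  # each number tells a different story about prevalence
```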
… Do you think that at this point, we actually have an understanding of the role that misinformation played in the 2016 election?
I think at the moment there's a lot more work to be done to understand that impact.
Are you doing that work?
Absolutely, and other academics are as well.
But I think the challenge is, you know, there's a lot of variables that go into an election and it can be difficult to try to isolate them and to try to understand how much any one factor contributed.
And there's been a lot of progress in academic literature in the last couple of years.
But I think there's still a lot of work to do.
We're talking to a lot of academics about the importance of that work.
And we're also trying to make sure that we're learning as much as we can for the elections that are coming up, not just in the U.S. but around the world.
I was going to ask, how in the world do you police this, I mean, at scale?
You've got 2.2 billion people on this platform, billions of pieces of content shared every day.
How do you possibly police this platform for misinformation when you're dealing in so many languages all over the place, so many different people, and potentially so many different definitions for what actually is misinformation?
It is really challenging at the scale of having over 2 billion people.
What we do across all of the abuse problems that we're fighting, including misinformation, is use a combination of technology and human review.
So when it comes to misinformation, we can use technology to predict and prioritize things for human review by third-party fact-checkers.
But we can also use technology to understand the patterns and behaviors that lead to things being spread as misinformation – the tactics that are being used or the types of actors that are behind it.
And in order to really fight this at scale, we can't just be playing a game of Whac-A-Mole.
We can't just let a piece of misinformation pop up and spread and then catch it.
It's always going to be easier and faster to create fake information than to debunk in a thoughtful and evidence-based way why that information is inaccurate.
So we know the Whac-A-Mole strategy is not effective.
But what we can do and what we can scale is understand, by looking at individual examples of misinformation, the pages and the domains behind it, and start to understand the patterns behind those pages and domains repeatedly engaging in spreading that misinformation; and then dramatically reducing the distribution of those pages and domains, their ability to advertise, their ability to monetize, so that they can no longer be weaponized to spread misinformation at scale.
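A rough sketch of the repeat-offender idea she outlines, with invented thresholds and field names rather than Facebook's real signals: aggregate fact-checker ratings by domain, flag domains that are rated false again and again, and then demote and demonetize the domain itself instead of chasing each new article.

```python
from collections import Counter

def repeat_offender_domains(fact_checks: list[dict],
                            min_false_ratings: int = 3) -> set[str]:
    """Domains whose content has been rated false repeatedly."""
    false_counts = Counter(fc["domain"] for fc in fact_checks
                           if fc["rating"] == "false")
    return {d for d, n in false_counts.items() if n >= min_false_ratings}

def enforcement_for(domain: str, offenders: set[str]) -> dict:
    """Demote and demonetize repeat offenders instead of whack-a-mole takedowns."""
    if domain in offenders:
        return {"distribution_multiplier": 0.2,  # shown far lower in feed
                "can_advertise": False,
                "can_monetize": False}
    return {"distribution_multiplier": 1.0,
            "can_advertise": True,
            "can_monetize": True}
```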
How many fact-checkers do you have?
We partner with 27 different organizations around the world.
And is that effective?
Fact-checking is just one part of the strategy for fighting misinformation.
I think it gets a lot of time, it takes up a lot of the conversation because it's working with external partners and it's visible within the product.
But when we think about the toolkit for fighting misinformation, it goes back to the actors, behaviors and content.
So on the actor side, fake accounts are one of the most valuable ways that we can go after misinformation because they're often involved in seeding and spreading it.
… One of the really hard calls is: Should we remove misinformation, not just reduce it, but actually remove it?
And here's how we think about that.
When we think about the abuse on our platform, there's really three actions that we can take.
We can remove things entirely.
We can reduce their distribution, which means showing them lower in News Feed so that fewer people see them but they still exist.
Or we can give people more information around them.
More context about who's behind it or other perspectives.
We've made the decision to not remove misinformation.
Now the reason that we made that decision is because we think there's a tension between the value of freedom of expression and the interest that we all have in ensuring that people see the accurate information that they want.
But we do remove a lot of the things that are associated with misinformation like fake accounts, the spammy behavior that goes into spreading it, pages that are violating our community standards.
And by removing those things, we make progress in reducing the overall amount of misinformation.
But if a real person who's their authentic self on Facebook shares something that is false, we don't believe that we should remove it because we believe that part of expression also means being able to say things that aren't true.
And we believe that no one company should decide everything that is or isn't true.
Now that's a real tension that we wrestle with.
And when you see some examples of misinformation you think, surely that one you should remove.
But when you think about the gray area that exists, that's why we've landed where we have.
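The remove/reduce/inform toolkit she describes can be pictured as a small policy table. The mapping below is a simplified illustration with assumed finding names, not Facebook's actual enforcement logic; per her account, content rated false is reduced and shown with more context rather than removed.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "take it down entirely"
    REDUCE = "rank it lower in News Feed so fewer people see it"
    INFORM = "keep it up but show more context alongside it"

# Simplified, assumed mapping of findings to the remove/reduce/inform toolkit.
POLICY = {
    "fake_account": {Action.REMOVE},                  # authenticity violation
    "community_standards_violation": {Action.REMOVE},
    "rated_false_by_fact_checker": {Action.REDUCE, Action.INFORM},  # demote + context
    "opinion_or_satire": set(),                       # no enforcement action
}

def actions_for(finding: str) -> set[Action]:
    """Return the enforcement actions for a finding (empty set = leave it up)."""
    return POLICY.get(finding, set())

print(actions_for("rated_false_by_fact_checker"))
```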
And who's making the call about what's true or what's not true?
It's not you?
The decision about the veracity of a piece of content, which is one of the tactics that we're using, is made by independent, third-party fact-checkers.
What we do is prioritize things for them to review.
And we do that based on signals like feedback from the community.
On any piece of information that you see in your News Feed, you can give us feedback that you think it's false.
And that's one type of signal that we use to prioritize information for fact-checkers.
We have another decision to make, which is: we could send very little to fact-checkers and have a higher proportion of it actually end up being false, which might be a better use of their time.
But that means that we would leave out other things that we might not have had as accurate a prediction on, or might've only had an accurate prediction on later, once it had already gotten too many views.
So when we send things to fact-checkers we know that some of them are going to be true and some of them are going to be false based on those fact-checkers' judgments.
But that's another hard call that we have to make, of how much to send and how to prioritize.
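The tradeoff she describes is essentially a precision-versus-coverage decision about where to set a threshold on a prediction score given limited fact-checker capacity. A toy sketch, with invented scores, thresholds, and field names:

```python
def build_review_queue(candidates: list[dict], capacity: int,
                       min_score: float = 0.0) -> list[dict]:
    """candidates: items with a predicted probability of being false
    ('false_prob') and a count of user 'this is false' reports
    ('false_reports').

    A high min_score keeps precision up but misses borderline items that
    may rack up views before the model becomes confident; a low min_score
    sends fact-checkers more items that turn out to be true.
    """
    eligible = [c for c in candidates if c["false_prob"] >= min_score]
    # Prioritize by model score, then by community feedback volume.
    eligible.sort(key=lambda c: (c["false_prob"], c["false_reports"]),
                  reverse=True)
    return eligible[:capacity]

queue = build_review_queue(
    [{"id": 1, "false_prob": 0.92, "false_reports": 40},
     {"id": 2, "false_prob": 0.55, "false_reports": 300},
     {"id": 3, "false_prob": 0.30, "false_reports": 5}],
    capacity=2, min_score=0.4)
print([item["id"] for item in queue])  # -> [1, 2]
```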