Hello, my name is Francisco Brito Cruz, I'm a director at InternetLab,
and I'm here for another of InternetLab's interviews with academics,
specialists in the field of Internet policy.
Today we're going to talk about the right to be forgotten,
and I have here with me Julia Powles, who is a specialist on the subject.
I'm going to conduct the interview alongside InternetLab's project leader
in the field of privacy and surveillance, Jacqueline Abreu.
I'm going to start by asking the questions to Julia in English.
So, Julia, in one of your interventions in the public debate about
the right to be forgotten, you affirmed that
"America may look back at its condemnation of Europe's data protection
rights as one of the moments when we lost our way". And recently a
team of researchers that includes a public figure in Brazil,
Virgílio Almeida, a former chair of CGI, uncovered
a possibility of reidentifying people who have made deindexation
requests, thereby undermining the spirit of the Court of Justice of the
European Union's ruling. So, how do you understand the relationship between
these issues and your conception of what a right to be forgotten would be,
and which disputes and interests are involved in defining
the theoretical limits of this new but impactful legal concept?
Thanks, thanks very much, and really nice to be here with you both.
Look, it's great to have an opportunity to talk about that recent paper.
So, this is a paper by Virgílio's team of researchers at NYU, in China and in
New York, and it got a lot of coverage, including in The New York Times. I actually spoke
on a panel at the CGI annual seminar with Virgílio, and also the lead counsel
for Google, about this.
And I think it to some extent misrepresents what's really going on
with the right to be forgotten. One thing about that study is that it looked at
283 URLs, which is a very tiny sample of the URLs that have been requested --
it's now 1.46 million, so we're talking about a sample set of 0.017%
of those requests -- and they were particularly biased as well. Beyond, I
think, having some problem with statistical representation, they're also
quite biased because they were compiled from lists republished by British
news organizations. For some background: I have worked at the Guardian,
I'm quite familiar with the British media scene, and those organizations that
have republished this have quite some animosity towards this ruling. They have animosity
at two levels. One is that, being part of the Fourth Estate,
interested in speaking truth to power and so on, they want to make
sure that the public record is maintained, so they're concerned about the sort
of fundamental basis of what they perceive to be the right to be
forgotten -- here the "forgotten" language is quite controversial. The second
dimension behind their position, which I think is really interesting, is
that they feel quite disempowered in the whole process, because the way Google --
and we'll hopefully talk about this -- has implemented this European right to be
forgotten doesn't engage the publishers in any way. What
Google does is make a decision based on a request from an Internet user, and
then, after having made that decision, it sends no information about the basis of
the delisting; it just sends a URL to the newspaper. When that first started happening,
some of the first requests came to the Guardian and the BBC, and the
reporters were up in arms over these stories supposedly being removed from the
internet -- though they quickly stepped back from that position.
It was James Ball at the Guardian and Robert Peston at the BBC,
and they realized that it wasn't the subject of those stories who had made the request:
in the BBC case, with Robert Peston, it was actually a commenter on a
story who wanted to have that story delisted, so that one comment he made ten
years earlier didn't continually appear at the top of his search results.
So if we then go back to this study by the NYU researchers, what they were trying
to show was that there's a problem with how this right is being implemented, also
criticizing Google's decision to issue these notifications. I think
what they missed is that part of the issue is that the process disempowers
publishers and leads to misinterpretation. I had actually
looked at many of those links myself prior to this paper and identified
who was the subject of the news story. It's a trivial process, in fact:
the whole point is that a person who makes a request is named in a story,
and they're not removed from the story, they're removed from
Google's index when you search their name. So you look at the news story, you open
the URL, you search each name in the story, and you'll reveal who the person is.
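The reidentification procedure described here can be sketched as a small simulation (the names, URLs and search results below are invented for illustration): since delisting removes a URL only from searches on the requester's name, the name in the story whose search results omit the delisted URL points to the requester.

```python
# Hypothetical simulation of the reidentification step described above.
# A delisted URL disappears only from searches on the requester's name,
# so the name in the story whose results omit the URL reveals the requester.

# Invented data: the URLs a search on each name still returns after delisting.
search_results = {
    "Alice Example": ["news.example/story-123", "news.example/story-456"],
    "Bob Example":   ["news.example/story-456"],  # story-123 delisted for Bob
}

def identify_requester(delisted_url, names_in_story, results):
    """Return the names mentioned in the story whose searches omit the URL."""
    return [name for name in names_in_story
            if delisted_url not in results.get(name, [])]

print(identify_requester("news.example/story-123",
                         ["Alice Example", "Bob Example"],
                         search_results))  # ['Bob Example']
```

This is why, as Julia notes, the researchers should in principle have achieved full reidentification: the method requires nothing beyond the republished URL and the names printed in the story itself.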
So, I don't think it was surprising at all that the researchers were able to
reidentify people, as you say -- in fact, they should have had a 100% reidentification rate,
and the fact that they didn't, I think, shows a major problem with this
republication of lists, which is that the news organizations possibly don't
have a purely public-interest intent in this; they are making a political statement.
In particular, a number of those links have been reinstated, so there are
a few from the Guardian that were subsequently reinstated, and I was
involved in the Guardian's process of deciding that actually we wouldn't
republish the links, because, as the webmaster said, there's some quiet pathos
in those links. What they showed was perhaps
two or three links of a person's life unravelling -- a person
who has subsequently built a new life. The real purpose of this
set of rights -- misnamed, I think, as the "right to be forgotten" -- is that we can
effectively move on from our past. And it's not about public figures: if somebody is a
public figure, if the news is relevant, they cannot claim the right to be forgotten;
it's not for those people. It's for somebody like an incidental commenter. Another
example in Virgílio's study was a gentleman for whom, every time you search his
name,
news stories appear containing the word "rapist". He was in an apartment where a rape
occurred; he was not involved, he was innocent, and he has never had any
criminal history. Now, clearly, you can see that by affiliation you
would have some reservations about hiring this gentleman -- so it's for
somebody like that. It's particularly for people who don't have a large public profile,
whose information, embedded in a high-profile news story or
in something else indexed high in Google search results, continues to
affect their lives, and who are forced to confront this sort of
perpetual present of past information. I think a good way of framing
the intention of these rights is that we will still remember -- of course
we can't actually forget, and we don't want to forget; we want to build on the
memories of what we've learned and so on -- but that you can remember without
constantly recalling, without being confronted continually with past incidents in your
life. And really the basis of the right is that information
which is inaccurate, no longer relevant,
no longer timely and of no public interest can be removed from search
results specifically on your name.
Julia, as you anticipated, the most remarkable and well-known case about
the right to be forgotten is the Costeja versus Google Spain case,
decided by the Court of Justice of the European Union in 2014. Having this in mind,
what have been the main outcomes and complexities arising from this decision
in Europe? And, beyond Google's own interpretation of this decision,
what, in your own view, do you think is the most
appropriate way to handle these deindexation requests?
And what do you consider to be the best way to protect the interests that the Court
wanted to protect?
Yeah, so it's been super intense, the implementation of the right to be forgotten:
there have now been over 500,000 Europeans who have made requests to
Google through a form set up one month after that ruling. So, Google moved swiftly
in taking on the obligations that it had under this ruling. There are
actually some far broader consequences, I think, in terms of what this European
ruling said about the application of laws 20 years old to companies
like Google. They have so far gone relatively scot-free in the data
protection ecosystem, and one of the potential reasons why they moved so
swiftly with the right to be forgotten was to stop any potential thought about
what other elements of the core legal finding -- which is that they are a data
controller, subject to all sorts of obligations, including, for example,
considering data protection obligations prior to any request -- might entail. There are
various legal issues and, I think, really interesting research questions around
that. So they implemented fast. The information that they gave to the public
I think was quite limited and potentially misleading, so they
triggered what was, I think, a largely unproductive debate at the very
beginning, which was: "oh, look at all of these criminals and public figures
and politicians who are
raising concerns about the right to be forgotten". And indeed, in their
transparency report, which they commenced about four months after they started
implementing, they cited some examples of cases that they accept and
reject, and these were predominantly scenarios of an individual who was a
public figure, a criminal and so on. Based on some work with a data
scientist, Sylvia Tippmann, we did a story in the Guardian a year after that
which disclosed -- based, actually, on an error in Google's own site: behind its
transparency report, the source code revealed how Google itself had indexed these
requests -- that less than five percent of requests, let alone those
that were accepted, come from these categories that they had given
prominence to. So I think that totally skewed people's interpretation of
what this was -- and quite rightly we would have great difficulty with the
idea that people would be able to suppress information that is of public
interest, that is about criminal records and so on.
In fact, what it allows is really an alignment of what happens online with
what happens offline. So the cases involving criminals who do have a right
to be delisted are those covered by local laws on the rehabilitation of offenders:
people with spent past crimes who served their time, or who were perhaps accused of a crime
and subsequently acquitted, while the information continues to percolate --
because, news being as it is, the story of someone being
accused is often far more interesting than the fact that they were acquitted.
So it's a bit of... I regard "rights to delist" as a better way
of interpreting this particular variant -- perhaps we'll talk again about the
other types of rights under this brand of the right to be forgotten -- but the
particular one in the Google case is a right to be delisted, to have certain
information delisted that's inaccurate, out of date, no longer relevant and
not of public interest. So that's how the implementation has been to date. As for
how we could do better with this, as you asked: I actually think that there would
be really great scope --
some of the discussions with the local representative of Google here were
quite productive -- great scope for more nuanced solutions to the issue of
somebody's results remaining high in search.
So, for example, you could just move a result below the first three to four pages of
search results; however, these people often don't have much other information,
so there isn't a third or fourth page about them. But there are ways of, for
example, sending notifications back to the archive of those news stories, or perhaps
tools to pseudonymize names after a certain period of time -- something
that's already practiced in different countries.
I think one really valuable component, if Google had been more transparent:
after that study I mentioned, where we found that ninety-five percent of
requests were personal, I, along with a consortium of academics, made a
number of requests to Google to say: can we segment the types of request
you're receiving, and therefore consider how the legal response might differ?
Because under the right to delist you have everything from
someone's medical information that's
online to potentially controversial cases involving a news story, for example, where
there are also competing free speech rights, and we should totally separate them.
Google actually separated out one category -- and I know this connects to the work of
InternetLab -- which is the revenge porn category. They didn't connect it to
the right to delist, but after eight months of enforcement they announced
global delisting of revenge porn requests, and that is the sort of
approach I think would be really helpful. So we segment: ok, you have a broad
category at the moment -- they basically only do fraud; they don't even
do things that, in particular countries, could be very sensitive, like what your
age is or what your religion is. So, a broader set of categories that
are sensitive and that somebody doesn't want out there -- because, you know, the core
of this is that, for information about me, I should have some control when it's not
of public interest and there's no competing right. Those categories
would be useful to delineate, and I think if we did that, we would be left
with a small proportion of cases that really do involve publisher
interests -- going back to that question about
the genuine interest of these major organizations in ensuring that the
hard work that they do continues to have an impact -- and that's where a public-private
partnership comes in. Virgílio, I think, supports this idea, and there's a lot of support
from, for example, David Hoffman at Intel and many researchers in Germany, for the idea
of a public-private partnership that would look at the requests where there's
a genuine conflict, where Google doesn't have experience, where there's case law
that might differ between jurisdictions about the clash between
freedom of expression on the one hand and data protection rights on the other. So I think a
segmented approach would be really beneficial. As well as this, perhaps the
resounding cry of five hundred thousand-plus Europeans -- and, I think,
global citizens -- is that it's not good enough that Google can sweep the streets
of the web and put whatever it finds online,
no matter for what purpose somebody put that information there. You know, think of an ex
that wants to get back at you and puts certain information online. There is any manner of
reasons why people put something online, and there is no friction to that. So we need some tools to really
get back a little degree of control over that vast sweep.
The third question is: in implementing the right to be forgotten
established by the Court of Justice of the European Union, Google initially
limited the deindexation to the national domains from which the requests came, such
as google.de or google.fr; in a second moment it expanded the delisting
to a geolocation model; and currently the
company has been arguing with French authorities about the possibility of
worldwide deindexation. I would like to ask how you see
this dispute. Also, when it comes to the removal of intimate images -- as you were
saying -- or orders to remove copyrighted content, Google seems to have a different
approach
to what is delisted and how the jurisdiction problem is addressed, and,
somehow, by deeming Google responsible for making decisions about the right to be forgotten,
the development of its standards became part of the company's
own policies worldwide. How does this relate to what I said before, and
how do you assess the decision of putting online platforms, such as Google, in the
position of making such decisions?
Ok, I think so. I'll separate out the territoriality question from the
private decision making. So, Google likes to talk about its solutions --
particular variants of its index -- as territorial. They are, in fact,
tailored solutions to particular geographies, but they don't necessarily match legal
borders, and so this is a bit of a lightning rod for a
bunch of other issues that Google is concerned about -- and I
think it is legitimately concerned -- about how it can ensure that it complies with
laws and rulings that really address properly founded rights,
and about other situations where the law could be abused and, for example,
the public won't have access to information. So Google is constantly
aware of the broader implications of how it responds to a particular legal request.
On this issue,
personally, I support a geolocalized solution to a lot of this, but I actually
think that it depends. Again, one of the ways that we can penetrate
what seem like border disputes is to make it human -- and that's something Google
has been very good at not doing. We don't talk about people's requests; we talk
about borders and mores and clashes of jurisdiction. If we go
back to the people, we might say: look, if your case is about revenge porn,
or if your case is about medical information that should never have been
online, are you really telling me we have to have a geolocalized or local solution?
No, I think we should have rights that are effective wherever you come from.
In other cases, it may be that a local solution is enough.
You know, somebody might be affected by that information only
within a particular context. Think of a school kid, and the way that
information from their schooling time can affect them later on -- they may
not worry about it later; you're particularly sensitive to things
at a certain age. So I think that if we segment it, we might find -- this is
actually put really nicely in a paper by two researchers from the
Catholic University of Leuven, Brendan Van Alsenoy and one of his colleagues,
who make the argument, based on public international law, that
you need to look at the particular rights -- that you would then get to a
position where some rights are global, many are geolocalized, and maybe
somewhere country-specific is appropriate. So that's my way
through that mess. I understand the position of
some of the European authorities, particularly the French, who currently
hold the leadership of the European data protection authorities, in saying: "look, we're
not going to just take Google's version of territoriality, we're going to
look at real, proper implementation". But I think that a big problem for the
DPAs is that they, like us, don't really know what these cases are about, and
I think that, before they start telling Google to do it globally, we need to know what
those cases are. There may be some of them -- I think that Google has no
interest in delisting information that isn't strictly within the bounds of the
law, but it would be nice to know, right? I mean, that's what everybody's
concerned about, and I lay that firmly at the feet of Google: in many of our
inquiries they say "we simply do not have this information", and I fail to see how
the world's self-proclaimed organizer of information can't provide something
between the 20-odd examples on its transparency page and 500,000 requests --
some more granularity. So the other question you asked, in addition to
territoriality, was about Google being the decision maker,
and this is really, I think --
As an example of any kind of private company making such decisions.
Yeah, exactly. And this is such an interesting one for those of us who work in tech policy,
because Google on the one hand is saying "we shouldn't be the
decision maker" and on the other hand is doing this in a totally opaque fashion:
they're not inviting in any independent authorities; they're just -- you know,
my personal view is that they perhaps didn't know what they were
getting into. At the beginning, maybe they set up a process without thinking about
it too much -- now I'm being a bit facetious -- and said "let's see what happens, we'll
show you what happens when we delist", and then everyone's like "actually, this
is good", you know, and now they're stuck with it. I think that there's
a really strong paper I read from a Chicago-Kent professor, Edward Lee,
who said that, given the scale of a lot of these requests, if we can have
clear categories -- and when you're dealing with decisions that don't
have a subjective component, like revenge porn, like medical information, like a whole
bunch of that sort of indiscriminate sweep that
Google does -- if we can have some specified categories, I think it can
be private enforcement. The nature of data protection law is: if you collect
the data, you have an obligation to do so responsibly, and you have an
obligation to meet the interests of data subjects in having their own information
properly processed, and that includes this reprocessing. A big distinction that I
think Google has been interested in muddying -- and many
scholars have muddied as well -- is the separation between the actual
persistence of information in the public domain and its
proliferation in search results on Google. You know, Google is not the Internet;
Google is not the public record. The proliferation of information so that
it's perpetually present is the specific issue that the Costeja ruling
addresses, and the other suite of issues about information, the public record
and so on, are quite distinct.
Julia, you just mentioned that there are different kinds of requests that are
branded as right to be forgotten claims. At the end of 2014, the Brazilian
Supreme Court announced that it would decide two right to be forgotten cases.
Both cases were requests that television stations not
broadcast TV specials about famous crimes that occurred decades ago.
The claims stated that those facts were not relevant anymore and that broadcasting
them would only hurt the dignity of the people involved. In the first one, the family of
the victim claimed that the exposure was baseless and that the case was not
relevant anymore. In the second one, the defendant had been acquitted and based his
claim on the argument that the reexposure could cause him unjustified harm.
There were also arguments making parallels with the logic of the criminal justice system,
in which, after a person is freed of charges or after a period of time following
certain charges or a crime, their records are cleaned and their past is no longer
considered relevant to the justice system. However, those cases do not
involve the internet environment, even if some news sources do not make this
distinction. Is there a difference between claims to be forgotten
online and offline?
Right, so, this is really interesting. Separately, across the planet,
countries with the civil law tradition have developed, out of
personality rights, dignity and intimacy, these cases about
what they call -- I think it was "droit à l'oubli" when it first came up in France -- a right to oblivion,
and that's what these two cases -- the Globo cases, I think you're referring to --
are about. They are of a separate legal origin, which is
interesting, from the Costeja case we've been talking about. The Costeja case comes
out of the data protection rights to rectification and to erasure, which are
part of data protection laws; more than a hundred countries in the world have them.
Brazil may at some point have them -- you have some of those rights, but probably
not a strong rectification right. So that one is statutory-based, and this one rests on
constitutional law, or other bases in rights of dignity. I think
that, in theory, you could have those rights apply equally to
both the offline and the online domain, whatever their case law or statutory origin.
So I don't think there's a distinction there. There is a distinction in terms of
how extensive the data erasure is. What the requests aimed specifically at online
search engines intend is to introduce an element of
obfuscation and obscurity to that information. It remains, as ever, in the
public record; it's about the continued prominence of the information.
These requests, as I understand it, are for that information not to be again
given prominence in an original source, and the cases which were
decided -- the STJ decisions, the ones that are now being appealed -- actually
align with my experience of other countries. For
example, a gentleman who was charged with a particular crime that is now being
revisited -- and who was acquitted of that crime -- sought not to
be mentioned in the story, and when he was, he sought compensation. He was
successful in the STJ, and I would expect him to be successful in the Constitutional
Court, because that story didn't need him and he had every legal right
not to be mentioned. He was not at all involved in the proceedings
after that point. The other case -- and this is quite similar -- is the Curi case.
It's quite similar to cases in other jurisdictions involving the estate of an
individual. It's a different scenario, because the individual
was the victim, who was killed in a crime, and it's the family not wanting to
revisit that emotion. This gives an example of some of the
overreach people are concerned about: some requests, for example, of a couple that
divorced, whose proceedings, which are very controversial, maybe they don't want
their children to see. So they're worried about a particular audience, but then
you have this wide range. There are ways you can deal with this --
for example, criminal and divorce cases often don't get indexed: you put
robots.txt on the information, then you don't need to remove it, you just stop
its proliferation. And I think that the Curi case, which was
rejected in the STJ, is probably likely to be rejected in the Constitutional Court as well.
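The robots.txt mechanism mentioned here can be illustrated with a minimal sketch (the site and paths are hypothetical): the publisher lists the archive paths it doesn't want crawled, and compliant search engines skip those pages, so the material stays available at its original address without proliferating in search results. Python's standard library can evaluate such rules:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a news site: keep crawlers out of the
# old court-reporting archive while leaving the rest of the site indexable.
ROBOTS_TXT = """\
User-agent: *
Disallow: /archive/court-cases/
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The story stays online and reachable by direct link; it simply
# won't be crawled, so it stops surfacing in search results.
print(parser.can_fetch("*", "https://example-news.com/archive/court-cases/1985-trial"))  # False
print(parser.can_fetch("*", "https://example-news.com/politics/today"))  # True
```

This is the sense in which nothing needs to be removed: the page persists in the public record, but its proliferation through search is halted.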
The interesting question will be whether the court, conscious of the debates
about this other brand of delisting rights and of the popular conceptions of
the right to be forgotten, seeks to make particular distinctions in that vein,
because they may start to get into those debates, and maybe the media
will be very conscious, and Globo will be conscious, of presenting the
very public-interest-style arguments. But I think that, particularly for the
gentleman who was acquitted, that is a core right to oblivion case. Of course
you can't remove the news reports from 30 years ago, but you can stop new stories
that continue to harm somebody who really wants to just move on with their life,
and has every reason to do so.
So, the next question is also about our local, or at least regional, context.
In Latin America, over the past decade there have been heated discussions about
the need to restore the memory of what happened here during dictatorship
periods, in which some countries suffered heavy censorship and some people
were subjected to serious human rights violations, such as unlawful
and organized torture and execution practices. Many victims and
their families seek recognition of these facts; however, some
legal scholars -- and I quote one,
Professor Eduardo Bertoni from Buenos Aires -- may see
those demands as being in conflict with the amnesty given both
to the state violators and to the underground political organizations
outlawed at that time. These authors say that if the European decision is
exported to the rest of the world, it would run against this movement of
recognition, of building a right to memory, or a right to truth about what
happened before, and that there is an apparent tension between the public
interest in accessing information, freedom of expression and transparency,
on one side, and the right to oblivion, or something like that, on the other. So, what about a
right to memory? How can it be reconciled with the right to be forgotten, for example?
Right. Yeah, well, so I think this is really important, because
this framing of forgetting sets us up to then ask: what about a
right to memory? And I think that memory is the foundation of humanity;
we absolutely need to remember. I think, though, that --
if we're talking particularly in this search engine context and so on -- I would
separate memory from what's on Google, for example. And I am very aware of the
different historical environments in which this debate
is being discussed. I think it's really important that in Brazil and
Argentina the discussion is owned locally, and that it considers the
robustness of the legal machinery, the robustness of the
tools, and particularly, here, the core lever that we have in this right:
the public interest defense. For people who have been granted amnesty,
there is a continued public interest in those stories being told.
You know, the understanding that there is a particular
legal restriction on what can happen to those individuals
doesn't stop you investigating, doesn't stop, you know, continued rigorous
journalism and so on. So I think that it is not at all in conflict with these
rights -- data protection rights, and particularly rights of delisting and so on;
I don't personally see that conflict. I understand that, depending on the
strength of the legal machinery and the people who are implementing it, it could
be misused, and I think that's why we need really strong safeguards: we need
transparency about how the right is being applied, and to what cases, and then
you can safeguard against some of these potential abuses.
Research from InternetLab suggests that courts here in Brazil have been very deferential to reputational rights,
such as honor and image, when they come up against freedom of expression.
Politicians represent a third of the plaintiffs in civil cases involving
online humor, usually suing users for damages. In fifty percent of these cases,
courts have ruled that users should pay damages, considering that the
reputation of the politicians had been harmed.
Meanwhile, numerous bills that have been introduced in Congress rely on
reputational rights as a foundation for implementing the right to be forgotten,
using broad language such as "outdated" or "irrelevant information". These bills might
open the gate for a flood of lawsuits from politicians seeking removal of
content that might hurt them in future elections. Considering this stance
that courts have taken in such cases here in Brazil, what would you say
are the main things to keep in mind when thinking of legislation to implement a
right to be forgotten here in our country?
Yeah, well, thanks. This connects a bit to Chico's question, and I really commend
that work that InternetLab has done -- it's fantastic. I think that's
exactly the sort of contribution that we need; that empirical, rigorous, largely
independent view on everything is really important, and then we can actually see this.
Again, it goes to the lever of the public interest, right? Because in my conception of
the right to be forgotten, these politicians are not even in there,
not even in the running;
they don't even get started, because there's a public interest. Unless the
information -- and we must concede that there are some components of
politicians' lives that should still be private. But it's nothing to do with
anybody's opinion about them politically; that is open for the masses,
and that's how it should be. If anyone tells you that the right to
be forgotten is something else, then they're missing the point. I do think it's very interesting
what you say about
the regard had for personality rights, reputational rights, and this is
something that scholars who are looking at the right to be forgotten
from contexts where this is foreign --
the UK, Australia, Canada, the US -- don't understand; they're concerned
about drift and how far this can go.
I think that there are some bills at the moment far too broadly drafted about
the right to be forgotten specifically, and I think that they raise a lot of these
concerns and dangers. I think it needs to be shifted back to the
core domain: who were those five hundred thousand-plus people in Europe making
requests? Ordinary people, with no public profile, who are victims
of algorithmic failure. You know, that's what this is about, and
this is about meaningful data protection rights, so that, over the building blocks of my
life, I have some degree of control when they are used against me --
because somebody holding those keys to my life and using them wrongly can
really affect you for a long time. So I think that's probably
a clarion call to digital activists and academics to ensure
the core of this right -- and you may actually not need distinct legislation.
With these constitutional cases, it's important to think about what dimensions
of them are positive; actually, I've found that the case law from the
lower courts was very promising -- it really elaborated
this effectively. And on freedom of expression there are positive cases, a
hundred and fifty to two hundred cases now in Europe, based on litigation involving
Google, which are very pro freedom of expression, and those cases should
reassure anybody who is concerned that what this right could do is exactly what
you say: allow politicians to sue ordinary internet users. That is not
what's going on here; those people are comprehensively being rejected in their
requests -- by Google, by the data protection authorities, by the courts -- so I really
don't think that's what this is about. The essential thing is to bring
that humanity back into the debate, and that's a challenge we all have.
You know, there's this data set that's private --
it's Google's data about what the core of these cases are -- and we continue to strive
to get that information out, and from there I think we can build a case
that people would understand in the course of things. In fact, probably if you talk
to anyone, I'm sure you know: if Google does a sweep and gets some
inaccurate information, something that is really harmful, or something that just --
it might not even be harmful, but it shouldn't have been there in the first
place -- you should have some right. So I think bringing this back to what we
all can understand: we can all understand, you know, youthful activities you may
want to move on from later, and information that is of a particularly
sensitive nature and really should not be given the
prominence that it has in Google search results. Thanks.
Thank you very much, Julia.
Thank you, Julia. See you next time.