Background Discussion
Background talk with Martin Husovec: The DSA on disinformation – on point or missing the mark?
Monday, June 3, 2024, 15:00 – 16:00 (CEST)
One of the most contentious issues that the EU’s Digital Services Act (DSA) addresses is online content moderation, which ties into larger discussions on how to deal with disinformation online. While the DSA itself does not contain any prohibitions on specific types of content, some of its rules deal with the way online platforms explain their moderation practices or tackle illegal content as well as disinformation.
Content moderation is a sensitive field and there are justified calls for checks and balances. For instance, politicians and platforms hold power to define what disinformation is and to shape moderation practices around these definitions. Besides nuanced questions about definitions and content moderation practices, there are also rather blunt, at times sensationalist, allegations about the DSA in general being a tool for censorship.
On June 3, 2024, Martin Husovec joined Julian Jaursch, Project Director “Policy | Platform regulation”, online to separate key questions on the DSA’s approach to tackling disinformation from mere political grandstanding. Martin is an Associate Professor of Law at the London School of Economics and Political Science and has advised various governmental and non-governmental organizations on the DSA. We discussed how the DSA deals with disinformation in practice, how regulators can address enforcement issues and what suitable checks and balances could look like.
As part of the one-hour event, guests were welcome to participate in the discussion with their questions. The talk was held in English.
Below you will find the video and the transcript of the interview. The transcript has been slightly edited for readability.
Dr. Julian Jaursch, project director at SNV: Welcome, everyone, to today’s SNV background talk.
We’re going to dive into some topics today that can be somewhat controversial. We’ll talk about how to deal – and not deal – with disinformation, about rules for online content moderation and about what powers are acceptable for governments and companies to hold over digital communication spaces.
Before we get into that, I still briefly want to outline the backdrop to this and why I wanted to talk about this.
The backdrop to today’s discussion is the EU’s Digital Services Act, the DSA.
The DSA is pretty wide-ranging. It covers online marketplaces, search engines, social media as well as hosting providers. It contains rules about terms and conditions, user complaint mechanisms and online advertising transparency among many other things. A lot of it is rather procedural, a rulebook for corporate due diligence.
Yet, once you get to anything around content moderation, these rules touch upon online communication spaces where people discuss all kinds of topics and form their opinions. And the DSA does deal with content moderation in certain parts. For instance, the DSA requires platforms to explain their content moderation and have a notice-and-action system: Users need to be able to flag content that is potentially illegal or goes against platforms’ terms of service. Content moderation needs to be explained clearly in companies’ terms of service.
There is also the requirement for very large online platforms to conduct risk assessments. Such very large online platforms are those with more than 45 million users in the EU per month, so Facebook, Instagram, TikTok, Pornhub, Amazon, among others. These risk assessments don’t just cover illegal content but also content that might not be illegal but can negatively affect civic discourse. That might include disinformation. So, the DSA tasks big platforms to do an internal check of how the service might contribute to the spread of disinformation and how it deals with that.
Another thing to note is the so-called crisis response mechanism. In times of crises, big platforms might be required to take some specific safety measures, for a specific amount of time. What is a crisis is broadly laid out in the DSA.
We will get to the details of these topics in our discussion today, especially as they relate to tackling disinformation, and what questions and concerns have arisen around them.
That’s because risk assessments, crisis response mechanisms and generally the DSA’s relation to tackling disinformation are topics that have come up in commentaries around the DSA as undermining freedom of expression. This was visible in parliamentary debates on the DSA and its national implementation in Germany and it’s also visible in comments on the DSA, both by legal professionals and by citizens. At times, provisions from the DSA are either deliberately or inadvertently mixed up or misconstrued. For instance, in a parliamentary debate in Germany, the risk assessment and crisis response rules – which are different, albeit connected, things – were mixed up.
Nonetheless, there are valid concerns. Not everything in the DSA touches upon those digital spaces where people exchange views, form opinions and make political statements. But because the DSA does sometimes touch upon these digital spaces, it’s important to be extra careful about not handing too much power to governments or companies.
This is why I wanted to have this discussion today: To figure out what are valid concerns and what are merely misunderstandings or willful misconstructions.
I’m really glad I get to ask Dr. Martin Husovec about this. He’s an associate professor of law at the London School of Economics and Political Science and has worked on the DSA for years now. He’s actually going to publish a book on it soon.
Having read Martin’s work, heard him speak and talked to him, I got the impression that he’s generally not opposed to the DSA and can see a lot of good sides to it. But he’s also not a mindless cheerleader for it, glossing over the DSA’s shortcomings. That’s why I’m really thankful you’re here today, Martin, because this outlook will certainly be very helpful for our discussion. I don’t expect you or anyone else today to have all the solutions to tackling disinformation and what’s ailing the DSA, but I hope we can clarify what the law might mean in practice.
Moreover, I’m aware of the debate around disinformation. I know that some people view it as alarmist. I see that risk, too. But it’s also very clear to me that individuals and groups of people can be severely affected by disinformation and that’s why ignoring it is not an option to me, either. I hope we can strike that balance today.
Martin, many thanks again for being here today. I want to start with the big topic of disinformation and use that to highlight strengths and weaknesses of some parts of the DSA.
The term “disinformation” is not mentioned or defined at all in the actual DSA rules. It’s only mentioned in the recitals, which are the accompanying explanations to the rules. Still, when you look at the Commission website on the DSA, it says in the first paragraph about the law, “Its main goal is to prevent illegal and harmful activities online and the spread of disinformation.”
So, before we even get into any aspects of the DSA addressing disinformation, I wanted to get your take on this circumstance in general. Is it fine to not have a legal definition of disinformation or do you see issues with this fuzziness?
Dr. Martin Husovec, associate professor at the LSE: Thanks for having me. It’s a great pleasure to talk to you and to the audience. I would start by saying that the DSA is meant to be a general-purpose law, so disinformation is not specifically singled out. Your question is about to what extent disinformation is actually a useful concept for a law like the DSA. And I’m skeptical about that to a certain extent. The term can certainly be helpful, and the way it is used has evolved over time. I can understand how it can be used analytically and be helpful for things like improving media literacy or designing interventions to improve people’s understanding of a text or a piece of information and to help them be more critical. For that, I can see how it is helpful.
As disinformation covers various concepts, the term is of limited use for policy-making
I’m increasingly skeptical of the use of the term for policy purposes. The problem I see is that it has become such a sexy term – you can see this in political circles these days, at least on one side, well, actually on all sides, of the political spectrum – that political representatives increasingly use it to signal something. Whereas on the policy level, I’m not necessarily sure it’s so helpful. I think this probably explains why, when you look at the PR documents with which the DSA is being sold to the public, you find a lot of it. But when you look at the policy itself, you don’t find much of it.
For me, the key question is then, what does it mean for the policy? If we are using this in a sort of PR sense, what does it mean for the policy? Does it have any implications for it? And there, I’m a little bit skeptical that it’s useful as a starting point, because the way disinformation is defined in the EU documents could possibly mean everything from someone believing in flat earth, someone over-promising in a political campaign, something that is unrealistic, something that constitutes consumer fraud, to foreign interference by a foreign government advocating for war. Those are very different things. And so having one term for all of these things is probably not helpful. It doesn’t take a lawyer to see that, I guess.
Julian: Thanks for those examples. I think that’s why there have been new terms and why some more differentiation is necessary. That’s really helpful, because you did mention the policy side of things. Let’s make it more concrete about policy and about the DSA. We know now that there is no clear definition and you’ve laid out why it’s hard to have one or to deal with it on a policy level. Nonetheless, there is this risk management system in the DSA, which at least indirectly deals with disinformation. That’s been a bone of contention for some people critical of the DSA, its wide scope and the wide leeway that regulators, especially the Commission, have. Can you illuminate that a little bit? What is this risk management system and what’s your opinion of it, in relation to tackling disinformation?
Martin: The risk management system in the DSA is specifically important for VLOPs, very large online platforms, and very large online search engines, VLOSEs. It’s basically a system where we acknowledge the fact that regulators do not necessarily know how to tackle the problems. So, to solve the problem of information asymmetry, we use the tool of risk management. That is, we basically ask companies to come up with solutions to certain problems and then have a certain way for the regulator to mark the homework. In the DSA context, that basically means asking auditors to review the work, as well as civil society, the public and, finally, the regulators, through the dossiers that accompany the risk assessments. The long-term goal of this is to establish a regulatory dialogue where companies know there is increased attention put on their work, and where there is also transparency about what they’re doing and how they’re thinking about certain design and system choices on their platforms. This, hopefully, especially if you can look and compare across these different platforms, can lead to improved management of risks.
The DSA’s risk management system: One attempt to allow regulators more insights into how platforms work
Now, the key issue here is that risk is not defined as something that only relates to illegal content; it is, broadly, either a risk that relates to illegal content or a risk that relates to fundamental rights. Of course, many, many sorts of things can constitute a risk to fundamental rights even if they don’t constitute illegal behavior, and this is where most people see a huge role for what they call disinformation. The key problem is that, if I come back to my four examples, at least two of those would clearly be unlawful content: consumer fraud or interference by foreign governments advocating war efforts. Those could be quite easily squared with managing risks related to illegal content. The other two would be more questionable: over-promising by politicians, you know, giving unrealistic promises like, “We’re going to have more snow again, or the summer is not going to be so dry.” That, to my mind, would constitute disinformation. It is very hard, though, to imagine that we would start policing it as illegal content. The same thing [goes] for flat earth in most settings.
That’s why I guess people then wonder whether there is such a thing as disinformation as a risk, which, again, I’m skeptical there is. There is an umbrella term and there are different subsets of issues, and there is something that connects them – in most settings, I think, that is media literacy. But there is a huge internal differentiation. Basically, one of the things I’ve been arguing is that you simply cannot take the term and think that all types of lying, whether intentional or not, will give the Commission and the other DSCs [Digital Services Coordinators] a mandate to enforce the law in the same way.
Julian: This could be something that we get back to: What leeway do the Commission and the regulators actually have? Before I get to that, let me just quickly have one follow-up on all your points regarding the examples and the need to differentiate between them. Does that also apply to what is called the crisis response mechanism in the DSA? This is something that’s separate from the risk assessment. Could you also briefly talk about this and whether you see similar issues that you just laid out for the risk management system?
Martin: Absolutely. I think this applies across topics. I think it applies to the crisis response mechanism as well as the crisis protocols, because the underlying thing is that the DSA is built on the idea that it’s the parliaments, mostly national parliaments, who actually decide what is illegal. The goal of the law is to work with that. Illegality is defined mostly by national law, partly sometimes by European law, but mostly by national law. There is no mandate on the side of the Commission to invent ad hoc rules about illegality. There is no empowering provision saying the Commission can decide ad hoc that a certain type of content or expression should be considered illegal on a specific platform.
The European Commission cannot decide what content is illegal or not
In the absence of such an empowering provision, and in the presence of the clear idea that the rules on legality are actually made mostly by national parliaments, the Commission and other DSCs – though it is mostly about the Commission in this case – might be able to do something about risks that are posed by behavior that is not in itself unlawful. But they certainly cannot go as far as basically legislating, because the Commission was never given the power to really legislate about a specific type of content.
The question is then, where does the border lie between the two efforts? This is where we can have a conversation about how to draw the line. I think that’s the basic point: there is no empowerment, and such an empowerment would be difficult to construe anyway from the human rights perspective, because you can’t empower executives to make up rules about content so easily. In the absence of that, and given the clear delineation that parliaments make the laws about what is illegal behavior, I think there’s a clear idea that there has to be a cut: we have to distinguish the four situations that I mentioned as typical examples, and they can’t be treated the same.
Julian: That’s an important clarification in addition to the examples that you gave. The call for differentiation is also the reminder that, as you said, there is no empowerment of the Commission or other regulators to determine what is illegal or not. That is not in the DSA; it’s based on national laws and sometimes on EU law. That is a crucial reminder. Thanks for bringing that up. Now coming back to what you said before and what you dove into already a little bit. You said there is no empowerment function, parliaments get to make the laws, regulators need to enforce them. Is that the red line that you would draw? The Commission can’t make any rules, it can only enforce them? If so, what other red lines do you see? What other guardrails do you potentially see in the DSA against overreach or against this fuzziness or lack of a clear definition?
Martin: We have a couple of explicit safeguards in the DSA that we can definitely point to. One is that whatever the Commission does under the risk management rules cannot amount to a general monitoring obligation. That’s a very explicit one. The second one is that whatever the Commission does has to be proportionate. The red line I mentioned is a little bit more difficult, because the test for it is not explicitly in the DSA, but you can derive it from the very simple fact that in European human rights law, we always say that if you want an interference that is ultimately legitimate and proportionate, you need a restriction that is prescribed by law and sufficiently foreseeable. That’s the so-called quality of the law requirement.
Distinguishing between interventions focused on specific content and interventions that are “content-neutral”
If you tried to make an argument that you can prohibit certain types of expression on the basis of Article 35 of the DSA [the article on the risk management system], you would basically need to argue that the EU legislature meant to empower the Commission to make these calls on this very fuzzy legal basis. I think that just doesn’t square with the existing case law of the European Court of Human Rights. Now, the difficulty that we have with this particular red line is: Where to draw the boundary, how to fashion a test? We don’t have it in the DSA itself. Maybe this is something we could have had. We don’t have it in the DSA explicitly, but the courts will be able to derive it. So, talking of safeguards, imagine a situation where the European Commission tried to impose something on one of the VLOPs in an area where the expressions are lawful and tried to formulate it almost as a de facto ban of a specific type of speech. Then a platform can obviously go to court and get judicial review. In that case, the General Court, and later the Court of Justice, will be able to opine on that specific issue and is, I think, inevitably bound to find or draw some red line.
Whether they would draw the red line where I draw it [is unclear]. My proposal is to draw the red line according to how the interventions are framed. If an intervention is framed as content-specific – meaning that it is oriented according to the content, the expression – then it’s something the Commission cannot do. But if an intervention is framed around content-neutral measures, such as increasing user authentication, increasing friction or expanding media literacy programs on the platforms, then I don’t think that’s necessarily crossing the red line. That’s the kind of example where I think the Commission has sufficient legitimacy to act, while at the same time, it’s not crossing the red line.
Julian: Those are all interesting and important things. To quickly summarize, if I got it all right: You already see some guardrails in the DSA itself and how it’s set up. You see potential guardrails in the very basic fact that it’s parliaments making laws about what is illegal and legal, and not the Commission. You see the potential for courts intervening at some point, even though this is probably a long-term thing. And then your point here, lastly, that you need to make a distinction between interventions that are specifically about content and interventions that would touch users and the way platforms work without looking at content at all.
Martin: Yeah, exactly.
Julian: These points offer a bit of a counterpoint to the idea that the DSA sets up a specific system that is set in stone and will never change. I think you pointed out that there is a lot of variation that can still come. Is that right?
Martin: Yeah, absolutely. Let me just remind you that, as with any law, you can abuse it. We have very sensible laws on the books about the management of risks related to public assemblies. And are the local regulators, or the municipalities, getting it wrong? Yes, most definitely. We have many cases from the last couple of months in Europe where courts – or not necessarily courts but the local authorities – are getting it wrong or getting the balance wrong, because the law will be fuzzy. The law will say something like, “You have a right to prohibit a protest in case there’s a risk of disturbance, or if there’s a certain type of threat.” And you can never foresee all kinds of circumstances. The idea is that a peaceful protest should never be limited. Do we see misuse of this? Oh, most definitely. We see attempts to basically prohibit people from speaking up on the basis that they will invite too much risk to the public fora.
Against potential overreach and abuse: The need to stay vigilant about how the DSA is enforced
So, I’m not suggesting that this is kind of a done deal and we can just go and all watch TV and forget about this. On the contrary, I think we need to be vigilant about the red line. But I don’t think it’s something that is necessarily pre-programmed in the DSA and that we need to go to the streets, protest it, because we say this is something that is clearly going to happen. I see many sensible guardrails around this and I see how there is an attempt to actually square these two positions. But we need to remain vigilant, this is something we always have to do in liberal democracy.
Julian: This was really helpful as a reminder of both the benefits and the drawbacks, and it’s good that you had this type of analysis in there as well. Speaking of checks and staying vigilant, as you said, here is one last question that I have before I turn over to the audience. Staying vigilant within the context of the DSA requires a lot of different people and institutions. It’s not just the regulators, it’s not just the parliament drawing red lines; there are also specific roles for researchers, for civil society groups and for users themselves to help enforce the rules. As just one example, researchers can request data from very large online platforms to study, for example, disinformation or other things. I would like to ask you to speak about this type of community of practice as well: Who is part of this community and why is it important to have it, to stay vigilant?
Martin: Let me just present two ways to look at this. One is that the DSA creates an ecosystem around the platforms and regulators that is meant to augment the powers of both the platforms and the regulators. It creates institutions like trusted flaggers, it incentivizes researchers to get access and to analyze, for the public benefit, the risks that we observe on these platforms, and it potentially invites NGOs to represent users when they have disputes with companies. Those are institutions that I think are key to the enforcement of the rules about illegality in particular, but also to understanding what actually lies ahead.
Platforms generate a lot of headlines about all types of risks, but we don’t necessarily always know what the evidence really says about certain types of risks. I think this is where researchers can come in. By the way, this is exactly in the design of the law: we could have regulated the platforms in the same way that we regulate chemicals, which is basically that some very powerful agency gets access to the information and makes the analysis. We didn’t do that for a very simple reason: communication is extremely politically sensitive. We wouldn’t want any independent organization to do that, but we’re much more at ease with researchers doing it for the public benefit. So that’s one set of issues.
How a community of practice can provide checks on DSA enforcement: Supporting research and empowering users
But then there’s another set of issues, which is why I’m actually very optimistic about the risk management system. It requires that we can imagine a slightly different future. Most people, when they think about risk management, think that the state will come in and say to the companies, “You have to do X and Y”, and that a company simply implements it. This is a very top-down way of looking at online safety or risk mitigation. I think what we’re missing is that very often, for many risks, there are solutions that are much more bottom-up: solutions where the way you approach the problem is not prescribing the medicine but giving individuals tools to find their own ways – particularly with disinformation that is not illegal, where we have actually consciously decided not to make it illegal. We can, through the parliaments, decide to make it illegal, yet we often decide not to do so, probably for a very good reason.
For this class of risks, we shouldn’t necessarily abdicate doing something, but doing something doesn’t have to mean that we tell companies top-down what the solution is. It could also mean that we tell them, “Look, you need to empower individuals to be able to make easier choices, more choices, to be able to consult third parties that give them information they trust more.” Instead of having one set of fact-checkers, you could have a menu where you pick fact-checkers depending on whether you’re more conservative or more left-leaning. Still, the information you’re getting is more trusted than, say, a complete alternative that is just made up, yet still legal. I think there are many solutions in the space of empowering individuals and making them part of the solution.
But part of the problem is that you have to accept that you can trust individuals to make these choices. And I sometimes fear that people would like to make these choices for individuals, even on things we are not prohibiting. I’m not talking about things we are prohibiting, because they’re the choices on the table. I sense sometimes people want to be too much top-down and don’t trust an individual to be the one who actually is able to make these decisions. If we free ourselves of that and if we see the options in the bottom-up solutions, then I think there are many ways that we can open up the platforms and open up to solutions, where we can actually foster community. So, we can foster the idea that, I don’t know, Julian is really good, he’s really good in a specific area when it comes to a specific type of content. So, I will follow his feed for certain types of things or his recommendations because he’s an authority.
Or maybe I distrust the government with respect to COVID information because they had many transgressions, but there’s this association of doctors in a specific area and they also produce official information and I prefer to have their information to supplement my feed as a type of fact-checking. I see this as a completely legitimate choice on the side of individuals, again, if we’re talking about lawful expressions. And I think there’s a possibility for more of that. If you look at the work that the Council of Europe has done on disinformation, this is the cornerstone: User agency. Starting obviously with media literacy, but building up from there, this is where most of the interventions lie. If we conceive disinformation this broadly, if we’re not talking about just illegal stuff such as foreign interference, then I think we just need to adjust the way we think about the interventions as well. The Council of Europe has done some tremendous work in this area and there are many more things that can be done in reimagining some of these services that are sources of these risks.
Julian: Thanks also for those interesting and inspiring words. It’s about user empowerment and user agency, as you said. Once again, you made that distinction about content that is illegal under the law, which there are very clear rules for, and then stuff that is not illegal under the law. This distinction sometimes gets lost. But as you pointed out over and over again today, it is really crucial in determining what you’re dealing with and what potential mitigation measures would be. Thank you very much for this part of the session. There’s already a number of questions from the audience. I’ll start with this one: Do you have any insights about how very large online platforms are actually going about identifying and defining systemic risks? Is this something that you or the public is privy to or no?
Martin: Unfortunately, the answer to that is no, for a very simple reason: the way the DSA was drafted, probably not on purpose, basically led to the situation where we still don’t have publicly available versions of these risk assessments. Regulators have seen them, obviously the companies have seen them, but we haven’t seen them. I’ve been saying, though, that maybe one of the ways to get to this type of information earlier is to include it in your data access requests under Article 40 [of the DSA, which regulates researcher access to data]. Maybe this is a way you could possibly get access to it. But unfortunately, I haven’t seen these risk assessments, so I don’t know anything specific.
Call for earlier publication and discussion of VLOPs’ risk assessments
One thing I would say about this, though, is that I’m sure they’re thinking about this holistically rather than through the prism of very specific issues. One of the things I want to emphasize is that what I talked about applies to what the regulators need to do when thinking about risk assessments; I don’t think the same applies to companies. Companies can obviously make up their own definition of what they consider to be against terms and conditions and think about how they want to deal with that type of risk. To the extent that a regulator doesn’t impose a specific view on them, they’re free to do so.
As much as they are free to prohibit dog pictures on a cat site and the other way around, there are some limits to that, specifically when it comes to very big systemic players. I guess one of the discussions brewing here is to what extent, say, Meta’s move to de-rank certain types of political content, whatever that means, through a change of defaults is potentially a problem for civic discourse and the impact on elections, or not. But that’s a very specific issue that applies mostly to the big players; the starting point is contractual freedom. From that perspective, I imagine that their starting point will be very different from my starting point, which is the perspective of the regulator. Their starting point will be much more about a cluster of different issues, rather than my very clear orientation around what is illegal and what is not.
Companies are keen on actually blurring the distinction, because they want to have global products where they preferably talk only about terms and conditions violations, even though those include many types of illegal behavior as well. So, I would imagine them to be much blurrier on the distinction between illegal and legal. But again, that is not necessarily an issue, unless the regulator wants to impose some specific idea of what they want. For instance, if they [the regulators] say, “We want you to demonetize disinformation” and they [the companies] come up with a specific definition of disinformation that goes beyond the law, then this is where the kind of analysis I just mentioned comes in. But what I mentioned really applies to the regulator, not necessarily to the company. So, companies can develop their own ways. In terms of methodology, I’m not any smarter than anyone in the room about this. I’m eagerly waiting for the first risk assessments. Obviously, I’ve spoken to some people in the area, but I’m not trying to pretend that I know what the approach is. I guess we all have to wait for that.
Julian: There have been calls from various civil society organizations and scholars to find ways to make the risk assessments that companies do internally, or at least parts of them, available earlier, because there is an interest in them. It makes sense that to understand what risk mitigation means, you need to know what the risks are. Thanks for your take on that.
I’ll try to get through a couple more questions. What is your opinion on the connection between the DSA and the Strengthened Code of Practice on Disinformation? For those who haven’t followed that, it’s a voluntary code of practice that various online platforms and other actors in this field, such as fact-checkers and some civil society organizations, have signed on to. The question here is: Do you think the voluntary nature of the Code of Practice on Disinformation is preserved by the DSA?
Martin: I’m eagerly waiting to see how this is finally incorporated, because the DSA is not 100 percent clear on what the final act is through which an official DSA code of conduct is adopted. I’m quite curious how they do it. My sense from what I hear from folks is that there won’t be terribly many changes to the Code of Practice on Disinformation before it becomes an official DSA code of conduct. Now, in my book that comes out in August, the way I came to see the codes of conduct is as a useful exercise, yet not as a legally binding one. I’ll explain why I think they’re still helpful, although there is a risk that we’re running with the codes of conduct.
DSA codes of conduct as industry benchmarks and sources of evidence for the Commission
The risk that we’re running with the codes of conduct is that there will be many of them, because the member states are heavily constrained by the DSA when it comes to local legislation, which means that they will basically concentrate most of their efforts either on these types of codes of conduct, which are probably quick political wins, or obviously on updating the DSA. We have seen this already with some of the recent files. Now, the legal status of codes of conduct like the Code of Practice on Disinformation, if they become a DSA code of conduct, which they are about to, is that they are still voluntary. It’s voluntary for the companies to participate. Compliance with a code of conduct does not mean compliance with the DSA, so it’s not a safe harbor – unlike, say, in the UK, where the codes of conduct developed by Ofcom are really meant to be a sort of defense for compliance. But they also aren’t legally binding in the sense that a violation of the code of conduct would necessarily automatically mean that there’s a violation of the DSA.
The Commission cannot base its decision about non-compliance on the text of a code of conduct. That doesn’t mean, though, that codes of conduct are completely useless for the purpose of enforcement. One of the ways in which I think they’re going to be very helpful for the Commission relates to risk management: one of the big issues the Commission has to deal with is that the risk management system basically grants a lot of discretion to the companies in how they comply with certain things, and a lot of discretion to the Commission.
One of the problems they have to figure out, if they want to fine a company, is how to establish the benchmark – the due diligence benchmark in the industry. How quickly you process certain types of notifications, what the user interface usually looks like, what an effective fact-checking system looks like, et cetera. The problem with that is that you need evidence for it, and it needs to come from somewhere. It can come from researchers, it can come from the Commission. One way to build it up is by convincing the industry to agree upon something that they call industry practice, which is basically a code of conduct.
So, if industry comes in and agrees that something is a code of practice, I think they are signaling that it is an industry practice. I think that helps the Commission to build a case and say, “Well, if you’re below that industry practice, then you need to justify it; it’s not automatic, you need to justify it.” And if you’re able to justify it, you can still be okay. But if you’re not able to justify it, and now there’s this benchmark that has been created by the code of practice, I think that makes enforcement cheaper for the Commission and possibly easier. So, I don’t think they are useless by any means. I think they are potentially very helpful for the Commission. But they’re an interesting game, because for the companies, they inevitably produce evidence against themselves in the long run. For the states, they are a tool to get companies to sign on to all kinds of promises about what they are actually already doing, instead of passing legislation.
From the participation standpoint for the companies, although the codes are voluntary, as I put it in my book, they are about as voluntary as when your mother-in-law asks you to help with gardening on Sunday. You can possibly say no, but you know that things will go south if you do. So yes, they’re not going to be legally binding, but there are many consequences.
To answer your question about how I see the link, I see the link as basically becoming an important evidence source. I don’t necessarily think, actually, that everything in the code of practice is enforceable through the DSA. I have my doubts about certain things, for sure. I think this is how I basically see it. I see that it does definitely help the Commission to potentially enforce, but also, I think it has its limits, clearly. So, they still have to build the case.
Julian: Codes of Conduct as a benchmark is an interesting concept, apart from the gardening metaphor, of course. Let’s jump to another topic. Here is a comment: “I don’t agree with the stance that we as individuals should be brought on board to help mitigate risks that are stemming from the platform system. Isn’t the whole point of the DSA to address systemic risks so that individuals are relieved from having to deal with negative externalities themselves?”
Martin: I don’t think this is either-or. I get this reaction very often. Basically, the reaction is, “Look, we just made the company responsible for the risks, why do you want to now make individuals responsible for them?” Let me just walk you through an example. Ofcom recently proposed, in the context of conduct that is clearly illegal – de-risking situations of grooming of children on online platforms – that one of the solutions to deal with grooming would be to ask companies to change the defaults if the services are being used by children, so that no child is recommended to a stranger as a friend and no stranger is recommended to a child as a friend.
Allowing for some risks, if users have the agency to choose for themselves
By default, you can basically only communicate with those who are your friends; you don’t get any recommendations of strangers, nor are you recommended to strangers. Now, they didn’t formulate it as a ban but as a default, where a child – probably age-appropriate, I don’t recall the details – could possibly switch it on to have strangers recommended, but would have to go through a certain procedure where they learn about the risks they are about to encounter. You might think, “Why should a child possibly meet a stranger online?” But maybe the child is already 14 and there are many reasons why a child wants to meet a stranger online. So, I think this is an example where you’re trying to deal with the problem of grooming, but you’re still giving some agency to individuals.
Where your question is coming from is that you’re worried that my approach means we immediately dump all the risks on individuals and the individual has to deal with them. No, I’m not against the defaults, but the defaults are still empowering, because you can switch them off. They give you a choice that you didn’t have before, because a ban doesn’t give you a choice. A ban is a top-down imposed solution, and a top-down solution is an idea that the company or the state has, but it’s not always the best way to approach this. There are many examples of this. For me, the key thing that you try to achieve, also with things like media literacy, is more resilient individuals. You’re not trying to sanitize the risks. You’re trying to get more resilient individuals. Well, guess what? You can’t get more resilient individuals without exposing them to a certain level of risk, so they can learn how to deal with it.
This is also the problem with all the debates about, “Let’s sanitize all the risk for children.” Then suddenly, when they turn 18, we put all the adult risks on them – and where do they learn? The idea is obviously not that we now put all of the responsibility on individuals, but that whenever it’s meaningful and contributes toward individuals actually learning and becoming more resilient, we try to give them agency. Which, by the way, doesn’t mean that you have to exercise it individually. It doesn’t mean that it’s you who is making all the choices. You can also exercise it collectively. You can delegate it; there are many different ways you can do this. That is not implied. But one thing that is implied is that you get choices that you weren’t given before.
The case for middleware: Users and content creators choosing add-ons to platforms’ offers
Let me give you one example that shows this. Social media had a huge problem with content moderation for certain smaller languages, particularly Slavic languages, but I’m sure other languages as well – I’m just not aware of them. If you were the administrator of a page, you had huge difficulty using Facebook’s solution, because it would catch very little or would produce too many false positives. There were a couple of startups, one in particular called TrollWall, that basically came up with the idea: “How about we build an AI system that is trained on local languages, Slavic languages in particular, and offer it as an app that sits on top of your administration privileges on Facebook?” As a page administrator – say, a newspaper or a politician – you can install it and it will give you new capabilities to moderate content.
It gives you all the possibilities to customize: whether you want to deal with just hateful comments or vulgar comments or something else, whether you want to flag them, erase them or hide them, what exactly you want to do, whether you want to act upon them or have them resolved automatically. That’s agency given to the page operator, which doesn’t exist if the only moderation is top-down. It doesn’t, of course, and shouldn’t, relieve the provider from offering tools as well. But the idea that one company will have the best solutions all the time is, I think, chimeric. It’s never going to be the case that one company is always best at resolving all the risks. You should have both at the same time. I don’t think they are mutually exclusive.
Julian: I think there could be a whole lot of follow-ups on this: what types of exposure to risk are good, what types of risk you don’t want any exposure to at all, whether there are categories of that, whether there are specific groups of people we might be okay with being exposed and others not, and what type of agency is possible in the end. The way you laid it out hopefully answers part of the question or at least gives some more insight into your disagreement there.
I have another question that received some votes, but before I ask that, I need to ask you, are you familiar with the topic? I know you’re working on a lot of other things about the DSA, but this is also related to the European Media Freedom Act, the EMFA. Is that something that you feel comfortable talking about as well?
Martin: Article 18? Yeah, sure.
Julian: Well, then let’s get into that. This person is interested in your view on the interplay between Article 18 of the European Media Freedom Act – and maybe you can explain what that means – and the obligations that VLOPs have under the Digital Services Act. There was recently a response from the European Commission claiming that Article 18 does not apply when very large online platforms act in compliance with their risk assessment and risk mitigation obligations under the DSA. What the Commission said is that Article 18 of the European Media Freedom Act concerns the relationship between media and providers of VLOPs when it comes to the application of the terms of service. The question here is: What about systemic risks stemming from the application of terms of service? Maybe you can give a little bit of background on what the debate around Article 18 is and how it relates to systemic risks and disinformation, because there was a long-standing debate around this beforehand.
Martin: Article 18 of the EMFA is a rebranded news media exception that did not make it into the DSA. When the DSA was negotiated, there was a debate about the extent to which news media should be exempted from content moderation by online platforms. The argument usually goes: news media do their own editorial checks, so why should they be double-checked by online platforms acting as meta-editors? The media did not succeed in getting this into the DSA, so now there is a specific provision in the EMFA.
The interplay of the EMFA and the DSA: Arguing for more rights for trusted content creators
The final version of it involves a somewhat complicated process, where you first indicate that you are the type of provider that is potentially also regulated or self-regulated. Once you have that status, there are, I would say, two or three main benefits. The main one is procedural: your content can’t be taken down within 24 hours before you’re given a chance to reply, so there’s a kind of window before something can be removed. But that is not absolute; risk mitigation is actually one of the carve-outs. If the platform says, “Well, actually, this is happening in the context of elections”, and there’s some illegal content – I don’t know, some inaccuracy about the date and place of elections or the candidates – then it does not have to respect this specific obligation. This is somewhere in the fine print of Article 18, which says that platforms don’t have to follow it in that specific case. So that’s one benefit.
There are some additional benefits in terms of dispute resolution and some transparency: platforms have to lay bare how exactly they approach this type of editorial content. Personally, I was never against the idea behind it. On the contrary, as some of you may know, I’ve written a piece called “Trusted Content Creators”, where my idea was very much that we should reward trust with more rights, including procedural rights. It is only right that if we have trusted players, such as some of the media, who do an excellent job, they should enjoy more procedural rights. It doesn’t strike me as controversial to say that if you have a trusted news media organization that does a high-quality job, then its content shouldn’t be taken down or its account terminated so easily. That, to me, is very convincing. Now, whether Article 18 really does that, or whether it potentially over-includes organizations that are not necessarily producing trusted content, I leave for another day.
The idea that we want to reward trusted content creators is something I’m completely on board with. I don’t think Article 18 does it very well. It tries to get close, but only for the purposes of news media. I think we should be trying to do something like this for the whole ecosystem. There are tons of other players – researchers, influencers and many others – who potentially produce high-quality content and should potentially be rewarded. To answer your question about the relationship: the way it’s drafted basically gives the Commission quite broad powers to say that in some specific settings, this doesn’t actually apply, because the VLOPs do not necessarily have to follow it if they comply with Article 35 [of the DSA]. In case of a conflict, there’s a clear override by the DSA. So, to me, it’s not very absolute. It’s a good starting point, but I could easily see the platforms eventually saying, “Well, we have an untrustworthy content provider that is designated as a news media organization, yet for risk mitigation purposes, we treat them slightly differently than we do others.” Because of that carve-out, I see it at the moment more as a starting point than as hard rules.
Julian: This is another thing you mentioned very early on: we will have to see how it actually plays out, because both of these laws are very new and it will take some time to figure out potential tensions. Thanks. I’ll try to squeeze in one or two more questions and maybe you can keep it shorter for these ones. There’s one question: Could mandatory external independent audits have been mandated in the DSA instead of internal risk assessments? What disadvantage could have hindered including that in the DSA? Well, there are some independent audits in the DSA, so maybe this is an opportunity for you to explain a little bit what the risk assessments and the audits are.
Martin: The question is whether the existing system of audits – that is, a company pays the auditor and the auditor looks at its content – wouldn’t have been better replaced by a third party. Now, we do have a third party, namely researchers, but I take your point: that’s not the same thing as really auditing the risk assessment. It could have been done. I don’t think there was a comprehensive, good proposal, particularly around how to finance it. We already have quite a few problems with that question. I saw one of the questions about trusted flaggers going in this direction as well.
Looking ahead towards upgrading the DSA: Improved support for the community of practice
How do you deal with the support for the ecosystem? I think that’s actually a challenge the DSA doesn’t address; we should rethink it for the upgrade. We could have had it, but who would pay for it? How would we actually do this? I think we need to get researchers’ access to data off the ground. If we manage to do that, maybe we build some experience on how to do external audits and maybe that can become the next thing. I don’t know. But for the time being, even though the audits are probably quite imperfect and there’s a lot to improve, [this is what we have], although we haven’t seen the work of the auditors yet. It’s hard to say. I think it’s still better than if there were no audits at all. Are they perfect? No, they’re not. They’re also not perfect in the financial services area. But I think the fact that we have researchers gives us an external pair of eyes that hopefully can complement them. If we can build a case that we could do it externally, maybe we can change it going forward. But at the moment, I’m skeptical that we have the capability and know how to pay for it.
Julian: That goes back to your point about the community of practice. You mentioned the external auditors, which are an established system in other industries. You mentioned researchers who can get access, you mentioned the internal risk assessments, you mentioned the regulators, so it is really an interplay of a lot of different actors. I will close with the question about trusted flaggers that you already saw and briefly addressed: As of now, there are not a lot of trusted flaggers – these are organizations whose flagging of content is dealt with in a prioritized manner. This comment says that is mostly attributable to a lack of resources and funding, and you mentioned that as well. Do you have a comment on that? What is necessary to support this community of practice that is needed to enforce the DSA?
Sneak preview: Some ideas on how financial support for the community of practice could work
Martin: I would suspend my assessment of the number of trusted flaggers we’re going to get. I know that there are some pending applications. We’re certainly going to have more than what is now in the registry. Let’s have a look in one or two years at how many we have, but I suspect that what we are likely going to see is under-representation, at least for certain types of content. That’s actually something I’ve been thinking about a lot: How do you make the system around the platforms sustainable? So, I have an idea that I’m working through at the moment, which is basically about how to create money flows for these organizations. But it’s not simple, because the design challenge is that, in my ideal design, it shouldn’t be money that is decided upon either by the state or by the platforms. That’s the challenge. You can find the money, but the challenge is finding a mechanism to redistribute it that is not decided by either of the two, because both have their own problems.
Julian: A little bit of an open end here, a little bit of a cliffhanger, but that’s good, because it gives us more opportunity to exchange views and talk about this some more in the future. I will close it with that. Sorry for not getting to all the questions. A really big thank you to you, Martin, for being here, for fielding so many questions from the audience. Really great to talk to you.
Also, I want to thank my colleagues, Josefine and Justus, who’ve been hard at work preparing and running this webinar in the background. And as always, many thanks to you, the audience, for being here, listening in, engaging with us, asking questions. If you’d like to stay in touch with us at SNV, please feel free to sign up to our newsletter. I’m happy to hear your feedback on this particular event as well. And with that, just one more time, thank you very much for being here and have a great rest of your day, everyone.
Meet the speakers
Dr. Julian Jaursch
Lead Platform Regulation
Martin Husovec
Associate Professor of Law, London School of Economics and Political Science
Publications for this Event
Policy brief
The Digital Services Act is in effect – now what?
What the establishment of Digital Services Coordinators across the EU means for platform users, researchers, civil society and companies
Dr. Julian Jaursch
February 8, 2024
Article
DSA risk mitigation: Current practices, ideas and open questions
Dr. Julian Jaursch, Josefine Bahro
December 13, 2023