Disinformation as a Service - A New Threat?

The popularity of online information sharing, particularly through social media platforms, continues to increase every year. While the ease and speed of disseminating information online are generally beneficial, they also have a downside: the proliferation of disinformation.

But the mere presence of disinformation is not the biggest cause for concern; rather, it is the alarming development of disinformation being turned into a profitable business model. Welcome to Disinformation as a Service, where the vice has become a lucrative industry.

A recent Pew Research Center study found that two in ten American adults believe social media is the main catalyst of disinformation. Posts on social media often go unchecked, and false posts are 70% more likely to be shared than true information.

"A lie told a million times becomes a fact. Without facts, we can't have truth. Without truth, we can't have trust." These words from journalist Maria Ressa capture the danger: disinformation can cause damage everywhere, and your business is no exception.

Join us as we unpack Disinformation as a Service.

Important Note: Throughout this article, we use the acronym DaaS as shorthand for Disinformation as a Service. Please don't confuse it with Desktop as a Service.

What is Disinformation as a Service?

Disinformation as a Service (DaaS) refers to a "business" model where individuals or organizations offer services to create and spread disinformation on behalf of their clients. Essentially, it is a service that provides disinformation campaigns for a fee.

Disinformation as a Service can take many forms, ranging from fake news stories to fake social media accounts to deepfake videos. The goal is typically to manipulate public opinion or influence political outcomes, often by targeting specific groups or individuals.

Unfortunately, DaaS has become a growing problem in recent years, particularly with the rise of social media and the ease with which disinformation can be spread online. The use of Disinformation as a Service by malicious actors can have serious consequences, including undermining democratic institutions, spreading false information, and sowing division and mistrust within society.

It's important not to confuse disinformation with misinformation. Disinformation refers to false or inaccurate information that is spread deliberately, often with the intention of misleading or manipulating people. Misinformation, on the other hand, refers to false or inaccurate information that is spread unintentionally, often due to a lack of knowledge, understanding, or verification. In other words, the person sharing the information may genuinely believe it to be true but is mistaken.

Disinformation as a Service as a new threat

Disinformation as a Service is indeed a new threat. It started in politics, where political campaigns and state actors were the primary users of disinformation to manipulate public opinion or influence election outcomes. However, with the growth of social media and the increasing importance of online reputation, disinformation has become a popular tactic for businesses looking to gain a competitive edge or damage a competitor's reputation.

One political DaaS incident that attracted global attention occurred ahead of the 2020 US presidential election, when Facebook and Twitter announced that they had banned over 3,500 fake accounts suspected of acting as political disinformation channels.

In the business world, disinformation campaigns can take many forms, such as spreading false rumors about a competitor, using fake reviews to boost one's own products or services, or spreading false information about a company's financial status to manipulate stock prices. Disinformation can also be used to damage the reputation of a company or its executives, which can have long-lasting effects on the business.

Who needs Disinformation as a Service and why?

A disinformation service caters to anyone who wants to propagate false information, from ordinary citizens to politicians and enterprises.

Here are the key categories of users:

  • Political actors: Political parties, candidates, and governments may use disinformation as a service to spread false information about their opponents, interfere in elections, or sway public opinion on certain issues.
  • Corporate actors: Businesses and organizations may use DaaS to undermine competitors or manipulate public perception of their products or services.
  • Criminal organizations: Crime groups and hackers may use DaaS to spread false information for financial gain.
  • State-sponsored actors: Foreign governments may use DaaS as part of a larger information warfare strategy to influence the policies or actions of other countries or to advance their own geopolitical interests.
  • Ideological actors: Individuals who seek to push a particular agenda or ideology, often a hateful one. They may use DaaS to spread false information or propaganda to manipulate public opinion and build support for their ideas.

The ecosystem that constitutes Disinformation as a Service

The DaaS ecosystem is composed of various actors, tools and platforms that work together to create and disseminate false or misleading information to achieve their clients' goals. 

Here is a breakdown of the key actors in this evolving ecosystem:

1. Troll farms

These are the individuals or organizations that create false or misleading content on a large scale. They use a variety of techniques to make their content appealing to the target audience, such as sensationalist headlines, clickbait, and emotional appeals.

They may also use advanced techniques like deepfakes to make their content more convincing. The most advanced ones normally operate from the dark web, but others operate from the deep web and even the surface web. A notorious example is the troll farm linked to Russian businessman Yevgeny Prigozhin, which was accused of influencing public opinion during the 2016 US presidential election (Prigozhin reportedly died in a plane crash in 2023). These operations demonstrate the sophisticated nature of some troll farms and their potential impact on global events.

2. Distributors

Once the content is created, it needs to be distributed. The distributors have access to platforms that allow them to disseminate the content widely. These may include social media platforms, news websites, or email lists. 

They use various tactics to make sure that their content reaches as many people as possible, including using bots and fake accounts to boost engagement.

3. Amplifiers

Amplifiers help to spread the content even further. They do this by sharing the content on their own social media accounts, websites, or other platforms. Amplifiers may also engage in coordinated campaigns to promote the content, such as using hashtags.

A good example of amplifiers are social media influencers: users with a large following who can use their influence to spread disinformation for pay. For example, if a Twitter user with a huge following retweets fake news, the false information can reach millions of followers, who may in turn believe it and spread it further.

4. Clients

Finally, the ecosystem of disinformation as a service would not exist without paying clients who commission the creation and dissemination of false content. As mentioned earlier, these clients may include political actors, corporations, or individuals who want to influence public opinion or achieve some other goal.

Top Disinformation as a Service risks that should worry businesses

Any business is a potential victim of Disinformation as a Service, and these are the top risks you need to be aware of:

1. Legal liabilities

If your company becomes a victim of disinformation, this can lead to legal liabilities, as the disinformation can indirectly affect partners, employees, and even potential customers. Depending on the gravity of the damage, any of these parties can sue your business. Even a potential customer can claim to have been misled into purchasing the wrong product.

Victims are not the only ones who can be hit by legal liabilities; those who find themselves embroiled in disinformation chains can be hit even harder. Even genuine misinformation can escalate into disinformation, especially where the party spreading it has the capability to verify the information. For example, US-based voting technology company Dominion Voting Systems sued Fox News over claims that the network broadcast disinformation about Dominion's role in the 2020 US election. Some comments made on Fox News insinuated that Dominion tampered with the election, and Dominion said those comments hurt its business.

This example illustrates the importance of being careful when transmitting information, since carelessness can create significant liabilities. In this case, Dominion sought $1.6 billion in damages from Fox. The lawsuit could have been avoided if Fox had enforced rules and guidelines on what to say on air and how to say it.

2. Reputational damage & business loss

When false information is spread about a company, it can erode trust and credibility that customers, investors, and other stakeholders have in that company. This can result in a decline in sales, loss of partnerships, and difficulty attracting new business.

False information can also damage a company's brand image, making it difficult to differentiate it from competitors and resulting in a decline in market share.

3. Operational disruptions

When false information is spread about a company, it can create a sense of uncertainty and anxiety among employees. This can lead to a decrease in productivity and efficiency, as well as a decrease in employee morale. Employees may become distracted and spend time trying to verify or disprove the false information, which can take their focus away from work.

Moreover, employees may take sides and argue about the veracity of the information. This can lead to a breakdown in communication and collaboration, which can negatively impact the company's operations.

Employees may also feel targeted or attacked by the false information. This can result in increased stress levels and a decrease in job satisfaction, which can lead to increased absenteeism and employee turnover.

What makes disinformation so dangerous?

Disinformation is extremely dangerous and can quickly get out of hand, for these reasons:

1. Elusive accountability 

Unless the perpetrator is a large corporation, as in the Fox News case discussed earlier, disinformation is difficult to pin down in law. This means that people who spread disinformation, especially individuals, can often do so without facing any legal consequences. This lack of accountability makes it easier for disinformation to cause harm.

Many people, sometimes in their millions, are often involved in spreading fake information. In most cases, it is very difficult to trace the origin, and even if you succeed in tracing it, how will you get hold of the person behind it?

Let's say a US company is slandered by a disinformation item whose origin is outside America. How long do you think it will take the American authorities to reach the person(s) who created it? Even if the authorities succeed in finding them, how will they prove that they created it? What if they say their accounts were hacked?

Such are the challenges that make it so difficult to create any meaningful laws around disinformation. An influencer, for example, can send out fake news and then turn around and claim that their account was taken over by malicious actors who used it to spread the news. How are we going to prove them wrong?

2. The ordinary public rarely spots disinformation 

Disinformation can be designed to look like real news, which makes it difficult for the general public to distinguish it from credible information. For example, a fake news story may use a sensational headline, authentic-looking images, and quotes from seemingly credible sources. This can make the story appear legitimate, even though it is entirely fabricated.

The creators of disinformation often use provocative or emotional language to grab people's attention and appeal to their existing beliefs and fears. This can make it harder for consumers to question the accuracy of the information. 

To make matters worse, disinformation can also be spread by bots or automated accounts, which can make it difficult to determine whether the information is coming from a real person or a machine. This can make it even harder for the general public to distinguish between real news and disinformation.

3. Cheap to produce

Disinformation is often cheap to produce, especially when compared to real news. The disinformation creators do not need to follow any ethical or journalistic standards. This gives them the freedom to make up stories or manipulate images and videos to support their agenda. 

4. Easy to spread

With just a few clicks, disinformation can be shared with a wide audience and reach millions of people around the world. Social media algorithms often prioritize engaging content, which means that disinformation can spread quickly, even if it is not true. 

Also Read: The Most Dangerous Cyber Threats

Mitigating the risks of disinformation 

The goal is to take action so that disinformation does not produce consequences that harm the business. And if your company does fall victim, the damage should be minimal; that is what mitigation is about.

Here are some ways to mitigate the risks:

  • Monitor social media: Track mentions of your business and its products on social media using social media listening tools. With this, you'll quickly identify a potential DaaS campaign targeting your business before it gains traction (see the monitoring sketch after this list).
  • Implement a crisis communication plan: A crisis communication plan can help you respond quickly to DaaS campaigns aimed at your brand. The plan should outline the steps to be taken, who is responsible for each step, and how information will be communicated to stakeholders.
  • Educate employees: Train your employees to recognize disinformation and report it. Make sure they understand the risks of sharing unverified information on social media.
  • Conduct vulnerability assessments: These will help you identify potential weaknesses in your social media presence and develop strategies for mitigating them. The assessments should include an analysis of the social media platforms used by the business, the types of information shared on them, and the potential impact of a disinformation campaign.
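
To make the first point more concrete, here is a minimal sketch in Python of what automated mention monitoring could look like. It assumes you can obtain a keyword-search RSS feed for your brand; the FEED_URL, brand name, and suspicious-term list below are placeholders rather than any specific tool's API, and a commercial social media listening platform would give you richer alerting out of the box.

    import time
    import feedparser  # third-party library: pip install feedparser

    # Placeholder values -- substitute the feed or listening-tool export you actually use.
    FEED_URL = "https://news.example.com/search?q=%22Acme+Corp%22&format=rss"
    SUSPICIOUS_TERMS = {"scam", "fraud", "fake", "lawsuit", "bankrupt"}

    def scan_feed(seen_ids):
        """Return (title, link) pairs for new items that pair the brand with suspicious terms."""
        alerts = []
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            entry_id = entry.get("id") or entry.get("link")
            if entry_id in seen_ids:
                continue  # already processed in an earlier poll
            seen_ids.add(entry_id)
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(term in text for term in SUSPICIOUS_TERMS):
                alerts.append((entry.get("title", ""), entry.get("link", "")))
        return alerts

    if __name__ == "__main__":
        seen = set()
        while True:
            for title, link in scan_feed(seen):
                # In practice this hook would trigger the crisis communication plan,
                # e.g. notify the comms team via email or a chat webhook.
                print("Possible disinformation mention:", title, "->", link)
            time.sleep(300)  # poll every five minutes

The specific libraries matter less than the workflow: watch for brand mentions continuously, flag suspicious ones early, and route them into the crisis communication plan described above.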

 Also Read: How to Remediate Cyber Threats

Final perspective

Disinformation as a Service is a disruptive industry driven by greed and unfair competition. This means it is here to stay, and people will keep using it to spread fake news, push propaganda, and discredit their rivals.

The fight against this vice will be an uphill battle, but fortunately governments are joining in. The US government has already established the Global Engagement Center, which is responsible for coordinating efforts to counter foreign disinformation and propaganda.

Across Europe, the European Union is also investing in the EUvsDisinfo campaign to combat foreign disinformation. The campaign includes a database of disinformation cases and a public website to track DaaS-related incidents. 

For businesses, the responsibility is to remain constantly vigilant and equipped to handle the harmful effects of DaaS whenever it emerges.

 