In today's fast-paced business landscape, the ability to make effective decisions relies heavily on access to robust and reliable information. However, in a world where misinformation, fake news and cyber threats abound, the reliability of information cannot be taken for granted. This issue becomes even more critical within organizations, where well-informed choices are essential for success.
Nowhere is this more evident than during crises, when reliable information is vital to making effective and efficient decisions. The COVID-19 pandemic provided ample examples, as the consumption of false information led to unsafe behaviors, not just among individuals but across entire teams and organizations.
Undermining Decision-Making
Recent research from Yale University sheds light on the detrimental effects of misinformation within a corporate context. Effective decision-making is paramount during crises, and it hinges on the ability to process high-quality information. However, the research reveals that even within corporate teams, it's often too easy to dismiss input from experts. Such dismissals can result in the spread of ill-founded rumors, leading to widespread indecision and poor decision-making across the organization.
This influence is not limited to decision-making but extends to employee behavior. A study conducted at University College Dublin found that exposure to fake news stories, especially those related to workplace matters such as privacy concerns with company apps, made employees 5% less likely to adopt those applications. The impact can even go as far as creating false memories that alter employees' behavior. The effect is relatively small, but it becomes significant when misinformation sways crucial workplace decisions, such as vaccination rates among employees.
Furthermore, a study from North Carolina State University underscores how fake news stories shape the expectations consumers place on companies. Even when a company is the victim of a misinformation campaign, consumers still expect the organization to take corrective action. This highlights the growing role of communication professionals in addressing fake news: they must work with reporters to provide accurate information or make correct information directly available to the public.
Financial Implications
The financial implications of misinformation are substantial in the business world. In 2019, Robert Cavazos of the University of Baltimore estimated that fake news was costing companies approximately $80 billion annually. This cost encompasses reputation management, stock manipulation, and more, threatening the trust that forms the foundation of the free market.
Misinformation also impacts financial markets. Researchers at the University of Canterbury argue that low-quality political signals caused by misinformation can disrupt market relationships and undermine companies' share prices. The reliability of information is vital for investors and market stability.
In the workplace, we must also consider the potential manipulation of the data that powers AI-based IT systems. Professor Sir Adrian Smith, head of The Alan Turing Institute, has warned that manipulated data can distort AI outcomes, an issue of growing importance as AI-based systems become more widespread.
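To make the risk concrete, below is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn with a deliberately simple nearest-neighbor model) of how a handful of poisoned training records can flip a model's prediction. The data and model are illustrative only and are not drawn from the research cited above.

```python
# Illustrative only: a toy 1-D classification task where injecting a few
# mislabeled ("poisoned") records flips the model's prediction.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean training data: class 0 clustered around -2, class 1 around +2.
X = np.concatenate([rng.normal(-2, 0.5, (200, 1)), rng.normal(2, 0.5, (200, 1))])
y = np.array([0] * 200 + [1] * 200)

clean_model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# Poisoning: add 10 records deep inside class 1 territory, labeled as class 0.
X_poisoned = np.concatenate([X, np.full((10, 1), 2.0)])
y_poisoned = np.concatenate([y, np.zeros(10, dtype=int)])
poisoned_model = KNeighborsClassifier(n_neighbors=1).fit(X_poisoned, y_poisoned)

test_point = np.array([[2.0]])
print("clean model predicts:   ", clean_model.predict(test_point))     # [1]
print("poisoned model predicts:", poisoned_model.predict(test_point))  # [0]
```

Real attacks are subtler than this toy example, but the principle is the same: corrupt the data and the model's decisions follow.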
Deliberate Manipulation
One particularly concerning development is the "end-to-end cyber-biological attack," highlighted by research from Ben-Gurion University of the Negev. In these attacks, cybercriminals use malware to tamper with DNA data on scientists' computers, tricking labs into synthesizing harmful toxins instead of potentially helpful drugs.
Within the business context, we also need to be aware of the risks posed by adversarial attacks on AI systems used for decision-making. These attacks can be targeted, steering a model toward a specific outcome, or untargeted, simply distorting its outputs, and they are made more dangerous by the fact that attack methods often transfer between different AI models.
To protect organizations, researchers from the Georgia Institute of Technology recommend training AI systems to detect and repair themselves in real time. This approach involves reducing noise in the system and restoring data to its original form, mitigating the impact of adversarial attacks.
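As a rough sketch of that idea, the Python snippet below applies a simple input-denoising step in the spirit of "feature squeezing": inputs are smoothed before prediction, and inputs whose predictions change sharply after smoothing are flagged as suspicious. The model interface and threshold are assumptions made for illustration; this is not the Georgia Tech approach itself.

```python
# A minimal input-denoising defence sketch (assumptions: pixel values in
# [0, 1] and a model exposing a scikit-learn-style predict_proba method).
import numpy as np
from scipy.ndimage import median_filter

def squeeze(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Reduce color depth and apply a median blur to strip high-frequency noise."""
    levels = 2 ** bits - 1
    squeezed = np.round(image * levels) / levels  # bit-depth reduction
    return median_filter(squeezed, size=3)        # local spatial smoothing

def looks_adversarial(image: np.ndarray, model, threshold: float = 0.3) -> bool:
    """Flag inputs whose predictions shift sharply once the noise is removed."""
    p_raw = model.predict_proba(image)
    p_squeezed = model.predict_proba(squeeze(image))
    return float(np.abs(p_raw - p_squeezed).max()) > threshold
```

The intuition is that adversarial perturbations tend to live in the high-frequency detail that smoothing removes, so a large prediction shift after denoising is a useful warning sign.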
Believing Our Eyes
In addition to cybersecurity threats, businesses should be vigilant against another emerging risk: deepfakes. Deepfake technology produces hyper-realistic but entirely fabricated media, and research from the Queensland University of Technology shows that detecting such content remains a challenge.
Within organizations, the potential consequences of undetected deepfake content can be significant. For instance, attackers could manipulate images or videos to damage a company's reputation or credibility.
With over 3 billion images and 720,000 hours of video produced daily, the sheer volume of content makes detecting deepfakes difficult, particularly as mainstream media incorporates user-generated material into its coverage. It is concerning that only 11% of journalists use media verification tools, leaving ample room for fake content to spread widely.
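Even a basic check can help. The sketch below compares a received image against a known-good original using a perceptual hash from the open-source imagehash library; the file paths and distance threshold are placeholders, and this is a simple tamper check rather than a deepfake detector.

```python
# A simple tamper check with perceptual hashing: visually similar images
# produce nearby hashes, so a large distance suggests the file was altered.
from PIL import Image
import imagehash

def likely_altered(original_path: str, received_path: str, max_distance: int = 8) -> bool:
    """Return True when the received image differs noticeably from the original."""
    original_hash = imagehash.phash(Image.open(original_path))
    received_hash = imagehash.phash(Image.open(received_path))
    return (original_hash - received_hash) > max_distance  # Hamming distance

# Placeholder file names for illustration.
if __name__ == "__main__":
    print(likely_altered("press_photo_original.png", "press_photo_received.png"))
```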
To address these challenges, both individuals and organizations must recognize the critical role that reliable information plays in decision-making. In the information age, overcoming these threats to information integrity is essential to harness the true benefits of data-driven decision-making.