Disinformation is defined as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth”.
While most commonly associated with politics and public health discourse, disinformation campaigns can also target businesses of all sizes, with damaging effects on reputation and revenue. And in an age where disinformation spreads easily via the internet, there are fears that tools such as ChatGPT will make it easier to create and circulate false narratives on a larger scale than ever before. But what threat does AI-engineered disinformation pose to businesses’ integrity, reputation and, most importantly, security?
AI has revolutionized many aspects of personal and professional life, and lately Large Language Models (LLMs) have been one of the main focal points of AI development. With the popularity of OpenAI’s ChatGPT, an arms race has broken out between tech companies to develop and launch similar tools. OpenAI, Microsoft and Google are leading the way, but IBM, Amazon and other key players are close on their heels.
Although these advancements in AI are a testament to how far technology has evolved since the advent of the internet just 50-odd years ago, the question arises: just because you can, does that mean you should?
OpenAI’s ChatGPT, launched in November 2022 and originally powered by the GPT-3.5 model, is one of the most powerful LLM chatbots released to date, with its underlying GPT-3 family trained on roughly 570 GB of text data. Its successor, GPT-4, is more creative and nuanced, can process images as well as text, and can generate information at a previously unseen scale.
There has been growing concern among high-profile players within the technology sector, with many calling for legislation on the development of AI. Key figures warning of AI’s power include Elon Musk (Tesla), Steve Wozniak (Apple co-founder) and Dr. Geoffrey Hinton (formerly of Google).
Disinformation attacks differ from typical cyber threats in that there is no need to physically or virtually infiltrate a business or its network. Bad actors can produce and circulate disinformation with relative ease, generating it en masse using chatbots such as ChatGPT and spreading the resulting inauthentic content via social media, blogs or emails.
■ Manipulated audio and video
A deepfake is an AI-generated video used to spread a fake narrative, often hyper-realistic in appearance. A falsified video of illegal activity, or of business leaders making incendiary comments, can erode public trust and be used for extortion. Deepfakes can also increase the effectiveness of phishing attacks, make identity theft easier and severely damage a company’s reputation.
AI systems can also be trained to impersonate a real person, generating speech that mimics their particular style and language patterns and lending more credibility to scam attempts.
■ Fraud
Fraud is typically motivated by financial gain. AI can be used to create forged documents with fake letterheads, copied-and-pasted signatures, or even business contracts and invoices designed to deceive and mislead people.
■ Proxy websites
Proxy websites are used as fronts by malicious actors to host their fraudulent content. They are disguised as legitimate sites in order to launder fake news and divisive content and drive page views. Often the only way they can be detected is by spotting a slight misspelling in the URL or cross-referencing the site’s information with verifiable sources; a simple sketch of this kind of URL check follows this list.
■ Disinformation-as-a-Service
Disinformation-as-a-Service is nightmare fuel for information warfare: anyone can buy fake news and misinformation campaigns powered by a network of professional trolls, bots and other manipulation tools. Be it a competitor, a disgruntled client or an ex-employee, anyone with bad intentions can engage such a service to damage a business’s or individual’s reputation with falsified information.
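To make the URL-misspelling check mentioned above concrete, here is a minimal sketch of lookalike-domain detection using edit distance. The domain list, function names and threshold are all hypothetical placeholders; real defenses typically combine a check like this with certificate monitoring and threat-intelligence feeds.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical brands to protect; replace with your own domains.
LEGITIMATE_DOMAINS = ["example.com", "example-payments.com"]

def flag_lookalike(candidate: str, max_distance: int = 2) -> list[str]:
    """Return any legitimate domains the candidate closely imitates."""
    return [d for d in LEGITIMATE_DOMAINS
            if 0 < levenshtein(candidate, d) <= max_distance]

if __name__ == "__main__":
    for suspect in ["examp1e.com", "exarnple.com", "unrelated.org"]:
        hits = flag_lookalike(suspect)
        if hits:
            print(f"{suspect} may be impersonating: {', '.join(hits)}")
        else:
            print(f"{suspect}: no close match")
```

A small edit-distance threshold catches classic tricks such as swapping “l” for “1” or “m” for “rn”, while leaving genuinely unrelated domains unflagged.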
Fake news is more than a catchy phrase. Disinformation poses serious harm to individuals and businesses alike and can result in:
■ Reputational damage
False information can spread quickly and easily on the internet, and despite best efforts to rebut it with the truth, once trust is lost it can be very difficult to rebuild with the public. This can result in a loss of sales and an inability to attract new customers.
■ Financial loss
Disinformation can be used to manipulate stock prices or to deceive investors, which can lead to significant financial losses for both the company and its investors when the truth comes to light.
■ Cybersecurity breach
AI can be used to generate more sophisticated phishing attacks that appear to come from legitimate sources, which can then be used to install malware on business systems, giving hackers access to confidential company information or customer data; one simple detection heuristic is sketched after this list.
■ Business disruption
Disinformation can be used to spread false rumors about business operations or product availability, which may cause customers to cancel their orders or avoid doing business with the company in the first place.
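As a companion to the phishing point above, here is a minimal sketch of one spoofing heuristic: flagging messages whose visible From domain differs from the envelope Return-Path domain. The sample message is fabricated for illustration, and production mail defenses rely on SPF, DKIM and DMARC verification rather than this simplified header check.

```python
from email import message_from_string
from email.utils import parseaddr

def domains_align(raw_message: str) -> bool:
    """Return False when the visible From domain differs from the
    Return-Path domain, a common (though not conclusive) spoofing tell."""
    msg = message_from_string(raw_message)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    return_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2].lower()
    return bool(from_domain) and from_domain == return_domain

# Fabricated example message for illustration only.
sample = (
    "From: CEO <ceo@example.com>\n"
    "Return-Path: <bounce@lookalike-example.net>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please action immediately."
)
print("Headers align:", domains_align(sample))  # False -> suspicious
```

A mismatch alone does not prove fraud (mailing-list services legitimately rewrite Return-Path), but combined with urgency cues it is a useful signal for triage.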
As Artificial Intelligence continues to develop, its power needs to be examined through an ethical lens. With every benefit it brings, those with bad intentions gain access to the same power for malicious ends. It will be interesting to see how governments around the world decide to handle it and what legislation will be put in place to regulate its advancement.
The development of AI legislation is already underway within the European Union. The European Parliament is looking to introduce the AI Act to ensure human-centric and ethical development of AI in Europe, and MEPs have endorsed new transparency and risk-management rules for AI systems.
Although AI can be used to improve work processes, it can also be used to optimize the output of defamatory and dangerous mistruths. We are in an age where fake news and disinformation are no longer limited to the political sphere, and companies are also at risk of disinformation attacks. There is no doubt that AI-powered Large Language Models (LLMs) such as ChatGPT have the power to increase the frequency, intensity and consequences of disinformation attacks against businesses.
Companies need to put precautions in place to detect and defend against AI-powered attacks, but only time will tell the true power of AI, for better or worse.