Disinformation is defined as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth”.
While most commonly associated with politics and public health discourse, disinformation campaigns can also target businesses of all sizes, with damaging effects on reputation and revenue. In an age where disinformation spreads easily via the internet, there are fears that tools such as ChatGPT will make it simpler to create and circulate false narratives at a greater scale than ever before. But what threat does AI-engineered disinformation pose to businesses' integrity, reputation and, most importantly, security?
AI has revolutionized many aspects of human life, both personal and professional, and of late, Large Language Models (LLMs) have been one of the main focal points of AI development. With the popularity of OpenAI's ChatGPT, an arms race has emerged between tech companies to develop and launch similar tools. OpenAI, Microsoft and Google are leading the way, but IBM, Amazon and other key players are close on their heels.