At the beginning of the year, Google sent a message to users about changes to its policy relating to a forthcoming “sensitive event”; the World Health Organisation (WHO) issued warnings about a new “Disease X”; and the academics, billionaires, politicians, and corporate elites of the world gathered at the World Economic Forum (WEF) in Davos, Switzerland, to underpin such warnings and throw in a few predictions of their own.
It must be an election year!
“Rebuilding Trust” was the theme of the annual WEF gathering, which covered areas such as security, economic growth, Artificial Intelligence (AI), and long-term strategies for climate, nature, and energy. As Knowledge Exchange previously reported, the explosion of AI upon society presents both potential benefits and threats, including increased misinformation and disinformation that looks set to grow this year.
The WEF warns that the benefits of AI will only come to fruition if ethics, trust, and AI can all be brought together.
But the question this statement raises for many observing the forum’s discussions is: “Whose ethics, and trust in whom?”
This question of trust and ethics became particularly pointed for other observers of WEF forums when discussions at Davos moved towards the potential combination of AI with biotech and nanotechnology.
Trust is certainly an important commodity for a world still barely crawling out of a pandemic, one that has also seen a rapid rise in misinformation, disinformation, and censorship around public health, climate, politics, the economy, and war. This may make it difficult for people to rebuild trust in institutions, including think tanks like the WEF and other bureaucratic organisations.
But can we stop the rampant AI-empowered digital transformation of our personal and business lives, one that will increasingly make Moore’s law look like glacial progression? Especially when calls for a pause in AI development seem to have fallen on deaf ears.
It seems not. After a slew of announcements and releases last year, IT giants such as Google and Microsoft continue to duke it out in the AI arms race in 2024, announcing plans to integrate AI functionality into search, office productivity software, and the operating system itself. Developments such as the introduction of a Microsoft Copilot key on certain laptops and desktop computers ramp up the hard integration of AI into the silicon of CPUs, GPUs, and NPUs in PCs and servers in the cloud.
With analysts such as McKinsey predicting that Generative AI will “add the equivalent of $2.6 to $4.4 trillion of [global] economic value annually”, who wouldn’t want to get a slice of the AI action?
However, McKinsey also quite rightly noted that, as with other “generational” advances in technology, turning the promise into sustainable business value is often difficult, with many initiatives failing to get past the pilot stage. CIOs and CTOs have a “critical role in capturing that value”, according to its guide, Technology’s generational moment with generative AI: A CIO and CTO guide, which states:
“Many of the lessons learned from [early developments in internet, mobile and social media] still apply, especially when it comes to getting past the pilot stage to reach scale. For the CIO and CTO, the generative AI boom presents a unique opportunity to apply those lessons to guide the C-suite in turning the promise of generative AI into sustainable value for the business.”
But it also comes at a cost, especially in terms of human employment, with industries led by media and entertainment, banking, insurance, and logistics looking likely to be hit the hardest, according to a poll of top directors by the FT.
"Whenever you get new technologies you can get big changes," said Philip Jansen, Chief Executive of BT who spoke to the BBC about the 55,000 jobs it intends to cull by the end of the decade- with a fifth of those being replaced by AI.
Because Generative AI tools can imitate humans in conversational chat, problem solving, and code writing, Jansen added that AI would make BT’s services “faster, better and more seamless.”
According to a survey of 750 business leaders using AI by ResumeBuilder, “1 in 3 companies will replace employees with AI in 2024”. The survey also found that 44% of companies say AI will lead to layoffs in 2024, and that 53% of companies already use AI, with a further 24% planning to start in 2024.
With alarming cuts such as this year’s Google AI layoffs, Onur Demirkol of Dataconomy believes:
“The tech industry is navigating a transformative period where AI is increasingly automating tasks traditionally handled by humans.
As we see these changes, the industry is adapting to innovative technologies and facing critical questions about the implications on employment and the broader workforce landscape.”
Although proponents of AI development may dismiss dystopian claims about machines running amok, as in The Terminator, or a society under the perpetual surveillance of a panopticon, as envisaged by George Orwell in 1984, there are alarming signs of such technology being insidiously introduced into society. This can be in the guise of entertainment, such as the Metaverse, or in the name of public safety, such as the New York Police Department’s Digidog, which was redeployed last year. GPS trackers and autonomous security robots all have the potential to link to AI technology to detect, track, discriminate, and engage “targets.”
And while RoboCop might still be some years off, it would seem that groupthink narratives, spiked by a cocktail of AI and algorithms to prepare society for events such as Disease X, current and future wars, and issues surrounding climate change, are already at work. With growing instances of deepfakes and misinformation, narratives are being pushed through ephemeral persuasion via search and social media. And it is not only high-profile celebrities, sports people, and politicians who are being targeted. In the wrong hands, such tactics can damage businesses and individual reputations, for example by impersonating a CEO via email or using deepfakes to discredit individuals opposed to particular viewpoints or narratives, something CIOs and CTOs need to watch for and alert staff about.
Copyright and intellectual property are also areas where AI could get companies into trouble as they increasingly look to AI to help with copywriting and marketing activities. For example, creating art using an AI platform like DALL-E or Midjourney may seem miraculous and tremendously impressive, but questions are being asked about what sources were used to train these platforms and whether there was permission to use them. Stability AI, the company behind Stable Diffusion, found out the hard way when it was served papers by Getty Images for copyright infringement.
If a company then creates an image using a similar AI platform for marketing or the Web, a rights holder like Getty could theoretically argue that this too is copyright infringement and sue the company using the platform as well.
In other areas, what happens when employees start uploading corporate presentations, technical designs, white papers, or even code into AI platforms to give them a bit more zing? What happens to the information that is uploaded, and could a potential competitor use it for their own gain? The designers at Mercedes F1, for example, would surely love to use AI to design a car like Adrian Newey’s Red Bull.
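A practical first step for CIOs is to screen material before it ever reaches a third-party AI service. The short Python sketch below illustrates the idea only; the patterns and the is_safe_to_upload() helper are hypothetical placeholders rather than any real product’s API, and a production system would rely on dedicated data-loss-prevention tooling.

```python
# A minimal sketch of a pre-submission screen for text an employee wants to
# paste into a third-party AI platform. All patterns and names here are
# hypothetical illustrations, not a real DLP product's rules or API.
import re

# Hypothetical markers of material that should never leave the company.
SENSITIVE_PATTERNS = {
    "internal marking": re.compile(r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE),
    "source code":      re.compile(r"#include|\bdef |\bimport |\bclass "),
    "email address":    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
}

def is_safe_to_upload(text: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons); ok is False if any sensitive pattern matches."""
    hits = [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    return (not hits, hits)

if __name__ == "__main__":
    draft = "CONFIDENTIAL: gearbox design notes. Questions to jane@example.com"
    ok, reasons = is_safe_to_upload(draft)
    if not ok:
        print("Blocked before upload; flagged as:", ", ".join(reasons))
```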
What companies also need to remember is that Millennials now make up over half of IT buying decision-makers. Unlike many of their older colleagues, Millennials are digitally native, having grown up in an increasingly online and digitally enabled world, and this is quickly changing the way companies buy technology.
This means that influencing, rather than hard selling, has bled over from the B2C world into B2B purchasing, and this is a key factor in so many companies looking to invest in AI. Companies want solutions that offer the conversational interaction of humans alongside the analytical, predictive, and processing power of machines: systems that can capture intent signals, steer conversations in the right direction, and suggest solutions that might fit customer needs (see the sketch below). If regulation brings greater accuracy, transparency, and accountability, there will be more trust in rolling out AI systems, and that may involve more input and supervision of AI systems by human knowledge workers, not less.
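As a rough illustration of what “capturing intent signals” can mean in practice, the sketch below maps keywords in a customer message to suggested next steps. The intents, keywords, and responses are invented for illustration; a production system would use a trained classifier under human supervision rather than simple keyword matching.

```python
# A minimal sketch of capturing buying-intent signals from a customer message
# and suggesting how to steer the conversation. The intents, keywords, and
# next steps are hypothetical placeholders, not a real vendor's taxonomy.

# Hypothetical mapping of intent signals to suggested next steps.
INTENT_RULES = {
    "pricing":     (["price", "cost", "budget", "licence"], "Offer to connect them with a pricing specialist."),
    "evaluation":  (["trial", "demo", "pilot", "proof of concept"], "Suggest scheduling a product demo."),
    "integration": (["api", "integrate", "compatible"], "Share integration documentation."),
}

def detect_intent(message: str) -> list[tuple[str, str]]:
    """Return (intent, suggested_next_step) pairs whose keywords appear in the message."""
    text = message.lower()
    return [
        (intent, next_step)
        for intent, (keywords, next_step) in INTENT_RULES.items()
        if any(keyword in text for keyword in keywords)
    ]

if __name__ == "__main__":
    for intent, step in detect_intent("What would a pilot cost, and does it integrate with our CRM?"):
        print(f"Signal: {intent} -> {step}")
```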
Pat Gelsinger, CEO of microprocessor and AI giant Intel, believes that it is fundamentally all about improving the accuracy of AI’s results. In an interview with US cable broadcaster CNBC, he said, “This next phase of AI, I believe, will be about building formal correctness into the underlying models.”
“Certain problems are well solved today in AI, but there’s lots of problems that aren’t,” Gelsinger said. “Basic prediction, detection, visual language, those are solved problems right now. There’s a whole lot of other problems that aren’t solved. How do you prove that a large language model is actually right? There’s a lot of errors today.”
Gelsinger believes that although AI will improve the productivity of the knowledge worker, the human still has to have confidence in the technology.
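One way to build that confidence, in the spirit of Gelsinger’s remarks rather than as Intel’s actual method, is to treat a model’s output as untrusted until it passes deterministic checks. In the sketch below, generate_sort_function() is a hypothetical stand-in for a call to any code-generating model, and the harness only accepts its output once known test cases pass.

```python
# A minimal sketch of verifying model-written code before trusting it:
# the model's output is treated as untrusted text and only accepted once
# it passes deterministic checks against known test cases.

def generate_sort_function() -> str:
    """Hypothetical placeholder for code returned by a generative model."""
    return "def model_sort(items):\n    return sorted(items)"

def passes_checks(source: str) -> bool:
    """Run the untrusted, model-written function against known test cases."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # In production, execute in a sandbox, never directly.
        model_sort = namespace["model_sort"]
        test_cases = [[3, 1, 2], [], [5], [2, 2, 1]]
        return all(model_sort(case) == sorted(case) for case in test_cases)
    except Exception:
        return False  # Any failure means the output is not trusted.

if __name__ == "__main__":
    code = generate_sort_function()
    print("Accepted" if passes_checks(code) else "Rejected: needs human review")
```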
Trust is especially important for the IT industry, whose technology is seen as the key not only to the riches of the AI castle, but also to the everyday systems that people rely on in their daily lives.
In January of this year, ITV’s docudrama Mr Bates vs The Post Office captured the attention of the UK public as it detailed the scandal surrounding the Horizon IT system, which has been rumbling on for the last 20 years. Originally introduced to eliminate paper-based accounting, Horizon made over 900 UK subpostmasters the victims of what some commentators are calling one of the biggest miscarriages of justice in UK history, one that saw many wrongfully convicted of theft or fraud, some jailed, and all smeared by the ultimately faulty IT system.
Back then, before cloud and AI, human intelligence was responsible for large-scale IT rollouts and open to human error in software coding.
And it has taken 20 years of campaigning, questions in Parliament, investigations by publications such as Computer Weekly, and finally a TV show to shine a light on a faulty IT system introduced in 1999, in order for the victims to at last get justice.
Sadly, some of the wrongly accused committed suicide or did not live to see the promised reparations, overturned convictions, and the clearing of their names they all deserved. Many cases are still sub judice.
What was galling for many viewers of the dramatisation was that some insiders at the Post Office allegedly knew there were issues with the system, such as the ability to log into it remotely, but continued to promote the misinformation that the system could only be accessed by the local operator, and thus that any errors in the accounting were solely down to the (human) postmaster, which, as it turns out, was not true.
In this case, it appears to have been the humans running the Post Office, covering up the Horizon IT system’s errors, who were to blame. But when AI, which can already write working code in languages like C++, gets to the stage of being trusted enough to code sensitive medical, military, or other machinery that interacts with human “nodes”, will AI become more difficult to question and hold accountable when machines are running the show?
And worse still, with the introduction of AI into law enforcement or military machines, will lethal decisions be made in error? Will AI systems, like the HAL 9000 computer in 2001: A Space Odyssey, think they are:
“By any practical definition of the words, foolproof and incapable of error.”
As we face a new era in artificial intelligence, the rapid development of AI underscores the critical need for a balanced and ethical approach to that development.
There is an undoubted need for robust ethical frameworks, transparent practices, and proactive governance to harness the transformative power of AI while mitigating its risks. Stakeholders must collaborate to ensure AI serves as a force for good, and that society fosters an innovative, yet equitable and ethical, outlook in the march toward a technologically advanced future.