This month’s Knowledge Exchange examines both the benefits and the potential dangers of unregulated Artificial Intelligence for enterprise resourcing, business IT platforms, sales and marketing strategies, and customer experience. It also asks whether it is possible to pause AI development in order to roll out ethical, regulated AI that protects and enhances jobs rather than replacing them; that protects personal data, privacy and preferences rather than manipulating them for nefarious ends; and that doesn’t spiral us into a “Terminator”- or “Matrix”-like future where the machines are in control!
As we discussed in last month’s Knowledge Exchange on Hybrid Cloud, Artificial Intelligence (AI) has some compelling use cases for IT decision makers (ITDMs), especially when it can help tame IT complexity by automating repetitive and time-consuming tasks. It can also learn from past data to write code, and autogenerate content and images from multiple sources by mimicking human intelligence and human labour. As the technology develops, a whole raft of plug-ins and algorithms is emerging that figures such as Microsoft chief executive Satya Nadella believe will “create a new era of computing.”
And as corporations and investors are constantly looking for growth, efficiencies and ultimately profit, the lure of AI to support this new paradigm must be an irresistible proposition right now, especially as we are seeing a lot of economic pressure from various financial, energy and geopolitical crises.
This is perhaps why, as news service Reuters notes, since the March 2023 launch of Microsoft’s AI “Copilot” tool, over 10,000 companies have signed up to the suite, which helps with the generative creation and optimisation of its Office 365 Word, Excel, PowerPoint and Outlook email software, and which some commentators are calling “Clippy on steroids”.
Speaking to senior editor of The Verge, Tom Warren, Microsoft 365 head Jared Spataro said: “Copilot works alongside you, embedded in the apps millions of people use every day: Word, Excel, PowerPoint, Outlook, Teams, and more… it is a whole new way of working.”
Microsoft has also invested $1bn in former non-profit OpenAI, a company co-founded by billionaire inventor and investor Elon Musk, who has since left to concentrate on his Tesla automotive and SpaceX aerospace businesses. Now a for-profit business, OpenAI is working on Artificial General Intelligence (AGI) that can perform like a human brain.
The partnership offers Microsoft the chance not only to catch up with Google’s recent AI developments, but also to gain more muscle for AI development by combining its resources and hardware with the researchers and developers at OpenAI. As industry watcher TechGig noted:
“Microsoft and OpenAI will jointly build new Azure AI supercomputing technologies. OpenAI will port its services to run on Microsoft Azure, which it will use to create new AI technologies and deliver on the promise of artificial general intelligence. Microsoft will become OpenAI’s preferred partner for commercialising new AI technologies,” according to an official Microsoft statement.
Back in November 2022, OpenAI launched its much-lauded and controversial text generator ChatGPT, ahead of rival Google, which launched its competing Bard product in February 2023. Until then, Google had been largely considered to be ahead of its rivals in integrating AI tools into products such as its search engine, something Microsoft also started to implement in its Bing search engine in February. But unlike ChatGPT, Bard is still not available or supported in most European countries.
As the AI ‘arms’ race heated up in April this year, Google launched a raft of tools for its email, collaboration and cloud products. According to Reuters, Google had combined its AI research units Google Brain and DeepMind to work on “multimodal” AI, like OpenAI’s latest model GPT-4, which can respond not only to text prompts but also to image inputs to generate new content.
And there certainly seems to be a tsunami of AI projects, either in development or planned for roll-out, from seemingly every collaboration, content, cloud, data and security vendor that can see AI’s potential to enhance current offerings and workflows. As we examined in March’s Knowledge Exchange whitepaper, adding AI to cybersecurity applications to counter the increasing volume and sophistication of attacks, including AI-generated malware, was one of the main hopes and priorities of ITDMs for 2023 and beyond.
In other applications of AI, from a customer-experience point of view, the ability to generate not just content but localised or personalised content is also very attractive, because companies can produce website content, emails and sales and marketing communications without hundreds of content and digital specialists. For example, in the UK, the Press Association (PA) has recently launched its Reporters And Data And Robots (RADAR) service to supply local news media with a mixture of journalist-generated and AI-generated content to supplement their local coverage.
This venture is beneficial for local areas that have seen local news coverage drop in favour of centralised national coverage, and it shines a light on how AI and human intelligence can be combined to create quality content. And from a marketing point of view, the ability for AI to generate personalised emails using data from multiple sources would seem like discovering the Holy Grail. Imagine a scenario where AI looks at intent data from a customer Ideal Customer Profile list to see who is currently in market for a product or service, by analysing what content a company or individual is consuming. Using predictive analysis, autogenerated content could be used to nurture those companies or individuals to a point where further predictive analysis could determine where they are in the purchasing process and create content accordingly. Or it could be used to send invitations to follow up with a salesperson, or to drive people to a website with AI-enhanced chatbots that gather yet more information about a product or service requirement in a more conversational style, as cloud giant AWS is developing:
Businesses in the retail industry have an opportunity to improve customer service by using artificial intelligence (AI) chatbots. Solutions on AWS offer chatbots capable of natural language processing, which helps them better analyse human speech.
By implementing these AI chatbots on their websites, businesses can decrease response times and create a better customer experience while improving operational efficiency.
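The intent-led nurture flow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor’s product: the signal names, weights and thresholds are all invented for the example.

```python
# Illustrative sketch of intent-based lead scoring: weight the content each
# account consumes, score it, and map the score to a next nurture action.
# All signal names, weights and thresholds below are assumptions.

STAGE_WEIGHTS = {
    "pricing_page": 5,   # strong late-stage buying signal
    "case_study": 3,
    "webinar": 2,
    "blog_post": 1,      # weak early-stage signal
}

def score_account(events):
    """Sum intent weights for a list of content-consumption events."""
    return sum(STAGE_WEIGHTS.get(e, 0) for e in events)

def next_action(score):
    """Map an intent score to a nurture step."""
    if score >= 8:
        return "invite_sales_follow_up"
    if score >= 4:
        return "send_personalised_email"
    return "continue_nurture_content"

events = ["blog_post", "webinar", "case_study", "pricing_page"]
score = score_account(events)      # 1 + 2 + 3 + 5 = 11
print(score, next_action(score))   # 11 invite_sales_follow_up
```

In a real deployment the weights and thresholds would themselves be learned by a predictive model rather than hand-set, but the shape of the pipeline — signals in, score, action out — is the same.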
Lastly, from a sales and business development point of view, having AI-enhanced tools to warm up cold-call intros and to help with more personalised, less generic follow-up emails is also a compelling application of AI in the lead and pipeline generation space.
With increasingly lean SDR and BDR teams, it is often difficult for companies, especially start-ups, to get sales staff up to speed on complex IT solutions. But having a product subject-matter expert involved in the modelling of AI algorithms that enhance conversational email and follow-up allows the BDR/SDR not only to achieve the right messaging but also to manage and communicate with more potential leads.
As a species, humans have always developed tools, products, medicines, industries and societal frameworks that have the potential either to help or to harm society. But more recently we have really accelerated the ability of our tools to impact our world for better or worse.
But often, we are too distracted by the “wow, isn’t that cool” part of technology before we think: should we be doing this? What are the downsides? Who is regulating this? What are the long-term impacts? Can we stop it if we have to?
And while we marvel at dancing robotic dogs from Boston Dynamics or AI-infused computer-generated imagery and text, many leading tech and industrial figures are worried about the implications of rapidly evolving, unregulated AI for businesses and society at large. They are very worried.
Before his death, eminent physicist Stephen Hawking forewarned that AI could help to end the human race. And in March this year, Elon Musk, who has since started his own AI company, and a whole host of tech leaders put their signatures to an open letter calling on the tech industry to “pause giant AI experiments” for at least six months and not to develop any technology that exceeds frameworks like GPT-4. The letter at the time of writing had nearly 30,000 signatures.
The letter states:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
And although many play down this threat by assuming we can simply switch the machines off, others think that if the core ethics and principles of AI are not agreed, implemented and policed now, the ability of ‘bad actors’ to exploit this technology for serious societal damage is very real (see Ugly, below).
For Musk, the proposal is simple. Drawing on his experience of regulation in the automotive and aerospace industries, he believes the same regulatory oversight is badly needed now. For Musk and others, the sheer speed of AI development is currently ‘out of control’. And in a climate where institutions and governments are attempting to root out online disinformation, propaganda and other influencing material, a next breed of generative AI that can combine text, picture and video ‘deepfakery’ could make this process extremely difficult.
This view is shared by Dr. Geoffrey Hinton, who dramatically quit his job at Google recently, warning of the dangers of AI. Widely seen as a pioneer, or ‘the Godfather’, of AI, Hinton and his research team at the University of Toronto developed, in 2012, the deep learning capabilities and neural networks that are the foundation of current systems like ChatGPT.
Speaking to the New York Times, Hinton said he quit his job so he could ‘speak freely’ about the risks of AI, and that he now ‘regrets his life’s work’.
In an interview with the BBC, Hinton explained that the digital intelligence being developed differs from biological intelligence in that individual AI “copies” can learn separately but share what they learn instantly, similar to a hive mind or collective, and will quickly eclipse average human intelligence.
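Hinton’s point about instant sharing can be illustrated with a toy sketch (our illustration, not his): identical model copies trained separately can merge what they have learned by averaging their weights, so every copy inherits every other copy’s experience at the speed of a data transfer. The weight values below are arbitrary examples.

```python
# Toy illustration of how identical digital "copies" can pool their learning:
# each copy trains on different data, then all copies synchronise by averaging
# their weight vectors. Values and shapes here are invented for the example.

def average_weights(models):
    """Element-wise average of several weight vectors (one per model copy)."""
    n = len(models)
    length = len(models[0])
    return [sum(m[i] for m in models) / n for i in range(length)]

# Two copies end up with different weights after training separately...
copy_a = [0.25, 0.75, 0.5]
copy_b = [0.5, 0.25, 0.75]

# ...then synchronise, so both now embody both sets of experience.
shared = average_weights([copy_a, copy_b])
print(shared)  # [0.375, 0.5, 0.625]
```

A biological brain has no equivalent operation: what one human learns can only be passed on slowly through language, which is why Hinton sees this collective learning as such a decisive advantage for digital intelligence.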
Although Hinton believes that AI is currently not ‘good at reasoning’, he expects this to “get better quite quickly”. And although he doesn’t currently think that AI becoming sentient, or a Terminator-style future, is the main concern, he is concerned about AI’s effects in areas such as the following:
John Wanamaker pioneered the practice of marketing throughout the 19th and early 20th centuries. Since then, companies have been investing in marketing and advertising to understand consumer preferences and influences. With TV ratings measured since the 1940s and the rise of marketing agencies in the 1950s, the marketing industry now generates billions of dollars annually.
It now seems everyone wants to know not just what you watch or what brands you like, but who your friends are, where you go and other personal data that can be stored, analysed, and used or sold to those that are interested. Combined with increasing surveillance through signals intelligence and physical surveillance like CCTV and embedded cameras in devices, this creates a troubling prospect of mass surveillance, narrative manipulation, reduced freedoms, and increased central control for those who are aware of these developments.
As the Internet Age took hold in the mid-1990s, most people, certainly in the Western hemisphere, were happy to share some personal information in return for a more personalised online experience or more relevant advertising for things we might want or need. The bad actors, we were told back then, were the criminal gangs that operate on the dark web and broker personal information from cyber breaches to set up scams and other fraudulent activity.
Back then, not many people were fully aware, or willing to admit, that it wasn’t just the bad actors looking to track your online footprint and preferences; governments around the world, or more accurately government departments such as the US National Security Agency (NSA), were increasingly interested in what you were up to. And these departments have little oversight from elected officials in the US Senate or Congress.
And it wasn’t until people such as Julian Assange of Wikileaks and former NSA contractor and whistle-blower Edward Snowden made us aware of some unethical digital practices by our governments that we realised the cool tech we were becoming ever more reliant on was increasingly being used not just for anti-terrorism or national security measures, but simply to keep tabs on people’s everyday activity, conversations and online networks. It was Snowden who revealed that cameras and microphones contained within certain devices can be activated without the user’s knowledge and used to monitor digital footprints, keywords and other activity. Speaking to the Guardian, Snowden said that the NSA is “…intent on making every conversation and every form of behaviour in the world known to them”.
“What they’re doing” poses “an existential threat to democracy”, Snowden said.
Even in “flight mode” or switched off, your indispensable tech is recording and surveilling, feeding zettabytes of data back to remote and often unregulated servers once reconnected to the internet, where artificial intelligence is increasingly used for predictive modelling and other learning.
And while we are again told this practice is to prevent terrorism or help curtail the spread of pandemics, personal biometric and health information is increasingly being gathered and monitored in vast quantities, analysed alongside individual consumer behaviour, and sold on.
Even personal movement, whether on foot or by public transport, is being captured, analysed and stored by so-called health trackers on watches and phones and by automatic number plate recognition (ANPR).
In fact, almost all private companies these days have entered the “spy business” in one way or another, according to J. B. Shurk of international policy council and think tank the Gatestone Institute.
And this information is increasingly being bought by governments around the world to track and monitor the activities and movements of their citizens, according to Ross Muscato of the Epoch Times, who notes that the chair of the House Committee on Energy and Commerce, US Rep. Cathy McMorris Rodgers, confirmed that US state and federal governments regularly purchase Americans’ personal data from private companies so that they may “spy on and track the activities of U.S. citizens.”
"No kind of personal information is off-limits. Government agents use data brokers to collect information on an individual's GPS location, mobile phone movements, medical prescriptions, religious affiliations, sexual practices, and much more.”
Muscato also quotes findings from US Senator Rand Paul, which uncovered at least 12 overlapping Department of Homeland Security (DHS) programmes that were tracking what US citizens were saying online, and which were targeting children to report their own family for “disinformation” if they disagreed with or had counter-information about the official Covid narrative.
And it is perhaps these current “hand-cranked” measures used by agencies, which could be incredibly enhanced by AI, that people such as Elon Musk and Dr. Hinton are referring to when they voice concerns over the rapid, unregulated development of AI, especially if it were weaponised through legislation such as the controversial US RESTRICT Act, which could give AI the power to track millions of pieces of communication, mislabel them as dangerous, extremist or hate speech, and then take further action against the individual.
So as we enter the summer months, why don’t we take a six-month breather? Not to stop what is currently in development, or to stop exploring the benefits of AI for our current business needs and challenges, but to debate, in a non-partisan way, a technology that has tremendous potential for good, and to discuss how, if it is not regulated and controlled, we could end up in a scenario where we can’t put the genie back in the bottle. For more details of how the industry might make policy during a pause, please visit here:
1. https://www.reuters.com/technology/tech-ceos-wax-poetic-ai-big-adds-sales-will-take-time-2023-04-26/
2. https://www.theverge.com/2023/3/16/23642833/microsoft-365-ai-copilot-word-outlook-teams
3. https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf
4. https://futureoflife.org/ai/benefits-risks-of-artificial-intelligence/
5. Isaac Asimov’s Laws of Robotics