We must not be scared of AI, or try to break it like the Luddites broke the looms

We should not be worried about artificial intelligence stealing our jobs; rather, we should be concerned that those who don’t know how to use AI will be replaced by those who do.


The onset of innovation is often painful; the more profound its scope and scale, the greater the consternation. In the early 19th century, a group of weavers (and small workshop owners) burnt the new-fangled looms they felt were disrupting the textile sector, to the detriment of their livelihoods, of a craft that had taken years to perfect, and of consumers.

Those people are remembered as Luddites. We know now that the introduction of machinery, far from killing the textile industry, revolutionised it and unlocked a potential none of the fearful could have foreseen. Whatever jobs were lost through the change were compensated many times over by the new, hitherto unimaginable jobs that were created.

Life’s like that. Change is happening in front of our eyes. GPS started as a military application — and still is one when it comes to dropping missiles anywhere in the world to within 30 centimetres — but the rest of us can use it to come within five metres of our destination. Many of us use it even when we know the route, because GPS is also a very good indicator of road conditions ahead, police stops and, critically, how long the journey will take.

GPS wasn’t as much of a disruptor in our lives as the cell phone, which in turn bred the tablet; together they provided the catalyst for the death of fixed-line telephony, print and sections of the broadcast media.

We don’t think of the cell phone as quite so malign; instead, we see it as an indispensable part of our daily lives. We don’t think of the telegraphers and switchboard operators who went the way of the dinosaur.

We certainly don’t think of the tens of thousands of people involved upstream and downstream of the telecoms sector, from the technicians erecting towers to the small entrepreneurs in pop-up mall stores selling cell phone covers and illuminated tripods for Zoom calls.

And that is just the beginning of the job revolution as the cell phone morphs into a pocket factotum and facilitator of everything.

We are now living in the epoch of artificial intelligence, a phenomenon Bill Gates believes is the third most revolutionary development he has witnessed in his lifetime, after the introduction of the graphical user interface (which led Microsoft to develop Windows) and the advent of the internet.

ChatGPT is in the vanguard of the AI disruption: a bot that understands natural language and can produce in-depth writing that is lucid and compelling, drawn from a multiplicity of sources, in seconds.

Its arrival has perhaps caused more consternation than the steam engine caused the horse or IBM caused the typewriter. In the process, ChatGPT has sparked an AI arms race.

Initially, everyone was scared that schoolchildren would use ChatGPT to do their assignments in record time, so tools were devised to detect whether work had in fact been AI-generated — Turnitin, the online plagiarism checker, being one.

Very shortly afterwards, QuillBot was being used to paraphrase ChatGPT’s output and help the human writer scrub any trace of the machine. With every iteration, the technology improves — and that’s precisely the point.

We should not be scared of AI, or try to break it like the Luddites did the looms. Instead, as international futurist Graeme Codrington says, we should recast the issue entirely: away from AI — artificial intelligence — and towards IA — intelligent assistance.

It is much the same way we use GPS to choose the best route to our destination — even when we already know how to get there.

ChatGPT in particular is a wonderful tool, but it cannot do our thinking for us. It is a generative engine, a plausibility generator: depending on the questions we ask of it, it produces the most plausible answers it can, distilled from the millions of works on a subject it has been trained on.

It is up to us how we use that information — or indeed how we edit and combine it further to create a coherent argument or article. ChatGPT can create scripts, newspaper articles, even arguments such as this. AI can create incredible images, from artworks to convincing photographs, and with that the potential for deepfakes is vast. Which segues into Codrington’s other observation: we should not be worried about AI stealing our jobs; we should be concerned that those who don’t know how to use AI will be replaced by those who do.

The advent of this new technology is as boggling for business leaders as it is for anyone else, sometimes more so because the risks are higher. In many ways there are similarities with the uptake of social media: just as leaders worry about how staff might misuse AI, they worry about staff being less than circumspect on social media about themselves and the companies they work for.

For business leaders who are older than most of their staff, there is the added worry of how they themselves engage with social media. Do they ignore it altogether, or do they lean into it and allow their social media accounts to give a more human face to the businesses they lead and the corporate brands they are building?

It’s a difficult question; indeed, many CEOs have outsourced the entire process to professional social media communications teams to ensure they get it right while minimising the risks. But that isn’t always great either: originality of voice can be lost and — persuaded by the urgent jargon of the digital agency algorithmists — bills can spiral without great effect.

But for those who choose to run their own professional social media pages, there is the great risk of being trolled, dragged into “Twars” or sucked down rabbit holes. There’s also the danger of misreading the (cyber) room and becoming guilty of exactly that which you were terrified your staff would do.

But, if done properly, the potential benefits far outweigh the risks — in particular the risk of having no social media presence whatsoever.

ChatGPT will give you answers of different nuance and depth depending on how you frame your questions. You have to understand how to ask those questions to get the most out of the technology, and then you still need to apply your own judgement to the information you are given: to make sense of it and to use it to achieve your objectives — just like using GPS.

Ask ChatGPT to solve a problem for you from the perspective of a CEO, an expert, a consultant and an activist. You’ll get four different buckets of answers.
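To make the idea concrete, and purely as an illustration (the column describes using ChatGPT itself, not any particular tool), here is a minimal Python sketch against OpenAI's chat API that asks the same question under four persona framings. The model name, the example question and the persona wording are assumptions for the sketch, not anything prescribed above.

```python
# A minimal sketch: the same question asked from four perspectives.
# Assumptions: the `openai` Python package (v1+) is installed, an
# OPENAI_API_KEY is set in the environment, and the model name below
# is merely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "How should a mid-sized retailer respond to rising energy costs?"

PERSONAS = {
    "CEO": "Answer as a CEO focused on strategy, shareholders and long-term positioning.",
    "Expert": "Answer as a domain expert, citing evidence and technical detail.",
    "Consultant": "Answer as a management consultant with a structured, option-based approach.",
    "Activist": "Answer as an activist focused on social and environmental impact.",
}

for name, framing in PERSONAS.items():
    # The persona goes in the system message; the question stays the same.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model available to your key works
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content.strip())
    print()
```

Run like this, the sketch prints four noticeably different buckets of advice; the column's point is that turning those buckets into something useful is still the reader's job.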

The key is in how you use those answers, because, just like building brand awareness, information is useless for its own sake. There has to be a point, just as there is with acquiring power. If you’re gunning for the corner office purely for the prestige, the salary and the status, your tenure will be nasty, brutish, short and eminently forgettable. Rather be a cornerstone of change.

As Jeffrey Pfeffer explains in his 7 Rules of Power, the methods of acquiring power are pretty standard — and a little soulless. It’s the choice of what you use it for that brings the soul — that makes you stand out as a human, as a leader, as a person of character.

We need to know what change we want, why we are doing it and, critically, for whom. Unlocking these new technologies gives us power, but it also forces us to manage the dragons that emerge in ourselves as we gain access to the big levers in life. We have to channel, harness or slay those dragons as the situation demands, because if we don’t, their energies will inevitably overtake and destroy us — the difference between being a nuclear power station and a nuclear bomb.

AI offers us the best hope yet of dealing quickly with a world that is becoming ever more complex and ambiguous; the longer we hesitate to engage with it, the greater our risk of being subsumed by it.

At the same time, we have to develop a proper knowledge of these new technologies to lessen the potential for AI to be weaponised in all parts of life.

Most importantly, we must develop the personal mastery both to overcome our own baser instincts to do harm and to have the hardiness to stay hopeful.


Professor Jon Foster-Pedley is dean and director of Henley Business School Africa, and chair of the Association of African Business Schools.

Originally published in The Daily Maverick, 17th April 2023
