AI, ethics and the CIO: Building a bridge between people management and technology

AI tools are developing at lightning speed and require that we consider their implications, writes Jon Foster-Pedley.

By Jon Foster-Pedley, dean and director of Henley Business School Africa and chair of the Association of African Business Schools

As a CIO with a good sense of what’s happening in IT right now, you don’t need to be told that things in the world of AI are moving fast. Nevertheless, it’s worth setting the scene a little. The original plan for this article was to discuss some of the long-term business and ethical impacts of using the latest generative chatbots inside your business, prompted by the success of ChatGPT, the new large language model (LLM) chatbot built on GPT-3.5. At just three months old, it was already affecting professions such as content creation and programming.

If a week is a long time in politics, a quarter of a year is more than a lifetime for LLMs. Before I could even ask ChatGPT to write an interesting introduction to this piece, Google, Baidu and Facebook all unveiled their own AI engines and – most surprising of all – OpenAI released GPT-4, the successor to GPT-3.5. With such short product lifecycles, even the experts are struggling to keep up with just how convincing these tools are and the implications of their existence – not to mention the question of whether they might develop some autonomy of their own.

Many voices are calling for a slowdown in development while humanity figures out exactly where the boundaries on ethical development and deployment are. Indeed, an open letter published by the Future of Life Institute calling for a pause has been signed by a long list of notables including Steve Wozniak, Yuval Noah Harari, Andrew Yang and Elon Musk.

An excellent report published by colleagues at the University of Reading last year on the subject of business ethics and AI seems almost quaint already, concerned as it is with fundamentals like the ownership of training data, algorithmic bias and the privacy implications of facial recognition. Not that these things are no longer important; they remain absolutely critical to the ethical use of AI.

It’s just that questions such as “what happens when ChatGPT makes things up?” or “is ChatGPT a plagiarisation engine?” suddenly seem more pressing and are capturing the debate. On plagiarisation, as in many other areas, I suspect we will enter an endless ‘AI arms race’ era as AI bids to outdo itself, playing both gamekeeper and poacher.

At the risk of adding yet another voice to the already noisy discussion, here’s my take as the head of an African business school.

Balancing efficiency and accountability

Some of the emerging ethical concerns will be immediately obvious to smart CIOs. On the one hand, there’s the obligation to make use of technologies that drive efficiency: claiming back precious time spent reading and writing emails makes sense when an AI can summarise and compose with ease.

Is there any difference between knowing how to create a spreadsheet function based on =SUM in order to work out projected margins and just asking the AI to do it for you? Both are just tools designed to help humans work faster and more accurately.
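
To make the comparison concrete, here is a minimal sketch of that kind of projected-margin calculation written out in Python rather than in a spreadsheet; the figures are assumed, purely for illustration. Whether a person types the formula, or an AI generates it from a plain-English prompt, the underlying arithmetic is the same.

    # Illustrative only: assumed quarterly figures, no real data or AI call.
    projected_revenue = [120_000, 135_000, 150_000]
    projected_costs = [80_000, 88_000, 95_000]

    total_revenue = sum(projected_revenue)   # the Python equivalent of =SUM(B2:B4)
    total_cost = sum(projected_costs)        # the Python equivalent of =SUM(C2:C4)
    margin = (total_revenue - total_cost) / total_revenue

    print(f"Projected margin: {margin:.1%}")  # prints "Projected margin: 35.1%"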

On the other hand, how do we hold the nominal human sender or spreadsheet creator accountable if they don’t check the generated text fully, or if the summaries leave out something important? The CIO will likely play a large role in deciding policies around this.

Being aware of security concerns and lurking biases

Then there’s the ethical obligation to ensure the security of personal and business data. How secure are these technologies and is it ethical to deploy them yet?

Microsoft, for example, has added new virtual assistant tools to its Office suite, which means giving the “black box” access to a lot of sensitive business information. Yet these are still experimental tools, and headlines have already been made about ChatGPT leaking conversation content.

There’s also the ethical problem of ensuring that no biases lurk in the algorithms that could create discrimination.

Understanding the impact on culture

The most important ethical consideration, however, is also the most abstract. If you believe – as I do – that AI’s promise lies in freeing humans from mundane tasks and allowing them to focus on the kinds of creative work that require critical thinking, then the opportunity is vast. For the CIO, the ethical challenge will be to stand up and put this principle into action.

It’s almost certain that many organisations will first and foremost look to use these tools to reduce costs and cut headcount rather than investing in the hard work of retraining and upskilling staff. The CIO must consider how generative AI will affect the culture of the company if it results in increased uncertainty and stress for employees.

Will we become redundant to AI? To me, it’s not so much that we will lose jobs to AI, but rather that the AI-illiterate will lose them to AI-literate colleagues.

Not forgetting about succession planning

There are important issues to raise about the capabilities of future generations too: if remaining employees are high value and likely overworked, who will train and mentor the workforce of tomorrow? Already we’ve seen many professions – including the law – where automation has vastly reduced the number of junior hires.

Where will the managers of tomorrow come from? It’s an accepted business principle that highly productive employees are those who have all the ability, opportunity and motivation to sharpen and use their skills: deployment of AI has implications for all three of these factors.

Why am I appealing to CIOs on this issue? Because you are the people who can see both sides of the ethical equation and are uniquely placed to evaluate and communicate the technical and ethical pros and cons to the business.

Today’s CIOs are not just masters of the IT trade, but also of the soft skills required to manage multifunctional teams. Through the processes of digital transformation, you’ve become the bridge between people management and technology, but your job is clearly not yet done.

In the past, you have had to make the case for more investment in new tools and technologies. Now, as boards are gripped by equal measures of excitement and fear, you will have to be the voice of restraint and reason. You will morph from chief information officers into chief intelligence officers.

Because the real question about AI is simple, and it is the same as for any other tool that has gone before: does it add long-term competitive advantage and business value? Well, yes, it may. But more importantly, will it bring opportunities and advances that add value to our lives and our world, and curb our careering propensity to burn the house down?

In that case, the long-term horizon may be a lot longer – endlessly longer – than you think.

Published in CIO-SA Magazine, 11 April 2023

 
