AI – Don’t Believe The Hype


This 3D-rendered image was created using AI in minutes rather than hours (case study here). When you see how quick and easy AI now makes some tasks, it's very easy to get despondent, or carried away, and think we humans might as well all get our coats and go home.

Wait a minute though… things aren't quite that bad… or that good, for that matter…

Like Agile software development, the cloud and smartphones, "AI" is one of those genuinely transformative leaps forward, and it brings huge new challenges and opportunities for all of us.

When I talk about "AI" in this post, by the way, I mean the large language models (LLMs) like ChatGPT, Google Bard and so on. AI is a wide and rapidly developing field, but at the time of writing these models are de facto synonymous with "AI", so that is what I'm talking about.

So are ChatGPT et al. a set of superintelligences that are going to make us all redundant and then force us into a "war against the machines"?!

That’s very unlikely, but not impossible, of course.

The insight that LLMs have extracted from the huge amounts of data they were trained on means we definitely can't be complacent about the risks of AI.

As discussed at the AI Safety Summit, if we extrapolate current capabilities (and assume an increasing pace of innovation), bad outcomes for us all are definitely possible, and we need to make sure we proactively manage the benefits, risks and rewards as the capabilities of AI systems grow and develop.

Like everything else, AI needs to operate within a framework that doesn't put too much friction into the model, but does put the right guardrails and boundaries in place.

Anyway, back to the hype… at the moment people tend to exaggerate the state of play of "AI" in two directions…

1. They massively exaggerate an LLM's sentience and agency, and assume all humans will soon be obsolete.

2. They dismiss the current systems as "plagiarists", only capable of telling us "what we already told them".

As always, there is an element of truth in both points of view, but I think the reality is much less binary and more nuanced.

LLMs like ChatGPT are built on "neural networks", so called because they loosely emulate the human brain: a network of elements (like neurons) that receive signals, transform them in some way, and pass them on. The GPT in ChatGPT stands for Generative Pre-trained Transformer, by the way; the "Transformer" refers to the particular neural network architecture these models use.

Neural Networks are “trained” (not programmed) by adjusting the transformation (or weighting) that each neuron applies to help achieve a target output from a known input. 

In other words, to achieve a particular outcome each individual neuron can amplify its input signal with a weight of more than 1 or attenuate it with a weight of less than 1. After a large number of training runs we will have a neural network producing a target output given a known input.
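The weight-adjustment idea above can be sketched in a few lines of Python. This is a deliberately minimal, illustrative example (one neuron, one weight, plain gradient descent) and nothing like the scale or architecture of a real LLM:

```python
# Minimal sketch: "training" a single neuron that multiplies its input
# by a weight. Training nudges the weight until a known input produces
# the target output, rather than a developer programming the answer in.

def train_neuron(x, target, lr=0.01, steps=1000):
    w = 0.5  # arbitrary initial weight
    for _ in range(steps):
        output = w * x            # the neuron "transforms" its input
        error = output - target   # how far off are we?
        w -= lr * error * x       # adjust the weight to reduce the error
    return w

# A weight > 1 amplifies the input signal; < 1 attenuates it.
w = train_neuron(x=2.0, target=6.0)
print(round(w, 3))  # the learned weight converges towards 3.0
```

Real networks do exactly this, but with billions of weights adjusted simultaneously across many layers.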

This is very different from "traditional" computer software, of course. Most computer software consists of a sequence of instructions that the hardware executes in the order specified by the developer. That approach put humans on the Moon, but it isn't very good at telling us things that we don't already know.

Over the years we have made a few attempts at creating AI with software, and some didn't work out (I'm looking at you, Prolog), but the neural network approach has proven by far the most successful so far.

That might be because the structure of the system more closely resembles the human brain (which definitely isn't just processing a series of sequential instructions), but it will also be because of the sheer brute-force number of input data points and training runs that modern companies and systems can use to train their neural networks. It is reported, for instance, that GPT-4 uses a dataset of 1 petabyte (1,000,000,000,000,000 bytes, i.e. 1,000,000 gigabytes or 1,000 terabytes) and 300 billion parameters.

So, coming back to where we started: ChatGPT et al. are very broad and deep nonlinear data models, but they aren't super-brains that are going to make us all redundant and then force us into a "war against the machines". Hopefully, anyway.

An LLM like ChatGPT can’t currently know anything a human couldn’t know…if that human had processed all of the data that the LLM had been trained on. Which is a big if, admittedly.

So, to unpack this a bit… does an LLM know things that we as humans don't know? Yes, it does. It doesn't know things we couldn't know, but once it has been trained on billions of data points it has built an internal representation of that data incorporating relationships and correlations that no human may ever have formulated before. Effectively, it has joined dots that none of us may ever have joined.
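A toy sketch of that "joining the dots" idea: the corpus, words and counting scheme below are all made up for illustration, and real LLMs learn far richer representations. Still, even simple co-occurrence statistics can surface a relationship the data never states directly. Here "cat" and "dog" never appear in the same sentence, yet they come out as similar because they share the same contexts:

```python
# Illustrative only: build crude context vectors from a tiny toy corpus
# and compare them with cosine similarity. No sentence says that cats
# and dogs are related; the relationship emerges from the statistics.

from collections import Counter
from math import sqrt

corpus = [
    "the cat is a pet with fur",
    "the dog is a pet with fur",
    "the car is a machine with wheels",
]

def context_vector(word):
    # Count every other word appearing in the same sentence as `word`.
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t != word)
    return counts

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine(context_vector("cat"), context_vector("dog")))  # high (≈ 1.0)
print(cosine(context_vector("cat"), context_vector("car")))  # lower (≈ 0.67)
```

Scale that idea up from three sentences to billions of documents and from word counts to learned embeddings, and you get dots joined that no human ever joined explicitly.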

AI's ability to personalise at scale will change the game for any business that deals with large customer or data volumes. AI is also particularly good at document creation, so if document creation is an important part of your business's operating model, you need to be looking at AI.

Leveraging AI to create better customer experiences is something every business needs to start testing and learning with, and sooner rather than later if it doesn't want to be out-competed.

Yes, AI is being over-hyped, but saying that the AI hype is like the "Big Data" or "Blockchain" hype overlooks a couple of differences between AI and those technologies. Both Big Data and Blockchain are much more "hows" than "whats", and are essentially B2B pitches. AI, on the other hand, can be used by consumers directly (usually via a chatbot interface) and can directly satisfy customer needs as part of a well-put-together customer value proposition. I challenge anyone to create a video with latte.social or a website with durable.co and then tell me that it isn't an order of magnitude quicker, better and cheaper now using AI. AI is the real deal.

So don't believe the hype; but, like the invention of the computer, AI will ultimately change the way we all work and make the people who use it appropriately an order of magnitude more productive and effective.

Rorie is the author of "The CTO | CIO Bible" and has been working with neural networks since his final-year university dissertation on "The use of Neural Networks in financial time series prediction". His commercial AI work, long before anyone had heard of ChatGPT, included a secret-squirrel project using a neural network to reverse-engineer the Google search algorithm. He thought that was a massive neural network at the time, but ChatGPT 4 Turbo is a lot, lot bigger. He is now part of the founding team at the AI as a Service Team, providing AI Software Design & Delivery | AI APIs & Tools | AI Consulting Services, and can be reached at AIaaS.Team.
