
Unless you’ve been living under a rock, the world has become a far different place in the last few years. And perhaps the biggest development we have seen in a generation, or perhaps in all of our history, is the rapid development and roll-out of Artificial Intelligence (AI).
Unlike previous hype innovations such as big data, cloud, and blockchain … AI is actually something that most people can wrap their heads around, in no small part thanks to Hollywood. But as of now, AI is a rapidly developing field that has stirred up more than a few mind-blowing possibilities alongside an equal number of apocalyptic scenarios, ones that have even some of the biggest names in tech voicing concern.
Unlike the atrocious attempt at explaining it by the world’s favourite Veep, let’s take a peek under the hood and get a better grasp of what AI is: the good, the bad, and what you need to know.
What is it?
Although work in the field of AI started back in the 1940s, we’ve only seen its barnstorming rise in the last few years, likely in places that you might not even be aware of. But what is it?
Let’s start with what it isn’t! AI, and the specific models that make it up (language models, generative models, models that can play Go or Snake, image recognition), is not ‘intelligent’. Sure, these models do ‘smart’ things, but they are not ‘intelligent’, ‘aware’, or ‘sentient’. They are still fundamentally programs that do specific things really, really well … albeit with some probability of error.
Perhaps what AI models to date are best suited for is analysing big sets of data, spotting patterns, and then spotting anomalies. Bank transaction fraud detection is one such application, and it has been around for quite some time.
So, what is AI?
Marvel fans rejoice! AI is Just A Rather Very Intelligent System (J.A.R.V.I.S.).
How does it work?
Math! Lots and lots of math. Remember those lessons back in high school where you were taught things like matrices? Nope? Fair enough, I don’t think many kids did, but when it comes to any sort of program, it’s all down to the math.
Quite simply, anything that any program does or doesn’t do has to be quantifiable. Computers all run on binary logic, yes/no … true/false … so everything must be reduced down to some conditional set of statements … something that a computer can assess.
These are often things like ‘if’ statements (you can do these in Excel): if a situation is true, then do this; otherwise (else) do something else. As you can imagine, by the time you start to map these conditions out, they become pretty bloated and hard to manage, and they only lead to one outcome (although that outcome can involve many things).
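To make that concrete, here’s a minimal sketch in Python of the fraud-detection idea from earlier, done the old-fashioned conditional way. Every rule, name, and threshold here is invented purely for illustration:

```python
# A purely hand-written, conditional "fraud check". Every rule is an
# explicit branch, and every branch leads to one predetermined outcome.
def check_transaction(amount: float, country: str, hour: int) -> str:
    if amount > 10_000:
        return "flag"        # large transfers are always flagged
    elif country != "home" and hour < 6:
        return "flag"        # foreign transaction in the middle of the night
    elif amount > 1_000 and country != "home":
        return "review"      # moderately suspicious: send to a human
    else:
        return "allow"       # everything else passes

print(check_transaction(12_500, "home", 14))  # flag
print(check_transaction(250, "abroad", 3))    # flag
print(check_transaction(50, "home", 12))      # allow
```

Every new edge case means hand-writing yet another branch, which is exactly why these rule trees bloat so quickly.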
This is where we get to things called artificial neural networks. A neural network can be represented as a matrix (you can do these in Excel too), so you can have a number of inputs and a number of outputs, link all of them together, and then ‘tune’ the relationships to increase their accuracy.
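As a toy illustration (not how any production network is built), here is what “a matrix linking inputs to outputs” looks like in Python; all the numbers are arbitrary:

```python
import numpy as np

# Three inputs (say, pixel intensities) linked to two outputs by a
# weight matrix W. "Tuning" the network means adjusting these weights.
inputs = np.array([0.5, 0.1, 0.9])
W = np.array([[ 0.2, -0.4],
              [ 0.7,  0.1],
              [-0.3,  0.8]])            # 3 inputs x 2 outputs

scores = inputs @ W                     # one layer = one matrix multiply
print(scores)

# A non-linearity squashes the raw scores; stacking many such layers
# is what the "deep" in deep learning refers to.
print(1 / (1 + np.exp(-scores)))        # sigmoid activation
```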
This is only scratching the surface of how machine learning and deep learning (multiple networks) work and create the sense of ‘intelligence’, and it is an absurdly abbreviated way of explaining even the basics of neural networks.
I would strongly recommend checking out this video [1], so you can get a better insight into how this all comes together for ChatGPT.
AI is already here!
So, if you spent the time to check out the above video, you’ll know that, despite the absurd levels of complicated mathematics, statistics, and probability, AI actually … isn’t intelligent at all. In fact, current AI models can only do what they were trained for, and even then only relatively well.
The moment you venture away from their specific purpose, they kinda don’t offer any value at all. ChatGPT might be able to spit out an essay in a few seconds, but it can’t drive, it can’t do your work … it’s not aware … yet AI is already here, and you’ve been using it without even knowing it.
Who are the biggest culprits? Amazon, Google … YouTube … services that constantly assess everything you do and watch, and then make recommendations based on your behaviour, which we’ll cover in a moment.
Even the facial recognition in your phone is based around a neural network tuned to recognize faces. Facebook, Instagram … you like puppy videos? The AIs in social media have got you covered.
So … If AI is already here, and it already recommends things you like … what’s with all the doom mongering around it?
The Darker Side.
When it comes to AI, one of the most interesting aspects is how it kinda works backwards.
Let’s explain. A conditional program sort of works ‘forward’: if this, then that, and there are a finite number of branches that can be navigated.
AI networks, which work on relationship strengths, need to be tuned … and that means teaching them: training the models to spot patterns. Those patterns require training sets of tens of thousands or even tens of millions of samples to tune the neural networks to recognize things correctly. Imagine taking an infant and then force-feeding it millions of images to recognize a dog.
The model will not know initially what a dog looks like, but millions of images later, it will have identified enough patterns to determine with high accuracy what a dog is and what it isn’t.
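To show what that tuning actually looks like, here’s a deliberately tiny sketch: a single artificial neuron (a perceptron) trained on made-up “dog vs. not-dog” feature data rather than millions of real images. Everything here, features, labels, learning rate, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for labelled photos: two numeric features per sample
# (imagine ear shape and snout length); label 1 = dog, 0 = not a dog.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # the hidden pattern to learn

w = np.zeros(2)   # the model starts out knowing nothing about dogs
b = 0.0

for _ in range(20):                          # repeated passes = training
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        err = yi - pred                      # each mistake nudges the weights
        w += 0.1 * err * xi
        b += 0.1 * err

correct = sum((1.0 if xi @ w + b > 0 else 0.0) == yi for xi, yi in zip(X, y))
print(f"accuracy after training: {correct / len(y):.1%}")
```

Nothing in those final weights ‘knows’ what a dog is; they simply encode a pattern that happened to fit the training samples.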
Training Data
Now … that’s one such case. But what happens when the training data is biased? What happens when the input, which could be perfectly valid in one way, is poor in another?
What happens when a language model is trained on social slang? That might be fine for a virtual assistant but poor for grammar … and this is a very weak case. What happens when an AI model is weighted to factor in who might be using it? This starts to become a very slippery slope.
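As a caricature of the problem, with a completely fabricated data set: if 95% of the training examples are slang, even the crudest possible ‘model’ dutifully learns the skew:

```python
from collections import Counter

# A deliberately skewed, invented training set: 95 slang samples,
# 5 formal ones. Real training sets are skewed in far subtler ways.
training = ["gonna"] * 95 + ["going to"] * 5

# The crudest possible model: predict whatever was seen most often.
model = Counter(training).most_common(1)[0][0]

print(model)   # "gonna" -- fine for a chatty assistant,
               # wrong as grammar advice
```

A real language model is vastly more sophisticated, but it inherits the proportions of its data in just the same way.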
But if training a model to be accurate, neutral, and objective is one thing, what about the feedback issue? AI can work orders of magnitude faster than any person or group of people. What happens when the output of various AIs starts to become the input for other models? This is where we end up with generational bias … and the world has seen enough of that already. Now let’s do it at the speed of silicon!
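Here is a toy way to see the feedback problem, using a fabricated statistical ‘model’ (just a mean and a standard deviation) where each generation trains only on the previous generation’s output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generation 0 learns from the "real world": lots of genuine data.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for gen in range(10):
    mu, sigma = data.mean(), data.std()     # "train" on what we can see
    print(f"gen {gen}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(mu, sigma, size=100)  # the next model sees only a
                                            # small AI-generated sample

# Each generation inherits and compounds the sampling quirks of the one
# before it: the estimates wander away from the original distribution.
```

Scaled up, that is the worry: models trained on model output slowly drift away from reality.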
Perhaps the most worrying aspect is when the AI model is actually allowed to take action.
Let me explain again.
As long as AI stays in the box, that’s one thing. But what happens when it is allowed to actually take action?
The Dilemma & Legal Issues.
So, what happens when it goes wrong? What happens when the outcome doesn’t match up with the intent?
Although there have undoubtedly been many situations where this has happened, there are two notable cases that we can link here.
The first is an AI agent that was ‘highly’ motivated to do its job on the battlefield [2], with frightening, unforeseen (albeit infantile in its logical deduction) consequences; the second is an AI agent that seemed to go beyond what it was designed to do when it appeared to create its own language [3].
What happens when it goes off script?
There is always going to be the ‘we never foresaw that’ or ‘we never predicted that’ reply, but if unforeseen issues are one thing, letting them happen is another. Unforeseen circumstances are always crystal clear in hindsight.
For every case of some young-immature-super-smart program getting it wrong, there will be proponents supporting further rollout.
Having an AI agent … a model … rewarded for behaviors that would get any soldier thrown in prison is completely abhorrent … but even within a simulation, that model now exists … that academic information hazard has been created. Hell … even the exercise of an AI agent behaving outside the rules of engagement (ROEs) makes the whole academic exercise null and void. But it exists!
Who’s responsible when it goes wrong? Or are we now allowing developers to get away with things because they didn’t ‘think of that’ … when they thought of everything else?
Related: Why Engineering projects are so Expensive
It comes down to ethics
Although it may not be clearly visible, what we can glean is the following:
AI isn’t … AI models are very good at doing specific things, but the same model that can identify a skin cancer in under a second cannot get you to or from work; it can’t drive, it can’t communicate … it can’t actually do anything else.
People … might be rather average at most things, but … we can also do most things too.
But the real thing that makes people so … ‘good’ … is not just that we can do things, but we know … perhaps that we shouldn’t do things too. People aren’t black and white, but we do grey very well, perhaps we do grey the best!
We can know when action must be taken, and we can take it … but we also know when the action might not feel right, and that is something that a machine just can’t calculate … not now anyways.
A brilliant … a perfect model that cannot think is just a tool. An imperfect mind that understands cause and effect, action and consequence … right and wrong … is far better any day of the year.
What you should be aware of
AI is going to take jobs, and like every step change before it, it will: anything that can be automated usually will be, and AI will eventually become the low-cost employee in the background.
It will do a job, and that means we will need some understanding of how that job is done, not just physically but at the mathematical level. We … you … cannot simply treat the automated system as a black box. You need a concept of what is going on, and that means we need to be better. We need to know what the agent is supposed to do, and why it’s misfiring!
And this has massive implications for education. I’m the first person to say there is simply too much to know and understand out there. But every time there has been a technological step change, it has driven the forced betterment of people.
Steam drove the first industrial revolution. Robotics was supposed to have driven skilled labor to extinction; instead it birthed an entirely new service industry, not to mention limitless industrial potential. The Internet of Things was supposed to have already created a worldwide, interconnected, distributed utopia; meanwhile, Windows 11 can barely maintain a Bluetooth connection!
AI isn’t coming … it’s here! It will find its way into your life even more than it already has. It will take tasks and jobs from people, but it will also create new opportunities, like every revolution before it. But unlike steam … AI needs to be treated with the utmost respect. Perhaps only one other technology has ever offered so much potential for good, and so much potential for bad!
AI is here, and there is a need to adapt, and to do so quickly. Those who choose to ignore first principles because there is a machine that can do it instead are the ones who will pay the highest price.
Related: It’s Time to talk about Energy
#AI #artificialintelligence #industry #technology #machinelearning