Why Chat GPT, AI, and ML are so dangerous has been known for decades


03 May 2023

Chat GPT, AI, ML, etc., are now being used in digital systems in almost every possible application. Not just in cars, trucks, trains, planes and surgery – where the safety implications are obvious – but in everything else, where the implications extend to a great deal more than mere safety risks, and are almost unimaginable. This situation could have been avoided.

There is a lot of serious concern now being expressed in national and international media about how far we can trust Artificial Intelligence (AI), Machine Learning (ML), etc., not to destroy civilisation as we know it. For example, see https://www.bbc.co.uk/news/world-us-canada-65452940.

What makes it especially alarming is that this issue even featured as a serious news item on BBC Radio 4’s regular 7am news roundup on 1st May 2023 – scientific, technical or engineering matters are almost never mentioned on this programme. And when they are mentioned, the interviewers always show how smug they feel at not being able to understand anything about the topic being described by their highly-respected interviewee.

So it seems clear that Chat GPT, AI, ML, etc., could be an existential threat to humans!

With my ‘human being’ hat on, I am worrying whether there is any way my family and friends can even survive the next decade – or whether we might as well just give up, now that so many of the dystopian science fiction stories I have read and films I have seen (e.g. https://en.wikipedia.org/wiki/I,_Robot_(film)) are coming true at once.

However – with my ‘EMI hat’ on – what I find especially interesting is that the problem now being discussed is an inherent problem with programmable digital systems. I understand that this problem was first identified in the 1970s and was one of the reasons for the creation of IEC 61508, first published in 2000. It has two components:

i) The inherent unpredictability of digital systems2
Digital systems are non-linear, so we can’t test a percentage of the digital states and assume that the results prove anything about any untested states.

ii) The inherent untestability of modern digital systems3
Programmable digital systems (hardware and software) have far too many possible digital states to ever be 100% tested – even once.
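To make i) and ii) concrete, here is a minimal Python sketch of a hypothetical 8-bit “controller” (the function and its latent state are invented for illustration, not taken from any real system). Because digital behaviour is discontinuous, a test campaign that passes on over 99% of the input states reveals nothing about the states it did not exercise:

```python
def controller(x: int) -> int:
    """Hypothetical 8-bit control law: well-behaved everywhere
    except at one specific latent state."""
    if x == 0b10110111:          # one 'latent' state out of 256
        return 255               # wildly different (unsafe) output
    return x // 2                # smooth, predictable behaviour everywhere else

# Test all states except the latent one - every test passes:
tested = [x for x in range(256) if x != 0b10110111]
assert all(controller(x) == x // 2 for x in tested)

# Yet the untested state behaves nothing like its neighbours:
print(controller(0b10110110))    # 91  - as the tests would predict
print(controller(0b10110111))    # 255 - no other test gave any warning
```

Unlike a linear analogue system, nothing about the tested states lets us interpolate or extrapolate to the untested one – which is exactly the point made in footnotes 2 and 3.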

Until recently, there were very few safety-critical systems relying on AI or ML, and all were operated by trained personnel. When they malfunctioned, the damage was contained, and rarely excited any interest in national or international media.

But Chat GPT, AI, ML, etc., are now being used in digital systems in every possible application – from self-driving cars, trucks, trains, ships and planes, where the safety implications are obvious, to everything else, used by anyone, including human communications in words and sounds.

This civilisation-spanning use by everyone could result in almost unimaginably worse consequences than mere safety risks!
(Even without considering the possible effects of ‘bad actors’ or deepfakes.)

Interesting, isn’t it, how a problem that has been well-known since the 1970s in the digital systems industry could now present a serious threat to humanity?

Please see below for Footnotes and References.


1 Since 1997 – with the help of many others – I have been researching how to deal with the problems outlined in i) and ii) above, as regards the effects of EMI on the functional safety of digitally-controlled safety-critical systems.

See: https://www.emcstandards.co.uk/we-can-t-fully-test-digital-systems         
and the many papers, articles and presentations posted at: https://www.emcstandards.co.uk/emiemc-risk-management.

This research resulted in the publication, in 2017, of the IET’s Code of Professional Practice on Electromagnetic Resilience.

 In turn, IET 2017 led to the first published standard on EM Resilience: IEEE-1848-2020, which is (for now) the state of the art on managing the functional safety and other risks that can be caused by EMI.     
You can purchase it from: https://standards.ieee.org/ieee/1848/7221/ 

2 From [A]: “…in general, a successful series of tests provides little or no information about how a digital system would behave in circumstances that differ, even slightly, from the test conditions.”

   From [B]: “Unlike linear systems, digital systems lack continuous behaviour. Relationships between inputs and outputs can be complicated, discontinuous and not predictable…”

3 Given 2 above, we would hope to be able to prove that the behaviour of a given digital system was acceptable by testing it thoroughly.
For a safety-critical system, we would ideally want to test 100% of all its possible digital states. But even if 99% of all possible digital states could be tested as being safe enough, we should never assume that the untested 1% would not be very dangerous.

It’s easy to show that, unfortunately, we never have nearly enough time to test even 1% of all possible digital states, never mind 99% – even using very powerful testing resources.
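The back-of-envelope arithmetic can be sketched as follows (the figures – 256 bits of state, 10¹² tests per second – are illustrative assumptions of mine, not from the text; real systems have vastly more state, which only strengthens the conclusion):

```python
# Even a modest system with 256 bits of internal state has 2**256 states.
# Testing just 1% of them at an optimistic 10**12 tests per second would
# still take astronomically longer than the age of the universe.

STATE_BITS = 256                  # tiny by modern standards (assumed)
TESTS_PER_SECOND = 10**12         # very powerful test rig (assumed)
AGE_OF_UNIVERSE_S = 4.35e17       # ~13.8 billion years, in seconds
SECONDS_PER_YEAR = 3.154e7

total_states = 2**STATE_BITS
one_percent = total_states // 100
seconds_needed = one_percent / TESTS_PER_SECOND

print(f"Total states:                 {total_states:.3e}")
print(f"Years to test just 1%:        {seconds_needed / SECONDS_PER_YEAR:.3e}")
print(f"Multiples of universe's age:  {seconds_needed / AGE_OF_UNIVERSE_S:.3e}")
```

With these assumptions, testing 1% of the states takes on the order of 10⁴⁵ times the age of the universe – which is why exhaustive (or even 1%) testing of programmable digital systems is not merely impractical but physically impossible.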

From [B]: “Unfortunately, a digital system, particularly its software, is usually sufficiently complex that it is not practical to test all possible inputs and outputs exhaustively.”


[A] See: “Computer Based Safety-Critical Systems”, IET, October 2009, no longer published (but I have a copy I’d be happy to share).

[B] See: “Computer Based Safety-Critical Systems”, IET, 2013, https://www.theiet.org/media/9534/computer-based-safety-critical-systems.pdf

Note: I can provide many more references from experts other than the IET, if required.

Important note: Please don’t confuse the risks of not achieving compliance with the EMC Directive with the functional safety risks that can be caused by the occurrence of EMI (even when the EMC Directive is complied with).
See: https://www.emcstandards.co.uk/risks-associated-with-emc-and-emi-don-t-g

I will be providing Training Workshops at the upcoming EMC and CI 2024 event in Newbury, England in May, register at www.emcandci.com

Photo by Hitesh Choudhary on Unsplash
