Live it loud!

By Lostpixel

Today's work and Artificial Intelligence.

Realised I haven't picked up a camera and taken any photos for a week. Not that I haven't been busy. Lots of little jobs have been done, catching up on the backlog of the usual maintenance tasks that have remained undone for many months.
The weather started pretty grey and damp, so today turned into a sofa day.
A few hours were spent editing and re-editing a few images from the last twelve months that haven't previously seen the light of day. These are just a few examples. All editing was done in Lightroom and Silver Efex Pro 2.

Warning! Off at a tangent now --->

The editing tools and process made me think about the current discussions in the news about AI, after recently having a detailed discussion with my son, who is working on an AI solution for a specific business problem.
There are quite rightly some concerns about the technology that is out there. It is incredibly powerful and, to some degree, clouds the question of who the creator of a piece of work really is, as well as potentially producing incorrect output. But the conversations are somewhat distorted, giving the impression that all AI is dangerous.
What many also don't appreciate is just how embedded AI is already in many common devices and systems. 
Some simple things like the detectors that open automatic doors can now use AI to identify people who are walking towards the door rather than across it.
Cameras, your phone, speech recognition like Alexa and applications like Lightroom are packed with AI technology. Features like eye-detect and face-detect are all AI. But those AI tasks are usually handled by dedicated AI systems - that is, specific bits of code called to do a specific task.

Also, what is somewhat opaque in the discussions is just how AI learns. The coverage gives the impression that AI will continue to learn, become distorted and start acting differently and dangerously. That is indeed the case with some of the mega systems. They will continue to mop up information on a huge scale and digest that data within their own training processes.

Any flaws in the learning algorithms will only magnify problems.

However, most AI that we use has been 'taught' using a very precise and controlled iterative process built on pre-qualified and tagged data - usually tens of thousands of data points. This creates a neural network, which is really just a matrix of numbers: some fairly complex maths predicts an outcome, and then a reverse iteration works out how good that prediction was by comparing the decision against the manually created tag attached to the data, adjusting the network accordingly. The process is repeated, and the results refined, hundreds of thousands of times if needed. As long as the data is tagged correctly, the learning process can become very, even unerringly, accurate. The AI solution can then be applied to new instances of data in an equally precise and accurate way.
It all sounds very complex and, indeed, it both is and isn't. The principles, though, are quite simple. AI processing for mundane but specific tasks, even on complex data like images, can be created relatively quickly and then embedded in software.
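To make that predict-compare-adjust loop concrete, here is a rough sketch in Python using NumPy. The tiny made-up dataset, the network size and the learning rate are all illustrative choices of mine rather than anyone's production code, but the shape of the loop is the point: predict, compare against the tag, nudge the numbers, repeat.

```python
import numpy as np

# Toy "tagged" dataset: each row is a data point, each tag is the correct answer (0 or 1).
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))                    # 200 two-number data points
tags = (data[:, 0] + data[:, 1] > 0).astype(float)  # the manually created tag for each point
tags = tags.reshape(-1, 1)

# A tiny neural network: just matrices of numbers (weights).
w1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros((1, 8))
w2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

learning_rate = 0.5
for step in range(5000):                   # repeat the predict/compare/adjust loop many times
    # Forward pass: the network predicts an outcome for every data point.
    hidden = sigmoid(data @ w1 + b1)
    prediction = sigmoid(hidden @ w2 + b2)

    # Compare the prediction against the manually created tag.
    error = prediction - tags

    # Reverse iteration: work out how each weight contributed to the error and nudge it.
    grad_out = error * prediction * (1 - prediction)
    grad_hidden = (grad_out @ w2.T) * hidden * (1 - hidden)
    w2 -= learning_rate * hidden.T @ grad_out / len(data)
    b2 -= learning_rate * grad_out.mean(axis=0, keepdims=True)
    w1 -= learning_rate * data.T @ grad_hidden / len(data)
    b1 -= learning_rate * grad_hidden.mean(axis=0, keepdims=True)

# After training, only the weights remain; the tagged data is not part of the result.
hidden = sigmoid(data @ w1 + b1)
prediction = sigmoid(hidden @ w2 + b2)
accuracy = ((prediction > 0.5) == tags).mean()
print(f"Training accuracy: {accuracy:.1%}")
```

The thing to notice is that, once the loop finishes, all that is left is a small set of weight matrices - and that is what gets embedded in software.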
The risks come when AI itself is deployed against AI processes for many sub-tasks. This, I suspect, is where the outcome of an AI system becomes less authoritative and potentially flawed, especially when that multilayered AI system uses its own data to learn its own model. Without moderation and validation, the outcomes become increasingly divergent from a safe and accurate assessment. This seems to be what is happening with the likes of OpenAI's ChatGPT, its Large Language Model and its newer reincarnations. It is consuming data, possibly harvested from unknown or unvalidated sources that it shouldn't be using, including that in its learning model and then outputting it when it shouldn't.
It is these very large AI systems that are drawing attention, and quite rightly.
The risk is that these fears will pollute the fields where AI is already in use, and also potential new applications such as medical diagnostic aids - automated assessment of X-rays and other scans, or even prescriptions based upon history and symptoms. A properly implemented system doing that would be based on hundreds of thousands, if not millions, of qualified input samples and would give doctors and consultants a lot of support in diagnostic decision making. While their own decisions are rule based, founded on acquired knowledge, any human's reference dataset is minute by comparison to an AI system's and, over time, at risk of divergence and so of creating errors. All fascinating stuff.
Is AI actually hard to do?
At a basic level - not really. My son trained a Raspberry Pi to recognise handwritten numbers to a 99%+ level of accuracy using 50,000 examples. What is scary is that I understood what both he and the code were doing, and realised that the data used in the AI training process isn't in fact included in the finished AI tool itself.
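I don't have his code, so this is only a hedged sketch of the same kind of task, assuming the standard MNIST handwritten-digit dataset and the TensorFlow/Keras library; the network shape, the five training passes and the file name are placeholders of mine rather than what he actually ran.

```python
import tensorflow as tf  # assumes TensorFlow/Keras is installed

# Load the standard MNIST handwritten-digit dataset and keep 50,000 tagged examples.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, y_train = x_train[:50000], y_train[:50000]

# Scale pixel values to 0-1 and add a channel dimension for the convolutional layers.
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional network: a few layers of weights, nothing exotic.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit 0-9
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The iterative teach/compare/adjust process: a few passes over the tagged examples.
model.fit(x_train, y_train, epochs=5, batch_size=128)

# Check it against digits it has never seen; a small network like this typically
# lands in the high-90s percent range.
loss, accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {accuracy:.2%}")

# Only the learned weights are saved - the 50,000 training images are not
# part of the finished model file.
model.save("digits.keras")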

Now back to reality....
