After my recent monthly update, I thought it might be useful to discuss and attempt to define the terms around AI, and where my interest primarily lies.
AI is the umbrella term, encapsulating machine learning and deep learning: everything that allows automation of human tasks. It’s quite a big field and has been going since the 1950s (or arguably much earlier).
The earliest efforts used human-written rules to process data. For example, early chess programs used human rules that said “if the board looks like X, your next move should be Y”.
Similarly, expert systems for medical diagnosis, or rule-based systems for loan approvals all fit into the (wide) umbrella of AI.
So AI = “any program that allows a computer to do something a human would normally do”, even if the program was hand-coded by a human.
Machine learning is more interesting, because it removes the human from providing the rules. This is where AI starts ‘learning’.
The earliest attempts at machine learning were statistically based. You give a machine-learning algorithm input data and the answers you know, and the algorithm tries to come up with rules that map (as far as possible) your inputs to your answers.
The key point is that the rules are completely machine generated, no human expertise involved (I’m simplifying and slightly fudging here!).
So then you give your algorithm new data and it will apply the rules to produce a new answer.
It sort of looks like:
InputA → Rules → AnswerA
InputB → Rules → AnswerB
InputC → Rules → AnswerC
…
InputZ → Rules → AnswerZ
You give the algorithm lots of data, and lots (and lots!) of attempts to sort out rules that map the Inputs to the Answers. At the end of all that, you hope your Rules generalise to new Inputs the algorithm has never seen before and produce useful Answers about those new inputs.
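To make the Input → Rules → Answer idea concrete, here’s a tiny sketch in Python. This is just my illustration (a simple nearest-neighbour rule, not anything from a particular library): the “Rules” are generated entirely from the known inputs and answers, with no human writing them.

```python
# Toy sketch of "Inputs + known Answers -> machine-generated Rules".
# A nearest-neighbour rule: answer like the closest known input.

def learn_rules(inputs, answers):
    """Return a 'Rules' function built purely from the data."""
    def rules(new_input):
        # Find the index of the known input closest to the new input.
        closest = min(
            range(len(inputs)),
            key=lambda i: sum((a - b) ** 2
                              for a, b in zip(inputs[i], new_input)),
        )
        return answers[closest]
    return rules

# Known data (made up for illustration):
# (hours studied, hours slept) -> exam result
inputs = [(1, 4), (2, 8), (6, 5), (8, 7)]
answers = ["fail", "fail", "pass", "pass"]

rules = learn_rules(inputs, answers)

# Apply the learned Rules to a new Input the algorithm has never seen.
print(rules((7, 6)))  # -> pass
```

Real ML algorithms are far more sophisticated, but the shape is the same: data plus known answers in, a rules function out, which you then apply to new data.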
Hopefully that’s sort of clear. Except I’ve really left out the magic here. The magic is in the scope of the inputs and the answers. What sort of Inputs could you give your ML algorithm? And what sort of Answers could you ask the algorithm to get rules for?
Turns out that the data is, to some degree, unlimited, and the answers… seem to be unlimited as well.
ML has had success in amazing areas, like image recognition, speech recognition, and many others.
Deep learning is ML with massive datasets and massive compute, but in particular it refers to layers of Rules.
In the little diagram above, we had “rules” which mapped Input to Answer. In deep learning, you have something like:
Input → rules → rules about rules → … → rules about rules about rules about… → Answer.
This probably makes more sense if we switch “Rules” to “Representations”. You change your Input into Representation 1, which then gets changed into Representation 2, and so on.
And your algorithm is then learning to slowly transform your Input into your answer, by morphing it through each of the previous new Representations.
“Deep” refers to the number of layers of new Representations that your input goes through. The sum of all the Representations = the ‘Rules’ that transform your Inputs to your Answers.
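The layered-representations idea can be sketched in a few lines of Python with NumPy. This is my illustration, not a trained network: the weights here are random, so the output is meaningless, but the structure (linear map plus non-linearity, stacked) is the real shape of a deep network.

```python
# Sketch of "layers of Representations": each layer turns the previous
# representation into a new one via a linear map + a non-linearity.
# Weights are random here (untrained), purely to show the structure.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer = one new Representation of its input."""
    w = rng.normal(size=(x.shape[0], n_out))  # random weights for the sketch
    return np.maximum(w.T @ x, 0)             # ReLU non-linearity

x = np.array([1.0, 2.0, 3.0])   # Input
r1 = layer(x, 4)                # Representation 1 (rules)
r2 = layer(r1, 4)               # Representation 2 (rules about rules)
answer = layer(r2, 1)           # Answer

print(answer.shape)  # a single output value: shape (1,)
```

Training is the part this sketch skips: a real network adjusts those weights over many attempts so that the final Representation actually maps Inputs to useful Answers.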
Examples include: image classification (used in Google Image Search), speech transcription from audio, language translation, game playing (e.g. Chess, Go), natural-language question answering, text-to-speech that sounds natural, protein folding, autonomous driving, interpreting medical images…
Deep fakes, where a video of a person is mapped to a video of a different person, with their voice being transformed into the other person’s voice.
DeepMind recently assisted in stabilising plasma in nuclear fusion research. Who would have guessed that?!
Deep learning looks like magic. I am very familiar with it, but the results still blow me away, given that I know it’s just a (non-linear) mapping between a bunch of Inputs and a bunch of Answers.
My interest is in the ML and deep-learning side of things because of the requirement that our stocks do unexpected (good) things. There is (in my view) nothing more unexpected than what can come out of stacks and stacks of fairly simple non-linear representations coupled with human imagination and expertise.
We’re still in the very early days, but the results so far are some of the most amazing things I’ve ever seen. The only way I can see to play this at the moment is to invest in the biggest of the big tech players (GOOG, MSFT, AMZN, and FB. TSLA?), who have the resources to hire the talent and train these (massive) algorithms.
Hope this was of interest.