Let’s say you have a block of Jello. We’re going to call that Jello our ‘Machine Learning Algorithm’. We want to teach it the difference between cubes and spheres.
Step 1. Drop a cube on the left side of the Jello. Did it leave a mark? Great. Drop it several more times.
Step 2. Drop a sphere on the right side. Same deal.
Ok, now we bake our Jello so it’s nice and hard.
Knowing the difference between cubes and spheres
Pick up a shape. Does it fit a spot on the right side?
* Then it’s a sphere.
No? Does it fit a spot on the left side?
* Ok, then it’s a cube.
A machine learning algorithm is an impressionable material that picks up the shapes of data that passes through it.
Like water flowing through a canyon, data flows in: cube-shaped data repeatedly hits one spot until a cube-shaped impression is formed, and sphere-shaped data hits another spot until a sphere-shaped impression is left. The more different shapes of data flow through, the more shapes the ML can recognize. The more same and similar shapes flow in, the better the impression of each shape.
After training, we harden the material so that it acts as a strainer. Now when cube-shaped data comes in, it fits the cube-shaped channel and is sorted into the cube side; our strainer “knows” it’s a cube.
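The train-then-harden process above can be sketched as a toy nearest-centroid classifier. Everything here is illustrative (the feature names, numbers, and function names are assumptions, not from the original): each class’s “impression” is just the average of the examples dropped into it, and classifying means finding which hardened impression a new shape fits best.

```python
# Toy sketch of "impressions in Jello": a nearest-centroid classifier.
# Training leaves an average "impression" per class; baking freezes it.

def train(samples):
    """samples: dict mapping label -> list of feature vectors."""
    impressions = {}
    for label, vectors in samples.items():
        n = len(vectors)
        # The impression is the average of every drop of that shape.
        impressions[label] = [sum(dim) / n for dim in zip(*vectors)]
    return impressions  # the "baked" Jello: fixed after training

def classify(impressions, vector):
    """Find which hardened impression the new shape fits best."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(impressions, key=lambda label: distance(impressions[label], vector))

# Made-up 2-D "shape" features: [corner_count, roundness]
training_data = {
    "cube":   [[8, 0.10], [8, 0.20], [7, 0.15]],
    "sphere": [[0, 0.90], [0, 1.00], [1, 0.95]],
}
model = train(training_data)
print(classify(model, [8, 0.10]))   # fits the cube-shaped channel -> "cube"
print(classify(model, [0, 0.92]))   # fits the sphere-shaped channel -> "sphere"
```

Real machine learning algorithms form far subtler impressions than an average, but the shape of the process is the same: repeated examples wear a channel, and new data is sorted by the channel it fits.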
Instead of cubes and spheres, if we have dog- and cat-shaped data, our Jello or our canyon can learn the difference between dogs and cats. Feed in words and it can sort the differences between words. Feed in collections of words that make up ideas and it can sort different ideas.
Why is that so powerful? Well, our brains are the same sort of Jello. We drink in ideas until we understand them, sorting them into ‘things that apply right now’ and ‘things that don’t’.