If you believe that ML-based software can learn from experience while running in production, and do so even better than humans, you'd better double-check some core concepts.
The smell of AI now permeates the entire industry, and all players are running around like headless chickens looking for problems that AI can solve. The urge to use AI in business is often stronger than the need to solve a core business problem.
In the real world, companies have two types of problems where more intelligence would be invaluable:
- Macroscopic (and sometimes nearly intractable) problems
- Subatomic (and sometimes just annoying) problems
Only number crunching, elbow grease, and software acumen can solve these problems; astonishing, powerful, out-of-the-box services are only tools.
Macroscopic problems (e.g., predicting the amount of energy produced by a wind farm, estimating lightning occurrence in specific areas, classifying feedback, interacting with humans, spotting potentially dangerous behavior in video) can be solved by mincing them into smaller pieces and gluing those pieces together into business-driven pipelines. In short, by reducing the macroscopic scope to a subatomic one.
In the subatomic AI world, you have relatively restricted problems of three types: regression, classification, and clustering. Any of these small, highly circumscribed problems can be solved in one of a few ways: analytically and/or via machine learning.
Machine learning is not magic. It is only the mechanization of the solution of highly circumscribed problems that can be formulated as regression, classification, or clustering problems.
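To make "highly circumscribed" concrete, here is a minimal sketch of one such problem, a simple linear regression solved in closed form with ordinary least squares. The dataset and names are purely illustrative:

```python
# Ordinary least-squares fit of y = a*x + b on a tiny illustrative dataset.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope and intercept for simple linear regression.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    # The "model" is nothing more than this function.
    return a * x + b
```

No magic involved: the "learning" is an arithmetic procedure that turns data into two numbers, `a` and `b`.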
So much for the myth of the “machine”. The other myth is “learning”.
Raise a hand if you truly believe that, with a machine learning solution in place, the software will learn from experience as if it were human. Sorry, but this is not what happens (exceptions apply, but they are just that: exceptional and specific situations).
Where is learning then?
Learning lies in the path that saves developers from having to build a complex and possibly inaccurate analytical solution. It is the same reason we turn to computers for calculations and numerically intensive operations.
A machine learning project is developed along the following steps:
- Formulate the problem as a regression, classification, or clustering scenario
- Identify the data and a learning algorithm for the scenario
- Run the algorithm and get a model back
- Integrate the model with some software application
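The four steps above can be sketched end to end with a toy nearest-centroid classifier. All data, labels, and names are illustrative:

```python
# Step 1: formulate -- classify a 1-D measurement as "low" or "high".
# Step 2: identify the data and a learning algorithm (nearest centroid).
training = {"low": [1.0, 1.2, 0.9], "high": [8.0, 7.5, 8.4]}

# Step 3: run the algorithm and get a model back -- here, two centroids.
model = {label: sum(vals) / len(vals) for label, vals in training.items()}

# Step 4: integrate the model with an application -- just call it.
def classify(x):
    return min(model, key=lambda label: abs(x - model[label]))
```

Note that steps 1-3 happen before deployment; the application only ever sees step 4, a plain function call.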
The steps hard-coded in the learning algorithm have nothing to do with the model that ends up deployed to production. The model is simply the analytical definition of a mathematical function to be calculated in production. Learning ends when the model is created; running the model is a stateless operation, with no memory of the past, like tossing a coin.
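Statelessness is easy to demonstrate. In this sketch (coefficients are illustrative, fixed once at training time), the deployed model is a pure function: calling it never updates anything, so it never "learns" in production.

```python
# Coefficients discovered at training time, frozen at deployment.
a, b = 1.94, 0.15

def model(x):
    # A pure function: same input, same output, no memory of past calls.
    return a * x + b

first = model(10.0)
again = model(10.0)
# Nothing changed between the two calls: the model keeps no state.
```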
Machine learning is the solid part of what nearly everyone calls AI, and it is just software, problem solving, and consulting. Every executive and every engineer should look into it, and much more deeply than into, say, other buzzwords such as microservices or blockchain.
Why is that?
Because machine learning is a real breakthrough: it makes it possible to solve problems in a way that is sustainable and scalable as never before. The point is not that these problems can't be solved otherwise (though some can only be solved with machine learning); it's that machine learning delivers complex answers quickly and reliably. Not via magic or sci-fi cyborgs, but via the calculation of mathematical functions discovered in a preliminary stage by applying learning algorithms to rich datasets.