I present my distilled view of some important applications of ML, explain the full journey from ML to AI, and discuss when it makes sense to invest in it. I am grateful for the direct experience and the business and social engagement with companies, projects, and key people that have shaped my vision. Machine Learning is a vast and prolific field in which people and companies do make mistakes. ML is not a magic crystal ball. Machines can be more accurate than humans at recognizing patterns, but they are not a supernatural intelligence that magically extracts information where none exists. We should always challenge and interpret results.
Important applications of Deep Learning
There are some important applications where Deep Learning (DL: neural networks with more than one layer, a subset of ML) uses compute power to perform practical tasks that go beyond extracting information or supporting decisions. These are great applications. We can use a framework like TensorFlow or Keras with Python when experimenting, and switch to C++ or lower-level code for performance. Such projects take years and are out of scope for this article. These applications include:
- Computer vision: recognition and tagging, including:
  - Biomedicine (e.g. recognizing cancer in X-ray images)
  - Autonomous driving
- Autoencoders and compression
- Natural Language Processing (NLP), including:
  - Text summarization
  - Text generation
  - Chatbots, voice assistants, and conversational interfaces
  - Text to speech
  - Sentiment analysis (like the items above, to be used with full awareness of its limits)
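As a concrete illustration of what "more than one layer" buys you, here is a minimal from-scratch sketch in NumPy (kept dependency-free rather than using TensorFlow or Keras, which the article suggests for real experiments). The dataset, layer sizes, and learning rate are illustrative choices of mine, not from the original.

```python
import numpy as np

# Toy dataset: XOR, a pattern that a single-layer (purely linear) model
# cannot learn -- the hidden layer is what makes this network "deep".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 16))   # input -> hidden
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))   # hidden -> output
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass (binary cross-entropy gradient simplifies to p - y)
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    # Gradient-descent updates
    W2 -= lr * (h.T @ grad_out)
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h)
    b1 -= lr * grad_h.sum(axis=0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.flatten())
```

A linear or logistic regression on the same four points cannot do better than 75% accuracy, which is exactly why this toy problem needs the extra layer.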
Errors – Wrong applications for ML / DL
There are use cases that DL was never meant for. A neural network is a black box: once trained, it has internally memorized information, and it is very difficult to visualize and understand how it processes that information.
Hiring is one such case. Many companies have trained ML models to select and hire candidates based on datasets of past hires and their performance. Tellingly, Amazon retired a recruiting ML model that turned out to be biased against women. The problem is that DL detects patterns, but patterns are correlation, not causation. It might just as well validate the Pastafarian "pirates vs. global warming" theorem. The Nobel laureate Daniel Kahneman, in his book "Thinking, Fast and Slow", showed that traditional interviews are useless predictors of performance, while metric-based ones are only "moderately useful". Using ML for hiring decisions is a dire mistake that has escaped the comprehension of many: seeing predictions that look coherent with past results, people validate them without questioning. The cost of this error cannot be estimated.
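The pirates-vs-global-warming point is easy to reproduce: any two series that merely trend over the same period will correlate strongly. The numbers below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)

# Made-up numbers: a declining "pirate population" and a rising
# "global temperature" over the same century. Neither causes the other;
# they simply share a time trend.
pirates = 50_000 - 480.0 * (years - 1900) + rng.normal(0, 1_500, years.size)
temperature = 13.5 + 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)

r = np.corrcoef(pirates, temperature)[0, 1]
print(f"correlation: {r:.2f}")  # strongly negative -- correlation, not causation
```

A model trained on these two columns would "learn" that fewer pirates mean a warmer planet, which is exactly the kind of spurious pattern a hiring model can learn from biased historical data.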
Playing games is another controversial case. The results can be impressive, but consider that if we pit several AIs using the same ML model against each other, some of them must lose, because there can only be one winner.
Trading is similar. To skip the controversy: if everyone starts using the same ML agent, neither its predictions nor its recommendations will keep working.
Correct use of Machine Learning, Deep Learning, and Artificial Intelligence
I have studied, researched, and participated in so many events that I have heard a great deal on the topic of Machine Learning. But there is a difference in quality between speculation and hands-on, proven experience. The most intelligent discourse I have heard on the topic came from Riccardo Sabatini, Chief Data Scientist at Orionis Biosciences, at an event in March 2019; it made complete sense of what I had heard and changed my vision. Riccardo has accomplished feats such as:
- Predicting diseases from DNA. He trained a model on the full DNA of patients and their medical histories. This correlation is a fully legitimate use case.
- Predicting an adult's face from DNA. He trained a model on full DNA sequences and photos of people.
And here are the steps he indicated a company should follow, and invest in, on its ML / AI journey:
- Collect data. Build a Big Data set: larger datasets help prevent overfitting. Data points should include as many fields as possible, because you don't yet know how the data will be used or which patterns will prove valid, if any. Determine how to collect the data and start as soon as possible. Dataset size is paramount.
- Build an ML model. Ask yourself a question, pose a problem. Then experiment with a baseline, copy-paste ML model. You don't need anything beyond:
  - Linear regression
  - Logistic regression
- Deep Learning will only improve the accuracy of the ML model. If the baseline model converges, you are onto something. If it doesn't, don't bother with DL; change the variables or the question instead. Perhaps you observed the wrong data, or perhaps there is no pattern to detect. Some people insist on DL, thinking it will finally solve the problem, and keep training until they force it to work. That is the mistake. The baseline ML model may already have enough accuracy, making this step optional. DL is far more expensive to train and to run inference with, so it is suited to cases where accuracy is paramount. Note that you rarely need to touch TensorFlow during initial tests. As Riccardo puts it, today "maybe CUDA engineers still need to write TensorFlow code". He says he doesn't even bother, and look at his results.
- Artificial Intelligence is a way for users to consume the inference from the pattern detected by the model, and this is the only case in which you should invest in AI. In our age, simpler forms of AI are otherwise unlikely to bring enough value to warrant the investment.
The above steps must always be followed in order; if one doesn't work, go back and repeat or change the previous step.
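The baseline step above can be sketched as follows. This is a minimal illustration using scikit-learn and a synthetic dataset (both my assumptions; the article doesn't prescribe a library), with a rough 70% accuracy threshold as the gate for moving on to Deep Learning.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset; in practice this is the
# Big Data you collected in the first step.
X, y = make_classification(n_samples=5_000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Second step: a baseline, "copy-paste" logistic regression.
baseline = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
accuracy = baseline.score(X_test, y_test)

# Gate for the Deep Learning step: only invest in DL if the cheap
# baseline already finds a signal.
if accuracy >= 0.70:
    print(f"baseline accuracy {accuracy:.2f}: worth trying Deep Learning")
else:
    print(f"baseline accuracy {accuracy:.2f}: revisit the data or the question")
```

The design point is that the expensive part (DL, then AI) is only reached after a near-free baseline has proven that a pattern exists at all.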
To prepare for AI, build a data lake. Start collecting Big Data, and use copy-paste logistic regression or linear regression to test problems. ML is not a crystal ball. If you easily get 70% accuracy, move forward to Deep Learning and evaluate a great AI, or adjustments and bigger investments. Otherwise, stop and try another problem, different data, or different variables.