8.1 The Future of Machine Learning
Without machine learning, there can be no interpretable machine learning. Therefore, we have to guess where machine learning is heading before we can talk about the future of interpretability.
Machine learning (or “AI”) is associated with a lot of promises and expectations. But let’s start with a less optimistic observation: While science develops a lot of fancy machine learning tools, in my experience it is quite difficult to integrate them into existing processes and products. Not because it is impossible, but simply because it takes time for companies and institutions to catch up. In the gold rush of the current AI hype, companies open “AI labs” and “Machine Learning Units” and hire “Data Scientists”, “Machine Learning Experts”, “AI Engineers”, and so on, but the reality is, in my experience, rather frustrating. Often companies do not even have the data in the required form, and the data scientists sit idle for months. Sometimes companies have such high expectations of AI and data science, fueled by the media, that data scientists can never fulfill them. And often nobody knows how to integrate data scientists into existing structures, among many other problems. This leads to my first prediction.
Machine learning will grow up slowly but steadily.
Digitalization is advancing and the temptation to automate is ever-present. Even if the path of machine learning adoption is slow and rocky, machine learning is steadily moving from science into business processes, products, and real-world applications.
I believe we need to better explain to non-experts what types of problems can be formulated as machine learning problems. I know many highly paid data scientists who perform Excel calculations or classic business intelligence with reporting and SQL queries instead of applying machine learning. But some companies are already using machine learning successfully, with the big Internet companies at the forefront. We need to find better ways to integrate machine learning into processes and products, to train people, and to develop machine learning tools that are easy to use. I believe that machine learning will become much easier to use: we can already see that it is becoming more accessible, for example through cloud services (“Machine Learning as a Service”, just to throw a buzzword around). Once machine learning has matured – and this toddler has already taken its first steps – my next prediction is:
Machine learning will fuel a lot of things.
Based on the principle “Whatever can be automated will be automated”, I conclude that whenever possible, tasks will be formulated as prediction problems and solved with machine learning. Machine learning is a form of automation, or can at least be part of one. Many tasks currently performed by humans are being replaced by machine learning. Here are some examples of tasks in which machine learning is used to automate parts of the work (a small sketch after the list shows how one such task can be framed as a prediction problem):
- Sorting / decision-making / completion of documents (e.g. in insurance companies, the legal sector or consulting firms)
- Data-driven decisions such as credit applications
- Drug discovery
- Quality controls in assembly lines
- Self-driving cars
- Diagnosis of diseases
- Translation. For this book, I used a translation service called DeepL, powered by deep neural networks, to improve my sentences by translating them from English into German and back into English.
- …
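To make the idea of formulating a task as a prediction problem concrete, here is a minimal sketch in Python, assuming scikit-learn; the data, feature names, and labeling rule are invented for illustration. The credit application example from the list becomes a binary classification problem: past applications are the training data and the historical outcome is the label.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Invented data standing in for a table of past credit applications:
# each row is one application, each column a feature.
rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50_000, 15_000, n)
age = rng.integers(18, 70, n).astype(float)
loan_amount = rng.normal(10_000, 5_000, n)
X = np.column_stack([income, age, loan_amount])

# Invented rule standing in for historical repayment outcomes (the label).
y = (income - 0.5 * loan_amount + rng.normal(0, 5_000, n) > 40_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The automated decision: predict the outcome for new applications.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Once a task is cast in this form – features in, prediction out – the rest of the machine learning toolbox applies to it directly, which is exactly why so many tasks end up being reformulated this way.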
The breakthrough for machine learning is achieved not only through better computers, more data, and better software, but also:

Interpretability tools catalyze the adoption of machine learning.
Based on the premise that the goal of a machine learning model can never be perfectly specified, it follows that interpretable machine learning is necessary to close the gap between the misspecified and the actual goal. In many areas and sectors, interpretability will be the catalyst for the adoption of machine learning. Some anecdotal evidence: Many people I have spoken to do not use machine learning because they cannot explain the models to others. I believe that interpretability will address this issue and make machine learning attractive to organizations and people who demand some transparency. In addition to the misspecification of the problem, many industries require interpretability, be it for legal reasons, due to risk aversion, or to gain insight into the underlying task. Machine learning automates the modeling process and moves the human a bit further away from the data and the underlying task. This increases the risk of problems with experimental design, choice of training distribution, sampling, data encoding, feature engineering, and so on. Interpretation tools make it easier to identify these problems.
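To make this last point concrete, here is a minimal sketch in Python, again assuming scikit-learn; the data and the leaking “treatment code” feature are invented for illustration. Permutation feature importance, one of the model-agnostic tools covered in this book, exposes a model that relies on a feature that would not be available at prediction time – a typical data encoding problem.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Invented example: diagnose a disease from two lab values plus a column
# that leaks the label (e.g. a treatment code assigned *after* diagnosis).
rng = np.random.default_rng(1)
n = 500
lab_a = rng.normal(size=n)
lab_b = rng.normal(size=n)
y = (lab_a + lab_b + rng.normal(scale=0.5, size=n) > 0).astype(int)
leak = y + rng.normal(scale=0.1, size=n)  # almost a copy of the label
X = np.column_stack([lab_a, lab_b, leak])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance on held-out data: shuffling the leaking column
# destroys performance, revealing that the model depends on it.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(["lab_a", "lab_b", "treatment_code"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An importance concentrated on one suspicious feature like this would prompt a review of the data encoding and the training distribution before the model ever reaches production.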