It’s hard to find a tech article or company description that doesn’t mention the words “artificial intelligence.” Every forward-thinking organization today has a project or initiative around digital transformation, with AI typically front and center. Yet AI adoption remains nascent, stuck somewhere between the innovation stage and the early adoption stage. According to a survey from MIT Sloan Management Review and the Boston Consulting Group, “Almost 85% believe AI will allow their companies to obtain or sustain a competitive advantage. But only about one in five companies has incorporated AI in some offerings or processes.”
As someone who works with organizations that are trying to put AI to work through projects, initiatives and innovation labs, I have learned that there are three key barriers preventing AI from going mainstream in the enterprise:
Where To Apply AI? The ‘Problem Selection’ Problem
The first barrier is a lack of understanding of which problems are best solved with AI. I have seen many organizations take a technology-first approach to selecting projects (e.g., “Let’s find a project where we can implement chatbots or machine learning and then figure out how to apply it!”). Technologists tend to gravitate toward mathematically or algorithmically interesting problems rather than commercially important ones. Problem selection is further skewed by whatever data happens to be available.
We have seen repeatedly that this approach does not work. Given that these projects take time and effort, how do you ensure that you are solving the most important problems for the business? The more successful organizations take a practical approach, selecting the business problem based on two factors: the size of the business benefit (is it worth doing?) and the availability of the data required to solve the problem (for training models).
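As a hypothetical illustration of this two-factor screen, problem selection can be sketched as a simple ranking. The candidate projects, benefit estimates and readiness scores below are invented for the example:

```python
# Hypothetical sketch: rank candidate AI projects by expected business
# benefit and by availability of the training data they would need.
# All project names and scores below are invented for illustration.

candidates = [
    # (project, estimated annual benefit in $, data readiness 0.0-1.0)
    ("invoice fraud detection", 2_000_000, 0.9),
    ("chatbot for HR questions", 150_000, 0.8),
    ("demand forecasting", 5_000_000, 0.3),
]

def score(benefit, data_readiness):
    """Multiply rather than add: a large benefit with no data, or rich
    data with no benefit, should both rank low."""
    return benefit * data_readiness

ranked = sorted(candidates, key=lambda c: score(c[1], c[2]), reverse=True)
for name, benefit, readiness in ranked:
    print(f"{name}: {score(benefit, readiness):,.0f}")
```

Note how the multiplication demotes the high-benefit forecasting project because its data is not ready, which is the practical point of weighing both factors together.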
The Data Science–Application Development/IT Gap
The teams that drive AI projects are usually innovation teams full of data scientists with a strong technical grasp of machine learning, natural language processing, algorithms, statistics and the associated toolsets. But look at a typical enterprise organization: at best it has a handful of data scientists and ML engineers, alongside hundreds if not thousands of application developers.
If you look at history, taking small-scale innovations and using them to create a large-scale, enterprise-wide deployment has always been the domain of core IT. Unfortunately, most of the industry’s innovations, typically led by the Googles and Amazons of the world, are still focused on empowering the data science group. However, this mindset completely neglects the needs of the IT community.
As a result, enterprises lack the tooling, skills and methods required to bridge the gap between data science workflows and software systems development workflows. I know attempts are being made by Amazon with SageMaker and Google with Cloud AutoML, but these toolkits still address only one part of the issue: the skills needed to build, train and run ML models. They still lack the tools and methods needed to turn AI into a production-class deployment.
For example, how will you ensure that an AI system built for production can scale with the amount of data it is consuming? How will you ensure that it has built-in explainability, algorithmic assurance and KPI-focused performance and optimization? It’s one thing to accurately identify a picture of a cat vs. a dog, but it’s another thing entirely if you cannot explain in a lawsuit why you rejected a medical claim based on a recommendation from your AI system. These are problems that the application development world has long solved through established methods but that are still very foreign in AI and data science.
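To make the explainability requirement concrete, here is a minimal sketch of a prediction that carries its own per-feature rationale, the kind of audit trail a production claims system would need. The model, its features and its weights are all invented for illustration:

```python
# Minimal sketch: a linear scoring model whose prediction is returned
# together with a per-feature rationale, so a rejection can later be
# explained. Features and weights are invented for illustration.

WEIGHTS = {
    "claim_amount_over_policy_limit": -3.0,
    "treatment_matches_diagnosis": 2.0,
    "provider_in_network": 1.0,
}
THRESHOLD = 0.0  # approve if total score >= threshold

def score_claim(features):
    # Each feature's contribution is recorded, not just the total.
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    total = sum(contributions.values())
    return {
        "approved": total >= THRESHOLD,
        "score": total,
        "rationale": contributions,  # the audit trail for this decision
    }

decision = score_claim({
    "claim_amount_over_policy_limit": 1,  # claim exceeds policy limit
    "treatment_matches_diagnosis": 0,
    "provider_in_network": 1,
})
print(decision)
```

A real system would use a far richer model, but the design point stands: the explanation has to be produced and stored at decision time, not reconstructed later.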
Access To Industry-Specific AI Building Blocks
One of the biggest obstacles to AI being adopted at scale in enterprises is the set of choices available to these organizations. To solve business problems with AI, you need the right technology platform and a partner with a deep understanding of your industry and its requirements. The plethora of horizontal data science platforms and general-purpose APIs from Watson, Google or AWS will not adequately solve the complex problems that enterprises face.
Every industry has its unique data sets, data models and regulatory requirements. Without an industry-focused approach, performance measurement for AI systems can only happen at the level of statistics. An improved F1 score rarely tells you whether the prediction model behind an optimal omnichannel experience for your website visitors has actually gotten better. The reality is that most enterprise business processes are far too complex to be solved one ML model at a time. Models have to be applied holistically at the business-KPI level, and we need the tooling, methods and building blocks to enable that.
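The F1 score itself is just the harmonic mean of precision and recall, computed from a confusion matrix. A quick sketch of the standard calculation (the counts are invented for illustration) shows why it says nothing directly about a business KPI:

```python
# Standard precision/recall/F1 calculation from confusion-matrix counts.
# The counts themselves are invented for illustration.

tp, fp, fn = 80, 20, 40  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # of the items flagged, how many were right
recall = tp / (tp + fn)     # of the items that mattered, how many we caught
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

None of these numbers maps directly to conversion rate, revenue per visitor or any other KPI the business actually tracks, which is the gap an industry-focused approach has to close.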
While speaking at a Recode event, Google CEO Sundar Pichai said AI is “one of the most important things that humanity is working on. It’s more profound than … electricity or fire.” And he’s right. We are in the early stages of this massive global-scale disruption for businesses, governments and individuals. Although there is a lot of promise ahead, to gain the true value of this technology, businesses need to find a way to make it scale with their business needs. I’m very optimistic about the amount of progress we can make in 2018 to truly unlock the power of human potential with AI. In future articles, I’ll explore how to overcome these adoption barriers, who is doing it and what we can learn from them.