Implementing AI Trust in Customer Experience
In 2019, Ovum conducted a market survey and analysis among customer experience (CX) managers and customer-facing employees regarding the current state of their companies' AI strategies as well as their future plans and visions. The survey included participants across New Zealand, Australia, Germany, France, the UK, Canada, and the US in five vertical markets: travel, retail, telco, financial services, and hospitality.
Broadly, the data showed participants viewing AI in their respective fields in a positive light, with more than half of participants claiming it increases customer satisfaction, employee job satisfaction, customer confidence, and the efficiency of inquiry response times. The opposing view, held by about one-third of participants, was that AI technology may make significant errors that reflect poorly on them, and that AI could replace their jobs. Although the data was more positive than not, it also revealed a clear lack of plan or strategy around AI technology. Twenty-two percent of surveyed CX managers indicated that their company uses AI with "no specific strategy," even more claimed to still be in a development phase, and only 12% claimed to be well on the way to implementing an AI strategy. While the UK, the US, and Canada appear to be the most proactive in preparing conscious, strategic AI for their environments, these countries still show considerable room for growth.
What we would like to highlight from this analysis are two main points: people are generally positive about the impact of AI across industries, and strategy and implementation lag far behind the pace at which new technologies are emerging.
As we always emphasize at CognitiveScale, AI is here to stay. It is being rapidly adopted in virtually every domain and is now seen by executives as a source of competitive advantage. Globally, we have moved on from the decade of learning what exactly AI is and can do, and we are now moving forward to implementation and more purposeful strategy. However, as planning and design become more specific, trusting and adopting an AI system requires confidence in its decisions. For AI to achieve widespread adoption among managers, employees, and customers, societal trust built on explainable models is absolutely critical.
CognitiveScale's Cortex Certifai was designed to help meet a growing need for customer trust in, and control of, automated decision making. We use a unique AI risk metric, the "AI Trust Index," applicable in every industry, to produce a FICO-like composite risk score for any black-box decision-making model based on five dimensions: fairness, explainability, robustness, data rights, and compliance. In this changing tide of CX and AI, we fit right into a critical space of assuring that current and developing technology is as fair, trustworthy, and transparent as possible.
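To make the idea of a composite score concrete, here is a minimal, purely illustrative sketch of aggregating per-dimension scores into a single index. This is not the Certifai API or its actual scoring methodology; the class names, 0-100 scale, and equal weights are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DimensionScore:
    """One evaluated trust dimension, scored 0 (worst) to 100 (best). Illustrative only."""
    name: str
    score: float
    weight: float  # relative importance in the composite

def composite_trust_index(dimensions: list[DimensionScore]) -> float:
    """Weighted average of dimension scores, analogous to a FICO-like composite."""
    total_weight = sum(d.weight for d in dimensions)
    return sum(d.score * d.weight for d in dimensions) / total_weight

# Example: the five dimensions weighted equally (scores and weights are hypothetical).
scores = [
    DimensionScore("fairness", 82, 1.0),
    DimensionScore("explainability", 74, 1.0),
    DimensionScore("robustness", 68, 1.0),
    DimensionScore("data rights", 90, 1.0),
    DimensionScore("compliance", 88, 1.0),
]
print(f"Illustrative trust index: {composite_trust_index(scores):.1f}")
```

The point of the sketch is simply that a single headline number can be traced back to its component dimensions, which is what makes such a score useful for comparing and auditing black-box models.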