I had an interesting chat with Arijit Sengupta, CEO of Beyondcore, this week. Beyondcore is one of the emerging companies starting to move the analytics market into its next phase: intelligent systems, or artificial intelligence (AI). Sengupta agreed with the two big problems I identified in my earlier piece on IBM's Watson Analytics, a competing platform, as we transition to AI models for analytics and business intelligence. First, these systems will take a substantial amount of time to train. Second, executives using them need to know the right questions to ask. He added a third problem: Most BI systems have bias built in at the front end. That bias is translating into analytics products that don't work and AI systems that can't be right. Unfortunately, there is a good chance most of the intelligence systems in the world suffer from this design flaw.
My Own Experience with Analysis and Bad Information
This conversation really hit home because I recalled a troubling experience at IBM early in my career. An executive came to me with a problem: Some of our biggest banking customers had Severity One problems that had been open for nearly a year. Severity One problems were measured in minutes, and the management reports indicated that they were normally being solved in minutes, not months or years. When we did the causality analysis, we found that a policy allowed the call center to dispute a trouble report and that, until the dispute was resolved, the report sat in a disputed bucket that was never reported on. Since staff were measured on the time these problems remained open, they wouldn't remove them from the disputed classification until a solution was found. Management was unaware of the problems because the reports it received didn't capture this behavior; the data was weeded out early.
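To make that concrete, here is a minimal sketch, using entirely made-up ticket data rather than anything from IBM, of how excluding a "disputed" bucket from a report keeps the reported average looking healthy while problems quietly stay open for months:

```python
# Minimal sketch with hypothetical data: a report that only counts
# "closed" tickets hides the Severity One problems parked in dispute.
from statistics import mean

tickets = [
    {"id": 1, "severity": 1, "minutes_open": 22,      "status": "closed"},
    {"id": 2, "severity": 1, "minutes_open": 35,      "status": "closed"},
    {"id": 3, "severity": 1, "minutes_open": 18,      "status": "closed"},
    {"id": 4, "severity": 1, "minutes_open": 480_000, "status": "disputed"},  # ~11 months
]

# What management saw: disputed tickets never made it into the report.
reported = [t["minutes_open"] for t in tickets if t["status"] == "closed"]
# What was actually happening across all Severity One tickets.
actual = [t["minutes_open"] for t in tickets]

print(f"Reported average resolution: {mean(reported):.0f} minutes")
print(f"Actual average resolution:   {mean(actual):.0f} minutes")
```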
I also did an in-depth review of why IBM had to fire its CEO in the early 1990s. In short, it was largely because those surrounding the CEO would scrub the information he received. He never saw the problems that resulted in his firing until they were so obvious that the board became aware of them and was forced to act.
In both cases, this wasn't a management failing; it was a system failure. The failures not only created erroneous reports, they resulted in bad decisions that were career-threatening, not only to the employees who corrupted the process but also to the executives who used the flawed reports to make decisions.
I’ve observed the firing of a lot of executives and CEOs over the years, and it has generally come down to one thing. They made the wrong decisions based on bad information. Often, the systems and people that actually caused the termination survive to plague the next administration, which speaks to why many turnaround efforts fail.
Analyzing Bad Information in the Affordable Care Act
Apparently, Beyondcore was brought in at one point as part of an audit to figure out why the Affordable Care Act wasn't performing better financially. Looking at the data, the company determined that there had been a flaw in the data capture. One of the key assumptions in the program planning was that young members would generate far more revenue than they would consume in insurance services, funding older members who would contribute far less than they consumed. For 83 percent of the young population, the assumption held. Unfortunately, for 17 percent, it didn't: that group consumed, on average, $7,000 in mental health charges annually. Since the contribution for young members was set at $1,000 per year, the math no longer worked for this segment. Worse, in theory, those with these expenses had a greater incentive to sign up and contribute to the program than those without them, potentially creating an even wider gap between services and revenues. In this instance, the root of the problem was likely twofold: the people who wanted the bill to pass didn't want to find problems like this, so they didn't look for them, and they likely objected when anyone else pointed them out.
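As a back-of-the-envelope check on those figures (the $1,000 contribution and the 83/17 split come from the account above; ignoring every other type of claim is my own simplification for illustration), the expected mental health cost alone already exceeds the revenue from an average young member:

```python
# Back-of-the-envelope sketch of the numbers cited above. The contribution
# level and the 83/17 split are from the article; ignoring all other claim
# types is a simplifying assumption, not part of the actual audit.
contribution_per_member = 1_000   # annual revenue per young member
share_high_cost = 0.17            # fraction with significant mental health claims
avg_mental_health_cost = 7_000    # average annual charges within that group

# Spread across the whole pool of young members:
expected_cost = share_high_cost * avg_mental_health_cost   # 0.17 * 7,000 = 1,190
net_per_member = contribution_per_member - expected_cost   # 1,000 - 1,190 = -190

print(f"Expected mental health cost per young member: ${expected_cost:,.0f}")
print(f"Net per young member (before any other claims): ${net_per_member:,.0f}")
```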
Analyzing Bad Information Against the Affordable Care Act
The Republican Party also has a set of beliefs about the Affordable Care Act that have largely proved to be untrue. The points presented to block the Act, for the most part, miss the actual problems that could have better supported the party's position. In short, Republicans prevented themselves from seeing real problems, even though they were looking for them, because they based their positions on beliefs rather than facts. And most of those beliefs had no more data behind them than the beliefs of the Democrats supporting the program. They concluded that the program was bad before doing the analysis.
Death by Bias
This series of errors is likely why companies with BI systems in place don't seem to do much better than those without them, even though these systems should provide a substantial advantage. The information going into the systems has been altered before the analysis, and once that happens, the analysis can't be accurate. Instead of assuring success, it will more likely assure failure. This isn't a technical problem; it is a process problem.
Wrapping Up: Knowing Which Questions to Ask and the Future of AI
One of the big problems with intelligence systems is that the user may not know which questions to ask. This is where systems like Beyondcore are increasingly focused. Training executives, who have substantial distractions and like to see information that agrees with their worldview, to ask the critical questions likely won't work. The system has to point to anomalies in the data and recommend actions based on them, or the decision maker is likely to ignore the information they don't like and fail as a result. In that case, the system, whether it is BI or AI, will have failed because the human element, rather than being an asset, has become the biggest liability.
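As a generic illustration of that idea (this is not Beyondcore's actual method, just a simple outlier check on made-up numbers), a system can scan a metric, flag values that deviate sharply from the norm, and attach a recommendation rather than waiting for someone to ask the right question:

```python
# Generic illustration only: surface anomalies in a metric and recommend a
# review, instead of waiting for the decision maker to ask about them.
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if sigma and abs(v - mu) / sigma > threshold]

# Hypothetical average monthly claim cost per member, by segment
segments = {
    "young_members": [950, 1010, 980, 1020, 990, 1005, 975, 6800],
    "older_members": [4100, 3950, 4200, 4050, 4150],
}

for segment, costs in segments.items():
    for value in flag_anomalies(costs):
        print(f"Anomaly in {segment}: {value} -- recommend revisiting the assumptions for this segment")
```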
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+