As service providers and businesses of all sizes continue on the path to becoming digital enterprises, a wave of new and innovative solutions is coming to market. Devices, readers, platforms and software that enable digital supply chains, IoT and the means to connect anything are being rapidly developed and deployed. As buyers, we get caught up in the innovation and often forget that, as amazing as these solutions are, they have to serve the interests of the entire business, not just a single function. Technology solves important business challenges, but when trying to understand digital strategies and commit capital to implementing new technologies, a fundamental challenge remains, and it's not risk versus reward; it's risk versus scale.
We've all read the post-mortems on big-bang transformation failures, and the prevailing wisdom is to try something on a small scale, get the quick and visible win, and then move on to the broader implementation. And that works, it really does. But what if that slick new solution doesn't, or worse, can't, scale to hundreds of devices or thousands of customers? Suddenly that low-risk approach becomes unworkable, and businesses are faced with two choices: wait for the vendor to scale the solution, which means expensive reengineering, or start over.
Measure Twice, Cut Once
When evaluating emerging technologies and innovation, it's important to truly understand what problem is being solved. "Being digital" isn't a business requirement. "Implementing on-line customer care" is. But with that seemingly straightforward requirement comes the need to understand what it means from a business standpoint. There are multiple solutions for every requirement, and it's incumbent on the buyer to understand not only what a solution does, but how, or even if, it fits. During trials and lab tests, buyers can execute use cases and uncover potential functional hazards. Customers can demand proof of interoperability and integration with existing systems and data or, at a minimum, an approach to get there. That's managing the risk.
But scale is difficult to prove in a lab trial. Maybe each use case executes just fine, but what happens when the system needs to run them all at the same time? What happens when 100 customers access the same on-line sales or support channel? Or 1,000? Many of the solutions coming to market have been architected to scale from a volume standpoint: they can support 1,000 customers in an individual use case. But can they scale in parallel? What happens when 1,000 customers are clamoring for support across five different channels?
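That kind of parallel load is something a buyer can probe cheaply before committing. A minimal sketch in Python, assuming a hypothetical handler that stands in for the system under test (in a real trial it would wrap a client call to the vendor's channel endpoint, and the channel names are illustrative):

```python
import concurrent.futures
import time

def handle_request(channel: str) -> float:
    """Hypothetical stand-in for one support-channel request.

    Replace the sleep with a real client call to the system under test;
    the return value is the observed latency in seconds.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # simulated per-request service time
    return time.perf_counter() - start

def run_load(customers: int, channels: list[str]) -> dict:
    """Fire one request per customer on every channel concurrently.

    Returns the request count and the 95th-percentile latency, the kind
    of figure a single-use-case lab demo rarely surfaces.
    """
    jobs = [(c, ch) for c in range(customers) for ch in channels]
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(lambda job: handle_request(job[1]), jobs))
    latencies.sort()
    return {
        "requests": len(latencies),
        "p95_seconds": latencies[int(0.95 * len(latencies))],
    }

if __name__ == "__main__":
    channels = ["web", "chat", "email", "phone", "social"]
    for n in (100, 1000):
        print(n, run_load(n, channels))
```

The point of the sketch is the shape of the question, not the harness: run every channel at once, then watch whether tail latency holds as the customer count grows by an order of magnitude.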
Evaluating scale requires an examination of the foundational components of a system. Are the solutions:
- Data-driven, and agile enough to support a variety of constantly changing customer journeys?
- Able to meet performance demands? Not just isolated functionality but everything happening at once.
- Able to handle and manage data at line rates?
- Using rules or cognition to adjust and tune system behavior?
- Open, configurable and adjustable without expensive services or specialized skills?
Scale is a risk factor, but one that is largely forgotten or misunderstood in the hype of new technology or the awe of innovation. The ability to scale is something that has to be designed in from the start, not added after an innovation hits the market. That one-off amazing solution might not be so amazing at scale. As consumers of these solutions, it’s our job to know for sure.