The doorbell started life as a simple mechanical device. It had one button with a single function: to ring a bell.
Today’s version is likely to have a camera, a motion sensor, video recording, and a smartphone interface that can access data sent from the doorbell to the cloud. It is no longer just a doorbell; it is a complete security system.
The evolution of the doorbell is just one example of digital transformation—the use of technologies such as data analytics, connectivity, cloud computing, and AI to transform products, processes, and entire systems.
Almost every organization seems to include digital transformation in its vision and strategy, but most struggle with executing digital transformation initiatives. There are myriad reasons: the challenges of introducing new technologies and providing the workforce with relevant skills, ensuring that the company’s culture and organizational structures are conducive to change, and anticipating correctly which processes need to change and how, to name a few.
To effect change, some organizations begin with proof-of-concept and pilot projects. They soon find themselves mired in a “pilot purgatory,” unable to scale up by formalizing the piloted approaches and making them part of the company’s standard workflows and practices. Other organizations start with large infrastructure development efforts that are difficult to execute and fail to meet the requirements of the actual projects, workflows, or products that emerge from the transformation strategies.
We have observed that organizations are often most successful with digital transformation when they adopt a pragmatic approach.
What Is Pragmatic Digital Transformation?
Pragmatic digital transformation does not require starting from the ground up or completely overhauling existing processes and assets. Quite the reverse: its fundamental principle is reuse. Data and models—and the engineering teams’ associated skills in developing analytics, models, and simulations—are applied systematically to workflows throughout the life cycle of the product or service.
The systematic use of data can start with analytics developed specifically to get insights from experimental and research data. But it also means scaling and extending those analytics to huge, heterogeneous sets of live and archived data, acquired from manufacturing, maintenance records, and other business processes, to enable data-driven decisions not only during research and design but also in production, operations, and maintenance.
Systematically Using Data: From Data Siloes to Data Analytics
As organizations recognize today, the challenge is not the lack of data but the crushing volume and variety of their data—not only engineering, scientific, and field data but also business and transactional data. The diversity of data management approaches adds to the complexity: Data may be stored on-premises or in the cloud, in consolidated data lakes or separate databases, in relational databases or spreadsheets. And every data store may have a different governance policy and access permissions.
Digital transformation begins when the accumulated knowledge and transformative potential of this data can be uncovered and applied systematically throughout the product life cycle. The core tasks are, first, to integrate data from multiple repositories; second, to develop analytics that are easy to use and access; and third, to integrate those analytics into the workflow at the right time to enable groups throughout the organization (engineering, business-unit management, analysts, service teams, and more) to apply insights from the data to improve processes or designs.
Using Big Data Analytics to Optimize Manufacturing Processes at GSK Consumer Healthcare
GSK Consumer Healthcare’s R&D team wanted to improve manufacturing processes and increase capacity at the company’s toothpaste manufacturing plants. The most cost-effective approach, they knew, would be to systematically use the historical data they had accumulated over the years. They set out to see whether they could learn from that history to make better products.
“Lying hidden within the servers and notebooks of the manufacturing communities, there exists a wealth of untapped knowledge. It is long overdue that we brush off this diligently collected process data and begin to learn the secrets it hides.”
Bob Sochon, GlaxoSmithKline Consumer Healthcare
They began by focusing on process data. Accumulated across all their factories, formulations, and batches, the data amounted to terabytes, and it was housed in separate siloes and systems, in different formats.
To get insights from their process data, GSK first needed to clean it by filtering out noise, filling in missing data, and removing outliers. They could then use it to compare phases from batch to batch.
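The cleaning steps described here—filling in missing data, removing outliers, filtering noise—follow a common pattern. GSK’s implementation was built in MATLAB; the sketch below illustrates the same pipeline in Python with pandas, using a hypothetical temperature channel and illustrative thresholds:

```python
import pandas as pd

def clean_process_data(df: pd.DataFrame, col: str = "temperature") -> pd.DataFrame:
    """Sketch of a batch-data cleaning pass: fill gaps, drop outliers, smooth noise."""
    out = df.copy()
    # Fill short sensor gaps (up to 5 consecutive samples) by linear interpolation
    out[col] = out[col].interpolate(limit=5)
    # Remove outliers with a robust median/MAD rule (about 3 standard deviations
    # for Gaussian data); a sketch only -- degenerates if the MAD is zero
    med = out[col].median()
    mad = (out[col] - med).abs().median()
    out = out[(out[col] - med).abs() <= 3 * 1.4826 * mad]
    # Smooth residual noise with a centered rolling median
    out[col] = out[col].rolling(window=5, center=True, min_periods=1).median()
    return out.reset_index(drop=True)
```

The median/MAD rule is used instead of a mean/standard-deviation cutoff because a single extreme reading can inflate the standard deviation enough to mask itself.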
The R&D team built an algorithm in MATLAB® to sort and tag the data by formulation phase (Startup, Add Silica, or Finishing), and ran this algorithm across all their process data. They built an interface in MATLAB that enables their process engineers to select and observe data by formulation combination, batch, and operator.
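The tagging step can be sketched in a few lines. Again, GSK built theirs in MATLAB; this Python illustration assumes a hypothetical batch log in which phase-change events are recorded on some rows, and labels every sample by forward-filling from the most recent event:

```python
import pandas as pd

# The three formulation phases named in the GSK case study
PHASES = ["Startup", "Add Silica", "Finishing"]

def tag_phases(log: pd.DataFrame) -> pd.DataFrame:
    """Tag each sample with its formulation phase by forward-filling
    from the rows where a phase-change event was recorded."""
    out = log.copy()
    # Keep only recognized phase events, then carry each one forward
    # until the next phase change
    out["phase"] = out["event"].where(out["event"].isin(PHASES)).ffill()
    return out
```

With the data tagged this way, a batch can be compared phase by phase against other batches of the same formulation.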
By linking manufacturing phases to analytical data, GSK has seen dramatic improvements in both processes and capacity—for example, vessel heating time, which used to take 30 minutes, now takes just two minutes. These improvements translate into significant business benefits: reduced time to market for new formulas and increased output from factories previously thought to be close to full capacity.
Extending the Use of Models: From Development to the System in Operation
The systematic reuse of models is a basic principle of Model-Based Design, where models form a digital thread connecting development, design optimization, code generation, and verification and validation. This digital thread does not need to be limited to the development process; it can be extended to deployed systems in operation when design models are reused as digital twins. A digital twin—an up-to-date representation of a system or subsystem as it operates—can be used to assess the current condition of the asset, and more importantly, optimize the asset’s performance or perform predictive maintenance.
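The core mechanic of this kind of digital twin—running a model of the asset alongside the asset itself and watching the gap between them—can be sketched simply. The first-order model, parameters, and tolerance below are illustrative assumptions, not any vendor’s actual product model:

```python
def simulate_twin(u, y0=20.0, k=0.5):
    """Illustrative first-order model of an asset's response:
    y[t] = y[t-1] + k * (u[t] - y[t-1]), where u is the commanded input."""
    y = [y0]
    for x in u[1:]:
        y.append(y[-1] + k * (x - y[-1]))
    return y

def drift_alerts(measured, predicted, tol=2.0):
    """Indices where the operating asset deviates from its twin by more
    than tol -- candidates for inspection or predictive maintenance."""
    return [i for i, (m, p) in enumerate(zip(measured, predicted))
            if abs(m - p) > tol]
```

A growing residual between the twin and the measured signal is what turns a design model into a condition-monitoring tool: the same equations used for development now indicate when the deployed asset is degrading.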
Minimizing the Cost of Ownership with Simulation and Digital Twins at Atlas Copco
Air compressor manufacturer Atlas Copco has turned the systematic use of models and data into a collaboration platform that streamlines communications across their technical organizations and within their global sales organization.
In developing their new ZR 160 VSD+ product line, Atlas Copco engineers had two priorities: reliability (if a single compressor fails, the entire production plant fails) and energy efficiency (electricity accounts for 75% of the total life cycle cost of a compressor, a considerable amount when the average compressor runs day and night for 10 years).
Not only did the team want to design an efficient product, they also wanted to design the product efficiently.
They implemented a digital twin–based framework to manage the models, the data, and product variants. The same models drive the configuration applications that their sales and application engineering teams use to configure and quote systems for specific customers.
With this platform they can quickly implement and deploy upgrades onto 120,000 machines that are in operation worldwide. Each machine is equipped with up to 50 sensors that continuously relay data back to the Atlas Copco data warehouse, enabling the service division to set up customer-specific predictive maintenance strategies based on real-time information on the condition of the machine.
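A predictive maintenance strategy built on such telemetry can be as simple as per-machine limit checks on recent sensor trends. The sketch below is a minimal illustration; the channel names and limits are made up for the example, not Atlas Copco’s actual rules:

```python
from statistics import mean

# Hypothetical per-channel alarm limits for a compressor
LIMITS = {"vibration_mm_s": 7.1, "oil_temp_c": 95.0}

def needs_service(telemetry, window=10):
    """Return True if the recent average of any monitored channel
    exceeds its limit. telemetry maps channel name -> list of readings,
    oldest first."""
    for channel, limit in LIMITS.items():
        recent = telemetry.get(channel, [])[-window:]
        if recent and mean(recent) > limit:
            return True
    return False
```

Averaging over a recent window rather than alarming on single readings filters out momentary spikes, so a service visit is triggered by a sustained trend in the machine’s condition.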
“We use a digital twin as the single source of truth and then build applications on top so that everyone has access to the same data and information.”
Carl Wouters, Atlas Copco
Improved Processes, Deeper Insights
In pragmatic digital transformation, previously siloed data is combined and applied throughout development and deployment to improve processes and provide insights into system performance. A system model captures the high-level system behavior as well as the detailed subsystems. Those models connect to system requirements for traceability and early validation. Subsystem models can be reused to generate an implementation in software or on an FPGA. And the models are reused for integration, validation, and verification, either at the model level or operating on the actual code.
By taking a pragmatic approach, organizations can reap the business benefits of digital transformation—improved quality, higher output, cost savings—while avoiding the struggles and pitfalls that deter some from embarking on, or even contemplating, a digital transformation initiative.