23.11.2022

Model-based digital transformation is THE vehicle to move industries to the digital future. This approach is actionable from the start, fosters agile work, and delivers valuable results.

*by Tim Varelmann*

Over 70 years ago, three physicists with an interest in electricity developed the transistor. What was just one of many things developed at Bell Labs (a company which attracted many talented technical people at the time) turned out to change the world profoundly. Transistors could be used to build computers, whose sole purpose had been - well, computation. However, the ability to compute automatically enabled a chain of other use cases: computers could be connected to build the internet and revolutionize communication. New communication methods were the foundation for many other modern developments, like online shopping and delivery, or work-from-home. One of the ramifications currently emerging from the development of computers is the digital transformation of entire industries.

This article explores the nature of digital transformation and specifically the following questions:

• Why is digital transformation so powerful in industrial settings?

• Why is it happening now?

• What is a strategy to successfully influence the transformation?

The answer to these three questions requires going back to the roots: automatic computation was the first thing computers did. Thus, mathematics is a natural ally for exploring the cumulative effects of automatic computation. However, as the digital transformation is more elaborate than automatic computation, we need a more elaborate approach than plain algebra. A mathematical approach that can describe complicated systems is mathematical modelling. This article will illustrate its arguments with industrial examples. Many industrial examples. The breadth of examples shows how universally helpful mathematical models are.

A mathematical model is a way to describe a real-world system using math. The real-world system can be anything from something big, such as a chemistry park, to something small, such as a bacteria population. Other examples include rockets, factories, electricity grids, logistic networks, and the temperature and humidity in a house. They can all be described with mathematical models. Mathematical models are as versatile as natural language, but they are unambiguous because they use math terms.

Logic is a way of translating sentences which use words like 'and', 'or', and 'not' to math. Sometimes, we also need to express more complicated things using words such as 'if', 'only', 'then', and 'otherwise'. An example would be: "We can NOT operate conveyor belt 3 AND conveyor belt 5 at the same time".
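To make this concrete, here is a minimal sketch of how such a statement becomes an inequality over 0/1 variables. The function name and variable names are purely illustrative, not from a real model:

```python
# "We can NOT operate conveyor belt 3 AND conveyor belt 5 at the same time"
# becomes the inequality belt3 + belt5 <= 1, where each belt variable is
# 1 (running) or 0 (off). All names here are illustrative.

def belts_ok(belt3: int, belt5: int) -> bool:
    return belt3 + belt5 <= 1

print(belts_ok(1, 1))  # False: both belts running is forbidden
print(belts_ok(1, 0))  # True: only belt 3 is fine
print(belts_ok(0, 0))  # True: both off is fine
```

One inequality captures the whole statement: any combination that runs both belts at once violates it, and every other combination satisfies it.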

A more complicated statement is “ONLY IF we decide to build a warehouse in Brussels, THEN we can send trucks from Brussels to Paris. OTHERWISE we will have to cater to Paris from Munich OR Barcelona”. Logic is the mathematical translation of the keywords printed in capitals. To describe the elements between these keywords, we use variables and equations:

Variables form the heart of a mathematical model. They use numbers to describe the potential actions that may happen in the system under consideration.

Building a warehouse is a yes or no decision, and we can represent the options with 0 (don't build) or 1 (build). Whether to build a warehouse becomes a binary variable in the model.

A logistics company with 23 trucks can send some trucks from Brussels to Paris. The number of trucks sent on this route can take any value between 0 and 23. However, sending 3.5 trucks to Paris will not work. The decision of how many trucks to send from Brussels to Paris becomes an integer variable in the model.

It is possible to transport 3.75 metric tons of sand with conveyor belt 5 between 2pm and 3pm. The decision on how much sand to transport on belt 5 between 2pm and 3pm becomes a continuous variable in the model because its values may be decimal numbers.
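The three variable types above can be sketched as small domain checks. A minimal sketch in Python, with the 23-truck bound taken from the example and all function names invented for illustration:

```python
# Domain checks for the three variable types from the examples above.
# The 23-truck fleet size comes from the text; names are illustrative.

def is_valid_binary(x) -> bool:
    # build a warehouse: yes (1) or no (0)
    return x in (0, 1)

def is_valid_integer(x, lower=0, upper=23) -> bool:
    # trucks from Brussels to Paris: whole numbers between 0 and 23
    return isinstance(x, int) and lower <= x <= upper

def is_valid_continuous(x, lower=0.0, upper=float("inf")) -> bool:
    # tonnes of sand on belt 5: any non-negative decimal number
    return lower <= float(x) <= upper

print(is_valid_binary(1))        # True: build the warehouse
print(is_valid_integer(3))       # True: 3 trucks
print(is_valid_integer(3.5))     # False: half a truck will not work
print(is_valid_continuous(3.75)) # True: 3.75 t of sand is fine
```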

To describe relationships between the variables of the model, we use equations and inequalities. For example, the physics of friction can be described by an equation. When a car moves at some velocity, such an equation can be used to compute how much fuel the car needs to keep moving at this velocity. To complete the mathematical model, we may have to consider some limitations of the system, such as safety limits and capacities. Here is one example for each of them:

**Safety limitation:**

Glass production limited to 20 metric tonnes per hour to prevent overheating of the plant:

Prod_glass ≤ 20 t/h

**Limited capacities:**

A storage tank can hold 12 tonnes of sand:

Store_sand ≤ 12 t
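A sketch of how such limitations, together with the warehouse logic from the first section, can be checked for a proposed plan. The 20 t/h, 12 t, and 23-truck bounds come from the examples above; everything else is illustrative:

```python
# Check a proposed plan against the limits discussed above.
# Bounds come from the text; names and the plan itself are illustrative.

GLASS_PROD_MAX_TPH = 20.0  # safety limit on glass production
SAND_STORE_MAX_T = 12.0    # storage capacity for sand
FLEET_SIZE = 23            # trucks available in Brussels

def plan_feasible(prod_glass_tph, store_sand_t, build_brussels, trucks_to_paris):
    return (
        prod_glass_tph <= GLASS_PROD_MAX_TPH    # safety limitation
        and store_sand_t <= SAND_STORE_MAX_T    # capacity limitation
        # "ONLY IF we build in Brussels, THEN trucks may go to Paris":
        and trucks_to_paris <= FLEET_SIZE * build_brussels
    )

print(plan_feasible(18.0, 10.0, 1, 5))  # True: everything within limits
print(plan_feasible(18.0, 10.0, 0, 5))  # False: trucks without a warehouse
print(plan_feasible(22.0, 10.0, 1, 5))  # False: glass safety limit exceeded
```

Note how the "ONLY IF ... THEN" keyword pair becomes a single inequality linking the binary warehouse variable to the integer truck variable.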

A model is a universally understandable collection of a system's specifications, details about its components, and a description of its environment in one place. Mathematics is a lot clearer than an amorphous mixture of documents formulated in natural language(s). This leads to the following benefits:

• When system components and the environment change frequently, and sometimes simultaneously, having to adapt just a single model makes maintenance easy.

• Knowledge is explicitly formulated, not hidden in the heads of experienced operators.

• Model outcomes/predictions are unambiguous as well and can largely be processed and interpreted automatically by software.

• Tests for inner consistency and correctness of a model can be automated and regularly performed.

• Models are executable in silico, even if the real system does not (yet) exist.

The latter points lead directly to the next section:

Model-based design can explore and refine large parts of the final design without relying on physical prototypes by examining model behavior in simulation software. The drastically reduced need for physical prototyping cuts the cost of designing, the time the design process needs, and the damage of the mistakes that inevitably happen in every project. As a result, a tremendous number of design alternatives can be conceptualized and examined. Instead of spending time and other resources on building physical models, we can direct more effort towards exploring multiple design alternatives. An impressive example of the power of this approach: when Tesla was just a start-up, conventional wisdom in the automobile industry was that designing a car from scratch that could compete with the industry standard was impossible. Nevertheless, Tesla quickly developed a car that did not merely match the industry standard, but exceeded it and defined a new one. Model-based design was a crucial pillar of Tesla's rise.

And the best part? Mathematical modeling applies broadly across all kinds of domains. Not having to build a car to examine its design is great, but building one car and then deciding to focus on a different design for serial production does not immediately break the bank. In other domains, building a system only to discard it is simply impossible: electricity grids and molecular motors are some examples from my research at RWTH Aachen University and MIT. For cars, model-based design allows testing more design alternatives and prototyping only the most promising ideas. In the more complicated domains, model-based design is the only way to try something at all without actually building the final system. Thus, model-based design is even more valuable in these domains.

Building a model prototype from scratch need not be a daunting task if you know an effective way to start. Model-based design facilitates iterative and agile design processes and the reuse of former prototypical models. Mathematical models can be built incrementally, from crude sketches to sophisticated, well-structured digital twins.

In physical and scientific domains, we can almost always derive a very simple model by enforcing basic conservation laws, such as the conservation of mass and energy through mass and energy balances in a chemical reactor. We could complete such a first prototype with assumptions that make life easy, e.g. a thermal flow of zero, which models perfectly adiabatic insulation from the environment. These simple models can often make qualitatively reasonable predictions. We can then improve the quantitative precision: if necessary, we iteratively remove simple assumptions and replace them with increasingly sophisticated descriptions of reality, most likely in the form of one or several submodels. Let's consider the thermal insulation of a chemical reactor as an example for which we incrementally build a sophisticated model from a very simple one. This is an important task, as many chemical reactions only work well in a specified temperature range. For operators of chemical processes, it is therefore important to understand how much heat is lost to the environment, so they can control just the right amount of heating for a reactor.
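To make the two modelling stages concrete, here is a minimal sketch of an energy balance for a stirred reactor: first with the easy adiabatic assumption (zero heat loss), then refined with a simple heat-loss submodel. All parameter values are invented for illustration:

```python
# Sketch of incremental model refinement. Euler integration of the
# energy balance m*cp*dT/dt = Q_heat - Q_loss(T). All numbers are
# made up for illustration.

def simulate_temperature(heating_kw, hours, heat_loss=None, t0_c=80.0):
    m_cp = 500.0   # heat capacity of reactor contents, kJ/K (illustrative)
    dt = 60.0      # time step, seconds
    temp = t0_c
    for _ in range(int(hours * 3600 / dt)):
        q_loss = heat_loss(temp) if heat_loss else 0.0  # kW
        temp += (heating_kw - q_loss) * dt / m_cp
    return temp

# Prototype: adiabatic assumption, so any heating drives the temperature
# up without bound.
adiabatic = simulate_temperature(heating_kw=2.0, hours=4)

# Refinement: a heat-loss submodel Q_loss = U*A*(T - T_env) pulls the
# reactor towards a steady state where heating and losses balance.
def wall_losses(temp_c, ua_kw_per_k=0.1, t_env_c=20.0):
    return ua_kw_per_k * (temp_c - t_env_c)

refined = simulate_temperature(heating_kw=2.0, hours=4, heat_loss=wall_losses)
print(round(adiabatic, 1), round(refined, 1))
```

The adiabatic prototype heats up indefinitely, while the refined model settles near the steady state where heating equals wall losses. This is exactly the qualitative-to-quantitative progression described above, and the heat-loss term is a typical first submodel.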

While step-by-step a model becomes more complex, the tests for the model can grow together with the model in similar steps. Every feature extension has a well-defined and manageable scope, which leads to well-defined and manageable test extensions. This increases test completeness when the model transitions to a productive solution and identifies inevitably occurring errors early, when they are cheap and simple to fix.

In the end - compared to a design process where specification, design, and tests are considered separately - model-based design leads to a higher-quality final product in a shorter development time.

Of course, the operation of already existing systems can be modeled as well. The sheer power of virtually crunching numbers, without having to act anything out in the real world, proves precious, e.g. in production planning and logistics. In fact, I would argue that models are the foundation of the digital transformation: design, simulation, and optimization all rely on models to truly explore all options and eventually identify the best solution.

Complicated models can be structured into intuitive modules. Each module is a subsystem with a clearly described boundary. At the boundaries of the subsystems, the interactions between subsystems are specified. Many small subsystems with well-defined and manageable scopes therefore break down the overall complexity of the full model. This is important for several reasons:

• Beginners can learn an existing model step-by-step.

• Experts can focus on their area of expertise and rely on well-defined interfaces to other parts of the model.

• In our ever-changing environment, adapting a modularised model requires only local adjustments.

As an example, let us consider the reactor from the previous section. This time, the reactor and its heat loss to the environment are considered as part of a chemical plant - an arguably complicated system. However, by dividing the plant into smaller components and clearly describing the points of interaction between the components, the plant model remains manageable.
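As a sketch of what such modularisation can look like in code, each component becomes a small module with an explicit interface, and the plant model only wires the modules together. Component names and numbers are purely illustrative:

```python
# Sketch of a modularised plant model. Each module exposes a small,
# explicit interface; the plant only touches those interfaces.
# All names and numbers are illustrative.

class Furnace:
    def produce(self, sand_in_t: float) -> float:
        """Interface: tonnes of sand in -> tonnes of glass out."""
        return 0.7 * sand_in_t  # simple illustrative yield model

class SandStorage:
    def __init__(self, capacity_t: float = 12.0):
        self.capacity_t = capacity_t
        self.level_t = 0.0

    def withdraw(self, amount_t: float) -> float:
        """Interface: requested tonnes -> tonnes actually delivered."""
        delivered = min(amount_t, self.level_t)
        self.level_t -= delivered
        return delivered

class Plant:
    """The plant model just wires modules together via their interfaces."""
    def __init__(self):
        self.storage = SandStorage()
        self.furnace = Furnace()

    def run_hour(self, sand_request_t: float) -> float:
        sand = self.storage.withdraw(sand_request_t)
        return self.furnace.produce(sand)

plant = Plant()
plant.storage.level_t = 10.0
glass = plant.run_hour(4.0)
print(glass)  # 4 t of sand -> 2.8 t of glass
```

Replacing the simple furnace yield model with a sophisticated one is now a local change: as long as the interface (sand in, glass out) stays the same, the rest of the plant model is untouched.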

Properly modularised models are accessible ways of collecting information. Interestingly enough, the availability of well-structured information triggers some magic in human brains: pattern recognition. The power of pattern recognition is that identifying a pattern in a class of problems goes a long way toward also identifying a solution for that class of problems. So, by modularising models, we can move from solving individual problems to solving classes of problems. A solution to all problems of a problem class is nothing but an instruction for automating the solution of these problems.

Highly automated systems that take care of many details are perceived as 'smart' these days. Under the hood, there is just software written by a smart developer who has recognized an abstract pattern in the task and implemented a systematic solution that exploits this pattern. I want to give three examples:

• Numerical optimization solvers come with interfaces that allow the logic of a model to be formulated the way a human modeler thinks. Internally, the logic is represented through multiple mixed-integer constraints. Under the hood, a software developer took a list of logical keywords (like AND, NOT, IF; we saw them in the first section of this article) and, for each keyword, developed equations and inequalities involving binary and continuous variables that represent it. A human modeler can then model a system using one keyword after the other, without having to translate every keyword into its corresponding equations.

• Voice assistants in your phone or 'smart home' seem like they can listen to and understand you. Under the hood, they just identify syllables in spoken language (and even this ability took decades of training). Then they match the series of identified syllables against a dictionary that lists which syllables make up each word of a language.

• In my final PhD defense, I presented software that translates dynamic optimization problems formulated in chronological time to dynamic optimization problems formulated in terms of wavelet coefficients. This enables a powerful reduction of the problem size and, as a second-order effect, adaptive time grids.

The recurring theme in all these examples is that some mundane task (translating logic to mixed-integer formulations, matching syllables to words, wavelet transformation) has been done once for all possible cases (e.g. all logical keywords, all words in a dictionary, all chronological time signals) and in a properly structured way. As a result, we can work much faster on a higher level (changing an AND to an OR is faster than swapping one set of equations and inequalities for another) and consider systems that enable work on this higher level 'smart'.
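To give a flavor of the wavelet example, here is a deliberately simplified sketch (the actual PhD software is far more elaborate): one level of a Haar transform splits a chronological signal into averages and details, and smooth stretches of the signal produce near-zero details that can be dropped, which is the kind of size reduction mentioned above:

```python
# One level of a Haar wavelet transform: each pair of neighboring values
# becomes an average and a detail. Smooth regions yield near-zero details.
# A deliberately simplified sketch for illustration.

def haar_level(signal):
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

signal = [4.0, 4.0, 4.0, 4.0, 8.0, 0.0, 2.0, 2.0]
averages, details = haar_level(signal)
print(averages)  # [4.0, 4.0, 4.0, 2.0]
print(details)   # [0.0, 0.0, 4.0, 0.0]
# Three of the four details vanish: the smooth parts of the signal need
# fewer coefficients than the raw chronological representation.
```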

So why is the digital transformation happening now? My impression is that, in some domains, model quality and structure have now become good enough to systematically trigger pattern recognition. The people working with these models can make abstractions and thereby make their solutions smart.

I really think the automation that is possible through such abstractions is a blessing. It removes mundane tasks from our human to-do lists. However, this does not make human labor redundant; rather, it makes it more powerful. When we work with well-abstracted models, we can address problems of larger scope than we could handle without automation, because we spend more time doing what humans do best: being creative, exploring, and interpreting the results of such work. And we will still be able to create and interpret better than computers for a long, long time.

Any AI technology is just a peculiar model-algorithm combination once it reaches a productive state. Thus, as a side effect, using models builds a culture and infrastructure for using AI effectively. Here is how both culture and infrastructure can develop through industrial model usage:

1. Mathematical models are built starting from first principles. First-principles knowledge is typically available in a company. Therefore, the behavior of these early mathematical models is transparent and comprehensible. The model just appears as a collection of things that were already known before. However, as we saw in the second section of this article, this collection allows people to work faster and make fewer errors.

2. Because of the improved productivity, models are scaled up. Proper modularization ensures they remain comprehensible. The more widespread the use of models, the more people will be involved in their development and understand their part of the model. An increasing fraction of the company's workforce works with models on a daily basis.

3. This path also triggers the company-internal setup of the necessary digital infrastructure: databases, version control, deployment pipelines, and cloud storage. The infrastructure can grow based on company-specific processes and needs. Continuous testing and verification of results will be automated as well.

4. Finally, more elaborate techniques involving machine learning and artificial intelligence can be used in precisely defined areas where they are powerful.

In the introduction, I claimed that digital transformation by means of mathematical models is actionable, agile, and valuable. I also claimed that this applies in virtually every industry. Over the course of the article, I presented the fundamental reasons: starting to use mathematical models simply means collecting all available information and formulating it in the unambiguous language we call mathematics. As the collection grows, modularization introduces an easy-to-maintain structure for all that information. Soon, more complicated problems can be tackled, because modularization also facilitates the reuse of existing models. Next, in a very natural progression, automation follows, because a good structure of information makes people creative at abstracting away the mundane parts of their daily work. Each of these actions has immediate benefits, but chaining all the 'next steps' leads to an even more valuable effect: the cumulative benefit of using mathematical models is a digital transformation that reduces error rates, streamlines internal processes, and fosters human creativity for the gain of employees, shareholders, and customers.

Thanks to Fabian Viefhues for proofreading this article and providing valuable feedback.
