Current research projects

Resilient supply chain planning for Fast-moving Consumer Goods

For many industries, increasing the resilience of their supply chains has become a top priority in recent years. In this project's context, we regard resilience as the ability to capture and cope with uncertainty in supply chain planning so as to be well prepared to pivot effectively when conditions change. In general, uncertainties related to demand and supply complicate the already complex planning procedures. However, as in many industries, supply chain planning in Fast-moving Consumer Goods (FMCG) still largely relies on deterministic approaches (i.e., single-value planning). Maintaining buffer stocks is just one example of how companies indirectly prepare for uncertainty: they add slack to their deterministic planning results to leave room for uncertainty to unfold. In research, by contrast, many approaches exist to dynamically and explicitly incorporate uncertainty (i.e., a multitude of futures) into supply chain planning. Yet despite vast research areas around these methodologies, practice still falls short of adopting the enhanced capabilities for handling uncertainty.

Therefore, we aspire to bridge this gap between research and practice together with the global software company SAP SE within real-world use cases of their FMCG customers. We aim to jointly pave the way towards a practically relevant resilient supply chain plan, one that considers and best prepares for multiple futures. An important aspect of the project is the effective communication (including visualization) of planning results to decision makers and other stakeholders. As a methodology, we chose stochastic programming to accommodate uncertain input and to balance the commitment to certain decisions against the flexibility of future decisions. Providing risk-based profit implications and representing relevant planning decisions as ranges of future decisions are just two of the goals we pursue within this project.
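To illustrate the core idea of stochastic programming in miniature: instead of planning against one single-value forecast, a first-stage commitment is evaluated against a set of demand scenarios and chosen to maximize expected profit. The sketch below uses hypothetical prices and demand scenarios (not project data) and solves the deterministic equivalent of a tiny two-stage problem by enumeration.

```python
# Illustrative two-stage stochastic program (all numbers hypothetical).
# Stage 1: commit to a production quantity before demand is known.
# Stage 2: sell up to realized demand, salvage any leftovers.

def expected_profit(q, scenarios, price=10.0, cost=6.0, salvage=2.0):
    """Expected profit of committing to quantity q across (demand, prob) scenarios."""
    total = 0.0
    for demand, prob in scenarios:
        sold = min(q, demand)
        leftover = q - sold
        total += prob * (price * sold + salvage * leftover - cost * q)
    return total

def best_commitment(scenarios, candidates):
    """First-stage decision maximizing the expectation over second-stage outcomes."""
    return max(candidates, key=lambda q: expected_profit(q, scenarios))

# Three demand futures instead of one single-value forecast:
scenarios = [(80, 0.25), (100, 0.5), (120, 0.25)]
q_star = best_commitment(scenarios, range(0, 151))  # q_star = 100 here
```

Real planning problems of this kind are solved with LP/MIP solvers rather than enumeration, but the structure is the same: one committed decision, many futures, and recourse once uncertainty unfolds.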

This research project is run by Thorsten Greil in collaboration with our industry partner.


Online Scheduling in Smart Factories

The Fourth Industrial Revolution, or Industry 4.0, describes the rapid change in technology, industries, and processes in the 21st century driven by increasing interconnectivity and smart automation. A central element of these advancements is the interconnection of machines, devices, sensors, and humans, who communicate via the Internet of Things (IoT). This interconnection enables cyber-physical systems to make decentralized decisions and perform their tasks as autonomously as possible. Combined with the design of Smart Factories following Industry 4.0 paradigms, these concepts enable more flexible production and thereby increase the operational efficiency of such factories.

Scheduling is one of the application areas that can profit from these advancements. Common practice in scheduling is to generate a rigid plan for a given horizon, e.g., a day. Such a plan assigns jobs to machines and fixes the sequence of jobs on each machine (and possibly the start time of each job). Production is then executed based on this plan. In reality, however, production is subject to many uncertainties, such as raw material shortages, worker shortages (e.g., due to sickness), varying operating times for tasks, cancellation of orders, the arrival of high-priority orders, or machine breakdowns. These disturbances render the schedule inefficient and force the plan to be revised. An alternative is to conduct the scheduling online, i.e., in real time, by utilizing the real-time data of the production system provided by the IoT. This enables us to (re)actively deal with uncertainties. Online scheduling calls for fast and reliable state-dependent online optimization that considers the current state and possible future outcomes.

We formulate online scheduling as a Markov Decision Process (MDP). The resulting sequential decision problem reaches a new decision epoch whenever a machine becomes idle. We suggest multiple methods based on (Deep) Reinforcement Learning (RL) and priority rules developed by Genetic Programming (GP) to train agents to solve the MDP, i.e., conduct online scheduling. Further, we explore the generalizability of the algorithms and intend to integrate our suggested online scheduling approaches with planning.
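As a minimal sketch of such state-dependent dispatching, the code below runs a fixed priority rule (shortest processing time, SPT) on hypothetical job data: whenever a machine becomes idle, a new decision epoch is reached and the waiting job with the smallest processing time is dispatched. The RL and GP agents studied in the project replace this hand-picked rule with learned, state-dependent ones.

```python
import heapq

def online_schedule_spt(jobs, n_machines):
    """Dispatch jobs online with the shortest-processing-time (SPT) rule.

    jobs: list of (job_id, proc_time). A decision epoch occurs whenever a
    machine becomes idle; the dispatcher then picks the waiting job with the
    smallest processing time. Returns the makespan and the resulting
    (machine, start_time, job_id) assignments.
    """
    waiting = sorted(jobs, key=lambda j: j[1])        # SPT priority order
    machines = [(0.0, m) for m in range(n_machines)]  # (time_when_idle, machine_id)
    heapq.heapify(machines)
    plan = []
    for job_id, proc in waiting:
        idle_at, m = heapq.heappop(machines)          # next decision epoch
        plan.append((m, idle_at, job_id))
        heapq.heappush(machines, (idle_at + proc, m))
    makespan = max(t for t, _ in machines)
    return makespan, plan

# Hypothetical instance: four jobs on two machines.
makespan, plan = online_schedule_spt([(0, 3), (1, 2), (2, 2), (3, 4)], 2)
```

Because decisions are made per epoch rather than for a whole horizon, disturbances such as newly arriving jobs simply enter the waiting pool and are picked up at the next epoch, without re-solving a monolithic plan.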

Jan-Niklas Dörr is working on the project in collaboration with SAP.


Ameliorating Inventory Management

Amelioration of food inventory during storage facilitates product differentiation according to age and consequently induces a trade-off between immediate revenues and further maturation. Typical examples include aged cheese, port wine, and spirits such as rum and whisky.

To balance the inventory levels in multiple age classes, decision makers need to integrate recurring purchasing, fulfillment, and issuance decisions. Purchasing/ordering decisions determine the additions to the youngest age class. Fulfillment decisions determine the inventory volume that is allocated to each individual product, e.g., the total amount of ten-year-old port wine and the total amount of twenty-year-old port wine that is placed on the market in a given period. Finally, issuance decisions determine how the stock volumes from different age classes are allocated to the individual products. Many ameliorating products offer flexibility in the issuance decisions. For instance, port wines and whiskies can be blended from younger and older stocks, and cheese products are often labeled as "matured for at least x months".
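A small sketch of this issuance flexibility, with hypothetical stock levels and orders: each order carries a minimum-age requirement, and a simple greedy policy serves it from the youngest eligible age class first, preserving older stock for higher-value products. (The opposite, oldest-first, policy is equally conceivable; which rule is better is exactly the kind of question the issuance decision raises.)

```python
def issue_from_age_classes(stock, orders):
    """Greedy issuance: serve each order from the youngest eligible age class.

    stock:  dict age_class (years) -> available volume (mutated in place)
    orders: list of (min_age, volume), e.g. ten-year-old port has min_age 10
    Returns the unmet volume per order, in input order.
    """
    unmet = []
    for min_age, volume in orders:
        remaining = volume
        for age in sorted(a for a in stock if a >= min_age):
            draw = min(stock[age], remaining)  # blend in stock of this age
            stock[age] -= draw
            remaining -= draw
            if remaining == 0:
                break
        unmet.append(remaining)
    return unmet

# Hypothetical example: the 60-unit "10-year" order is blended
# from the 10- and 12-year classes; the 20-year class is preserved.
stock = {10: 50, 12: 30, 20: 40}
shortfalls = issue_from_age_classes(stock, [(10, 60), (20, 30)])
```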

Several sources of uncertainty complicate the inventory management problem. Apart from demand uncertainty, fluctuating harvest yields in geographically restricted growing areas lead to stochastic purchasing prices. Further, the maturation progress is subject to decay risks.

We model the problem as an infinite-horizon Markov Decision Process. The curse of dimensionality (all problem dimensions increase exponentially in the number of age classes) renders optimal solutions to large-scale problems intractable. We provide a solution approach for large-scale problems which utilizes interpretable machine learning to derive generic decision rules from optimal solutions to aggregated problems. Adapting deep reinforcement learning algorithms to the specific problem structure represents another solution strategy.
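To make the "generic decision rules from optimal solutions" idea concrete, the sketch below fits the simplest possible interpretable model, a one-feature threshold rule (decision stump), to hypothetical (stock level, optimal order decision) pairs that stand in for optimal solutions of an aggregated problem. The actual project uses richer interpretable models; this is only the minimal version of the idea.

```python
def fit_threshold_rule(samples):
    """Fit a threshold rule to (stock_level, optimal_action) pairs.

    The learned rule reads: "order iff stock level is below the threshold".
    Returns (threshold, number of misclassified samples), i.e. the split
    that best reproduces the optimal decisions.
    """
    best = None
    for threshold in sorted({s for s, _ in samples}):
        # prediction (s < threshold) vs. recorded optimal action a (0/1)
        errors = sum((s < threshold) != bool(a) for s, a in samples)
        if best is None or errors < best[1]:
            best = (threshold, errors)
    return best

# Hypothetical training data: order (1) at low stock, do not order (0) at high.
rule = fit_threshold_rule([(1, 1), (2, 1), (3, 1), (4, 0), (5, 0)])
```

The appeal of such rules is that, unlike a neural policy, the resulting "order below threshold" logic is directly auditable by planners and transfers to larger, non-aggregated instances.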

Alexander Pahr is working on this project.
 


Production and Supply Chain Management of Biopharmaceuticals

Biopharmaceuticals are pharmaceutical drugs derived from biological sources and used for therapeutic or diagnostic purposes. They possess a particularly high efficacy and efficiency in treating complex health conditions such as cancer, inflammatory diseases, or metabolic disorders. Biopharmaceuticals are produced at large scale in a two-stage biomanufacturing process. In the upstream process, cultivated cells produce the active pharmaceutical ingredient in a non-linear process with random yield. In the downstream process, chromatography resins purify the target protein; these resins suffer from non-linear, random capacity decay. The two stages are often treated as independent entities operated with fixed process control strategies, which neglects both the trade-off between the two sets of decisions and the regulatory leeway to make condition-based decisions.

This project aims to support operational decision making in biopharmaceutical primary production. To this end, we formulate different stochastic optimization models that simultaneously determine upstream and downstream operating decisions. For example, one model is a discrete-time, infinite-horizon Markov decision process (MDP) that maximizes the long-term product yield per unit time by deciding on upstream bioreactor harvesting as well as downstream chromatography purification and resin exchange. We solve the MDP both with policy iteration (dynamic programming) and with reinforcement learning and compare the respective results. We also work with other stochastic optimization methodologies, such as two-stage stochastic programming, chance-constrained programming, and robust optimization.
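A minimal sketch of the policy-iteration approach on a toy stand-in for the resin-exchange decision (all states, rewards, and transition probabilities are hypothetical, not the project's model): states are discrete resin capacity levels, "run" earns throughput but risks capacity decay, and "exchange" pays a fixed cost to restore fresh resin.

```python
def policy_iteration(n_states, actions, reward, transition, gamma=0.95):
    """Generic policy iteration for a finite discounted MDP.

    reward(s, a) -> float; transition(s, a) -> list of (prob, next_state).
    Returns the converged policy and its value function.
    """
    policy = [actions[0]] * n_states
    while True:
        # Policy evaluation (iterative; near-exact for gamma < 1).
        V = [0.0] * n_states
        for _ in range(500):
            V = [reward(s, policy[s])
                 + gamma * sum(p * V[ns] for p, ns in transition(s, policy[s]))
                 for s in range(n_states)]
        # Policy improvement: greedy w.r.t. the evaluated values.
        improved = [max(actions,
                        key=lambda a: reward(s, a)
                        + gamma * sum(p * V[ns] for p, ns in transition(s, a)))
                    for s in range(n_states)]
        if improved == policy:
            return policy, V
        policy = improved

# Toy resin model (hypothetical numbers): capacity levels 0 (spent) to 3 (fresh).
def reward(s, a):
    return -5.0 if a == "exchange" else 2.0 * s   # exchange cost vs. throughput

def transition(s, a):
    if a == "exchange":
        return [(1.0, 3)]                          # fresh resin
    return [(0.5, max(s - 1, 0)), (0.5, s)]        # random capacity decay

policy, V = policy_iteration(4, ["run", "exchange"], reward, transition)
```

Even in this toy, the converged policy has the condition-based structure the project targets: keep running while capacity is high, exchange the resin once it is spent. A reinforcement learning agent trained on the same model should recover a comparable threshold policy without enumerating the state space.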

Mirko Schömig is working on this project.