
Data analysis with algorithms for process optimization


With our approaches to model-based data analysis, we deliver more detailed and reliable statements on two strategic problem areas:

 

  • the load on the processes due to the technical structure of the products and equipment, and

  • the identification of capacity reserves in "brown field" factories

The technical and organizational complexity of a factory with a multitude of products, materials and work processes is not well suited to "deep learning" algorithms as the pinnacle of Artificial Intelligence, whose users admit that they do not actually know how their system arrives at its results. We therefore take "one step back" and reflect the experience of experts in our models. On the one hand, this leads to targeted queries for data; on the other hand, the generation of the results is transparent, traceable and verifiable.
 

Data analysis to determine the load on the processes due to the technical structure of the products and operating resources (technology- and KPI-based cost accounting)

As is well known, value-based surcharge rates for overheads do not reflect the effort that the various materials cause in the value-added process, e.g. in purchasing, goods receipt and internal transport.
Neither full cost accounting, nor contribution margin accounting, nor process cost accounting takes into account the different time consumption that depends on the complexity and diversity of products and materials and that differs greatly between standard parts and customer-specific parts.
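
To make the mismatch concrete, here is a minimal numerical sketch in Python; the parts, values, effort figures and the 150% surcharge rate are purely hypothetical:

materials = {
    # name: (material value in EUR, actual handling effort in minutes)
    "standard part": (2.00, 5),             # cheap, bought in bulk, little effort
    "customer-specific part": (80.00, 45),  # 40x the value, but only 9x the effort
}

overhead_rate = 1.50  # assumed 150% value-based overhead surcharge

for name, (value, effort_min) in materials.items():
    surcharge = value * overhead_rate  # overhead allocated purely by value
    print(f"{name:24s} value {value:6.2f} EUR -> surcharge {surcharge:6.2f} EUR, "
          f"actual effort {effort_min} min")

The customer-specific part receives 40 times the overhead of the standard part although, in this invented example, it causes only about nine times the handling effort.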

 


We have therefore designed and applied two new approaches to data analysis (a minimal sketch follows the list):

  • a "top down" approach based on the technical structure of products and materials, supplemented by process frequencies that occur automatically in the ERP system,

  • a "bottom up" approach, which starts with the factors influencing information processes during processing by the employees, also supplemented by process frequencies.


Here are the results of an analysis in a division of a large company. The previous value-based overhead surcharge was normalized to 100% for each individual product group and compared with the results of a "pure" process costing and of our two approaches.
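
The normalization itself is simple indexing, as the following Python sketch shows; the EUR figures per approach are invented, only the logic matters:

# overhead allocated to one product group under each approach (assumed figures)
surcharges = {
    "value-based (previous)": 1000.0,
    "pure process costing":    620.0,
    "top down approach":       710.0,
    "bottom up approach":      740.0,
}

base = surcharges["value-based (previous)"]  # set the previous surcharge to 100%
for approach, eur in surcharges.items():
    print(f"{approach:24s} {100.0 * eur / base:6.1f} %")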


Overall, the greatest deviations in the results lie between traditional full cost accounting and “pure” process costing. 
With our two different analytical approaches, we can better record and identify the load that the various materials and products place on the processes.

As a side effect of the analysis, we can also make statements about capacity utilization. The basis is our pool of experience in extracting process performance indicators from SAP, combined with standard time values from the SAP laboratory.
In this example, gaps in capacity utilization amounting to between 20% and 40% of the capacity in the examined areas could be made transparent.
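
The utilization statement behind such a figure can be pictured as follows; both numbers in this Python sketch are assumptions:

# annual process load derived from SAP process KPIs and standard time values (assumed)
required_hours = 9_600.0
# annual capacity of the examined area (assumed)
available_hours = 14_400.0

utilization = required_hours / available_hours
gap = 1.0 - utilization
print(f"utilization {utilization:.0%}, gap {gap:.0%}")  # here: utilization 67%, gap 33%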

 

Top-down analysis to reveal capacity reserves in "brown field" factories

Successful companies often suffer from capacity bottlenecks in their factories that cannot be quickly eliminated with a new greenfield plant. In our experience, however, most existing plants hold a sufficiently large, but more or less hidden, potential for increasing productivity. Management senses this potential, but reliable data on the actually achievable higher capacity utilization is lacking.

 

This is where our top-down analysis helps: it first uses the monetary values of the stocks of semi-finished and finished products to determine the average throughput time in the warehouse and in production.
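
The text does not spell out the formula, but a common way to derive such times from monetary stock values is an inventory-turnover calculation, sketched here in Python with invented figures:

stock_value_wip      = 10_000_000.0  # EUR, semi-finished products (assumed)
stock_value_finished =  2_000_000.0  # EUR, finished products (assumed)
annual_cost_of_goods = 24_000_000.0  # EUR flowing through the factory per year (assumed)

working_days = 240
daily_flow = annual_cost_of_goods / working_days  # EUR consumed per working day

# average dwell time = stock value / daily flow (inventory turnover logic)
print(f"production: {stock_value_wip / daily_flow:.0f} days, "
      f"warehouse: {stock_value_finished / daily_flow:.0f} days")

With these assumed figures, the average throughput time in production would be 100 days and the dwell time in the finished goods warehouse 20 days.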
 


Long idle times slow down the output

With deliberately generous time assumptions for the physical processing and transport times (here 7 days), an average idle time can be calculated, here for production, which at 93% is very high.
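
As a worked example, with the 7-day allowance above and an assumed average throughput time of 100 days in production:

throughput_days = 100.0  # average throughput time in production (assumed)
processing_days = 7.0    # generous allowance for processing and transport

idle_share = 1.0 - processing_days / throughput_days
print(f"idle share: {idle_share:.0%}")  # -> 93%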


The direct connection between a long throughput time with a high proportion of idle or waiting times and poor capacity utilization is not always obvious from observing the activities in the factory: you rarely see anyone actually "waiting".
A logical analysis creates transparency here, as shown in the following picture:

[Figure: ProzessNeu1]

Halving the throughput time enables the flow of orders through the factory to be doubled.
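
This caption is an instance of Little's Law (order flow = work in process / throughput time): at a constant amount of work in process, halving the throughput time doubles the order flow. A small Python illustration with assumed numbers:

wip_orders = 400         # orders in the factory at any given time (assumed)
throughput_days = 100.0  # current average throughput time (assumed)

for t in (throughput_days, throughput_days / 2):
    # Little's Law: order flow rises as throughput time falls, WIP held constant
    print(f"throughput time {t:5.1f} days -> {wip_orders / t:4.1f} orders/day")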

Complexity analysis for capacity increase

Years of analyses and recordings have made it possible to create a complexity index that maps the causes of idle times within the throughput time.

[Figure: ProzessNeu2]
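
The actual index, its influencing factors and its weights are not disclosed here; as a rough illustration of the form such an index can take, consider this weighted-sum sketch in Python with invented factors and weights:

# normalized 0..1 scores for one production area (all values assumed)
factors = {
    "product variants":     0.8,
    "lot size variability": 0.6,
    "routing depth":        0.7,
    "material diversity":   0.5,
}
weights = {"product variants": 0.3, "lot size variability": 0.2,
           "routing depth": 0.3, "material diversity": 0.2}

complexity_index = sum(score * weights[f] for f, score in factors.items())
print(f"complexity index: {complexity_index:.2f}")  # higher -> more idle time expected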

With just a few data points about the situation in production, a first approximation of the causes of the idle-time shares can be made.

[Figure: ProzessNeu3]

In an inverse procedure, our algorithms determine the effects of possible measures on reducing the proportion of idle time in the throughput time.

[Figure: ProzessNeu4]
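
The inverse step can be pictured as follows; the measures and their effect sizes in this Python sketch are invented for illustration:

throughput_days = 100.0  # current average throughput time (assumed)
processing_days = 7.0    # physical processing and transport share

# estimated relative reduction of the idle portion per measure (all assumed)
measures = {
    "smaller transfer lots":      0.20,
    "pull control at bottleneck": 0.15,
    "levelled order release":     0.10,
}

idle_days = throughput_days - processing_days
for name, reduction in measures.items():
    idle_days *= (1.0 - reduction)  # apply the measures cumulatively

new_throughput = processing_days + idle_days
print(f"new throughput time: {new_throughput:.0f} days "
      f"(idle share {idle_days / new_throughput:.0%})")

With these assumed effects, the throughput time falls from 100 to roughly 64 days, which at constant work in process would raise the order flow by about 56%, of the same order as the 35% to 77% reported below.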

As a result, the construction of additional factories or outsourcing could be dispensed with, since the targeted restructuring in the "brown field" made output increases of between 35% and 77% achievable.

Do you need more information?
