Hidden Markov Models, Bayesian Statistics, and Quantum Computing together can estimate inventories


In organisations that produce things, like mines, factories, and refineries, a whole system exists dubbed inventory. The inventory is a graph of materials and objects moving through the system and changing form. For example, in a coal mine, dirt is dug out of pits in units called lots. A lot moves around the mine and gets merged, crushed, dried, and cleaned. It moves from the ground to trucks, process points, ports, railways, and so on, most likely changing shape and characteristics along the way. The same thing happens in manufacturing, chemicals, water procurement, and many other industries.

It is crucial for the business to know its inventory at each stage. Organisations usually have manual or automatic processes to measure the characteristics of every part of the inventory. For example, weightometers fitted all over the system generate data, and people may manually keep records of what happens on the ground. The data generated is almost always limited, of low quality, and not a direct reflection of reality. This is where computer science should come in: filling the gaps, improving data quality, and estimating inventories far better than the most skilled human can by staring at the inventories and Excel sheets. In this post I want to reflect on how three techniques, two from AI and one from quantum computing, can provide a set of tools to do exactly that.

Hidden Markov Models (HMMs) are used to extract state machines from temporal data. For example, if we track Joe's movements, the fact that Joe usually leaves the kitchen, goes to Mary's desk for five minutes, and comes back to his own desk is a Markov chain of states hidden in the data. I refer to this paper for a good tutorial on HMMs. In the inventory setting, HMMs can be used to extract the movements of materials and objects and the changes in their characteristics. For example, coal goes to process point A and becomes dry, the dry coal goes to the crusher and becomes grained, and then it goes to another process point and becomes pure by losing ash. Or, again, coal goes into a process point and ash comes out. Such state machines can be extracted by looking at the data.
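
To make this concrete, here is a minimal sketch of decoding the hidden state of a coal lot from noisy sensor readings, using the Viterbi algorithm (the standard decoding routine for HMMs). The states, sensor readings, and probability tables are all hypothetical numbers invented for illustration, not data from any real mine:

```python
import numpy as np

# Hypothetical hidden states of a coal lot and noisy sensor readings.
states = ["raw", "dried", "crushed", "cleaned"]
readings = ["wet_heavy", "dry_heavy", "dry_fine", "low_ash"]

# Transition matrix: P(next state | current state), e.g. raw -> dried.
A = np.array([
    [0.2, 0.8, 0.0, 0.0],   # raw
    [0.0, 0.3, 0.7, 0.0],   # dried
    [0.0, 0.0, 0.4, 0.6],   # crushed
    [0.0, 0.0, 0.0, 1.0],   # cleaned (absorbing)
])
# Emission matrix: P(sensor reading | hidden state).
B = np.array([
    [0.7, 0.2, 0.1, 0.0],
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.2, 0.6, 0.2],
    [0.0, 0.1, 0.2, 0.7],
])
pi = np.array([1.0, 0.0, 0.0, 0.0])   # every lot starts as raw dirt

def viterbi(obs):
    """Most likely sequence of hidden states for a series of readings."""
    delta = pi * B[:, obs[0]]           # best score ending in each state
    back = []                           # best predecessor per step
    for o in obs[1:]:
        trans = delta[:, None] * A      # score of every state -> state move
        back.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]
    for ptr in reversed(back):          # follow back-pointers to the start
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

obs = [readings.index(r) for r in ["wet_heavy", "dry_heavy", "dry_fine", "low_ash"]]
print(viterbi(obs))   # ['raw', 'dried', 'crushed', 'cleaned']
```

In practice the A and B tables would themselves be learned from the mine's historical data (e.g. with the Baum-Welch algorithm) rather than written down by hand.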

Bayesian statistics is simply the set of implications of Bayes' theorem: P(A|B) = P(B|A)P(A)/P(B). In computer science, a mixture of Bayesian statistics and HMMs is used to improve data quality in time series (read a good paper here). It is mainly done by forming correlation tables. For example, Joe always goes from Mary's office to his own, so the correlation of Joe's location between Mary's office and Joe's office is high; but he never goes from Mary's office to Jack's office, so that correlation is zero. Using this knowledge, if we are in doubt whether Joe is currently in his own office or in Jack's office, we can check where he was before. If he was in Mary's office beforehand, we can estimate that Joe is probably in his own office. In the literature this is called probabilistic easing. Probabilistic easing, however, does not produce an exact value for the location and characteristics of objects; instead it produces a set of probable locations with their probabilities. For example, in a coal mine we may estimate that lot X is out of the crusher with 20% probability and merged with lot Y with 80% probability. This dramatically extends our knowledge of the inventory. It is probabilistic, but that is fine, because life and reality are probabilistic.
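
The Joe example maps directly onto a Bayes update. Below is a small sketch of it; the transition table and the ambiguous sensor reading are made-up numbers for illustration:

```python
import numpy as np

offices = ["Joe", "Mary", "Jack"]

# Learned transition table: P(next office | previous office).
# Joe often goes from Mary's office back to his own, never to Jack's.
T = np.array([
    [0.6, 0.3, 0.1],   # from Joe's office
    [0.9, 0.1, 0.0],   # from Mary's office
    [0.5, 0.0, 0.5],   # from Jack's office
])

# An ambiguous sensor reading: P(reading | Joe is in office i).
# It cannot tell Joe's office and Jack's office apart.
likelihood = np.array([0.45, 0.10, 0.45])

# Bayes: P(office | reading, was at Mary's)
#        ∝ P(reading | office) * P(office | was at Mary's)
prior = T[offices.index("Mary")]
posterior = likelihood * prior
posterior /= posterior.sum()

for office, p in zip(offices, posterior):
    print(f"P(Joe in {office}'s office) = {p:.2f}")
# Joe's office dominates: the zero Mary -> Jack transition rules Jack out.
```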

Quantum mechanics may seem irrelevant to a coal mine. However, quantum mechanics is extremely beautiful, and I love finding uses for formulas and methods from quantum computing in unrelated fields. There is one specific method in quantum computing that might be relevant to our setting: it can become very useful for improving our estimates of the eased data generated in the previous steps.

The problem with eased data, of course, is that it is probabilistic (both a curse and a blessing). In reality, inventories do get measured occasionally, and these measurements carry valuable information that the easing approaches ignore. For example, the easing method reports a specific inventory of coal to be high in ash with 60% probability. Engineers on the ground sample some of the coal in this inventory and send it to a lab, and the result comes back that the coal has exactly 20% ash. The easing approach uses this fact merely to improve the easing from this point forward. Quantum algorithms, however, tell us that this fact contains much more information.

In quantum mechanics there is a state called entanglement. Entanglement is a very unintuitive phenomenon, but we know that it happens. It means that once two atoms are entangled, their electrons are in the same superposition, no matter how far apart the atoms are from each other. By the way, superposition is nothing new in this post: remember we said the easing method reports that Joe is in Jack's office with 10% probability and in his own office with 90% probability. That is called superposition in quantum mechanics; just replace Joe with your favourite electron. Now that we have figured out the similarity between the eased state of an inventory and the quantum state of matter, we can use the work in quantum computing to improve our knowledge of the state of the inventory.

In quantum computing, once entangled atoms pass through quantum gates, their state changes, and we can compute their superpositions, which is something similar to Bayesian easing. However, once we measure the state of one of the atoms, we break the entanglement, and the atom collapses to the state we measured. This is very similar to sending coal samples to the lab and learning the exact state of our coal: the eased probabilistic state collapses to the one value that came out of the lab. In quantum computing, though, we use this data to calculate new superpositions for all the other atoms. For example, if the measurement of the output of the first quantum gate comes out as 001, we know that the other gates cannot be in a state that is inconsistent with the first gate being in 001, and we can re-estimate the superpositions of the other atoms with some linear algebra. We can do exactly the same in inventory: apply the new knowledge from the lab to backtrack and re-estimate the eased state of all other bits of coal in the system, limiting the easing to states that are consistent with the lab result.
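
Here is a minimal sketch of that collapse step on a toy joint belief over two correlated coal lots; the joint probability table is invented for illustration. Conditioning on the lab result and renormalising is the linear-algebra step mentioned above:

```python
import numpy as np

# Hypothetical joint belief over two coal lots after Bayesian easing.
# Rows: lot X ash level; columns: lot Y ash level.
levels = ["low_ash", "high_ash"]
joint = np.array([
    [0.10, 0.05],   # X low:  (Y low, Y high)
    [0.15, 0.70],   # X high: (Y low, Y high)
])

# Marginal belief about lot Y before any measurement.
print("Y before:", dict(zip(levels, joint.sum(axis=0))))
# {'low_ash': 0.25, 'high_ash': 0.75}

# The lab result arrives: lot X is definitely low in ash.
# "Collapse" the belief: zero out every joint state inconsistent
# with the measurement, then renormalise what is left.
measured_x = levels.index("low_ash")
collapsed = np.zeros_like(joint)
collapsed[measured_x] = joint[measured_x]
collapsed /= collapsed.sum()

# Lot Y's belief shifts too, even though Y itself was never sampled.
print("Y after: ", dict(zip(levels, collapsed.sum(axis=0))))
# {'low_ash': 0.67, 'high_ash': 0.33}
```

The same clamp-and-renormalise step can be pushed backwards through the whole chain of lots (this is what the forward-backward recursions of an HMM do when you clamp one observation), so a single lab result sharpens the eased state of every correlated lot.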

This may sound a bit superficial and hard to implement, but remember that Bayesian easing based on HMMs is not my invention; people have been using it successfully. Adding the quantum flavour is not hard either, because it too has been implemented and used by many others in computer science. I don't see a feasibility problem here. There might be a marketing problem, but if you are interested in implementing the above and you are a research student, feel free to drop me an email.
