Monday 7 March 2016

Python script to walk on a Markov Chain

Suppose you have a simple system that has only four distinct states, and your observations show that the system changes state according to the following Markov Chain:

Sample Markov Chain to walk on

We want to write an application that simulates the system's state changes based on the above Markov Chain. Here you can also find an answer to a frequently asked question, "Why do so many people use Python in Artificial Intelligence?" You are going to see how easily we can do this job with just a basic knowledge of Python, while writing the same application in Java or C++ would be much more work.

If you are not familiar with terms like state vector, superposition probability vector, or transition matrix, or with the way we calculate state transitions, you may want to take a look at our previous posts, or just follow this post carefully. I'll describe each step we take to walk on the given chain.

We are going to use row vectors to represent the state of the system. Since our system can be in four different states, the following are the representations of each state:

S0 : [ 1 , 0 , 0 , 0 ]
S1 : [ 0 , 1 , 0 , 0 ]
S2 : [ 0 , 0 , 1 , 0 ]
S3 : [ 0 , 0 , 0 , 1 ]
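
If you prefer to build these one-hot vectors in code instead of typing them out, a tiny helper like the following does the job (make_state is just an illustrative name, not part of the original script):

def make_state(index, number_of_states=4):
    # One-hot row vector: 1.0 at the chosen state's index, 0.0 elsewhere.
    state = [0.] * number_of_states
    state[index] = 1.0
    return state

# make_state(1) gives [0., 1.0, 0., 0.], which is S1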

Now look at the above Markov Chain: if you are in state S0, your next state could be any of S0, S1, S2 and S3 with probabilities 0.5, 0.35, 0.15 and 0.0 respectively, which we define as a superposition-of-states vector like the following:

In the probability domain, the next state after S0 is: [0.5, 0.35, 0.15, 0.0]

Doing the same for the other states, we can define the transition matrix as a Python 2D list:

transition_matrix = [[0.5, 0.35, 0.15, 0.0],
                     [0.0, 0.0,  0.2,  0.8],
                     [0.0, 0.0,  0.3,  0.7],
                     [1.0, 0.0,  0.0,  0.0]]
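
Each row of this matrix holds the outgoing probabilities of one state, so every row must sum to 1. A quick sanity check you can add (not part of the original script):

for i, row in enumerate(transition_matrix):
    # The outgoing probabilities of state i must add up to 1.
    assert abs(sum(row) - 1.0) < 1e-9, 'row %d does not sum to 1' % i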

Now let's say the current state of the system is S1, so we can write:

state_vector = [0.,1.0,0.,0.]

To determine the next state you need to multiply the state vector by the transition matrix, so (after importing NumPy) we have:

import numpy

super_position_state = numpy.dot(state_vector, transition_matrix)
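
Since state_vector is a one-hot vector selecting S1, this product simply picks out row 1 of the transition matrix:

print super_position_state  # prints [ 0.   0.   0.2  0.8]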

The result of the above multiplication gives you the superposition of probabilities of being in any of the four states. What we need next is to roll a weighted four-sided die and pick one of the possible sides. Since this is not an algorithm or programming lesson, I'll just give the function that accepts the superposition state and randomly returns one of the states; see the function below:

import random

def get_next_state(super_position_state):
    # Scale the probabilities to integer weights (parts per 1000) so each
    # state index can be repeated proportionally in a candidate list.
    weight_precision = 1000
    weight_sum = sum(super_position_state)
    weighted_state = [e * weight_precision / weight_sum for e in super_position_state]

    # State i appears int(weighted_state[i]) times in state_list, so a
    # uniform random pick from it approximates the weighted distribution.
    state_list = []
    for i, item in enumerate(weighted_state):
        state_list += [i] * int(item)

    new_state = [0.] * len(super_position_state)
    new_state[random.choice(state_list)] = 1.0

    return new_state
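
As a side note, if your NumPy is version 1.7 or newer, numpy.random.choice can do the weighted draw directly; here is a minimal alternative sketch (get_next_state_np is an illustrative name, not part of the original script):

import numpy

def get_next_state_np(super_position_state):
    # Normalise the weights and let NumPy draw an index with those probabilities.
    p = numpy.asarray(super_position_state, dtype=float)
    p = p / p.sum()
    new_state = [0.] * len(super_position_state)
    new_state[numpy.random.choice(len(p), p=p)] = 1.0
    return new_state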

So if you pass the calculated super_position_state to the above function, it returns the next state:

state_vector = get_next_state(super_position_state)

That is the whole idea. You may also need to find out the name of the state from the position of the non-zero element of the state vector, and the following function is helpful for that:

def get_state(state_vector):
    # Find the index of the non-zero element and turn it into a name like 'S2'.
    index = [i for i, e in enumerate(state_vector) if e != 0]
    return 'S' + str(index[0])
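
For example, the state vector with the non-zero element at index 2 maps to the name 'S2':

print get_state([0., 0., 1.0, 0.])  # prints S2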

Now all we need is to put the process we described into a for loop to simulate the next states, like the one below:

for i in range(100):
    # One step of the walk: compute the superposition, then sample from it.
    super_position_state = numpy.dot(state_vector, transition_matrix)
    state_vector = get_next_state(super_position_state)
    print state_vector, ':', get_state(state_vector)
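
If you want a quick numerical summary of a longer run (a sketch, not part of the original script), you can count how often each state is visited; over many steps these frequencies approach the chain's long-run behaviour:

from collections import Counter

counts = Counter()
state_vector = [0., 1.0, 0., 0.]  # start again from S1
for i in range(2000):
    super_position_state = numpy.dot(state_vector, transition_matrix)
    state_vector = get_next_state(super_position_state)
    counts[get_state(state_vector)] += 1

for name in sorted(counts):
    print name, counts[name]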

You can download the whole Python script from this link: Download Python application to walk on Markov Chain

I ran a test of 2,000 state changes and used the "Node sequence graph" tool to draw the result; here is the chain the simulation built:

The result of a 2,000-step random walk on the given Markov Chain

Now you see how easily we can use basic Python skills and an understanding of Bayesian Networks / Markov Chains to simulate a random walk on a network.
