Wednesday, 27 January 2016

How to convert "Time Series" to "Bayesian Network"

I already talked about time series in the "Time series as a point in space" post and also discussed a way to find their anomalies in the "A simple way of calculating anomaly" post. But since in recent posts I've been talking about building a Bayesian Network to learn existing patterns in information, here I want to show you how easily we can model time series data with a Bayesian Network.

The process of converting time series data to a network is nothing other than the way we extract information from a series when we look at it and build its pattern in our mind.

Look at the following time series. When you look at it, you don't need to know what the series represents or what the units of the Y or T axes are; all that gets your attention is the ups and downs of the series through time.

Sample time series

So what you basically do in your mind is the following steps:

1- Building your imaginary Y and T axes with large units, like the ones you see in the picture.
2- Building a network in your mind which shows, at your imaginary time points, the possible features of the series (a rough sketch of this idea in code follows below).
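
Here is a minimal sketch of these two steps in code: quantizing the series onto a coarse imaginary Y axis, then turning level-to-level transitions into the edges of a small network. The function name, the bin count, and the use of numpy are all my own choices for illustration:

import numpy as np

def series_to_network(values, n_levels=4):
    """Step 1: map each sample onto a coarse imaginary Y axis (n_levels bins).
       Step 2: count level-to-level transitions and normalize them into
       probabilities -- the edges of our little network."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    levels = np.minimum(((values - lo) / (hi - lo) * n_levels).astype(int),
                        n_levels - 1)
    counts = np.zeros((n_levels, n_levels))
    for a, b in zip(levels[:-1], levels[1:]):
        counts[a, b] += 1
    # avoid dividing by zero for levels the series never visited
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums,
                      out=np.zeros_like(counts), where=row_sums > 0)
    return levels, probs

# toy usage: a noisy wave quantized into 4 coarse levels
t = np.linspace(0, 4 * np.pi, 200)
levels, probs = series_to_network(np.sin(t) + 0.1 * np.random.randn(200))
print(probs)   # row i: probabilities of moving from level i to each level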

Sunday, 24 January 2016

Using Bayesian Network to learn word sequence patterns

We want to see how the stuff we talked about before (in the following posts) can help us learn patterns. The example is about learning the way we use words in simple sentences: the sentences we usually use in IM and chat, or the way babies start building sentences. Here are the posts you may need to read first if you are not familiar with Bayesian Networks:

Using probability to predict state of a system
Introduction to Transition matrix and Superposition of states
More on Transition matrix and Superposition of states

How your cell phone learns your style of writing
You get a new cell phone and start using its messaging system. You enter sentences when you text your friends, and eventually you see the phone start suggesting words or guessing what you are going to type. The more you use the phone, the better its guesses get.
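
Here is a minimal sketch of how such learning could work: just bigram counting over the sentences you type. The class and method names are mine, and real keyboards are of course far more sophisticated:

from collections import defaultdict, Counter

class WordPredictor:
    """Counts which word follows which, like a simple phone keyboard might."""
    def __init__(self):
        self.follows = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.follows[current][nxt] += 1

    def suggest(self, word, k=3):
        # the k words that have most often followed `word` so far
        return [w for w, _ in self.follows[word.lower()].most_common(k)]

predictor = WordPredictor()
predictor.learn("i am going home")
predictor.learn("i am going to work")
print(predictor.suggest("going"))   # ['home', 'to'] after these two sentences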

Wednesday, 20 January 2016

More on Transition matrix and Superposition of states

The example in the previous post was about a very simple 2-state system; now let us consider a slightly more complicated system which has 3 states. The things we are going to talk about and the calculations we are going to do for this 3-state system can easily be extended to systems with more than 3 states too. Now look at the following network, which can represent any possible state-transition graph for a system with 3 states.


General state transition network for a 3-state system.

Here you see we have the option of going from any state to any state with some probability. To avoid confusion we denote the probabilities with 'a' instead of 'p', so for any node of the graph the sum of the outgoing arrows should be 1, like the following:
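
In LaTeX form, writing a_ij for the probability of going from state S_i to state S_j (my notation; the subscripts in the original figure may differ), the constraint reads:

\[ \sum_{j=1}^{3} a_{ij} = a_{i1} + a_{i2} + a_{i3} = 1 \qquad \text{for } i = 1, 2, 3 \]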

Sunday, 17 January 2016

Introduction to Transition matrix and Superposition of states

Consider the following simple state-transition network; it is just a simpler version of what we had in our previous post. In each state we have only two options: staying in the state or going out to the other state. So it is obvious that if we take the probability of going from S1 to S2 as a, the probability of staying in S1 is 1-a. The same goes for when you are in S2: you either go to S1 with probability b or stay there with probability 1-b.


Simple 2-state transition network

At any given time t, you are either in S1 or S2, so we consider S(t), the current state, to be the vector of the probabilities of being in S1 or S2 at time t, as below.
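
A plausible way to write this out (assuming the usual column-vector convention; the exact layout in the original figure may differ) is:

\[ S(t) = \begin{pmatrix} p_1(t) \\ p_2(t) \end{pmatrix}, \qquad p_1(t) + p_2(t) = 1 \]

One step of the process then multiplies this vector by the transition matrix built from a and b:

\[ S(t+1) = \begin{pmatrix} 1-a & b \\ a & 1-b \end{pmatrix} S(t) \]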

Thursday, 14 January 2016

Using probability to predict state of a system

Although we could discuss this topic from a mathematical point of view, specifically the Markov process model, it is not that difficult to talk about it in plain English either. What I want to show is that we can use what we've talked about during the last 2 or 3 posts to build a network of information that finds the most probable state of a system or process.

Suppose you have a system which at any given time could be in one of the following elements of the state set:

S = { S1, S2, S3, ... , Sn }

We start monitoring the system and record its state changes in every monitoring interval; after a while we will have something like the following:

Sample state transition network with transition scores

Here it shows that we have seen the transition from S1 to S1 ten times, the transition from S1 to S2 thirteen times, ..., the transition from S4 to S5 twenty-five times, and so on. We can easily convert these transition counts or scores to probability values. For example, when your current state is S1 you have only 3 options for the next state: staying in S1, going to S2, or going to S3. Each of these transitions has a probability which can be calculated as below:
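
In general form (writing N(S_i → S_j) for the recorded transition count; the notation is mine), each probability is the count of that transition divided by the total count of everything leaving the state:

\[ p(S_i \to S_j) = \frac{N(S_i \to S_j)}{\sum_{k} N(S_i \to S_k)} \]

For S1, for example, p(S_1 → S_2) is 13 divided by the sum of the three counts on the arrows leaving S1.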

Monday, 11 January 2016

The effect of repetition in learning

Our habits are strange: the more you do them, the more difficult it is for you to get out of them. In a sense, it seems like some simple probability rules are controlling our habits; as you do them more, the chance of doing them again increases, and in order to get rid of them you either have to stop repeating them or you have to do something else more.

Let's get back to our previous example, "route to work". Look at the picture: our subject usually uses route A or B to get to the office, and every time he takes one of these routes we increase a counter, NA or NB, which at any time shows the number of times the subject has taken that route.

Learning model when you have 2 options to choose from

So the probability of taking either of these routes at any time will be as follows:
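
In formula form, reconstructed from the counters defined above:

\[ P(A) = \frac{N_A}{N_A + N_B}, \qquad P(B) = \frac{N_B}{N_A + N_B} \]

Each repetition increments one of the counters, so the more a route is taken the higher its probability gets, which is exactly the habit effect described above.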

Sunday, 10 January 2016

Visualizing relations and probabilities

It is not wrong to say that human vision is one of the most advanced pattern recognition systems in existence. Just consider that our vision system starts to work from the moment we are born, even before that, collecting and processing data. Suppose your eyes take 10 samples per second from the environment while you are awake, say 16 hours a day; here is how many images have been processed by a 30-year-old human being:

10 (image/sec) * 3600 (sec/hour) * 16 (hour/day) * 365 (day/year) * 30 (year) = 6,307,200,000

Now consider that each of these captured images contains many shapes, objects, and colors; think how much we learn from them every second of our life! Now you can understand why it is so easy for a human being to recognize people and things, while it is still very hard work for a machine.

Friday, 8 January 2016

All is about probability

Have you ever thought about how a goalkeeper dives for the ball, or how a dog runs to catch something?! Mathematical modeling and solving some complex time-related differential equations? No way! It seems it is not that related to the mathematics or physics or dynamics we humans know, because most animals do many complex things in their daily lives exactly like we humans do: hunting, using shortcuts while moving, and so on.

A goalkeeper doesn't model the ball's movement with complicated mathematical formulas; he just uses his already practiced patterns.
If you ask me, I'll say it is all about probability. Let's consider a simple example: we are going to model the route you take to work every morning. We monitor you and see that in a month with 24 working days you have taken route A for 20 days and route B for only 4 days.

It is a simple pattern, and someone who knows it can easily find the best route to catch you on. This is in fact exactly what a goalkeeper, or even a dog, does when running for a ball or a frisbee; even the most basic daily patterns we or animals use are more complicated than our simple route selection. For now we just know that with a chance of 20/24 you'll choose route A and with 4/24 route B. Now let's consider that while we are monitoring you on those days we also collect another piece of data, like whether it is raining or not. The collected data will then give us better prediction information.
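
As a minimal sketch of this idea in code: only the 20/4 totals come from the example above, and how those days divide between rainy and dry is an assumption I made up for illustration:

# route observations over the 24 working days, as (route, was_raining) pairs;
# the 20/4 split between A and B matches the example, but the rainy/dry
# breakdown is made up for illustration
observations = ([("A", False)] * 16 + [("A", True)] * 4 +
                [("B", False)] * 1 + [("B", True)] * 3)

def prob(route, raining=None):
    """P(route), optionally conditioned on whether it was raining."""
    days = [r for r, rain in observations if raining is None or rain == raining]
    return days.count(route) / len(days)

print(prob("A"))                 # 20/24 ~ 0.83: the plain pattern
print(prob("A", raining=True))   # 4/7 ~ 0.57: rain changes the prediction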

Restarting

I haven't written even a word in months. You start writing about stuff you like with love and excitement, and you know you have to sacrifice something for it; at least, you have to take time from something and give it to this. But after a while you find out it is time to take time from the writing you started with love and give it to something else, to be able to survive in life ... this is how things work in this world. Anyways, I'm gonna write about human intelligence, the way humans think, stuff about pattern recognition or anomaly detection, perhaps some theories, I mean Markov and/or Bayesian models ...