HMM structure

Imagine that we close our eyes and choose one of two biased coins that we have prepared. Assume the first coin gives heads with probability 80% and tails with probability 20%, and the second coin gives heads with probability 70% and tails with probability 30%. After choosing the first (second) coin, the next time we have a 30% (50%) chance of choosing the same coin again and a 70% (50%) chance of switching to the other coin. We can then use a Hidden Markov Model to describe this process of choosing a coin and tossing it to see what we get (heads or tails).

The model described above can be drawn as below:

[Figure: HMM with the two coins as hidden states]
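
To make the parameters concrete, here is a minimal sketch in Python (using numpy). The names A, B and pi for the transition, emission and initial distributions are my own choices, and the uniform initial distribution is an assumption since the text does not say which coin is picked first:

```python
import numpy as np

# Hidden states: 0 = first coin, 1 = second coin
# Observations:  0 = heads,      1 = tails

# Transition probabilities: row = current coin, column = next coin.
# From the first coin we stay with probability 0.3 and switch with 0.7;
# from the second coin we stay with 0.5 and switch with 0.5.
A = np.array([[0.3, 0.7],
              [0.5, 0.5]])

# Emission probabilities: row = coin, columns = (heads, tails)
B = np.array([[0.8, 0.2],   # first coin:  80% heads, 20% tails
              [0.7, 0.3]])  # second coin: 70% heads, 30% tails

# Initial state distribution (assumed uniform)
pi = np.array([0.5, 0.5])
```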

Example: Suppose we observe the sequence "HHHTTTTHHTTTTTTTHHHH" from the activity above. We can then use this model to predict the coin (first or second) used for each toss in the sequence. Since our eyes are closed, we cannot see which coin we chose for each toss (the states are hidden from us, which is why it is called a Hidden Markov Model).
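
One standard way to recover the most likely coin for each toss is the Viterbi algorithm. The sketch below continues from the parameter definitions above (it reuses numpy and A, B, pi); the viterbi helper and the printed 1/2 labels are illustrative choices rather than the only way to present the result:

```python
def viterbi(obs, A, B, pi):
    """Return the most likely hidden-state sequence for the observations."""
    n_states, T = A.shape[0], len(obs)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)  # logs avoid underflow

    delta = np.zeros((T, n_states))            # best log-probability ending in each state
    psi = np.zeros((T, n_states), dtype=int)   # back-pointers

    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + logA[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + logB[j, obs[t]]

    # Backtrack from the best final state
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

obs = [0 if c == "H" else 1 for c in "HHHTTTTHHTTTTTTTHHHH"]
states = viterbi(obs, A, B, pi)
print("".join("1" if s == 0 else "2" for s in states))  # coin used for each toss
```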

Example 2: In activity recognition, the activities can be modeled as the states, and the objects we touch while performing a specific activity can be modeled as the observations. Once we have collected enough data, we can use it to train the HMM and learn its parameters (transition probabilities, emission probabilities, etc.) iteratively, as sketched below.
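
As a rough sketch of that iterative training, the snippet below fits a two-state HMM to the head/tail sequence from the first example using the hmmlearn library, assuming a recent version where the discrete-observation model is called CategoricalHMM (older releases call it MultinomialHMM). With such a short sequence the estimates will be noisy; this only illustrates the workflow:

```python
import numpy as np
from hmmlearn import hmm

# Observations as one column of integer labels: 0 = heads, 1 = tails
obs = np.array([0 if c == "H" else 1 for c in "HHHTTTTHHTTTTTTTHHHH"]).reshape(-1, 1)

# Two hidden states (the two coins); fit() runs EM (Baum-Welch) iteratively
model = hmm.CategoricalHMM(n_components=2, n_iter=100, random_state=0)
model.fit(obs)

print("learned transition probabilities:\n", model.transmat_)
print("learned emission probabilities:\n", model.emissionprob_)
```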
