Markov Chain Introduction in R

The Markov chain can be used to model even complex real-world events that evolve over time. Markov chains are used in some form by speech recognition, text identification, path recognition, and many other Artificial Intelligence systems.

In this tutorial, we’ll show how simple it is to grasp this notion and how to put it into practice in R.


The Markov chain is built on the idea of “memorylessness”: the future state of the process depends only on the current state, not on the sequence of states that came before it.

This basic assumption simplifies conditional probability computation and allows this approach to be used in a variety of contexts.
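To make the idea concrete, here is a minimal base-R sketch (using made-up transition probabilities that happen to match the example below) that simulates a two-state chain and checks that the distribution of the next state does not change when we also condition on the state before the current one:

# minimal sketch of the Markov property for an assumed two-state chain
set.seed(1)
P <- matrix(c(0.8, 0.2,
              0.1, 0.9), nrow = 2, byrow = TRUE)  # rows: current state, columns: next state
n <- 100000
x <- numeric(n); x[1] <- 1
for (t in 1:(n - 1)) x[t + 1] <- sample(1:2, 1, prob = P[x[t], ])

# P(next = 1 | current = 1), with and without also conditioning on the previous state
mean(x[3:n][x[2:(n - 1)] == 1] == 1)
mean(x[3:n][x[2:(n - 1)] == 1 & x[1:(n - 2)] == 1] == 1)
# both estimates are close to 0.8: history beyond the current state adds no information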

In real-world problems we often employ the hidden (latent) Markov model, a more advanced variant of the Markov chain.

Example

In country A, there are just two water businesses: Water1 and Water2. An investor is interested in forming a partnership with one of these firms.

They employ a market research firm to determine which brand will have the greater market share after one month.

Water1 currently controls 60% of the market, while Water2 controls 40%.


The market research firm’s findings are as follows:

Over a month, the probability of a customer sticking with the brand Water1 is 0.8.

Over a month, the probability of a customer switching from Water1 to Water2 is 0.2.

Over a month, the probability of a customer sticking with the brand Water2 is 0.9.

Over a month, the probability of a customer switching from Water2 to Water1 is 0.1.

We can see that customers are more loyal to Water2 (a 0.9 retention probability versus 0.8 for Water1), even though Water2 has the lower market share right now.

As a result, we can’t be certain about the recommendation unless we run some transition calculations.


Step 1: Making a transition matrix and using a discrete-time Markov chain

#install.packages("markovchain")
#install.packages("diagram")
library(markovchain)
library(diagram)

Creating a transition matrix

transmat <- matrix(c(0.8, 0.2,
                     0.1, 0.9), nrow = 2, byrow = TRUE)  # rows: current brand, columns: next brand
transmat
     [,1] [,2]
[1,]  0.8  0.2
[2,]  0.1  0.9

Create the Discrete-Time Markov Chain

disctrans <- new("markovchain",transitionMatrix=transmat, states=c("water1","water2"), name="MC 1")
disctrans
MC 1
 A  2 - dimensional discrete Markov Chain defined by the following states:
 Water1, Water2
 The transition matrix  (by rows)  is defined as follows:

       Water1 Water2
Water1    0.8    0.2
Water2    0.1    0.9
plot(disctrans)
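Because the diagram package is already loaded, we can also sketch the transition diagram directly from the matrix. This is an optional illustration; the pos, box and self.cex settings below are assumed cosmetic choices, and plotmat labels an arrow from column j to row i, which is why the transpose is passed:

plotmat(t(transmat),                 # arrows run from column (current) to row (next)
        pos = c(1, 1),
        name = c("Water1", "Water2"),
        box.type = "circle",
        box.size = 0.08,
        self.cex = 0.6,
        main = "Transition diagram")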

Step 2: Calculating the market share after one month and two months

Market Share after one month

Current_state <- c(0.60, 0.40)
steps <- 1
finalState <- Current_state * disctrans ^ steps  # * and ^ are overloaded for markovchain objects
finalState
     Water1 Water2
[1,]   0.52   0.48
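As a quick cross-check, the same one-month figure can be obtained in base R by multiplying the current share vector with the transition matrix:

c(0.60, 0.40) %*% transmat   # base-R check of the one-month share
     [,1] [,2]
[1,] 0.52 0.48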

Market Share after two months


Current_state <- c(0.60, 0.40)
steps <- 2
finalState <- Current_state * disctrans ^ steps  # using the power operator again
finalState
  Water1 Water2
[1,]  0.464  0.536
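To see how the shares keep drifting month by month, a short base-R loop (a sketch, reusing the same transmat) shows them heading toward the long-run split computed in Step 3:

share <- c(0.60, 0.40)
for (m in 1:6) {
  share <- as.numeric(share %*% transmat)   # advance one more month
  cat(sprintf("month %d: Water1 = %.3f, Water2 = %.3f\n", m, share[1], share[2]))
}
# the shares move steadily toward roughly 1/3 for Water1 and 2/3 for Water2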

Step 3: Calculating the steady-state matrix

Steady-state Matrix

steadyStates(disctrans)
     Water1    Water2
[1,] 0.3333333 0.6666667
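As a sanity check, the same steady state can be derived in base R from the left eigenvector of transmat for eigenvalue 1, rescaled to sum to one:

ev <- eigen(t(transmat))                                  # left eigenvectors of transmat
steady <- Re(ev$vectors[, 1]) / sum(Re(ev$vectors[, 1]))  # normalise to a probability vector
steady
[1] 0.3333333 0.6666667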

Conclusion

Even though its current market share is lower, going with Water2 is the better choice. In this article, we introduced Markov chain concepts and terminology and showed how to implement them in R.


The purpose of this article was to show how basic Markov chain principles can be used to address a business problem.
