Identifying Promoter Regions Using the Viterbi Algorithm

Aditya Mittal
8 min read · Nov 15, 2021

Because DNA is so versatile, it is important to determine what particular parts of a gene do. Looking at transcription by RNA polymerase, genes can be broken into three major regions:

  • Promoter regions: regions upstream of (or adjacent to) the part of a gene that will be transcribed, where RNA polymerase binds
  • Coding regions: the portion of the DNA or RNA that codes for proteins
  • Termination regions: regions downstream that signal RNA polymerase to detach and end transcription

Promoter regions are a vital part of gene expression because they control where RNA polymerase binds to DNA. Identifying them is therefore important in genetic engineering and gene analysis, for example when determining which gene codes for which protein and how those proteins might be altered.

Hidden Markov Models

Example of a Hidden Markov Model.

Let’s start off with a basic Hidden Markov Model (HMM). An HMM is a Markov process whose states are hidden and must be inferred from observations. We define the following terms:

  • Observations/emissions: the values that are actually observed and fed to the model. For this particular model, these would be “Walk”, “Shop”, and “Clean”.
  • States: These are the “hidden” values that we are trying to infer from the observations using probability analysis. For this particular model, these would be “Rainy” and “Sunny”.

Each arrow represents a probability. For instance, the arrow from Rainy to Sunny is labeled 0.6, meaning there is a 60% chance that from a Rainy state the model will move to a Sunny state in the next step. There are three kinds of probabilities (a short sketch after this list shows how they combine):

  • Initial probabilities: These are the probabilities that the states will take place in the first step. One such example is that there is a 30% chance of starting at the Rainy state. In other words, P(Rainy) = 0.3.
  • Transition probabilities: the probabilities of moving from one state to another between consecutive steps. One example is the Rainy-to-Sunny transition mentioned above: writing s_k for the state at step k, P(s_{k+1} = “Sunny” | s_k = “Rainy”) = 0.6.
  • Emission probabilities: These are the probabilities that a state will cause this particular emission. For instance, P(Shop | Rainy) = 0.4.
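Putting these together, the joint probability of a particular state path and observation sequence is the initial probability of the first state, times one emission term per step, times one transition term per step. Below is a minimal sketch of that computation; the dictionary layout (tuple keys) mirrors the code in the next section:

def joint_probability(path, observations, initial_prob, transition_prob, emission_prob):
    # P(path, observations) = P(s1) * P(o1 | s1) * product over t of P(s_t | s_{t-1}) * P(o_t | s_t)
    p = initial_prob[path[0]] * emission_prob[(path[0], observations[0])]
    for prev, cur, obs in zip(path, path[1:], observations[1:]):
        p *= transition_prob[(prev, cur)] * emission_prob[(cur, obs)]
    return p

For example, joint_probability(['Sunny', 'Sunny', 'Rainy'], ['W', 'S', 'C'], ...) works out to 0.7 × 0.6 × 0.8 × 0.3 × 0.2 × 0.5 ≈ 0.01 with the probabilities defined below.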

Converting to Code

Bio.HMM.MarkovModel provides a useful tool for initializing these Hidden Markov Models for analysis.

from Bio.HMM.MarkovModel import HiddenMarkovModel
from Bio.Seq import Seq  # Seq is used below when calling viterbi

transition_alphabet = ['Rainy', 'Sunny']
emission_alphabet = ['W', 'S', 'C']  # W - walk, S - shop, C - clean
initial_prob = {'Rainy': 0.3, 'Sunny': 0.7}
transition_prob = {('Rainy', 'Rainy'): 0.4,
                   ('Rainy', 'Sunny'): 0.6,
                   ('Sunny', 'Rainy'): 0.2,
                   ('Sunny', 'Sunny'): 0.8}
emission_prob = {('Rainy', 'W'): 0.1,
                 ('Rainy', 'S'): 0.4,
                 ('Rainy', 'C'): 0.5,
                 ('Sunny', 'W'): 0.6,
                 ('Sunny', 'S'): 0.3,
                 ('Sunny', 'C'): 0.1}
model = HiddenMarkovModel(transition_alphabet, emission_alphabet, initial_prob,
                          transition_prob, emission_prob, 1, 1)

Viterbi Algorithm

The Viterbi algorithm lets us infer the most likely sequence of hidden states from a given sequence of observations. A more detailed walkthrough of Viterbi is given below, but using Bio.HMM we can run it with one line of code.

model.viterbi(Seq('WSC'), transition_alphabet)

This provides an output of:

(Seq('SunnySunnyRainy'), -4.5972020163389145)

Given the observations “Walk”, “Shop”, and “Clean”, the most likely hidden states are “Sunny”, “Sunny”, and “Rainy” respectively for each of those steps. The second value is the natural logarithm of that path’s probability, so the probability itself is e^(-4.597) ≈ 0.01. Small probabilities like this are expected: every step multiplies together several numbers less than one, so the joint probability shrinks quickly as the sequence gets longer.
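As a quick sanity check, this value can be reproduced by hand from the probabilities defined above:

import math

# P(Sunny) * P(W | Sunny) * P(Sunny -> Sunny) * P(S | Sunny) * P(Sunny -> Rainy) * P(C | Rainy)
p = 0.7 * 0.6 * 0.8 * 0.3 * 0.2 * 0.5
print(p)            # ~0.01008
print(math.log(p))  # ~-4.5972, matching the value returned by viterbi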

Identifying Promoter Regions

Let’s take a more complicated example where the two hidden states are background and promoter regions, and the emissions are the four nucleotides (A, T, C, G).

Example of a Hidden Markov Model in biology.

Note that no initial probabilities are given here, so the initial probability of starting in some state s is taken to be P(s) = 1/n for n states; with two states, that is 0.5 each.

from Bio.HMM.MarkovModel import HiddenMarkovModel
from Bio.Seq import Seq

transition_alphabet = ['B', 'P']  # B - background, P - promoter
emission_alphabet = ['A', 'T', 'C', 'G']
initial_prob = {'B': 0.5, 'P': 0.5}
transition_prob = {('B', 'B'): 0.85,
                   ('B', 'P'): 0.15,
                   ('P', 'P'): 0.75,
                   ('P', 'B'): 0.25}
emission_prob = {('B', 'A'): 0.25,
                 ('B', 'T'): 0.25,
                 ('B', 'C'): 0.25,
                 ('B', 'G'): 0.25,
                 ('P', 'A'): 0.15,
                 ('P', 'T'): 0.13,
                 ('P', 'C'): 0.42,
                 ('P', 'G'): 0.3}
model = HiddenMarkovModel(transition_alphabet, emission_alphabet, initial_prob,
                          transition_prob, emission_prob, 1, 1)

Similar to above, we can use Viterbi:

model.viterbi(Seq('AGTACACTGGT'), transition_alphabet)
model.viterbi(Seq('GCGCGCGCAA'), transition_alphabet)

These provide respective outputs of:

(Seq('BBBBBBBBBBB'), -17.567574447856494)
(Seq('PPPPPPPPBB'), -15.314217188702496)

As we can see in the first example, the most likely state path for the sequence ‘AGTACACTGGT’ is all background. This makes sense because, looking at the emission probabilities, the promoter state is unlikely to emit A’s or T’s (emission probabilities of 0.15 and 0.13, respectively).

In the second example, however, the high number of C’s and G’s means most of ‘GCGCGCGCAA’ is decoded as promoter, with only the trailing A’s assigned to the background state. This makes sense, as CpG islands (CG-rich stretches) are common locations for gene promoter regions.
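To make a decoded path easier to interpret, it can help to convert the state string into promoter intervals. The helper below is a small illustrative sketch (it is not part of Bio.HMM):

def promoter_intervals(state_path):
    # Return (start, end) pairs of consecutive 'P' states, 0-based and end-exclusive.
    intervals, start = [], None
    for i, state in enumerate(state_path):
        if state == 'P' and start is None:
            start = i
        elif state != 'P' and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(state_path)))
    return intervals

print(promoter_intervals('PPPPPPPPBB'))  # [(0, 8)] -> the first 8 bases are decoded as promoter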

HMMs with “Memory”

Lastly, we can improve the analysis by considering dinucleotides: what if the model could also take into account which nucleotide came immediately before the current one? Since neighboring nucleotides in real DNA are not independent, this kind of memory should allow stronger predictions.

This gives a model with 8 states: A+, A-, C+, C-, G+, G-, T+, T-. The + suffix marks promoter states and the - suffix marks background states, while the letter is the nucleotide emitted in that state. Because each transition probability depends on the current state, the model effectively remembers which nucleotide came last.

Transition probabilities were taken from Durbin, Eddy, et al. (1998), while emission probabilities are trivial: if the letter in the state matches the emitted letter, the probability is 1, and otherwise it is 0. Initial probabilities were set uniformly to 1/n, so every state starts with probability 0.125.

The transition probabilities were initialized as follows:

# Dinucleotide frequencies within promoter (+) regions, ordered AA, AC, AG, AT, CA, ..., TT
array_promoter = [0.18, 0.274, 0.426, 0.12, 0.171, 0.368, 0.274, 0.188, 0.161, 0.339, 0.375, 0.125, 0.079, 0.355, 0.384, 0.182]
vals_pp = [0.75 * val for val in array_promoter]  # promoter -> promoter
vals_pb = [0.25 * val for val in array_promoter]  # promoter -> background

# Dinucleotide frequencies within background (-) regions, same ordering
array_background = [0.3, 0.205, 0.285, 0.21, 0.322, 0.298, 0.078, 0.302, 0.248, 0.246, 0.298, 0.208, 0.177, 0.239, 0.292, 0.292]
vals_bp = [0.15 * val for val in array_background]  # background -> promoter
vals_bb = [0.85 * val for val in array_background]  # background -> background

The first array gives dinucleotide probabilities when the current state is a promoter, in the order AA, AC, AG, AT, and so on through TT. Since moving from a promoter state to another promoter state has probability 0.75 and moving to a background state has probability 0.25, the array is scaled by those factors to obtain the actual transition probabilities. The same is done for the background array, using 0.85 for background-to-background and 0.15 for background-to-promoter transitions. A sketch of how the full 8-state model can then be assembled is shown below.
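The remaining pieces (the eight states, the uniform initial probabilities, the deterministic emissions, and the transition dictionary built from the scaled arrays) can be put together roughly as follows. This is an illustrative sketch rather than the exact code from the repository linked at the end of the post:

from Bio.HMM.MarkovModel import HiddenMarkovModel
from Bio.Seq import Seq

nucleotides = ['A', 'C', 'G', 'T']
transition_alphabet = [n + sign for n in nucleotides for sign in ['+', '-']]  # A+, A-, ..., T-
emission_alphabet = nucleotides

# Uniform start: 1/8 = 0.125 for each of the eight states.
initial_prob = {state: 0.125 for state in transition_alphabet}

# The arrays are ordered AA, AC, ..., TT, so entry 4*i + j is the probability of
# nucleotide j following nucleotide i.
transition_prob = {}
for i, a in enumerate(nucleotides):
    for j, b in enumerate(nucleotides):
        k = 4 * i + j
        transition_prob[(a + '+', b + '+')] = vals_pp[k]
        transition_prob[(a + '+', b + '-')] = vals_pb[k]
        transition_prob[(a + '-', b + '+')] = vals_bp[k]
        transition_prob[(a + '-', b + '-')] = vals_bb[k]

# "Trivial" emissions: each state emits its own nucleotide with probability 1.
# (Depending on the Biopython version, the zeros may need to be replaced by a tiny
# value such as 1e-10 to avoid taking the log of 0.)
emission_prob = {(state, n): (1.0 if state[0] == n else 0.0)
                 for state in transition_alphabet
                 for n in nucleotides}

model = HiddenMarkovModel(transition_alphabet, emission_alphabet, initial_prob,
                          transition_prob, emission_prob, 1, 1)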

After initializing, we can use Viterbi:

model.viterbi(Seq('AGTACACTGGT'), transition_alphabet)
model.viterbi(Seq('GCGCGCGCAA'), transition_alphabet)

We get the following results:

(Seq('A-G-T-A-C-A-C-T-G-G-T-'), -17.77362274426406)
(Seq('G+C+G+C+G+C+G+C-A-A-'), -16.064944938458574)

Notice that the results are largely unchanged, with the second example now assigning three positions to background states instead of two. These small differences suggest that adding memory refines the decoding only slightly here, since the simpler mononucleotide HMM already handles these sequences well.

Implementing the Viterbi Algorithm from Scratch

The Viterbi algorithm was also coded from scratch to compare its results against Bio.HMM’s implementation.

Note that prior knowledge of the Viterbi algorithm is likely needed to understand the next section. For help, go here.

def viterbi_algorithm(observations, states, start_p, trans_p, emit_p, table):
    V = [{}]

    for st in states:
        V[0][st] = {"prob": start_p[st] * emit_p[st][observations[0]], "prev": None}
We start by defining V, the dynamic-programming table: a list containing one dictionary per position of the sequence. We then iterate over all states and fill in the first position, setting each state’s probability to its initial probability times the probability of emitting the first observation, and setting its previous-state pointer to None (this is the first step, so there is nothing to point back to).

    for t in range(1, len(observations)):
        V.append({})
        for st in states:
            max_tr_prob = V[t - 1][states[0]]["prob"] * trans_p[states[0]][st]
            prev_st_selected = states[0]
            for prev_st in states[1:]:
                tr_prob = V[t - 1][prev_st]["prob"] * trans_p[prev_st][st]
                if tr_prob > max_tr_prob:
                    max_tr_prob = tr_prob
                    prev_st_selected = prev_st

            max_prob = max_tr_prob * emit_p[st][observations[t]]
            V[t][st] = {"prob": max_prob, "prev": prev_st_selected}

This fills in the rest of the table and sets the back-pointers. For each position t and each state st, the program finds the previous state prev_st that maximizes V[t - 1][prev_st]["prob"] * trans_p[prev_st][st]. It then multiplies that maximum by the emission probability emit_p[st][observations[t]] and stores the result in V[t][st], along with a pointer to the chosen previous state. This is repeated for every state at every position of the sequence.

    opt = []
    max_prob = 0.0
    best_st = None

    for st, data in V[-1].items():
        if data["prob"] > max_prob:
            max_prob = data["prob"]
            best_st = st
    opt.append(best_st)
    previous = best_st

This determines the maximum probability state for the last step to start the traceback.

    for t in range(len(V) - 2, -1, -1):
        opt.insert(0, V[t + 1][previous]["prev"])
        previous = V[t + 1][previous]["prev"]

    print(" ".join(opt) + ", probability = %s" % max_prob)

It then traces back through the stored pointers, reading each entry’s “prev” field to recover the optimal path. Lastly, it prints the final state path and its probability to the console.
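Because this function indexes the transition and emission probabilities as nested dictionaries (trans_p[prev][st] and emit_p[st][obs]) rather than the tuple-keyed dictionaries used with Bio.HMM, the inputs need a small conversion first. The to_nested helper below is illustrative and not part of the original code:

def to_nested(pair_probs):
    # Convert {('A+', 'C+'): p, ...} into {'A+': {'C+': p, ...}, ...}.
    nested = {}
    for (a, b), p in pair_probs.items():
        nested.setdefault(a, {})[b] = p
    return nested

viterbi_algorithm(
    observations='GCGCGCGCAA',
    states=transition_alphabet,          # the eight A+/A-/.../T- states
    start_p=initial_prob,                # 0.125 for every state
    trans_p=to_nested(transition_prob),
    emit_p=to_nested(emission_prob),
    table=None,                          # not used by this sketch
)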

Running this implementation on the sequence ‘GCGCGCGCAA’ builds the DP table and produces the following result:

Final result of the self-coded Viterbi algorithm.

Comparing the results of the self-coded Viterbi algorithm to the HiddenMarkovModel implementation gives the following:

HiddenMarkovModel Implementation
G+C+G+C+G+C+G+C-A-A-, probability = -16.064944938458574

Own Implementation
G+C+G+C+G+C+G+C-A-A-, probability = 1.0545885728228732e-07

As seen here, both implementations recover the same state path. The probabilities agree as well: the self-coded version reports the raw probability, while Bio.HMM reports its natural logarithm, and ln(1.0545885728228732e-07) ≈ -16.065.

View my Github repository for more information and for access to the full code for the project!

TL;DR

  • Hidden Markov Models have both states and emissions, which are interconnected by conditional probability scores.
  • Hidden Markov Models are a tool used to infer whether a region is a promoter or background region based on a sequence of nucleotides.
  • The Viterbi algorithm is a dynamic programming algorithm that allows for this process to occur by looking at various transition and emission probabilities and keeping a pointer matrix to trace back through.

If you want to talk more, schedule a meeting: Calendly! If you’re interested in connecting, follow me on Linkedin, Github, and Medium.
