Let's say there is a game where each move follows one of several possible paths, depending on the throw of some fancy dice. Depending on the result, a move may lead forward, lead backward, or stay in place. Eventually (possibly only after arbitrarily many throws) the walk reaches a final state. Each edge of the graph is weighted with a probability.
In the case with no loops I can simply multiply the probabilities along each path, sum them per outcome, and re-normalize, provided I always start at the same vertex (cell).
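To illustrate what I mean, here is a rough sketch of my loop-free computation (the dict-of-edges encoding and the node names are just made up for the example):

    # Loop-free case: multiply edge probabilities along every root-to-leaf
    # path and sum the products per leaf. The graph maps each node to a
    # list of (probability, successor) pairs; final states have no entry.
    def outcome_probs(edges, node, acc=1.0, out=None):
        if out is None:
            out = {}
        if node not in edges:                # no outgoing edges -> final state
            out[node] = out.get(node, 0.0) + acc
            return out
        for p, nxt in edges[node]:
            outcome_probs(edges, nxt, acc * p, out)
        return out

    # A loop-free fork: start0 splits to end1/mid, mid splits to end2/end3.
    dag = {
        "start0": [(0.5, "end1"), (0.5, "mid")],
        "mid":    [(0.5, "end2"), (0.5, "end3")],
    }
    print(outcome_probs(dag, "start0"))  # {'end1': 0.5, 'end2': 0.25, 'end3': 0.25}

This works fine as long as the graph is a DAG.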
However, as soon as there are loops it gets confusing. For example, let's say every edge has the same probability:
        start0 <-----+
        /    \       |
     end1    tr2 ----+
              |
             end2
The walk starts at start0 and has a 50% chance of terminating at end1 and a 50% chance of moving on to the transition node tr2. From tr2 there is again a 50% chance of terminating at end2 and a 50% chance of going back to start0.
How can I calculate the total probability of reaching each final state, end1 and end2? If I try a converging series like

pEnd1 = 1/2 + 1/4 + 1/8 + ... = 1

it makes no sense, since end2 would get no probability at all. Obviously I have a mistake there.
So my question is: how can I calculate the probability of reaching each final node, given the probability on each edge, when the graph may contain loops?
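For reference, below is the Monte Carlo check I use to sanity-test candidate answers numerically (again just a sketch; simulate and the edge-list encoding are my own, matching the outcome_probs sketch above):

    import random

    def simulate(edges, start, trials=100_000):
        """Estimate the probability of each final state by running many
        random walks; a node with no outgoing edges counts as final."""
        counts = {}
        for _ in range(trials):
            node = start
            while node in edges:               # walk until a final state
                r = random.random()
                for p, nxt in edges[node]:
                    r -= p                     # pick a successor proportionally
                    if r < 0:
                        node = nxt
                        break
                else:
                    node = edges[node][-1][1]  # guard against float rounding
            counts[node] = counts.get(node, 0) + 1
        return {n: c / trials for n, c in counts.items()}

Simulation only gives me estimates, though; I'd like the exact values. Here are my test cases: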
example 1) simple fork with a loop; all edges are 50% probable
start0-> p=50% ->end1
start0-> p=50% ->tr2
tr2-> p=50% ->start0
tr2-> p=50% ->end2
example 2) more loops
start0-> p=1/3 ->e1
start0-> p=1/3 ->tr1
start0-> p=1/3 ->start0
tr1-> p=1/3 ->tr2
tr1-> p=2/3 ->start0
tr2-> p=7/9 ->start0
tr2-> p=2/9 ->end2
example 3) degenerate test case: since all paths end at e1, it should end up with 100% probability
start0-> p=1/3 ->e1
start0-> p=2/3 ->tr1
tr1-> p=3/4 ->start0
tr1-> p=1/4 ->e1
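Encoded for the simulate sketch above (same assumed (probability, successor) edge lists), the three examples look like this:

    example1 = {
        "start0": [(0.5, "end1"), (0.5, "tr2")],
        "tr2":    [(0.5, "start0"), (0.5, "end2")],
    }
    example2 = {
        "start0": [(1/3, "e1"), (1/3, "tr1"), (1/3, "start0")],
        "tr1":    [(1/3, "tr2"), (2/3, "start0")],
        "tr2":    [(7/9, "start0"), (2/9, "end2")],
    }
    example3 = {
        "start0": [(1/3, "e1"), (2/3, "tr1")],
        "tr1":    [(3/4, "start0"), (1/4, "e1")],
    }

    for name, g in [("example1", example1), ("example2", example2),
                    ("example3", example3)]:
        print(name, simulate(g, "start0"))
    # example3 should come out as e1 ~ 1.0, per the reasoning above.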