Proof rules for probabilistic loops

Probabilistic predicate transformers provide a semantics for imperative programs containing both demonic and probabilistic nondeterminism. Like the 'standard' predicate transformers popularised by Dijkstra, they model programs as functions from final results to the initial conditions sufficient to achieve them. This paper presents practical proof rules, using the probabilistic transformers, for reasoning about iterations when probability is present. They are thoroughly illustrated by example: probabilistic binary chop, faulty factorial, the martingale gambling strategy and Herman's probabilistic self-stabilisation. Just as for traditional programs, weakest-precondition based proof rules for program derivation are an important step on the way to designing more general refinement techniques, or even a refinement calculus, for imperative probabilistic programming.


1 Introduction
The standard predicate transformers described by Dijkstra [3] provide a model in which a program is a function: it takes a set of desired final states to the set of all initial states from which the program's execution is guaranteed to produce one of those final states. Regarding sets of states as predicates over the state space, programs are thus predicate transformers.
A conspicuous feature of Dijkstra's presentation was the appearance of demonic nondeterminism, a form of choice in programs over which the users have no control and about which nothing can be predicted. It arises naturally in the predicate-transformer approach, and as a result benefits from a particularly simple treatment there.
In the work of Kozen [12] demonic nondeterminism is replaced by probabilistic nondeterminism. Probabilistic nondeterminism is not controllable by the user (either), but it is to some extent predictable: in repeated runs of the program

    coin := heads  1/2⊕  coin := tails

one would have the same expectations about the final value of the program variable coin as one would have about the repeated flipping of a real coin.
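As a small illustration of that predictability, the fair choice above can be simulated directly; the encoding below is our own sketch (function and variable names are invented), not part of the formal development.

```python
import random

# Simulate  coin := heads 1/2+ coin := tails  -- a fair probabilistic choice.
def coin_program(rng):
    return "heads" if rng.random() < 0.5 else "tails"

# The same expectation as for a real coin: about half the runs yield heads.
rng = random.Random(42)
runs = 100_000
heads = sum(coin_program(rng) == "heads" for _ in range(runs))
estimate = heads / runs
print(estimate)
```

Over many runs the estimate settles near 1/2, which is exactly the prediction the probabilistic semantics makes once and for all.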
We have extended the above [16], presenting a system in which demonic and probabilistic nondeterminism are treated together in a simple way: as well as building on the original work of Dijkstra and of Kozen, we took advantage of later work by Claire Jones and Gordon Plotkin [10] and a 'relational' probabilistic model proposed by Jifeng He [7] (who used 'convex closure' [15] to generalise an earlier imperative model due to Kozen [11]).
One of the principal results of our earlier work [16] is the exact determination of the 'healthiness conditions' that apply to probabilistic predicate transformers; they generalise the conditions given by Dijkstra for standard predicate transformers.
Morgan is a member of the Probabilistic Systems Group within the Programming Research Group at Oxford University; the other members are Annabelle McIver, Jeff Sanders and Karen Seidel. Our work is supported by the EPSRC.
Our overall aim is to broaden the scope of refinement methods to include more aspects of 'real' system design, in this case the fact that the ultimate components from which a system is built are never entirely reliable. When their unreliability can be quantified, probabilistic program derivation, or refinement, can be used to match low-level unreliability of components to high-level 'tolerable' unreliability in a specification.
The contribution of this paper specifically is to use the probabilistic healthiness conditions to propose and justify methods for the treatment of probabilistic loops; in that way we move the theory [16] towards everyday practice. The main theorems concern probabilistic invariants and variants, and generalise the corresponding standard theorems; our probabilistic healthiness conditions are crucial to their proofs, and to the separate treatment of partial and total correctness.
Informally, the use of invariants is just as in standard programs, based on the work of Hoare and Floyd [9, 4]: the invariant is established initially; it is maintained; and on termination additionally the negation of the repetition condition holds. Here however we use probabilistic invariants, as anticipated by Kozen, by Sharir, Pnueli and Hart [21], and finally by Jones [12, 10]; we have generalised their work by treating nondeterminism as well.
The probabilistic variant rule (and the related '0-1 Law') was earlier proposed by Hart, Sharir and Pnueli [6] and shown to be sound and finitarily complete: a variant function must be bounded above and below, and have a nonzero probability of decrease. Our contribution here is to express that rule at the level of probabilistic predicate transformers, reproducing the proofs of soundness and finitary completeness in that context. We achieve a slight generalisation in that catastrophic failure (divergence, or abort) is included naturally as a possible behaviour of programs in our model. Sections 3 and 4 give the main theorems for the use of invariants and the way in which they are combined with information about loop termination; they are illustrated by the examples of Sec. 5, chosen to reveal the various combinations of probabilistic and standard variants and invariants. Sections 6 to 8 treat termination on its own. Section 9 provides a final example, a recent 'showcase' for probabilistic formalisms in which certain termination is the principal feature.

2 Probabilistic predicate transformers
Standard predicates are sets of states, and can thus be regarded as characteristic functions from the state space to {0, 1}. In practice, that is, for reasoning about specific programs, they are written as Boolean-valued expressions (formulae) over program variables.
Probabilistic predicates are functions from the state space to the entire closed interval [0, 1]. In practice they are written as real-valued expressions over the program variables.
The manipulation of the predicate transformers in the two systems, standard and probabilistic, is very similar. For example, in both cases assignment is syntactic substitution, sequential composition (of programs) is functional composition (of the predicate transformers) and recursion is given by least fixed points. For a full presentation we refer the reader to our other publications [20, 16]. Because symbols may be confused in the probabilistic case, however, we adopt the following notational conventions to separate them as much as possible.

Notation 2.1 Standard predicates are Boolean expressions over the state variables, and are written in the normal way. (We use ⇔ for bi-implication.) Probabilistic predicates are real-valued expressions between 0 and 1 inclusive. The brackets [·] convert a standard predicate to a probabilistic predicate, so that [true] is (the constant expression) 1 and [false] is 0.
The overbar operator denotes subtraction from 1, so that [P]‾ is the same as [¬P] for standard predicate P.
Minimum and maximum are written ⊓ and ⊔ respectively, with ⊓ binding more tightly.
The relations 'everywhere no more than', 'everywhere equal' and 'everywhere no less than' between probabilistic predicates are written ⇛, ≡ and ⇚ respectively.
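For a finite state space these conventions are easy to mechanise; the sketch below is our own encoding (all names invented), representing probabilistic predicates as functions into [0, 1] and checking the 'everywhere' relations by exhaustive comparison.

```python
from fractions import Fraction

# Sample values of the state variable x.
states = range(-3, 4)

def embed(pred):
    # The brackets [P]: a standard predicate becomes a 0/1-valued one.
    return lambda x: Fraction(int(pred(x)))

def everywhere_no_more_than(p, q):
    # The relation 'everywhere no more than' between probabilistic predicates.
    return all(p(x) <= q(x) for x in states)

half = lambda x: Fraction(1, 2)
rhs = lambda x: embed(lambda s: s >= 0)(x) / 2 + embed(lambda s: s <= 0)(x) / 2

print(everywhere_no_more_than(half, rhs))        # True: rhs is at least 1/2 everywhere
print(all(half(x) == rhs(x) for x in states))    # False: at x = 0 rhs is 1
```

The two printed results are exactly the comparison made in the worked example following this notation.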

With the above conventions we have for example that

    1/2 ⇛ [x ≥ 0]/2 + [x ≤ 0]/2 ,

because for all values of the state-variable x the right-hand side is at least 1/2. However the stronger claim

    1/2 ≡ [x ≥ 0]/2 + [x ≤ 0]/2

is false, since when x is 0 the left-hand side is 1/2 but the right-hand side is 1.
The basic properties of predicate transformers needed for our presentation are collected in App. B, and are referred to here as 'facts'. Those concerning wp are consequences of the healthiness laws [16]. We make essential use also of weakest liberal probabilistic preconditions in some of our proofs; the facts concerning them are proved elsewhere [18]. Since wlp does not appear in the statements of the principal theorems, however, the wlp-theory is not needed for use of our results.

Notation 2.2 We write f.x for the function f applied to the argument x (rather than f(x)). The application operator '.' is left associative. □

Notation 2.3 We write := for 'is defined to be'. □

3 Partial loop correctness
The weakest liberal precondition of a program describes its partial correctness, identifying those initial states from which the program either establishes a given postcondition or fails to terminate [3]. The more conventional weakest (not liberal) precondition requires termination as well, and thus describes total correctness. We write wlp.prog.Q and wp.prog.Q for the weakest liberal and weakest preconditions respectively of program prog and postcondition Q.
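The distinction can be made concrete on a tiny example; the functions below are our own finite sketch (not the paper's formal definitions), reading off both transformers for the loop do x ≠ 0 → abort od.

```python
# wp requires termination; wlp does not (wlp.abort.Q = 1, wp.abort.Q = 0).

def wp_loop(post, x):
    # From x = 0 the loop exits immediately; otherwise it aborts.
    return post(x) if x == 0 else 0

def wlp_loop(post, x):
    # The aborting branch satisfies every weakest liberal precondition.
    return post(x) if x == 0 else 1

is_zero = lambda x: 1 if x == 0 else 0   # the postcondition [x = 0]

print([wp_loop(is_zero, x) for x in (-1, 0, 1)])   # [0, 1, 0]
print([wlp_loop(is_zero, x) for x in (-1, 0, 1)])  # [1, 1, 1]
```

That wlp is everywhere 1 here is exactly the partial-correctness claim: if this loop terminates at all, it terminates with x = 0.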
The wlp semantics differs from the wp semantics in these two respects:

1. The nowhere-terminating program is defined wlp.abort.Q := 1 for all postconditions Q. (Compare wp.abort.Q := 0.)

2. The weakest liberal precondition semantics of a recursive program is given by a greatest (rather than least) fixed point.

When considering loops, a special case of recursion, the wlp semantics is therefore as given in Def. 3.3.

Lemma 3.4 Let invariant I satisfy

    G ⊓ I ⇛ wlp.body.I .   (1)

Then

    I ⇛ wlp.loop.(Ḡ ⊓ I) .   (2)

Proof: We substitute I for P in the right-hand side of Def. 3.3, setting Q := Ḡ ⊓ I, obtaining the result immediately from the elementary property of greatest fixed points that x ⇛ f.x implies x ⇛ ν.f. □

It is worth noting that the assumption (1) of Lem. 3.4 is weaker than the one used in the standard rule for partial correctness of loops, where one conventionally finds wp instead:

    G ⊓ I ⇛ wp.body.I .   (3)

The difference is real, even in the standard case, but only if we are genuinely interested in partial correctness. With Lem. 3.4 we can show for example that

    wlp.(do x ≠ 0 → abort od).[x = 0] ≡ 1 ,   (4)

choosing I := 1 to do so: if the loop terminates then it establishes x = 0.
The reason that (3) is used in the standard case is that it suffices for total correctness of the loop, and avoids introducing the extra concept of wlp: if indeed I ⇛ wp.loop.1 then we must have G ⊓ I ⇛ wp.body.1 in any case, making (1) and (3) equivalent.
For probabilistic programs the above analysis does not apply, and as shown below use of the stronger (3) is required for soundness in general (Ex. 4.7).

4 Total loop correctness
In the standard case Fact B.1 is used to combine partial loop correctness with a termination argument, to give total loop correctness. Here we rely on its probabilistic analogue Fact B.2, whose operator & is defined P & Q := (P + Q − 1) ⊔ 0. The resulting precondition I & T is strictly weaker than I ⊓ T when I is probabilistic, since in that case I & I is not ≡ I: note for example that 1/2 & 1/2 ≡ 0.
It is easily checked that Q0 & Q1 ≡ Q0 ⊓ Q1 when either Q0 or Q1 is standard, and thus that Fact B.1 results when Fact B.2 is specialised to Q0, Q1 := Q, 1 for standard Q. We have further that & is commutative and associative, with identity 1.
With Fact B.2 and Lem. 3.4 we have immediately a rule for total correctness of probabilistic loops.
Then we have Ḡ ⊓ I ≡ 0, so that wp.loop.(Ḡ ⊓ I) ≡ wp.loop.0 ≡ 0, although I ⊓ T is 1/2 when n = 0: replacing & by ⊓ in Lem. 4.3 would thus be unsound.
□ Thus we improve Lem. 4.3 in a different way, below, where for simplicity we assume that body is deterministic. The strategy is to develop a larger invariant I′ than the I we are given, so that when we eventually form the precondition I′ & T we recover the original I.
First we show strict wp-invariance of T itself.

The examples below illustrate the interplay of invariant and termination condition in the three possible probabilistic cases: one, the other, or both are probabilistic.

Uniform binary selection
In this example the termination condition is standard, indicating either certain termination (when 1) or failure to terminate (when 0). Given a positive integer N, an integer l is to be chosen uniformly so that 0 ≤ l < N; the method is by successive divisions of the choice interval into roughly equal halves.
Example 5.1 Let prog, init and loop be as in Fig. 1: given arbitrary integer C, we are interested in the probability that l = C finally. We define

    I := [l ≤ C < h]/(h − l)

and with the following calculation show it to be invariant. In the calculation, we start with the overall postcondition and reason backwards towards the precondition, indicating between predicates when wp is applied to give the lower (the right-hand side of a reasoning step) from the upper (the left-hand side).

    init → n, f := N, 1
    loop → do n ≠ 0 →
            f := f × n;
            n := n − 1  p⊕  n := n + 1
        od

The program prog is the whole of the above. The decrementing of n fails probabilistically, sometimes incrementing instead.

Figure 2: Example 5.2, the faulty factorial.

    [l ≤ C < h]/(h − l)
    ⇚ ([p ≤ C < h] + [l ≤ C < p])/(h − l)    after applying wp.(l := p h−p⊕p−l h := p), in spite of 0-divisions: the lower is 0 whenever the upper contains divisions by 0
    ≡ [l ≤ C < h]/(h − l)    after applying wp.(p := (l + h) div 2), in spite of 0-divisions

as required. Standard reasoning with variant h − l shows that termination is certain because N > 0 initially; thus T ≡ 1, implying I ⇛ T trivially, and we have immediately from Thm. 4.6 that

    [l ≤ C < h]/(h − l) ≡ I ⇛ wp.loop.(Ḡ ⊓ I) ⇛ wp.loop.[l = C]

and so finish with

    [0 ≤ C < N]/N ≡ wp.init.([l ≤ C < h]/(h − l)) ⇛ wp.prog.[l = C] .

We conclude overall that for any integer C the probability of prog's setting l to C finally is at least 1/N provided 0 ≤ C < N, and that since there are exactly N such values for C we have achieved uniform selection from the given interval. The probability is (only) at least 0 otherwise: when C lies outside the interval we should assume that we have 'no chance' of establishing l = C finally. □

Note that the proof in Ex. 5.1 of invariance of I would succeed even if p were chosen nondeterministically between l and h rather than being assigned the specific value (l + h) div 2. In that case we would appeal to the more general Thm. A.3 to reach the same conclusion.
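The claimed probability 1/N can be confirmed exactly by enumerating the chop; the recursion below is our own check (names invented), using exact rational arithmetic.

```python
from fractions import Fraction

def prob_final(l, h, C):
    """Probability that the chop over [l, h) finally establishes l = C."""
    if l + 1 == h:                       # the loop guard l + 1 != h has failed
        return Fraction(int(l == C))
    p = (l + h) // 2
    # l := p with probability (h - p)/(h - l), else h := p.
    return (Fraction(h - p, h - l) * prob_final(p, h, C)
            + Fraction(p - l, h - l) * prob_final(l, p, C))

N = 13
assert all(prob_final(0, N, C) == Fraction(1, N) for C in range(N))
print("exactly uniform over", N, "values")
```

Even for N not a power of two, the weighted split makes every outcome equally likely.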

Faulty factorial
In this example both the termination condition and the invariant are probabilistic. Given a natural number N, the program is to (attempt to) set f to N! in spite of its containing a probabilistically faulty subtraction.
Example 5.2 The program is shown in Fig. 2, and is the conventional factorial algorithm except that the decrement of n sometimes increments instead. When p is 1, making the program standard (and decrementing of n certain), the invariant [N! = f × n!] suffices in the usual way to show that wp.prog.[f = N!] ≡ 1. In general, however, that postcondition is achieved only if the decrement alternative is chosen on each of the N executions of the loop body, thus with probability p^N. More rigorously we define invariant

    I := p^n × [N! = f × n!] ,

showing its preservation with the calculation

    p^n × [N! = f × n!]
    ⇚ p^n × [N! = f × (n − 1)!]    after applying wp.(n := n − 1 p⊕ n := n + 1), discarding the second summand
    ≡ p^n × [N! = f × n!] .    after applying wp.(f := f × n), since n × (n − 1)! = n! under the guard n ≠ 0

The exact termination condition depends on p. Standard random-walk results [5] show that loop terminates certainly when p ≥ 1/2, but with probability only (p/(1 − p))^n otherwise. In either case, however, the termination condition is at least p^n and so exceeds the invariant: thus Thm. 4.6 applies. We conclude

    wp.prog.[f = N!] ⇚ p^N ,

as suggested by our informal analysis earlier. □
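A seeded simulation agrees with the bound p^N; the encoding below is our own sketch of the faulty factorial (parameters our choice), with an iteration cap standing in for the probability-0 divergent runs.

```python
import math
import random

def faulty_factorial(N, p, rng, cap=100_000):
    n, f = N, 1
    for _ in range(cap):                 # cap guards against rare long walks
        if n == 0:
            return f
        f *= n
        n = n - 1 if rng.random() < p else n + 1   # the faulty decrement
    return None

N, p = 3, 0.75
rng = random.Random(1)
runs = 40_000
hits = sum(faulty_factorial(N, p, rng) == math.factorial(N) for _ in range(runs))
print(hits / runs)   # settles near p**N = 0.421875
```

Any faulty increment forces an extra factor of at least 2 into f, so the correct result is obtained exactly on the straight path of N successful decrements.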

The martingale
Here the termination condition is probabilistic and the invariant is standard. The martingale is the gambling strategy of doubling one's bet after each loss of an even wager: since the wager is eventually won, with probability 1, an overall profit seems guaranteed. As is well known, however, the flaw in the martingale is that the gambler runs the risk of using all his capital before the probabilistically certain win: his capital is finite, but the number of bets before the eventual win can be arbitrarily large.

Example 5.3 We model the martingale as in Fig. 3. If the gambler cannot place his bet, because his capital has become too small, he simply remains within the loop.
It is easy to show that I := [c + b = C + 1] is an invariant of loop, and with some arithmetic it can be shown informally that, with the given initialisation, the chance of losing consistently until the capital is exhausted is 2/P, where P is the smallest power of two exceeding C + 1. Thus wp.prog.1 is just 1 − 2/P.
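Both the trapping chance 2/P and the corresponding termination probability can be checked exactly; the enumeration below is our own encoding of the loop (names invented), using exact rationals.

```python
from fractions import Fraction

def p_terminate(c, b):
    """Probability that the gambler, with capital c and intended bet b, exits."""
    if b == 0:
        return Fraction(1)               # he has won: the loop exits
    if b > c:
        return Fraction(0)               # trapped forever inside the loop
    # An even wager: win (bet to 0) or lose (bet doubled), each with chance 1/2.
    return (p_terminate(c + b, 0) + p_terminate(c - b, 2 * b)) / 2

def smallest_power_of_two_exceeding(m):
    P = 1
    while P <= m:
        P *= 2
    return P

for C in range(1, 50):
    P = smallest_power_of_two_exceeding(C + 1)
    assert p_terminate(C, 1) == 1 - Fraction(2, P)
print("termination probability is 1 - 2/P for all tested C")
```

Only the run of consecutive losses matters, so the recursion stays shallow: a win ends the game at once.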
There are two problems in applying Thm. 4.6 at this point, however. The first is that, although 1 − 2/P is the termination condition of prog as a whole, we have not established the termination condition of loop itself; though we could calculate it, it would be a messy expression in terms of general initial values for b and c.
The second problem is that the invariant is not less than the termination condition: after the initialisation shown, for example, we have I ≡ 1, but not 1 ⇛ T.
Both problems can be solved by using Lem. 4.3 in this case rather than Thm. 4.6: whatever T is, in general we still have I & T ⇛ wp.loop.(Ḡ ⊓ I).

6 The 0-1 law for termination

Beyond its use for specific programs, Thm. 4.6 has a general consequence that will be of importance to our later analysis of termination. The 0-1 Law of Hart et al. [6] reads informally as follows. Let process P be defined over a state space S, and suppose that from every state in some subset S′ of S the probability of P's eventual escape from S′ is at least p, for some fixed p > 0. Then P's escape from S′ is certain, occurring with probability 1. More succinctly one could say that the infimum over S′ of the eventual escape probability is either 0 or 1: it cannot lie properly in between.
Note that we do not require that for every state in S′ the probability of immediate escape is at least p: that is a much stronger condition, from which the certainty of eventual escape is obvious.
In our context we fix loop and choose an invariant I: the process is then the iteration of body, leading to eventual escape from the set of states G ⊓ I, and thus equivalently to eventual termination of the loop. The 0-1 Law in that form is easily proved from our Thm. 4.6.
Lemma 6.1 Let I be a wp-invariant of loop with termination condition T (as in Not. 4.2). If for some fixed probability p > 0 we have p × I ⇛ T, then in fact I ⇛ T.
Proof: With G being standard, wp-invariance of I and Fact B.7 give

    p × I ⇛ ⋯ ⇚ p × wp.loop.1    Fact B.7
    ≡ p × T    definition of T

and, since p ≠ 0, our result follows by dividing both sides by p.

□
Aside from its intrinsic interest, the importance of Lem. 6.1 is that it gives us a very general variant-based argument for establishing termination of probabilistic loops.

7 Probabilistic variant arguments
Termination of standard loops is conventionally shown using 'variants' based on the state: they are integer-valued expressions over the state variables that are bounded below but still strictly decreased by each iteration of the loop. That method is complete (up to expressibility) since informally one can always define a variant

    variant := 'the largest number of iterations still possible from the current state' ,

which satisfies the above conditions trivially if the loop indeed terminates.
For probabilistic programs however the standard variant method is not complete (though clearly it remains sound): for example the program

    do (n mod N) ≠ 0 → n := n + 1  1/2⊕  n := n − 1 od   (5)

over natural number n is certain to terminate, yet from the fact that its body can both increment and decrement n it is clear there can be no strictly decreasing variant.
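A seeded simulation of Program (5) bears the claim out; the walk below (our own parameters) always reaches a multiple of N even though n rises as often as it falls.

```python
import random

def run(n, N, rng):
    steps = 0
    while n % N != 0:                    # the loop guard (n mod N) != 0
        n += 1 if rng.random() < 0.5 else -1
        steps += 1
    return steps

rng = random.Random(0)
N = 10
lengths = [run(5, N, rng) for _ in range(1_000)]
print(max(lengths))                      # every run terminated, some slowly
```

Run lengths vary wildly, reflecting the absence of any strictly decreasing variant, yet no run fails to stop.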
With the 0-1 Law of Lem. 6.1 we are able to justify the following variant-based rule for probabilistic termination, sufficient for many practical cases including (5). In Sec. 8 we show it complete over finite state spaces.
Lemma 7.1 Let V be an integer-valued expression in the program variables, defined at least over some subset I of the state space S. Suppose further that for iteration loop:

1. there are fixed integer constants L (low) and H (high) such that G ⊓ I ⇛ [L ≤ V < H]; and

2. the subset I, as a (standard) predicate, is at least wlp-invariant for loop; and

3. for some fixed probability p > 0 and for all integers N we have

    p × (G ⊓ I ⊓ [V = N]) ⇛ wp.body.[V < N] .

Then termination is certain from any state in which I holds: we have I ⇛ T, where T is the termination condition of loop.
Proof: We show first that Assumption 2 allows Assumption 3 to be strengthened as follows:

    wp.body.(I ⊓ [V < N])
    ≡ wp.body.(I & [V < N])    I and [V < N] standard
    ⇚ wlp.body.I & wp.body.[V < N]    Fact B.2
    ⇚ (G ⊓ I) & wp.body.[V < N]    Assumption 2
    ⇚ p × (G ⊓ I ⊓ [V = N]) .    Assumption 3; G, I standard

(Being wp-invariant is a stronger requirement than Assumption 2, therefore sufficient also.)
Thus we can add (I ⊓) to the right-hand side of Assumption 3. Now we continue with induction to show that for all n ≥ 0 we have

    p^n × (I ⊓ [V < L + n]) ⇛ T .   (6)

For the base case we reason from Assumption 1 that

    p^0 × (I ⊓ [V < L]) ⇛ Ḡ ⇛ T .

For the step case we reason with the strengthened Assumption 3 and the inductive hypothesis, as below. That, with Assumption 2 and p^(H−L) ≠ 0, gives us I ⇛ T directly from Lem. 6.1.
□ Informally, Lem. 7.1 shows termination given an integer-valued variant bounded above and below such that on each iteration a strict decrease is guaranteed with at least some fixed probability p > 0. Note that the probabilistic variant is allowed to increase, but not above H. (We have emphasised the parts that differ from the standard variant rule.) The termination of Program (5) now follows immediately from Lem. 7.1 with variant n mod N, taking L, H := 0, N.
In some circumstances it is convenient to use other forms of variant argument, variations on Lem. 7.1; one easily proved from it is the more conventional rule in which the variant is bounded below (but not necessarily above), must decrease with fixed probability p > 0 and cannot increase. That rule follows (informally) from Lem. 7.1 by noting that since the variant cannot increase, its initial value determines the upper bound H required by the lemma; it shows termination for example of the loop

    do n > 0 → n := n − 1  1/2⊕  skip od ,

for which variant n suffices with L := 0.
Figure 5: The variant decreases with probability at least p(1 − p); it may increase.

9 Example: self-stabilisation

In our final example we apply Lem. 7.1 to a variation on Herman's probabilistic self-stabilisation [8], a distributed probabilistic algorithm that can be used for leadership election in a ring of synchronously executing processors.
Example 9.1 Consider N identical processors connected clockwise in a ring, as illustrated in Fig. 4. A single processor, a leader, is chosen from them in the following way.
Initially each processor is given exactly 1 token; the leader is the first processor to obtain all N of them. Fix some probability p with 0 < p < 1.
On each step (synchronously) all processors perform the following actions:

1. Make a local probabilistic decision either to pass (probability p) or to keep (probability 1 − p) all its tokens.

2. If pass, then send all its tokens to the next-clockwise processor; if keep, do nothing.

3. Receive tokens passed (if any) from the next-anticlockwise processor, adding them to the tokens currently held (if any).

We show that with probability 1 eventually a single processor will obtain all N tokens. We define the invariant to be that the total number of tokens is constant (at N); the guard (which if true indicates that termination has not yet occurred) to be that more than one processor holds tokens; and the variant to be the shortest length of any ring segment (contiguous sequence of arcs) containing all tokens. (See Fig. 5.) With those definitions, for proof of termination we simply note that (refer assumptions of Lem. 7.1):

1. the guard and invariant imply that the variant is bounded below by 1 and above by N; and

2. the invariant is trivially maintained; and

3. the variant decreases strictly with probability at least p(1 − p), which is nonzero since 0 < p < 1. (Let the least-clockwise processor in the shortest segment decide to pass while the most-clockwise processor decides to keep.)

The conclusion of Lem. 7.1 gives us certain termination: that eventually only one processor contains tokens (negated guard), and that it has all N of them (invariant). □
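The scheme is easily simulated; the encoding below is our own (synchronous rounds, invented names), checking both the invariant and the eventual single holder on seeded runs.

```python
import random

def elect(N, p, rng):
    tokens = [1] * N                     # invariant: sum(tokens) == N throughout
    while sum(t > 0 for t in tokens) > 1:         # guard: more than one holder
        passing = [rng.random() < p for _ in range(N)]
        nxt = [0] * N
        for i in range(N):
            dest = (i + 1) % N if passing[i] else i   # pass clockwise, or keep
            nxt[dest] += tokens[i]
        tokens = nxt
        assert sum(tokens) == N          # the invariant is maintained
    return tokens

rng = random.Random(7)
for _ in range(50):
    final = elect(8, 0.5, rng)
    assert max(final) == 8               # a single processor holds all N tokens
print("all 50 runs elected a leader")
```

On every seeded run the negated guard and the invariant together force one processor to hold all N tokens, as the variant argument predicts.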
10 Conclusion

Our main results are Thm. 4.6 for total correctness of iterations when the termination condition is known, and Thm. 8.2 for termination with probability 1. With the examples of Sections 5 and 9 we have shown that probabilistic reasoning for partial correctness, on this scale at least, is not much more complex than standard reasoning. For total correctness however it seems harder to achieve simplification using grossly pessimistic variants (a familiar technique in the standard case). Our experience so far suggests that it is often necessary to use accurate bounds on the number of iterations remaining, and that can require intricate calculation.
We do not have general rules for determining the termination condition when it is not 1; at this stage it seems those situations have to be handled by using the wp semantics to extract a recurrence relation to which standard probabilistic methods can then be applied. A promising approach however is to use (probabilistic) data refinement to extract not a recurrence relation but a simple(r) program, involving only the variant, captured by a single variable. That program's termination condition is equal to the original's, but could perhaps be taken straight from the literature; one would then have access to a collection of termination 'paradigms'.
A longer-term approach to probabilistic termination is to build a temporal logic over the probabilistic predicate transformers [14], generalising a similar construction by Morris [17] over standard transformers. The resulting properties are then very like those of Ben-Ari, Pnueli and Manna [1], and allow termination conditions to be determined for quite complicated programs using structured arguments in the style, for example, of UNITY [2].
Note that the use of a ring is not essential for correctness: in fact if each processor chooses probabilistically from all others where to pass its tokens (with a nonzero probability for each possible recipient), then termination is still certain, and in fact is easier to show than with a ring. The variant is just the number of processors holding tokens, and cannot increase; it decreases with nonzero probability p(1 − p)r, where the extra factor r is the minimum probability over all processor pairs P, P′ that P will choose P′ as its recipient.
That 'chaotic' scheme remains correct even if the processors execute asynchronously, provided their scheduling is starvation-free.

We observe as a special case that

    wp.prog.Q0 & wp.prog.Q1 ⇛ wp.prog.(Q0 & Q1) ,

since for any prog and Q we have wp.prog.Q ⇛ wlp.prog.Q. If wlog both Q0 and wp.prog.Q0 are standard, the above reduces further to sub-distributivity of ⊓. □

Fact B.3 (sub-distributivity of +) For any program prog and postconditions Q0, Q1 we have

    wp.prog.Q0 + wlp.prog.Q1 ⇛ wlp.prog.(Q0 + Q1) ,

with equality when prog is deterministic. □

The reasoning fails at the point of concluding G ⊓ I ⇛ wp.body.I from

    G ⊓ I ⇛ wp.body.1  and  G ⊓ I ⇛ wlp.body.I :

applying Fact B.2 to those two inequalities gives only

    wp.body.I
    ≡ wp.body.(1 & I)
    ⇚ wlp.body.I & wp.body.1
    ⇚ (G ⊓ I) & (G ⊓ I)
    ≡ G ⊓ (I & I) .

Notation 4.2 The termination condition of loop is defined

    T := wp.loop.1 . □

Lemma 4.3 Let invariant I satisfy

    G ⊓ I ⇛ wlp.body.I .

Then

    I & T ⇛ wp.loop.(Ḡ ⊓ I) .

Proof:

    wp.loop.(Ḡ ⊓ I)
    ≡ wp.loop.((Ḡ ⊓ I) & 1)
    ⇚ wlp.loop.(Ḡ ⊓ I) & wp.loop.1    Fact B.2
    ⇚ I & T .    Lem. 3.4; definition of T

□

Lemma 4.3 suffices for many situations, in particular those in which either I or T is standard, since in that case I & T ≡ I ⊓ T. When both I and T are probabilistic, however, the precondition of Lem. 4.3 can be too low (pessimistic, though still correct). But as the following example shows, we cannot just replace & by ⊓ on the left-hand side.

Example 4.4 Take invariant I := [n = 0]/2 + [n = 1] in the program loop, defined

    do n = 0 → n := −1  1/2⊕  n := 1
    [] n > 0 → skip
    od .

Theorem 4.6
If I is a wp-invariant of loop with deterministic body, and I ⇛ T, then

    I ⇛ wp.loop.(Ḡ ⊓ I) .

Proof: We show first that wp-invariance of I implies wlp-invariance of

    I′ := I + 1 − T .

Note we rely on I ⇛ T for well-definedness (that I′ ⇛ 1). We reason:

    wlp.body.I′
    ≡ wlp.body.(I + (1 − T))    definition of I′
    ⇚ wp.body.I + wlp.body.1 − wp.body.T    Fact B.3 twice; body deterministic
    ≡ wp.body.I + 1 − wp.body.T    Fact B.4
    ⇚ G ⊓ (wp.body.I + 1 − wp.body.T)
    ≡ G ⊓ wp.body.I + G − G ⊓ wp.body.T    G standard
    ⇚ G ⊓ wp.body.I + G − G ⊓ T    Lem. 4.5
    ⇚ G ⊓ (G ⊓ I) + G − G ⊓ T    assumed wp-invariance of I
    ≡ G ⊓ (I + 1 − T)    G standard
    ≡ G ⊓ I′ .    definition of I′

From Lem. 4.3 we then conclude immediately

    I ≡ I′ & T ⇛ wp.loop.(Ḡ ⊓ I′) ≡ wp.loop.(Ḡ ⊓ I) ,

since for the last step we have

    Ḡ ⊓ I′
    ≡ Ḡ ⊓ (I + 1 − T)
    ≡ Ḡ ⊓ I + Ḡ − Ḡ ⊓ T    Ḡ standard
    ≡ Ḡ ⊓ I .    ¬G implies immediate termination, thus Ḡ ≡ Ḡ ⊓ T

□

Thm. 4.6 is extended to the nondeterministic case by Thm. A.3, and it is not hard to show that the latter in turn implies Lem. 4.3: thus they are of equal power. The following example shows the wp- (rather than wlp-) invariance of the invariant I to be necessary for soundness of Thm. 4.6 in general. (Recall from Sec. 3 that it is not necessary in the standard case.)

Example 4.7 For this example let loop be

    do b → b := false  1/2⊕  abort od

for Boolean b, and note that we have for termination

    T ≡ [¬b] + [b]/2 .

Define I := 1/2, so that I ⇛ T as required by Thm. 4.6, and reason

    wlp.body.I
    ≡ wlp.(b := false 1/2⊕ abort).(1/2)
    ≡ (1/2)(wlp.(b := false).(1/2)) + (1/2)(wlp.abort.(1/2))
    ≡ (1/2)(1/2) + (1/2)(1)
    ≡ 3/4
    ⇚ [b] ⊓ 1/2
    ≡ G ⊓ I

to show wlp-invariance of I, the other requirement of the theorem. But

    wp.loop.(Ḡ ⊓ I)
    ≡ wp.loop.([¬b]/2)
    ≡ wp.(if b then (b := false 1/2⊕ abort)).([¬b]/2)    unfold loop
    ≡ [b] ⊓ ((1/2)(1/2) + (1/2)(0)) ⊔ [¬b] ⊓ [¬b]/2
    ≡ [b]/4 ⊔ [¬b]/2 ,

showing the conclusion of Thm. 4.6 to be false in this case: the precondition [b]/4 ⊔ [¬b]/2 is not at least I, since when b holds, for example, the former is 1/4 and the latter is 1/2. □

    init → l, h := 0, N
    loop → do l + 1 ≠ h →
            p := (l + h) div 2;
            l := p  h−p⊕p−l  h := p
        od

The program prog is the whole of the above. We write m⊕n as a convenient abbreviation for m/(m+n).

Figure 1: Example 5.1, uniform binary selection.

5 Three examples of total correctness

With Thm. 4.6 we are able to discover total correctness properties of loops, provided we are given their termination conditions. Rigorous termination arguments themselves are the subject of Sec. 7 below; here we treat termination informally.
    wp.prog.[c = C + 1] ⇚ 1 − 2/P :

with probability at least 1 − 2/P the gambler eventually increases his capital by exactly 1. □

We have

    G ⊓ p × I ≡ p × (G ⊓ I) ⇛ p × wp.body.I ≡ wp.body.(p × I) ,

so that p × I is also a wp-invariant of loop. We then reason

    p × I
    ⇛ wp.loop.(Ḡ ⊓ p × I)    wp-invariance of p × I; p × I ⇛ T; Thm. 4.6
    ≡ wp.loop.(p × (Ḡ ⊓ I))

    p^(n+1) × (I ⊓ [V < L + n + 1])
    ⇛ wp.body.(p^n × (I ⊓ [V < L + n])) ⊔ T    Assumption 3 strengthened
    ≡ p^n × (wp.body.(I ⊓ [V < L + n])) ⊔ T
    ⇛ T .    inductive hypothesis; p × T ⇛ T

Figure 4: Example ring topology (N = …), with initial token assignment shown.

Lemma A.2 For any loop and postcondition Q there is a dloop, whose body det is deterministic and refines body (body ⊑ det), such that

    wp.dloop.Q ≡ wp.loop.Q .

Proof: Define P := wp.loop.Q, and use Fact B.5 to choose det so that body ⊑ det and

    wp.body.P ≡ wp.det.P .   (8)

Then we have

    Ḡ ⊓ Q ⊔ G ⊓ wp.det.P
    ≡ Ḡ ⊓ Q ⊔ G ⊓ wp.body.P    by construction (8)
    ≡ P ,    definition of P; refolding of iteration

so that P satisfies the (least) fixed-point equation for wp.dloop.Q. Hence wp.dloop.Q ⇛ P and, from loop ⊑ dloop and monotonicity, we have wp.dloop.Q ≡ wp.loop.Q as required. □

With Lem. A.2 we have our theorem easily.

Theorem A.3 If I is a wp-invariant of loop and I ⇛ T, then

    I ⇛ wp.loop.(Ḡ ⊓ I) .

Proof: Use Lem. A.2 to choose a deterministic refinement det of body so that wp.dloop.(Ḡ ⊓ I) ≡ wp.loop.(Ḡ ⊓ I), and observe that since body ⊑ det we have I a wp-invariant of dloop also. The result is then immediate from Thm. 4.6. □

B Facts about probabilistic wp and wlp

Proofs of these facts are to be found in other publications of the Group [19].

Fact B.1 For standard program prog and standard postcondition Q we have

    wlp.prog.Q ⊓ wp.prog.1 ⇛ wp.prog.Q . □

Fact B.2 (sub-distributivity of &) For program prog and postconditions Q0, Q1 we have

    wlp.prog.Q0 & wp.prog.Q1 ⇛ wp.prog.(Q0 & Q1) . □

Let predicate I be a wlp-invariant of loop, thus satisfying

    G ⊓ I ⇛ wlp.body.I .

We now prove our main theorem for total correctness of deterministic loops; note that we assume a wp-invariance property (stronger than the wlp-invariance assumption of Lem. 4.3).
The program prog is the whole of the above. The gambler's capital c is initially C, and his intended bet b is initially 1. On each iteration, if his intended bet does not exceed his capital, he is allowed to place it and has a 1/2 chance of winning. If he wins, he receives twice his bet in return, and sets his intended bet to 0 to indicate he is finished; if he loses, he receives nothing and doubles his intended bet, hoping to win next time. If he loses sufficiently often (in succession), his intended bet b will eventually be more than he can afford (his remaining capital c), and he will then be 'trapped' forever within the iteration.

Figure 3: Example 5.3, the martingale.