I am currently experimenting with the floating point (FP) theory, in combination with bit-vectors; I am working with Z3 4.6.0.0.

I have found limited documentation on the use of the FP API (it seems the only real hits are in z3.py itself), so I have tried to provide "complete" examples, which can also act as a demonstration of how I believe the API should be used.

Here was my first attempt at using the FP theory inside of Z3py:

#!/usr/bin/env python

from z3 import *

s = Solver()

#
# Create our value
#
to_find = "-12.345"
fp_val = FPVal(to_find, Float32())

#
# Create our variable
#
fp_var = Const("fp_var", Float32())

#
# Create the constraints
#
s.add(fp_var == fp_val)

assert s.check() == sat, "assertion never fails because the instance is SAT"

m = s.model()

#
# Prints -12.3449993134
#
print eval(str(m.eval(fp_var, model_completion=True)))

# EOF

If you run this example, it works as expected, and we indeed get that fp_var is equal to (I presume) the Float32 value nearest to -12.345. So far so good, apart from the use of Python's eval to obtain the value as a Python float.

My next step was to try and coerce a floating point value into a bit-vector, whilst checking a non-integer value:

#!/usr/bin/env python

from z3 import *

s = Solver()

#
# Create our value
#
to_find = "-12.345"
fp_val = FPVal(to_find, Float32())

#
# Create our variable
#
bv_var = Const("bv_var", BitVecSort(32))

#
# We now use a "cast" to allow us to do the floating point comparison
#
fp_var = fpSignedToFP(RNE(), bv_var, Float32())

#
# Create the constraints
#
s.add(fp_var == fp_val)

#
# This is UNSAT because fpSignedToFP treats the BV as a signed _integer_!
#
assert s.check() == unsat, "instance is UNSAT, so assertion doesn't fail"

# EOF

In this example, we attempt to "convert" the bit-vector into a floating point value (using fpSignedToFP), and then assert that the floating point value is equal to the value we are looking for. However, we get UNSAT, which matches the documentation for Z3: fpSignedToFP interprets the bit-vector as a signed integer, so the resulting float can only ever be integer-valued.
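
For contrast, here is a quick sketch following the same pattern, showing that an integer-valued target is SAT; this is consistent with the reading of fpSignedToFP above:

#!/usr/bin/env python

from z3 import *

s = Solver()

#
# Create our variable
#
bv_var = Const("bv_var", BitVecSort(32))

#
# An integer-valued float *is* reachable from a signed bit-vector
#
s.add(fpSignedToFP(RNE(), bv_var, Float32()) == FPVal("-12", Float32()))

assert s.check() == sat, "an integer target is SAT"

#
# Prints 4294967284, i.e. the two's-complement encoding of -12
#
print(s.model()[bv_var])

# EOF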

I then started to "get creative" to see if I could use transitivity with the fpToSBV API call, whose documentation does not state a restriction to integer values:

#!/usr/bin/env python

from z3 import *

print get_full_version()

s = Solver()

#
# Create our value
#
to_find = "-12.345"
fp_val = FPVal(to_find, Float32())

#
# Create our variable
#
bv_var = Const("bv_var", BitVecSort(32))

#
# We now create an additional, ancillary variable
#
fp_var = Const("fp_var", Float32())

#
# Create the constraints
#
s.add(fp_var == fp_val)
s.add(bv_var == fpToSBV(RNE(), fp_var, BitVecSort(32)))

assert s.check() == sat, "this example works"

m = s.model()

#
# Prints -12.3449993134
#
print eval(str(m.eval(fp_var, model_completion=True)))

#
# To read out the value from the BV, we create a new variable, assert it is
# the same as the originating BV, and then extract our new variable
#
eval_fp = Const("eval_fp", Float32())
s.add(bv_var == fpToSBV(RNE(), eval_fp, BitVecSort(32)))

assert s.check() == sat, "this cannot change the satisfiability"

m = s.model()

#
# Prints +oo True True
#
print str(m.eval(eval_fp, model_completion=True)),
print m.eval(eval_fp, model_completion=True).isInf(),
print m.eval(eval_fp, model_completion=True).isPositive()

# EOF

To explain this example slightly: we encode our problem as before, but rather than building a term for our floating point expression, we create a whole new constant, and assert that our bit-vector is equal to the floating point value via fpToSBV. Our assertion on the value we wish to find is then made against our floating point constant. If this model is satisfiable, we then create another floating point constant, constrain it against our original bit-vector, and try to extract the value of that constant.

This gives a chain like this:

  • bv_var == fpToSBV(fp_var)
  • fp_var == -12.345
  • fpToSBV(eval_fp) == bv_var

which I had hoped, by transitivity, would have given the same value for fp_var and eval_fp. However, while fp_var gets the correct value, eval_fp comes out as positive infinity!
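
Out of curiosity, I also checked whether explicitly forbidding the infinite and NaN cases pins the value down (a follow-on sketch of the example above; fpIsInf and fpIsNaN are the z3py predicates). It does not: the instance stays satisfiable with eval_fp values other than -12.345, since many finite floats round to the same signed integer:

#!/usr/bin/env python

from z3 import *

s = Solver()

fp_val = FPVal("-12.345", Float32())
bv_var = Const("bv_var", BitVecSort(32))
fp_var = Const("fp_var", Float32())
eval_fp = Const("eval_fp", Float32())

s.add(fp_var == fp_val)
s.add(bv_var == fpToSBV(RNE(), fp_var, BitVecSort(32)))
s.add(bv_var == fpToSBV(RNE(), eval_fp, BitVecSort(32)))

#
# Rule out the infinite/NaN escape hatch
#
s.add(Not(fpIsInf(eval_fp)), Not(fpIsNaN(eval_fp)))

assert s.check() == sat, "still SAT"

#
# eval_fp is still not forced to be -12.345: any finite float that
# rounds to the signed integer -12 (e.g. -12.0) satisfies the constraints
#
print(s.model().eval(eval_fp, model_completion=True))

# EOF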

I also tried experimenting with fpToSBV and fpSignedToFP together, to see if that would work (I was sceptical of this, due to the limitations of fpSignedToFP):

#!/usr/bin/env python

from z3 import *

s = Solver()

#
# Create our value
#
to_find = "-12.345"
bv_val = fpToSBV(RNE(), FPVal(to_find, Float32()), BitVecSort(32))

#
# Create our variable
#
bv_var = Const("bv_var", BitVecSort(32))

#
# Create the constraints
#
s.add(bv_var == bv_val)

assert s.check() == sat, "this example works"

m = s.model()

#
# Floating point constant to evaluate
#
eval_fp = fpSignedToFP(RNE(), bv_var, Float32())

#
# Prints -12.0
#
print eval(str(m.eval(eval_fp, model_completion=True)))

# EOF

This had the most success, and did actually give an integer close to the floating point value in our constraint (i.e., -12.0): fpToSBV rounds -12.345 to the integer -12 under RNE, and fpSignedToFP then converts that back to -12.0 exactly. However, the digits after the decimal point are still lost.

So, my question is: in Z3, is it possible to coerce a floating point value into a bit-vector, and then "back out" again? If so, how should it be done? Alternatively, are the bit-vector/floating point operations only "partially interpreted" (like with int2bv and friends)?

Update

After experimenting some more, I found that the following works (using the Z3-specific extension fpToIEEEBV):

#!/usr/bin/env python

from z3 import *

s = Solver()

#
# Create our value
#
to_find = "-12.345"
fp_val = FPVal(to_find, Float32())

#
# Create our variable
#
bv_var = Const("bv_var", BitVecSort(32))

#
# Convert the value to check to a BV
#
bv_val = fpToIEEEBV(fp_val)

#
# Create the constraints
#
s.add(bv_var == bv_val)

assert s.check() == sat, "this example is SAT, so don't see this"

m = s.model()

#
# Evaluation expression
#
eval_expr = fpBVToFP(bv_var, Float32())

#
# Prints -12.3449993134
#
print eval(str(m.eval(eval_expr)))

# EOF
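
As an aside, this also removes the need for Python's eval: since bv_var now holds the raw IEEE 754 bits, the model value can be reinterpreted on the Python side instead. A small sketch, continuing from the script above (so m and bv_var are in scope) and relying only on the standard struct module:

import struct

#
# Reinterpret the 32 bits of the model value as a Python float;
# prints (approximately) -12.3449993134
#
bits = m.eval(bv_var).as_long()
print(struct.unpack('<f', struct.pack('<I', bits))[0])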

Is this the correct way of dealing with this issue?

1 Answer

Putting aside how the Python API works, this is already specified in the definition of the floating-point logic; see: http://smtlib.cs.uiowa.edu/theories-FloatingPoint.shtml

In particular, note the partiality of some of these functions:

"All fp.to_* functions are unspecified for NaN and infinity input values. In addition, fp.to_ubv and fp.to_sbv are unspecified for finite number inputs that are out of range (which includes all negative numbers for fp.to_ubv). This means for instance that the formula

(= (fp.to_real (_ NaN 8 24)) (fp.to_real (fp c1 c2 c3))) 

is satisfiable in this theory for all binary constants c1, c2, and c3 (of the proper sort). "

So, what you're doing (modulo Python-specific aspects) is per the standard, with the above caveats.
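
To see this partiality concretely from z3py (a minimal sketch; fpPlusInfinity is the z3py name for the infinite literal), note that the solver is free to map +oo to any bit-vector whatsoever:

from z3 import *

s = Solver()
bv = Const("bv", BitVecSort(32))

# fp.to_sbv is unspecified on infinities, so any value of bv is acceptable
s.add(bv == fpToSBV(RNE(), fpPlusInfinity(Float32()), BitVecSort(32)))

print(s.check())      # sat
print(s.model()[bv])  # whichever value the solver happened to pick

This is exactly why eval_fp in your third example can come out as +oo: the solver picked an infinity and mapped it, via the unspecified part of fp.to_sbv, to the same bit-vector value.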

Using the interchange format

For conversions to/from the IEEE 754 interchange format, the logic provides a way to convert a given bit-vector to the corresponding float:

; from single bitstring representation in IEEE 754-2008 interchange format,
; with m = eb + sb
((_ to_fp eb sb) (_ BitVec m) (_ FloatingPoint eb sb))

In the other direction, the logic has this to say:

There is no function for converting from (_ FloatingPoint eb sb) to the corresponding IEEE 754-2008 binary format, as a bit vector (_ BitVec m) with m = eb + sb, because (_ NaN eb sb) has multiple, well-defined representations. Instead, an encoding of the kind below is recommended, where f is a term of sort (_ FloatingPoint eb sb):

(declare-fun b () (_ BitVec m))
(assert (= ((_ to_fp eb sb) b) f))
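
In z3py, that recommended encoding looks like this (a sketch; fpBVToFP is the corresponding Python binding, matching the approach in your update):

from z3 import *

s = Solver()
f = FPVal("-12.345", Float32())
b = Const("b", BitVecSort(32))

# Recommended encoding: constrain b so that its IEEE interpretation equals f
s.add(fpBVToFP(b, Float32()) == f)

assert s.check() == sat
print(s.model()[b])  # the IEEE 754 bit pattern of the float nearest -12.345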

Again, all this is in http://smtlib.cs.uiowa.edu/theories-FloatingPoint.shtml