
I have been working with the optimizer in Z3PY, using only Z3 ints and (x < y)-like constraints in my project, and it has worked really well. With up to 26 variables (Z3 ints) and at least 100 soft constraints, the solver finds a solution in about 5 seconds. But when I tried 49 variables, it did not solve the problem at all (I shut it down after 1 hour). So I made a little experiment to find out what was slowing it down: is it the number of variables or the number of soft constraints? It seems the bottleneck is the number of variables.

I created 26 Z3 ints. Then I added hard constraints that no variable may be lower than 1 or higher than 26, and that all values must be distinct. No other constraints were added at all. In other words, the solution the solver will find is simply the numbers 1 to 26, assigned to the variables in whatever order the solver works out.

This is a simple task; there are really no constraints except the ones I mentioned, and the solver finds a solution in about 0.4 seconds, which is fast and perfectly fine, as expected. But if I increase the number of variables to 49 (with the bounds changed to 1 and 49, of course), it takes the solver about 1 minute. That seems really slow for such a simple task. Should it be like this? Does anybody know why the running time blows up so much?

(I know that I can use Solver() instead of Optimize() for this particular experiment, and it will be solved within a second (see the sketch after the code below), but in reality I need to use Optimize, since I have a lot of soft constraints to work with.)

EDIT: Adding some code for my example.

I declare an array of Z3 ints that I call "reqs".
The array consists of 26 variables in one example and 49 in the other example I am talking about.

from z3 import *

# assumed construction of reqs; 26 variables in one experiment, 49 in the other
reqs = [Int('r_%d' % i) for i in range(26)]

solver = Optimize()

for i in reqs:
    solver.add(i >= 1)

for i in reqs:
    solver.add(i <= len(reqs))

d = Distinct(reqs)
solver.add(d)

res = solver.check()
print(res)
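
For comparison, here is a minimal sketch of the Solver()-based variant mentioned above; the construction of reqs is an assumption, following the description of 49 variables bounded by 1 and 49:

from z3 import *

# assumed construction of reqs, as described in the question
reqs = [Int('r_%d' % i) for i in range(49)]

solver = Solver()

for r in reqs:
    solver.add(r >= 1, r <= len(reqs))

solver.add(Distinct(reqs))

print(solver.check())  # sat; per the question, this variant finishes within a second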
The optimization engines are not expected to be performant; nor do they receive the attention and speed-up tricks that the regular solvers do. But each benchmark is different, so you should post specific examples to see what can be done, if anything. – alias
I have added the code to the original post now. But yeah, I understand. Still, it feels kind of weird that it is so much slower when going from 26 variables to 49. With Yices (written and executed in SMT-LIB2) it takes a millisecond or so; here with Z3PY it takes a minute with 49 variables. – leiz0r

1 Answer


Each benchmark is unique, and it's impossible to come up with a good strategy that applies equally well in all cases. But the scenario you describe is simple enough to deal with. The performance problem comes from the fact that Distinct creates too many inequalities (quadratic in number) for the solver, and the optimizer is having a hard time dealing with them as you increase the number of variables.
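
To see where the quadratic growth comes from, here is a rough sketch (the variable names are made up) of what Distinct amounts to, namely one disequality per pair of variables:

from z3 import *
from itertools import combinations

reqs = [Int('i_%d' % i) for i in range(49)]

# Distinct(reqs) behaves like one disequality per pair of variables,
# so the constraint count grows quadratically with the number of variables
pairwise = [a != b for a, b in combinations(reqs, 2)]

print(len(pairwise))  # 1176 pairs for 49 variables, versus 325 for 26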

As a rule of thumb, you should avoid using Distinct if you can. For this particular case, it'd suffice to impose a strict ordering on the variables. (Of course, this may not always be possible depending on your other constraints, but it seems what you're describing can benefit from this trick.) So, I'd code it like this:

from z3 import *

reqs = [Int('i_%d' % i) for i in range(50)]

solver = Optimize()

# hard bounds: each variable lies between 1 and len(reqs)
for i in reqs:
  solver.add(i >= 1, i <= len(reqs))

# a strict ordering between consecutive variables replaces Distinct
for i, j in zip(reqs, reqs[1:]):
  solver.add(i < j)

res = solver.check()
print(res)
print(solver.model())

When I run this, I get:

$ time python a.py
sat
[i_39 = 40,
 i_3 = 4,
 ...
 i_0 = 1,
 i_2 = 3]
python a.py  0.27s user 0.09s system 98% cpu 0.365 total

which is pretty snappy. Hopefully you can generalize this to your original problem.
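
To connect this back to the original problem with soft constraints, here is a hedged sketch (the bounds and the preferences are made up) of how add_soft combines with the same ordering trick; the bounds are loosened to 1..2n so the soft constraints have some slack to influence the model:

from z3 import *

reqs = [Int('i_%d' % i) for i in range(10)]

opt = Optimize()

# hypothetical looser bounds (1..2n) so the soft constraints have room to matter
for r in reqs:
    opt.add(r >= 1, r <= 2 * len(reqs))

# the same strict-ordering trick keeps the values distinct without Distinct
for a, b in zip(reqs, reqs[1:]):
    opt.add(a < b)

# hypothetical soft preferences in the (x < y) style the question mentions;
# these two conflict, so the optimizer keeps the higher-weighted one
opt.add_soft(reqs[0] > 5, weight=2)
opt.add_soft(reqs[-1] < 12, weight=1)

print(opt.check())
print(opt.model())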