
by Tim Varelmann
In the last week of October 2025, the European edition of the Gurobi Summit took place in Vienna. About 250 people gathered to share optimization success stories, explore the latest developments, and admire the stunning Hofburg Palace — a venue that made even discussions on models, algorithms and mathematical software feel grand.
Several presentations stood out to me.
Between inspiring talks, great coffee chats, and traditional Austrian food, Gurobi announced a solver tuning challenge.
Optimization solvers like Gurobi can handle many problems “out of the box.” But with the right adjustments — tuning solver parameters — you can often cut the time to solve by up to a factor of ten. Mixed-integer optimization has always been a “bag of tricks,” as the Gurobi founders put it, and tuning lets you decide which tricks to emphasize or suppress.
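To give a feel for what this looks like in code, here is a minimal sketch, assuming gurobipy and a hypothetical model file model.mps; the parameter shown is just an illustration, not a recommendation:

```python
import gurobipy as gp

# Solve the same model twice: once with defaults, once with one non-default parameter.
baseline = gp.read("model.mps")      # hypothetical model file
baseline.optimize()

tuned = gp.read("model.mps")
tuned.Params.MIPFocus = 3            # illustrative choice: focus on moving the bound
tuned.optimize()

print(f"default: {baseline.Runtime:.1f}s vs. tuned: {tuned.Runtime:.1f}s")
```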
The prize?
A Wiener Sachertorte — the iconic Viennese chocolate cake from the original imperial confectionery.
The task?
Find the best set of non-default solver parameters to make Gurobi solve a given problem as fast as possible.
I was intrigued. And, admittedly, very motivated by the cake ;)
The challenge had an interesting twist: your submission would be evaluated based on the average runtime across three runs, each with a different random seed.
That detail might sound minor, but it’s brilliant. Solvers often use randomness internally — for example, when exploring alternative solution paths. The same model can take wildly different amounts of time to solve depending on the random seed.
By enforcing multiple runs, Gurobi cleverly made a point that many practitioners overlook: never judge performance based on a single run.
Even experienced optimizers (including me!) were reminded that “a few quick tests” can be dangerously misleading.
Next time I benchmark solver performance, I’ll make sure to use multiple seeds — not just one.
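To make that concrete, here is a rough sketch of a multi-seed benchmark, assuming gurobipy, a hypothetical model file model.mps, and a hypothetical helper function; Gurobi's Seed parameter is what varies between the runs:

```python
import gurobipy as gp

def avg_runtime(path, params, seeds=(0, 1, 2)):
    """Solve the model once per random seed and return the mean runtime."""
    runtimes = []
    for seed in seeds:
        m = gp.read(path)                 # fresh model for every run
        m.Params.Seed = seed              # only the seed changes between runs
        for name, value in params.items():
            m.setParam(name, value)       # apply the candidate parameter set
        m.optimize()
        runtimes.append(m.Runtime)        # Gurobi's reported wall-clock time
    return sum(runtimes) / len(runtimes), runtimes

# Example: mean, runs = avg_runtime("model.mps", {"MIPFocus": 2})
```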
I picked my model and started experimenting. On average, the default parameters gave a runtime of about 200 seconds, but individual runs varied dramatically — from 75 to 330 seconds.
To understand what was happening, I looked at Gurobi’s detailed log files, which track the progress of the solver. Here’s roughly the pattern I got to see:

The blue line shows the best feasible solution found so far.
The red line represents the lower bound, the best objective value that could still be achievable given what the solver has proven so far.
The red shaded area between them is the optimality gap.
You can see that the optimal solution is found early — but proving that it’s truly optimal takes a long time. Most of the runtime is spent narrowing that red gap.
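If you want this picture programmatically rather than by parsing log files, a solver callback can record the same two curves. A minimal sketch, assuming gurobipy and a hypothetical model file model.mps:

```python
import gurobipy as gp
from gurobipy import GRB

history = []  # (runtime, incumbent objective, best bound) samples

def record_progress(model, where):
    # Whenever the MIP engine reports progress, sample the incumbent and the bound.
    if where == GRB.Callback.MIP:
        history.append((
            model.cbGet(GRB.Callback.RUNTIME),
            model.cbGet(GRB.Callback.MIP_OBJBST),   # blue line: best solution so far
            model.cbGet(GRB.Callback.MIP_OBJBND),   # red line: best proven bound
        ))

m = gp.read("model.mps")  # hypothetical model file
m.optimize(record_progress)
```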
That observation hit me:
If the optimal solution is found early, why spend minutes proving it’s optimal to many digits of precision?
By default, Gurobi uses a relative optimality gap tolerance (MIPGap) of 1e-4, meaning it won't stop until it has proven that no solution more than 0.01% better exists. But in many real-world problems, especially in chemical or production engineering (my academic background), our models are simplifications of reality. A precision of 1% or even 5% is often all we can justify.
So, why chase perfection that the model itself can’t reflect?
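For illustration, relaxing that tolerance is a one-liner; assuming gurobipy and a hypothetical model file, telling Gurobi to stop at a 1% relative gap looks like this (not what I submitted, as explained below):

```python
import gurobipy as gp

m = gp.read("model.mps")     # hypothetical model file
m.Params.MIPGap = 0.01       # accept any solution proven within 1% of the bound
m.optimize()
```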
I briefly considered submitting a “cheeky” entry that only loosened the optimality tolerance, just enough to finish almost instantly. But I decided against it; that felt too easy for a professional challenge, and I had no information about how much precision the model actually required. Instead, I tuned several parameters to make the solver genuinely faster under the same precision requirements.
When the results were announced, Gurobi’s Mario Ruthmair showed a plot similar to the one above and mentioned, smiling, that someone had tried exactly what I’d first thought of — changing the tolerance. At that moment, I thought I was out of the running. I had probably overcomplicated things while someone else had gone for the simple trick.
Then Mario added, still amused, that he'd decided to disqualify that entry because it didn't meet the required optimality criterion. As he continued, he mentioned in passing that a 'very good solution' had finished in “something like 45 seconds.”
My final run? 45.04 seconds.
I was back in the game.

Yes — I got to walk up and claim the Sachertorte. Here’s the combination of solver parameters that earned it:
The MIPFocus parameter tells Gurobi what to focus on: finding feasible solutions, proving optimality, or tightening the lower bound. I started with “tighten the lower bound,” since that seemed to be the bottleneck. Two runs solved in just 20 seconds, but one failed to converge even after 10 minutes. Too risky. So I switched to “prove optimality,” a more balanced focus. That gave robust, consistently fast results.
The Cuts parameter controls cutting planes: mathematical shortcuts that help eliminate unpromising parts of the search space. I maxed out cut aggressiveness and saw an additional performance boost.
The Symmetry parameter lets Gurobi spend more effort detecting and breaking symmetries (essentially equivalent parts of the model that cause redundant work), which squeezed out a bit more performance. This was actually an idea my AI assistant suggested when I got frustrated that the other parameters could not deliver more, and it worked!
The Heuristics parameter controls how much effort goes into finding good feasible solutions early. Since my model already produced one quickly, I reduced this effort from the default 5% of runtime to 0%. Disabling heuristics entirely didn't hurt; the solution was found early anyway.
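Putting it together, and assuming gurobipy plus my reading of the settings above (MIPFocus=2 for proving optimality, Cuts=3 maxed out, Symmetry=2 for aggressive detection, Heuristics=0), the combination looks roughly like this:

```python
import gurobipy as gp

m = gp.read("model.mps")     # hypothetical model file
m.Params.MIPFocus = 2        # focus on proving optimality
m.Params.Cuts = 3            # most aggressive cut generation
m.Params.Symmetry = 2        # aggressive symmetry detection
m.Params.Heuristics = 0.0    # spend no time on primal heuristics (default is 0.05)
m.optimize()
```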
Winning the Sachertorte was fun, but the real takeaway was deeper.
It reminded me that solver performance should never be judged from a single run, and that the precision we demand from a solver should match the precision our models can actually justify.
The Gurobi team noted that using too many non-default parameters can be risky in real-world deployments. That’s true — but in this context, over-tuning was the point. And it paid off sweetly.
When I got home, my family was delighted that, for once, I didn’t just bring back “nerdy insights” from a conference — I brought back cake.
Thanks to the Gurobi team for a fantastic event, a clever challenge, and a truly authentic prize.


