Sketcher performance - where do those CPU cycles go.

acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Fri Dec 30, 2022 10:27 pm
acolomitchi wrote: Fri Dec 30, 2022 7:50 pm Personally, when it comes to hypot/square, I see anything beyond double(1e-8) relative precision as questionable.
this is an absolute comparison, relative precision is mostly irrelevant.
Absolute comparison is much worse, as the error scales with the magnitude of the terms involved.
That is to say: in the unit (1.0) range, the absolute imprecision of going through square/sqrt is around 1e-8; at a magnitude of A, the absolute imprecision is going to be around A*1e-8. So, at an order of magnitude of 1000, one cannot trust the decimal digits below 1e-5 - they may be good, but there will be enough cases in which they are garbage.
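Whatever the exact error level (it is debated further down the thread), the scaling half of the claim is easy to check numerically: the absolute error of a sqrt/square round trip grows with the magnitude of the operand, while the relative error stays roughly flat. A plain-Python sketch:

```python
# Absolute vs relative error of a sqrt/square round trip, at growing magnitudes.
import math

for scale in (1.0, 1e3, 1e6, 1e9):
    x = math.pi * scale
    abs_err = math.sqrt(x) ** 2 - x   # absolute round-trip error, grows with x
    rel_err = abs_err / x             # relative error, roughly magnitude-independent
    print(f"x = {x:.3e}  abs_err = {abs_err:+.3e}  rel_err = {rel_err:+.3e}")
```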
DeepSOIC wrote: Fri Dec 30, 2022 10:27 pm BTW, there are other interesting precision-related problems in freecad. For example geometric primitives are by default created with tolerance of precision::confusion, which is 1e-7 mm. This is used to test if geometry is coincident or intersecting (e.g. in boolean operations), and can cause trouble for very big things (like, space-elevator-scale models are impossible, unless you alter this tolerance; see this piece of discussion for example).
Then altering the tolerance is a better path to follow - as in, working with relative tolerances. But then again, this is no silver bullet either - that's just the nature of the beast.

If one really needs 1e-10 absolute precision, the only way to obtain it in most cases is to perform the computations in "long double" (i.e. work with a finer granularity of the real-number axis).
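Pure Python does not expose `long double`, but the "finer granularity of the real-number axis" idea can be sketched with the stdlib decimal module standing in for it (in C/C++ one would switch the solver's scalar type instead):

```python
# Same sqrt/square round trip in 64-bit doubles vs a ~30-digit "finer grid".
# decimal stands in for long double here; it is much slower, of course.
import math
from decimal import Decimal, getcontext

x = 1000.0
double_err = math.sqrt(x) ** 2 - x      # double-precision round-trip error

getcontext().prec = 30                  # ~30 significant digits
d = Decimal(x)
decimal_err = d.sqrt() ** 2 - d         # same round trip on the finer grid

print(double_err, decimal_err)
```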
DeepSOIC wrote: Fri Dec 30, 2022 10:27 pm
acolomitchi wrote: Fri Dec 30, 2022 7:50 pm (the other rant I feel growing inside me is the use of "always distances, never displacements" - like when you set your "X-dist" constraint for you geom to a positive value no matter if it's on the left or the right relative to Yaxis. Then you are living with the sword of Damocles above your head, praying a drag on a remotely related DoF won't slide the solver into the other local minimum and decide the best way to solve that constraint is by mirroring it across Y axis)
X-dist and Y-dist flipping in particular were actually fixed some long time ago, by me (i think). The minus sign is automatically removed by swapping the points when the constraint is being created.
Good to hear. FYI, I did run into such flipping behaviours when I was fooling around with line-to-line angle constraints. Actually, I'm not quite sure which constraint was flipping sides - there were more constraints in the sketch than the line-to-line angle one. I'll try to find some time to get a repro.
User avatar
DeepSOIC
Veteran
Posts: 7896
Joined: Fri Aug 29, 2014 12:45 am
Location: used to be Saint-Petersburg, Russia

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Sat Dec 31, 2022 1:18 am That is to say: in the unit (1.0) range, the absolute imprecision of going through square/sqrt is around 1e-8;

Code:

>>> from math import sqrt, pi
>>> sqrt(pi)**2 - pi
-4.440892098500626e-16
seems like you're about 5 orders of magnitude off
acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Sat Dec 31, 2022 11:43 am
acolomitchi wrote: Sat Dec 31, 2022 1:18 am That is to say: in the unit (1.0) range, the absolute imprecision of going through square/sqrt is around 1e-8;

Code:

>>> from math import sqrt, pi
>>> sqrt(pi)**2 - pi
-4.440892098500626e-16
seems like you're about 5 orders of magnitude off

Code:

>>> import math
>>> def f(a):
...     return (math.sqrt(math.pi) + a)*(math.sqrt(math.pi) - a) + a*a - math.pi
...
>>> f(1)
-4.440892098500626e-16
>>> f(10)
3.552713678800501e-15
>>> f(100)
3.304023721284466e-13
>>> f(1000)
-5.126565838509123e-12
>>> 
¯\_(ツ)_/¯
BTW, a Happy New Year - and a better one (than 2022), too.
user1234
Veteran
Posts: 3350
Joined: Mon Jul 11, 2016 5:08 pm

Re: Sketcher performance - where do those CPU cycles go.

Post by user1234 »

acolomitchi wrote: Sat Dec 31, 2022 1:18 am Then altering the tolerance is a better path to follow - as in, working with relative tolerances. But then again, this is no silver bullet either - that's just the nature of the beast.

If one really needs 1e-10 absolute precision, the only way to obtain it in most cases is to perform the computations in "long double" (i.e. work with a finer granularity of the real-number axis).
There are two CAD kernel precision concepts out there, relative and absolute. OCCT is absolute.

While reducing the precision per part seems comprehensible, that part used in combination within an assembly can cause issues. I worked with a relative kernel before, and on big assemblies with parts of different absolute sizes it is error-prone. So an absolute kernel with 1e-7 mm may seem suboptimal, especially at big scales, but at least you get predictable results. (Besides, I know of single objects that are >10 m, where the CAD/CAM output has to be +/-0.001 mm (to get real-life results better than +/-0.01 mm); there a relative kernel is on the edge.)

Greetings
user1234
acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

user1234 wrote: Sat Dec 31, 2022 4:02 pm
acolomitchi wrote: Sat Dec 31, 2022 1:18 am Then altering the tolerance is a better path to follow - as in, working with relative tolerances. But then again, this is no silver bullet either - that's just the nature of the beast.

If one really needs 1e-10 absolute precision, the only way to obtain it in most cases is to perform the computations in "long double" (i.e. work with a finer granularity of the real-number axis).
There are two CAD kernel precision concepts out there, relative and absolute. OCCT is absolute.
Thanks for sharing the insight.
What I'm missing from the picture is what the way the sketcher's solver works has to do with the geometry kernel.
I mean, OK, the sketch may end up respecting the constraints more loosely (as in 1e-7 instead of 1e-10 tolerance, so that a constrained rectangle may become a wee bit out of square), but I assume the topology of the sketched shapes/contours is still preserved. Or is my assumption invalid?
User avatar
DeepSOIC
Veteran
Posts: 7896
Joined: Fri Aug 29, 2014 12:45 am
Location: used to be Saint-Petersburg, Russia

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

Ideally, we should derive solution_accuracy of solved sketch from constraint errors, and write out shape tolerances as something like 3*(solution_accuracy). I'm not sure if it's possible for sketches with remaining degrees of freedom, though. 1e-7 mm is not hard-wired into OCC, it's just a default that is usually good enough.
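The idea can be sketched in a few lines. This is hypothetical pseudocode, not actual FreeCAD/OCC API: `constraint_errors` stands in for the per-constraint residuals left after solving, and the 1e-7 floor is the usual OCC default mentioned above.

```python
# Hypothetical sketch: take the worst remaining constraint residual as
# solution_accuracy and write the shape tolerance as 3x that, never going
# below the usual 1e-7 mm default.

def derive_shape_tolerance(constraint_errors, factor=3.0, floor=1e-7):
    solution_accuracy = max((abs(e) for e in constraint_errors), default=0.0)
    return max(factor * solution_accuracy, floor)

print(derive_shape_tolerance([1e-11, -3e-10, 2e-12]))   # well-solved: floor wins
print(derive_shape_tolerance([2e-6, -5e-7]))            # sloppy solve: 3x worst residual
```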
acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Sun Jan 01, 2023 2:10 am Ideally, we should derive solution_accuracy of solved sketch from constraint errors, and write out shape tolerances as something like 3*(solution_accuracy). I'm not sure if it's possible for sketches with remaining degrees of freedom, though. 1e-7 mm is not hard-wired into OCC, it's just a default that is usually good enough.
A cheap criterion to stop the iterations is along the lines of "stop when the rate of convergence drops below a limit" - on the grounds of diminishing returns: too much effort to squeeze the improvements into the least significant bits.

The advantage of the criterion is that it's insensitive to the dimensional scale of the sketch.

The disadvantage of the criterion: it's insensitive to the dimensions of the sketch and, as a consequence, to the actual magnitude of the error (e.g. it is prone to giving up early in the case of ill-conditioned Jacobians); but then again, in such cases, persisting with extremely slow improvements is not going to make for a better user experience anyway. Which is to say, the "ill-conditioned Jacobian" cases need to be mitigated by other means (i.e., if possible, have at least one way to predict slow convergence other than the "just go ahead and deal with the slow or absent convergence when it actually happens" approach).
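The criterion can be sketched on a toy problem. Here one Newton step toward sqrt(2) stands in for one solver iteration, and |x*x - 2| stands in for the constraint error; the real solver works on a whole vector of residuals:

```python
# Stop when the error stops shrinking fast enough, regardless of its
# absolute level (the "diminishing returns" criterion).

def solve(x, min_shrink=0.5, max_iter=50):
    err = abs(x * x - 2.0)
    for i in range(1, max_iter + 1):
        x = 0.5 * (x + 2.0 / x)            # one Newton step toward sqrt(2)
        new_err = abs(x * x - 2.0)
        # stop if the error no longer shrank by at least min_shrink
        if new_err == 0.0 or new_err > min_shrink * err:
            break
        err = new_err
    return x, i

x, iters = solve(1.0)
print(x, iters)
```

Note that the stop condition never mentions an absolute tolerance, which is exactly the scale-insensitivity discussed above.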
User avatar
DeepSOIC
Veteran
Posts: 7896
Joined: Fri Aug 29, 2014 12:45 am
Location: used to be Saint-Petersburg, Russia

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Sun Jan 01, 2023 3:21 am A cheap criterion to stop the iterations is along the lines of "stop when the rate of convergence drops below a limit" - on the grounds of diminishing returns: too much effort to squeeze the improvements into the least significant bits.
I have an impression that usually, if the solver has reached say 1e-7 relative precision, the next iteration is very likely to bring it all the way to 1e-14, which is the numerical precision limit, and further iterations will offer no improvement at all. This is because at that point, the response change(dofs) -> change(errs) is very linear, and some of the solvers do just that - solve a linear-response problem. So if you are looking to improve solver speed by changing the termination condition, i doubt it will help.
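The "one more iteration takes you from ~1e-7 straight to the numerical floor" behaviour is easy to reproduce on a toy problem, since Newton steps converge quadratically near the root (the error roughly squares each iteration until it hits the double-precision floor):

```python
# Error of f(x) = x*x - 2 after each Newton step: quadratic convergence
# squares the error each iteration until the ~1e-16 floor is reached.
x = 1.4
errors = []
for _ in range(5):
    x = 0.5 * (x + 2.0 / x)
    errors.append(abs(x * x - 2.0))
print(errors)   # roughly: 2e-4, 5e-9, then stuck at the double-precision floor
```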

By the way, you can type App.ActiveDocument.Sketch.calculateConstraintError(index) into the Python console to check the values of the error functions. They can be slightly off from what the solver saw, as the solver geometry is rebuilt from OCC geometry for these calls, and summed with hypot() if the GUI constraint is internally more than one constraint, but they may still offer some insight.
user1234
Veteran
Posts: 3350
Joined: Mon Jul 11, 2016 5:08 pm

Re: Sketcher performance - where do those CPU cycles go.

Post by user1234 »

acolomitchi wrote: Sat Dec 31, 2022 5:18 pm I mean, OK, the sketch may end up respecting the constraints more loosely (as in 1e-7 instead of 1e-10 tolerance, so that a constrained rectangle may become a wee bit out of square), but I assume the topology of the sketched shapes/contours is still preserved. Or is my assumption invalid?
In FreeCAD / OCCT? I do not really know; often, when you have too-small elements, you get errors, but I do not know whether they come from FreeCAD (to prevent errors) or from OCCT. In the other CAD I worked with before, with relative precision, all elements under the tolerance vanish.

Greetings
user1234
acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Sun Jan 01, 2023 1:24 pm
acolomitchi wrote: Sun Jan 01, 2023 3:21 am A cheap criterion to stop the iterations is along the lines of "stop when the rate of convergence drops below a limit" - on the grounds of diminishing returns: too much effort to squeeze the improvements into the least significant bits.
I have an impression that usually, if the solver has reached say 1e-7 relative precision, the next iteration is very likely to bring it all the way to 1e-14, which is the numerical precision limit, and further iterations will offer no improvement at all. This is because at that point, the response change(dofs) -> change(errs) is very linear, and some of the solvers do just that - solve a linear-response problem. So if you are looking to improve solver speed by changing the termination condition, i doubt it will help.
Thanks. This means the solver is likely as good as it gets within the specifics of the problem.

I haven't looked into the guts of the solver yet, even if I took a peek at the "by subsystem" solving method (System::solve(SubSystem *subsysA, SubSystem *subsysB, bool /*isFine*/, bool isRedundantsolving)). To my uneducated taste, there may be cases in which the line search could be a bit off, as the search is a forward-looking one: what happens if some floating-point errors in int status = qp_eq(B, grad, JA, resA, xdir, Y, Z); make the position of the actual minimum fall just a wee bit behind alpha = 0, and the correction vector h = x - x0; ends up zero?
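The concern can be illustrated with a toy forward-only line search in plain Python. This has nothing to do with the actual planegcs code; phi is a hypothetical 1-D slice of the error function along the search direction:

```python
# A line search that only samples alpha >= 0 stalls when rounding noise puts
# the 1-D minimum at slightly negative alpha: no forward trial improves on
# alpha = 0, so the returned step is zero and the solver stops moving.

def forward_line_search(phi, trial_alphas=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    base = phi(0.0)
    for a in trial_alphas:          # backtracking, but never behind alpha = 0
        if phi(a) < base:
            return a
    return 0.0                      # h = x - x0 stays zero

healthy = lambda a: (a - 0.3) ** 2      # true minimum ahead of alpha = 0
stalled = lambda a: (a + 1e-9) ** 2     # minimum a wee bit behind alpha = 0

print(forward_line_search(healthy))     # finds a positive step
print(forward_line_search(stalled))     # 0.0: zero correction vector
```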
DeepSOIC wrote: Sun Jan 01, 2023 1:24 pmBy the way, you can type App.ActiveDocument.Sketch.calculateConstraintError(index) into py console to check values of error functions. They can be slightly off to what the solver saw, as the solver geometry is rebuilt from occ geometry for these calls, and added up with hypot() if the gui constraint is internally more than one constraint, but may still offer some insight.
👍
Question: is there any logging mechanism one could use to peek into the evolution of the different variables during the (iterative) solving?