Sketcher performance - where do those CPU cycles go.

About the development of the Part Design module/workbench. PLEASE DO NOT POST HELP REQUESTS HERE!
User avatar
DeepSOIC
Veteran
Posts: 7896
Joined: Fri Aug 29, 2014 12:45 am
Location: used to be Saint-Petersburg, Russia

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Wed Dec 21, 2022 11:35 pm eliminate the "impedance mismatch" between large linear dimensions vs angles in the (-pi, pi) range, leading to fudge factors that don't quite work
okay, i see you already know quite a bit about this =) , I hope you'll actually fix it.
acolomitchi
Posts: 34
Joined: Tue Dec 06, 2022 1:41 am
Location: Melbourne, Australia
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Thu Dec 22, 2022 11:05 am
acolomitchi wrote: Wed Dec 21, 2022 11:35 pm I'll try to define the "constraint errors" and their gradients into "square distance errors" wherever they are defined in terms of "distance errors".
I'm not 100% sure what you are up to, but be careful with it.
1) if the gradient of error function zeroes out where error zeroes out - that is bad. It causes trouble with convergence, solver precision and dof counting. A "classic" case of such situation, which can be created with just the UI, is constraining a point to an intersection of tangent things (point on circle + point on line + tangent between line and circle).
Oh, wow, then I can't do it properly, at least not in those cases. Totally different from the error-minimization algos based on Hermitians.
Thank you, I'll start with those first and see what happens - if it goes sour, at least I won't waste heaps of effort.
DeepSOIC wrote: Thu Dec 22, 2022 11:05 am 2) scales of error functions should be comparable. At the moment they are not (some error functions are effectively millimeters, while others are radians). This causes troubles with sketches of large geometric size (a few meters and up). It'd be great if you can fix it (i began fixing it, that led me to try and rewrite solver architecture completely, which i did half-way and then abandoned).
I might have some ideas that could work just at the level of the constraints, without touching the solver.
  1. If I go with "angle => distance" for the errors, I'll probably derive some distance from the elements in the constraint and return the length of an "error chord" (take a circle whose radius is the derived distance, use the error angle, and return the chord length).
  2. If I go with "angle => sq. distance", I might use differences in the areas of triangles formed from the segments involved in the constraint. E.g. AngleViaPoint - one point, two segments, one angle - defines some triangles; if the triangles are picked/built well, a zero error (the segment supports cross at the point and form the prescribed angle) should map to those areas becoming zero and the triangles degenerating.
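Idea (1) can be sketched as a tiny helper (hypothetical code, not part of FreeCAD; the function name and the notion of a "representative radius" are my assumptions): the angular error is mapped onto the chord it subtends on a circle of representative size, so the result carries distance units and is zero exactly when the angle error is zero.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of the "error chord" idea: convert an angular error
// (radians) into a length by taking the chord it subtends on a circle whose
// radius is some representative distance derived from the constrained
// geometry. chord = 2*R*sin(err/2); for small errors this is ~ R*err.
double chordLengthError(double representativeRadius, double errAngle)
{
    return 2.0 * representativeRadius * std::sin(errAngle / 2.0);
}
```

For a 100 mm representative radius and a 0.001 rad error the chord is about 0.1 mm, i.e. the error now scales with the geometry instead of staying a dimensionless angle.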

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Thu Dec 22, 2022 5:28 pm If I am to go with "angles=>distance" in re errors, I'll probably derive some distance from the elements in the constraint and return the length of the "error chord" (circle with radius the derived distance, use the error angle, return the chord len)
I think this is approximately what i did in the thread you mentioned. Maybe it is still worth trying.

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Thu Dec 22, 2022 6:10 pm
acolomitchi wrote: Thu Dec 22, 2022 5:28 pm If I am to go with "angles=>distance" in re errors, I'll probably derive some distance from the elements in the constraint and return the length of the "error chord" (circle with radius the derived distance, use the error angle, return the chord len)
I think this is approximately what i did in the thread you mentioned. Maybe it is still worth trying.
"Approximately" is a good qualifier :)

The first part of the ConstraintAngleViaPoint::error()

Code: Select all

    double ang=*angle();
    DeriVector2 n1 = crv1->CalculateNormal(poa);
    DeriVector2 n2 = crv2->CalculateNormal(poa);

    //rotate n1 by angle
    DeriVector2 n1r (n1.x*cos(ang) - n1.y*sin(ang), n1.x*sin(ang) + n1.y*cos(ang) );
Now, if n1 is the non-normalized normal vector, one would have some idea of the size. The problem is that some curves return a non-normalized normal (Line is fine; Circle not quite, but the modulus of its normal is still in distance units), while others do not (the conics return a normalized/dimensionless normal, and BSpline doesn't return anything of value in 0.21).

But it doesn't matter anyway, the next line wipes out any size/distance information.

Code: Select all

    double err = atan2(-n2.x*n1r.y+n2.y*n1r.x, n2.x*n1r.x + n2.y*n1r.y);
    //essentially, the function is equivalent to atan2(n2)-(atan2(n1)+angle). The only difference is behavior when normals are zero (the intended result is also zero in this case).
    return scale * err;
(with the note that scale seems to be 1.0 all the time; maybe it was added for future extensions)
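The size-wiping can be demonstrated standalone (Vec2 is a stand-in for planegcs's DeriVector2, and angleError reproduces only the formula above, nothing else): scaling both normals by any positive factor leaves the result bit-for-bit identical, because atan2(k*y, k*x) == atan2(y, x) for any k > 0.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the planegcs DeriVector2 (x/y components only).
struct Vec2 { double x, y; };

// Same formula as in ConstraintAngleViaPoint::error(): rotate n1 by 'ang',
// then take the signed angle from the rotated n1 to n2 via atan2.
double angleError(Vec2 n1, Vec2 n2, double ang)
{
    Vec2 n1r{n1.x * std::cos(ang) - n1.y * std::sin(ang),
             n1.x * std::sin(ang) + n1.y * std::cos(ang)};
    return std::atan2(-n2.x * n1r.y + n2.y * n1r.x,
                       n2.x * n1r.x + n2.y * n1r.y);
}
```

Multiplying both normals by 1000 (i.e. a sketch 1000x larger) produces exactly the same error value - the magnitudes cancel inside atan2.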

The possible (non solver-intrusive) approaches that I see:
  1. adjust the geometries code to always provide normals that have their dimensional size (and continue using the ConstraintAngleViaPoint constraint); or
  2. write specific constraints that deal with the special types of curves and special type of constraints (tangent on point, normal on point, whatever) and know how to deal with the specific of the two curves (and get rid of ConstraintAngleViaPoint constraint, as being not specific enough); or
  3. ask any Curve class to implement one more method (say, getRepresentativeDistance() const) that ConstraintAngleViaPoint can use to transform the error angle into an error distance
Does anyone see other approaches?

Solution 1 is "vague" - the requirement of "sized normals" can't be enforced/tested by the ConstraintAngleViaPoint code, which can lead to a fall-back into the same bad habits if new types of curves get added.

Solution 2 is the most C++-ish approach, but it may lead to an explosion of specialized constraint classes (not that there aren't enough already).

Solution 3 is sorta hackish, but it is the most conservative with regard to new/modified code. So I'll go down that path in my fork and see how it works.
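A minimal sketch of what (3) could look like (the class shapes, the default value and the conversion helper are my assumptions; only the method name getRepresentativeDistance comes from the proposal above):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical sketch of approach (3): curves expose a characteristic length
// that angle-based constraints can use to convert radians into millimeters.
struct Curve {
    virtual ~Curve() = default;
    // Rough size of the curve; exactness is not required.
    virtual double getRepresentativeDistance() const { return 1.0; }
};

struct Circle : Curve {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double getRepresentativeDistance() const override { return radius; }
};

// Angular error rescaled into distance units using the larger of the two
// curves' representative distances (small-angle chord approximation).
double angleErrorAsDistance(const Curve& c1, const Curve& c2, double errAngle)
{
    double d = std::max(c1.getRepresentativeDistance(),
                        c2.getRepresentativeDistance());
    return d * errAngle;
}
```

The point of the sketch: the same 0.001 rad angular error becomes a small distance error for a 10 mm circle and a large one for an 800 m circle, which is exactly the scale sensitivity the solver's absolute tolerance needs.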
abdullah
Veteran
Posts: 4935
Joined: Sun May 04, 2014 3:16 pm
Contact:

Re: Sketcher performance - where do those CPU cycles go.

Post by abdullah »

I can only double down on the word of caution. Not that I am trying to discourage you - au contraire, I would love to see improvements! But the solver is rather sensitive to gradients and errors. The solver is about two things: making things go to the right position (errors and gradients affect convergence speed), and providing meaningful diagnostics (rank, redundant/conflicting constraints, restricted parameters). If gradients are zero where there is still a dependency, the latter breaks down.

While my algebra is passable, my infinitesimal calculus capabilities are worse than they should be. DeepSOIC has always been a good reference for me there. I am very happy to see him around here and certainly he can be of much more use than me (should he have time for it).

That said, and my own limitations exposed: (2) leads to an explosion of classes, one for each pair of curves. That was actually the way it worked before DeepSOIC invented AngleViaPoint. I would rather avoid (2), if only because it would lead to a substantial amount of code to maintain (and would force new code to be created every time a new curve is implemented).

I am not sure what @DeepSOIC may think of it (and certainly he knows better than me).

I am unsure if correct scaling of the vectors (getRepresentativeDistance()) is always necessary.

If it is always necessary, then (1) could work, and the effort is in ensuring future developers understand the need to provide correct scaling (so it is mostly a documentation effort, aside from the correction effort). In the end, (3) will also depend on future developers understanding what they have at hand.

If this scaling is not always necessary, then (3) does not look hackish to me; rather, it looks like the way to go. It provides the flexibility of supplying the right scaling while still allowing a curve to provide only a vector (such as a normalized vector), and the function that returns the scaling shows the intent (some documentation effort may be needed, as well as harmonization of what kind of vectors should be provided, e.g. normalized).

For sure, feel free to experiment and let us know :)

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

abdullah wrote: Sat Dec 24, 2022 9:51 am I can only double down on the word of caution. Not that I am trying to discourage you. Au contraire, I would love to see improvements!! But the solver is rather sensitive to gradients and errors.
I know that (but please don't ask me how, I'll tell you further down anyway :mrgreen:)
abdullah wrote: Sat Dec 24, 2022 9:51 amThe solver is about two things, making things go to the right position (errors and gradients affect convergence speed), and have meaningful diagnosis (rank, redundant/conflicting constraints, restricted parameters). If gradients are zero (where there is still a dependency), the latter breaks down.
I imagine I could try plugging in some other solvers, but none that I know of could perform the redundant/conflicting-constraint detection (and I know too little of the current one to touch it). So this will remain just in my imagination (I'd be a fool to disappoint myself by trying and failing :mrgreen:, especially since I don't have enough time to dig as deep as my perfectionist side would like).
abdullah wrote: Sat Dec 24, 2022 9:51 amWhile my algebra is passable, my infinitesimal calculus capabilities are worse than they should be. DeepSOIC has always been a good reference for me there. I am very happy to see him around here and certainly he can be of much more use than me (should he have time for it).
Raising my hat to both of you - I spent enough time looking into the constraints code and on the forum today to know that respect for the knowledge and the work is due.
abdullah wrote: Sat Dec 24, 2022 9:51 amThat said and my own limitations exposed: (2) leads to an explosion of classes for each pair of curves. It was actually the way before DeepSOIC invented AngleViaPoint. I would rather avoid (2), if only, because it would lead to a substantial amount of code to maintain (and forces new code to be created every time a new curve is implemented).
My gut feeling: the solver would have converged faster with specialized constraints (if only because many of the tangent/normal constraints would deal with nicer angles). I can relate, though, to the increase in code complexity, in the context of not enough time/heads/hands to do it and do it coherently.

abdullah wrote: Sat Dec 24, 2022 9:51 am I am unsure if correct scaling of the vectors (getRepresentativeDistance()) is always necessary.
As of now, I'm not sure either. I implemented code to return a measure of the curve size (it doesn't need to be very exact, it's a dirty hack anyway) and the code that consumes it (ConstraintAngleViaPoint and ConstraintL2LAngle). I got myself a Python script to build a sketch that uses both but still leaves 3 DoF free, so I can shake the geometry around.

Functionally, mixed results:
  • on usual sizes (80-100 mm), it works well. But then again, so do the unmodified constraints
  • on sizes of 800 meters (yes, meters) - I had to scale the "representative size" down by 0.01 to make messages like

    Code: Select all

    App.getDocument('Unnamed').getObject('Sketch').movePoint(0,2,App.Vector(-237772.343750,783827.062500,0),0)
    20:00:18  Drag point: Not able to move point with the id and type: (0, 2)
    less frequent. They still happen now and then, and occasionally the solver fails to converge during a drag gesture - grab another point or curve to drag from, and things fall back into place (until the next time it happens)
(see? Told you I'd tell you how I got to know :mrgreen:)

Performance-wise (number of iterations until it converges)? I don't know yet. I'll need to identify a place where I can insert a log entry or a "cout <<" or something to let me know the iterations-to-convergence count (and maybe the quality of the solution).
I'll have a look tomorrow to identify such a place (any hint will be appreciated)
abdullah wrote: Sat Dec 24, 2022 9:51 amIf it is always, then (1) could work and the effort is in ensuring future developers understand the need to provide correct scaling (so it is mostly a documentation effort, aside of the correction effort).
It may be worth it, but then again it may not.

Why it may not: starting from DeepSOIC's post on "errors/gradients of similar magnitudes", I fooled around with dragging circle arcs of different sizes. At large sizes (800 m), the dragging becomes less fluent, sorta choppy (though not at the level shown in DeepSOIC's post - nothing like converging to different configurations - but dragging is still a bit jumpier than when the sketch has sizes in the 10-100 mm range). And the ArcOfCircle is constrained by ConstraintCurveValue (for the start/stop angles), which relies on the curve's Value(...) method, and the latter provides exact values for the gradient.

This is to say: even with exact (i.e. not normalized) gradients, when we're dealing with large sizes the solver takes more iterations to converge - I imagine there could be a (distance-type) size limit of the sketch above which the solver will start failing to converge.

Question - is the success condition of the iterative solving absolute or relative to the sketch size? Does the solver stop once an absolute precision has been reached or is the precision relative to sketch size?
abdullah wrote: Sat Dec 24, 2022 9:51 amAt the end (3) will also depend on the future developer understanding what he or she has at hands.
Forget it, it's a hack. Good for experimentation purposes, but that's about it.

abdullah wrote: Sat Dec 24, 2022 9:51 am For sure, feel free to experiment and let us know :)
But of course. Tomorrow is another day to fool around (what else is there to do, all the shops are closed and the damn'd OCCT creates geometries with errors :mrgreen:), but I doubt there'll be other days dedicated to fooling.

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Sat Dec 24, 2022 2:17 pm Question - is the success condition of the iterative solving absolute or relative to the sketch size? Does the solver stop once an absolute precision has been reached or is the precision relative to sketch size?
it is hypot(err1, err2, ...) < convergence (convergence = 1e-10 by default; err1, err2 and so on are the error functions of the constraints). Some solvers have slightly different formulations of this condition (I think one of them tests the maximum absolute error over all constraints), but it doesn't really change the meaning.
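The stop condition described here can be sketched numerically (an illustration of the described criterion, not the actual solver code; the function name is my own):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustration of the solver's stop test as described: the l2 norm of all
// constraint errors compared against an absolute threshold (1e-10 default).
bool hasConverged(const std::vector<double>& errs, double convergence = 1e-10)
{
    double sumSq = 0.0;
    for (double e : errs)
        sumSq += e * e;               // accumulate squared errors
    return std::sqrt(sumSq) < convergence;
}
```

Note the threshold is absolute: a residual error of 1e-9 mm - one hundredth of the diameter of a hydrogen atom - fails this test regardless of whether the sketch is 10 mm or 800 m across.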

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

Have a look here, for example:
https://github.com/DeepSOIC/FreeCAD-ell ... DogLeg.cpp
It is my reimplementation of Sketcher's DogLeg solver, with slight twists due to the architecture change, but also with quite a few more comments.

Re: Sketcher performance - where do those CPU cycles go.

Post by acolomitchi »

DeepSOIC wrote: Tue Dec 27, 2022 1:10 am
acolomitchi wrote: Sat Dec 24, 2022 2:17 pm Question - is the success condition of the iterative solving absolute or relative to the sketch size? Does the solver stop once an absolute precision has been reached or is the precision relative to sketch size?
it is hypot(err1, err2, ...) < convergence (convergence = 1e-10 by default; err1 and err2 and so on are error functions of constraints). Some solvers have slightly different formulations of this condition (i think, one of them tests for maximum absolute error of all constrains), but it doesn't really change the meaning.
Ummm... so... if you're using Sketcher** for landscaping (100 m order of magnitude), construction or heavy machinery (10 m oom), machinery (1 m oom) or plastic 3D printables (0.1-0.01 m), you can rest assured that Sketcher will do its best in (attempting to) compute everything with 1 Å precision? Is my understanding correct?

If I'm correct, then I'm no longer surprised that the Sketcher GUI gets jumpy when dragging large arcs-of-circles, or that it occasionally fails to converge.


For the academically inclined, an oldie-but-goldie on the perils of FP arithmetic: "What Every Computer Scientist Should Know About Floating-Point Arithmetic" - David Goldberg

For the experimentalists in what hypot(err1, err2, ...) may do to you and how much trust one can put in it: the attached is my fooling around with the uncertain precision of Sketcher-themed hypot (umm... almost) computations, done in two ways, and the difference between the two results as the CPU sees it. Personally, when it comes to hypot/squares, I see anything beyond double 1e-8 relative precision as questionable.
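A hedged illustration of this kind of experiment (this is not the attached Dummy.cpp; the helper names are mine): computing a hypotenuse via std::hypot versus the naive sqrt(x*x + y*y) and comparing the two results.

```cpp
#include <cassert>
#include <cmath>

// Two ways of computing the same hypotenuse. std::hypot avoids the
// intermediate overflow/underflow of the naive form, so the two results
// can differ in the last bits - or differ catastrophically at the extremes.
double naiveHypot(double x, double y)
{
    return std::sqrt(x * x + y * y);
}

// Relative difference between the library hypot and the naive computation.
double relativeDifference(double x, double y)
{
    double a = std::hypot(x, y);
    double b = naiveHypot(x, y);
    return std::fabs(a - b) / std::fmax(a, b);
}
```

For moderate inputs the two agree to near machine precision; for huge ones the naive form overflows to infinity while std::hypot stays finite.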

---
** Sketcher as a geometry engine is pretty decent - it can save a lot of formula scribbling on paper to get an answer. Or it is so when its solver doesn't fail to converge :mrgreen: (the other rant I feel growing inside me is the use of "always distances, never displacements" - like when you set your "X-dist" constraint for your geometry to a positive value no matter whether it's on the left or the right of the Y axis. Then you are living with the sword of Damocles above your head, praying that a drag on a remotely related DoF won't slide the solver into the other local minimum and decide the best way to satisfy that constraint is by mirroring the geometry across the Y axis)
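The "distances vs displacements" complaint can be made concrete with a toy error function (hypothetical code, not Sketcher's): an unsigned X-distance error |x| - d has two zeros, x = +d and x = -d, so the solver can legitimately converge on either side of the Y axis, while a signed displacement x - d has only one.

```cpp
#include <cassert>
#include <cmath>

// Unsigned "X-dist": the error vanishes both at x = +d and x = -d,
// i.e. the constraint has two solutions the solver can fall into.
double unsignedXDistError(double x, double d) { return std::fabs(x) - d; }

// Signed displacement: a single zero at x = d; no mirrored solution.
double signedXDistError(double x, double d)  { return x - d; }
```

The mirrored solution is what a drag on a remotely related DoF can slide the solver into when the unsigned form is used.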
Attachments: Dummy.cpp (2.59 KiB)

Re: Sketcher performance - where do those CPU cycles go.

Post by DeepSOIC »

acolomitchi wrote: Fri Dec 30, 2022 7:50 pm Ummm... so... if you're using Sketcher** for landscaping (100 m order of magnitude), construction or heavy machinery (10 m oom), machinery (1 m oom) or plastic 3D printables (0.1-0.01 m), you can rest assured that Sketcher will do its best in (attempting to) compute everything with 1 Å precision? Is my understanding correct?
yes, that is unfortunately the truth, as far as i know. You can enable "advanced solver control" in preferences, where this tolerance value can be manually adjusted, but there are no instructions or recommendations for users to do so.

acolomitchi wrote: Fri Dec 30, 2022 7:50 pm Personally, when if comes to hypot/square, I see anything beyond double(1e-8) relative precision as questionable.
this is an absolute comparison, relative precision is mostly irrelevant.

BTW, there are other interesting precision-related problems in FreeCAD. For example, geometric primitives are by default created with a tolerance of Precision::Confusion, which is 1e-7 mm. This is used to test whether geometry is coincident or intersecting (e.g. in boolean operations), and it can cause trouble for very big things (space-elevator-scale models, say, are impossible unless you alter this tolerance; see this piece of discussion for example).
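The scale problem with a fixed 1e-7 mm tolerance can be seen directly from double spacing (an illustrative computation, not OCCT code): at space-elevator coordinates the gap between adjacent representable doubles already exceeds the coincidence tolerance.

```cpp
#include <cassert>
#include <cmath>

// Spacing between a double x and the next representable double above it,
// i.e. the smallest coordinate change the number format can even express
// at that magnitude.
double ulpAt(double x)
{
    return std::nextafter(x, INFINITY) - x;
}
```

At 100 mm the spacing is about 1.4e-14 mm, far below Precision::Confusion (1e-7 mm); at 1e11 mm (100 000 km, space-elevator scale) it is about 1.5e-5 mm, two orders of magnitude coarser than the tolerance itself, so coincidence tests at that scale become meaningless.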

acolomitchi wrote: Fri Dec 30, 2022 7:50 pm (the other rant I feel growing inside me is the use of "always distances, never displacements" - like when you set your "X-dist" constraint for you geom to a positive value no matter if it's on the left or the right relative to Yaxis. Then you are living with the sword of Damocles above your head, praying a drag on a remotely related DoF won't slide the solver into the other local minimum and decide the best way to solve that constraint is by mirroring it across Y axis)
The X-dist and Y-dist flipping in particular was actually fixed some long time ago, by me (I think). The minus sign is automatically removed by swapping the points when the constraint is created.