"GCS" Label in issues is generally a good filter:
https://github.com/FreeCAD/FreeCAD/issu ... abel%3AGCS
Here you have all kinds of issues with the solver, not only ones specific to diagnosis, so you will have to do some classifying work. Also, bear in mind that they are generally drafted from the user's perspective; the root cause may be very different from what the description suggests.
If you want to produce functional tests from sketches, there is a new tool to create macros:
https://github.com/FreeCAD/FreeCAD/issues/9683
https://github.com/FreeCAD/FreeCAD/pull/9684
"Conflict" detection, where the conflict may ultimately come from conflicting or redundant constraints, is IMO very well handled by the QR decomposition when the constraints provide appropriate divergence values.
What happens is that in some corner cases one or more divergences of the Jacobian that should not be zero become zero, because of the specific "position" of the geometry, the combination of constraints, and the way the constraint is written. This is solved by modifying the constraint: either rewriting it in another form, or by not allowing the divergence to become zero, clamping it instead to a minimum fixed value, 1e-13. This magnitude is believed to be high enough to stay above the QR pivot threshold (the minimum value not considered to be zero) and low enough not to interfere with DogLeg convergence. In practice it appears to behave as intended. Until now this has been treated as "the constraint is the problem and should be rewritten".
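As a toy illustration of the clamping idea (the constant and function names here are mine, not FreeCAD's; the real logic lives inside the C++ constraints):

```python
# Illustrative only: clamp a near-zero partial derivative so it stays
# above the QR pivot threshold without disturbing DogLeg convergence.
MIN_PARTIAL = 1e-13  # believed high enough for the pivot test, low enough for DogLeg

def clamp_partial(value, min_abs=MIN_PARTIAL):
    """Return value, but never closer to zero than min_abs (sign preserved)."""
    if abs(value) < min_abs:
        return min_abs if value >= 0.0 else -min_abs
    return value
```

So `clamp_partial(0.0)` yields `1e-13` instead of a zero that would be mistaken for a rank deficiency by the pivot test.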
When the Jacobian is not degenerate due to constraints, it is my impression that the QR decomposition does an awesome job of identifying the groups of "Conflicting constraints".
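To illustrate how a rank-revealing (column-pivoted) QR can single out dependent constraints, here is a small self-contained NumPy sketch. It is purely illustrative: FreeCAD's solver does this in C++ inside GCS with a proper LAPACK-style decomposition, and this toy version uses pivoted Gram-Schmidt instead.

```python
import numpy as np

def pivoted_rank(A, tol=1e-10):
    """Column-pivoted Gram-Schmidt QR (a stand-in for the solver's pivoted QR).
    Returns (numerical rank, indices of linearly dependent columns)."""
    A = np.array(A, dtype=float, copy=True)
    m, n = A.shape
    piv = list(range(n))
    rank = 0
    for k in range(min(m, n)):
        # pick the remaining column with the largest norm as the pivot
        norms = np.linalg.norm(A[:, k:], axis=0)
        j = k + int(np.argmax(norms))
        if norms[j - k] <= tol:
            break
        A[:, [k, j]] = A[:, [j, k]]
        piv[k], piv[j] = piv[j], piv[k]
        q = A[:, k] / np.linalg.norm(A[:, k])
        # orthogonalize the remaining columns against the pivot direction
        for c in range(k + 1, n):
            A[:, c] -= (q @ A[:, c]) * q
        rank += 1
    return rank, sorted(piv[rank:])

# Three constraints on two parameters; the third row is the sum of the
# first two, so the group is rank-deficient and one constraint is flagged
# as dependent (which one is a pivoting choice, not a unique answer).
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])  # rows = constraints, columns = parameters
rank, dependent = pivoted_rank(J.T)  # constraints become columns of J^T
```

With this Jacobian the rank comes out as 2 and exactly one constraint index is reported as dependent.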
Then we have the weakest link in the chain: identifying which constraints the user should actually remove (the "popularity contest").
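A minimal sketch of what a "popularity contest" heuristic could look like (my own illustration, not the actual FreeCAD algorithm): the constraint that participates in the most conflicting groups is the most likely single removal that resolves them all.

```python
from collections import Counter

def suggest_removals(conflict_groups):
    """Given groups of mutually dependent constraint ids, return the
    constraint(s) appearing in the most groups as removal candidates."""
    counts = Counter(c for group in conflict_groups for c in group)
    if not counts:
        return []
    top = max(counts.values())
    return sorted(c for c, n in counts.items() if n == top)

# Constraint 7 appears in both conflicting groups, so it is the prime suspect.
groups = [[3, 7], [7, 12, 15]]
candidates = suggest_removals(groups)
```

This also shows why accumulated redundancies are poisonous for such a heuristic: every spurious group skews the vote counts.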
After this, there are two possible outcomes: (a) there are conflicting or fully redundant sketcher constraints, or (b) there are partially redundant sketcher constraints.
Definition: a partially redundant sketcher constraint arises when a sketcher constraint corresponds to more than one solver constraint, where at least one solver constraint is redundant but at least one other causes no problem at all. Thus, the user cannot "just remove" the sketcher constraint, because part of it is necessary. This is currently solved by the user changing the combination of constraints. Here it should be possible to improve by identifying the necessary substitution and advising the user (or requesting permission to substitute it automatically).
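The one-to-many mapping can be sketched as a small data model (entirely hypothetical field and type names; FreeCAD represents this differently in C++):

```python
# Hypothetical model: one sketcher constraint expands into several solver
# constraints, and only some of those may be redundant.
sketcher_constraint = {
    "name": "Symmetry",  # illustrative type name only
    "solver_constraints": [
        {"id": 1, "redundant": False},  # still needed: cannot drop the whole thing
        {"id": 2, "redundant": True},   # safely ignorable by the solver
    ],
}

def is_partially_redundant(sc):
    """True when some, but not all, of the expanded solver constraints are redundant."""
    flags = [s["redundant"] for s in sc["solver_constraints"]]
    return any(flags) and not all(flags)

def solver_constraints_to_keep(sc):
    """The non-redundant part that must survive, which is why the whole
    sketcher constraint cannot simply be removed."""
    return [s["id"] for s in sc["solver_constraints"] if not s["redundant"]]
```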
In case (a) no convergence problems can arise because there is no attempt to converge: DogLeg is not executed. The user is notified of the problem, together with the "advice" from the weakest-link algorithm. From experience I can tell you that, when the issue is one of redundancy, if I bypass the stopping mechanism and DogLeg is executed, it converges without any issue. However, allowing redundant constraints ends up being a big problem (the weakest-link algorithm becomes useless as they accumulate). Our decision here is to notify the user as early as possible so that the issue gets fixed.
In case (b) we allow convergence while ignoring the partially redundant constraint, and we mark it in the solver messages. This is the intermediate way we took between (i) users who think all redundant constraints should simply be ignored without bothering anyone, and (ii) other users, more purist and mindful of what they are doing and wanting to be in control, who would advocate stopping convergence as in (a) and asking the user to fix it. So, in the end, there is an annoying solver message, but the solver operates as if there were none. To my knowledge there has never been a convergence problem in these cases. But accumulation of these partially redundant constraints does affect our "popularity contest" weakest-link algorithm.
So, answering the last part of your question: unless two constraints are really conflicting, DogLeg generally converges. There are some (uncommon) cases where it does not. Take a look at the issues; they need to be studied.
This question is not specific to FreeCAD or its solver.
One would need to agree on what is to be understood by multi-solution in FreeCAD. Is it parallel convergence using different parameters (or different algorithms) from the same starting point, to try to discover a plurality of solutions (local minima)? (Using different starting points is a little trickier in CAD, as we have a starting point from the previous operation.) Or is it directed at finding just one local minimum in non-convex problems where some algorithms or parameter combinations simply will not converge?
The problem with multiple solutions is knowing which one is the one the user wants. We do have multiple solutions in sketches (flipping of geometry). I believe one could certainly implement algorithms to detect flipped solutions, either with specialised algorithms or even by training a machine-learning model. We have not explored that area.
I am under the impression that we achieve convergence in the vast majority of cases; convergence issues are not common. But there are some tickets with convergence issues, so take a look at them. Where they happen, it is possible that such algorithms would be helpful. I could not tell.
It was not designed to provide multiple solutions in parallel. It does try solving sequentially with other algorithms if the default fails; generally the sequence is DogLeg => LM => BFGS. SQP is also used sometimes (I do not have a clear memory). Between algorithm executions, the system needs to be reset to the original state.
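The fallback sequence can be sketched like this (names and structure are illustrative; the real solvers live in GCS in C++). The key point is the reset: each attempt must start from a fresh copy of the initial state, since a failed attempt may have moved the parameters.

```python
import copy

def solve_with_fallbacks(initial_state, solvers):
    """Try each (name, solver) pair in order; each solver gets a fresh
    deep copy of the initial state, mirroring the reset between algorithms."""
    for name, solver in solvers:
        state = copy.deepcopy(initial_state)  # reset to the original state
        if solver(state):
            return name, state
    return None, initial_state

# Dummy stand-ins: pretend DogLeg fails on this problem and LM succeeds.
def dogleg(state):
    return False

def lm(state):
    state["x"] = 1.0
    return True

name, final = solve_with_fallbacks(
    {"x": 0.0},
    [("DogLeg", dogleg), ("LM", lm), ("BFGS", lambda s: True)],
)
```

Here the loop stops at LM, and the original `{"x": 0.0}` dictionary is left untouched because every attempt worked on a copy.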
That said, I do not think the architecture is inherently bad in the sense of being impossible to adapt, but it would certainly need modifications. Because the Sketcher is totally separate from GCS (Sketch.cpp is a kind of Facade software barrier between the two), it should be possible to create copies of the initial state, run several algorithms in parallel, wait for all solutions (or time some out after one has arrived), pick one, and update the Sketcher with it.
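The parallel idea sketched above could look roughly like this (pure illustration in Python; in FreeCAD the copies and the algorithms would live on the GCS side of the Sketch.cpp facade):

```python
import copy
from concurrent.futures import ThreadPoolExecutor, as_completed

def solve_parallel(initial_state, solvers, timeout=None):
    """Run each (name, solver) pair concurrently, each on its own deep copy
    of the initial state, and return the first successful result."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = {}
        for name, solver in solvers:
            state = copy.deepcopy(initial_state)  # each algorithm gets its own copy
            futures[pool.submit(solver, state)] = (name, state)
        for fut in as_completed(futures, timeout=timeout):
            name, state = futures[fut]
            if fut.result():
                return name, state  # the winner would update the Sketcher
    return None, initial_state

# Dummy stand-ins for two algorithms racing on copies of the same state.
def winner(state):
    state["x"] = 1.0
    return True

def loser(state):
    return False

name, result = solve_parallel({"x": 0.0}, [("LM", winner), ("DogLeg", loser)])
```

One design note: with `ThreadPoolExecutor` the `with` block still waits for the losing solvers to finish before returning, which matches the "wait for all solutions, or time some out after one arrived" idea; a real implementation would likely want cancellation as well.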