Jon A. Cruz wrote:
> To clarify, you did not really explicitly say the "why" of the ends, just the "how" of the means you were looking at.
Let's keep the philosophy to a minimum.
> #2 is an "interesting" problem that might be much smaller than expected. This definitely does not require a high-performance structure.
There is nothing "high performance" about the structure I proposed: it's only a way to keep track of the needed information with minimal boilerplate. It is not a performance-oriented solution but a maintainability- and feature-oriented one. It might give better performance in some cases, but that's only a pleasant side effect.
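To make this concrete, here is a rough sketch of the kind of structure I have in mind (illustrative C++ only; `Object` and `ObjectRef` are stand-ins for the real SP-layer classes, not actual Inkscape code):

```cpp
#include <cassert>
#include <list>

class ObjectRef;

// Stand-in for an SP-layer object. Each object knows who points at it.
class Object {
public:
    std::list<ObjectRef*> referrers; // back-references maintained by ObjectRef
};

// A "smart href": holds a pointer to its target and registers itself
// in the target's referrer list, so the set of references is always known.
class ObjectRef {
public:
    explicit ObjectRef(Object *target = nullptr) { set(target); }
    ~ObjectRef() { set(nullptr); }
    ObjectRef(ObjectRef const &) = delete;
    ObjectRef &operator=(ObjectRef const &) = delete;

    void set(Object *target) {
        if (_target) _target->referrers.remove(this); // unregister from old target
        _target = target;
        if (_target) _target->referrers.push_back(this); // register with new one
    }
    Object *get() const { return _target; }

private:
    Object *_target = nullptr;
};
```

With this, "find all objects which refer to X" is just a walk over `X.referrers`; no document traversal is ever needed.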
> #3 is a very large and complicated problem. For this I can see that some UI work and such would be required.
For deleting gradients and patterns it is simple: clear the style or replace it with the default style. For masks, remove the mask. It might require some work for cases like deleting an LPE parameter.
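As a sketch of the deletion case (illustrative names only, assuming each object keeps a list of its referrers, which is exactly what smart hrefs would provide; `on_target_released()` stands in for the per-type reaction, e.g. falling back to the default style for gradients and patterns):

```cpp
#include <cassert>
#include <list>
#include <string>

// Something that references a paint server, mask, etc.
struct Referrer {
    std::string style;
    // Reaction when the referenced object goes away: here, default style.
    void on_target_released() { style = "fill:black"; }
};

// A referenced object such as a gradient or pattern.
struct Target {
    std::list<Referrer*> referrers; // maintained by the smart hrefs

    // Called just before deletion: let every referrer clean up after itself.
    void release() {
        for (Referrer *r : referrers) r->on_target_released();
        referrers.clear();
    }
};
```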
> #4 is another large problem that is more than just "who is referred to". That would be part of the solution, but more work is needed. Things like "not referenced but still desired" are critical but hard to implement without some thought.
Vacuum defs is intended as a cleanup action that removes all unused objects to minimize file size. If you want to keep unreferenced invisible objects, then don't invoke it - simple. The problem of what to keep and what to delete is already solved by collection policies; using smart references just makes the implementation able to catch more unused items than it does now.
> #5 is technically not solved by just that simple structure. It could be *part* of the solution, but there really is a lot more to the problem.
I have already devoted some time to this problem. The only thing that currently prevents me from adding this feature to the node tool is the inability to reliably find all references to an object.
> #6 is not nearly as simple as you might guess. Using the simple structure chain to try to determine that one will fall down pretty quickly. Among other things, that involves bounding boxes, transformations, intersections of bounding boxes, etc.
The renderer does not know about transforms; they are the responsibility of the SP layer. The rest depends on being able to determine which objects' appearance has changed, and for that we need smart references in the SP layer.
> If you can't see more than 2 problems, then you really need to take a step back and think about it. From an architectural viewpoint those two are both "means" and not "ends". Thus they themselves are not the problems, but only the means you are looking at to be able to solve some problems.
The distinction is purely philosophical. Both "find things to copy to the clipboard when the user presses Ctrl+C" and "find objects which refer to this one" are valid problems ("ends"), just with different degrees of sophistication and varying relevance to end users and developers. The proposed capability is very generic, and I can assure you that solving those basic problems efficiently will lead to better solutions to the more complicated ones.
>> Problem: avoid ID collision. When does this occur?
>> - when a user pastes content from another SVG document
>> - when a user changes an object's ID via the XML editor or object properties
>
> * when the user changes the ID from the Object Properties dialog
> * when importing another document (should actually use the same code path as pasting, but doesn't at the moment)
>
> Actually your point 2 is problematic. Personally I would expect the XML editor to remove all references to the changed ID from the document, and the object properties dialog to update references to the new ID.
>
> These are two different use cases. The second use case is quite simple to address. A *highly* efficient structure for this would be a map of the IDs used in the document. Upon attempting to commit the ID change in the UI, a simple check for presence in the map would result in an immediate pass or fail. If, on the other hand, the three-way pointer were relied upon, then in order to detect whether another object had the same ID, the code would have to walk the *entire* document tree, visiting each node and checking its ID.
The hard part of the problem is not determining whether some ID exists in the document, because this is already handled by XML::Document and its getObjectById method. The hard part is the ID update. Smart hrefs in no way preclude using an ID map. They are not intended to solve the "does this ID exist in the document" problem, because they *don't even know what the ID string actually is* (though their subclasses might), and we have other, better methods to solve that problem (ID map).
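For illustration, the ID map half of the problem can be this simple (a sketch only; the real thing would sit on top of XML::Document rather than a bare hash map):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Sketch of an ID map: "does this ID exist" is an O(1) lookup,
// completely independent of smart hrefs. Illustrative names only.
class IdMap {
public:
    bool contains(std::string const &id) const {
        return _map.find(id) != _map.end();
    }
    void insert(std::string const &id, void *object) { _map[id] = object; }
    void erase(std::string const &id) { _map.erase(id); }

    // Generate a fresh ID not present in the document, e.g. during paste.
    std::string generate_unique(std::string const &base) const {
        for (unsigned n = 1; ; ++n) {
            std::string candidate = base + std::to_string(n);
            if (!contains(candidate)) return candidate;
        }
    }

private:
    std::unordered_map<std::string, void*> _map;
};
```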
The ID collision problem can actually be decomposed into several parts (the means of solving each are given in parentheses):
1. Find all IDs used in the pasted document. (ID map of the pasted document)
2. For each ID, check whether it exists in the current document. (ID maps of both documents)
3. If it does, find the object with that ID in the pasted document. (ID map)
4. Generate a new unique ID. (ID map, random number generator)
5. For each object that references the colliding object, change the ID it uses, then change the ID of the object itself. (smart hrefs + subclassing of ObjectRef)
Currently we can solve parts 1, 2, 3 and 4, but have big problems with part 5.
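With smart hrefs, part 5 becomes a purely local operation instead of a whole-document search. A sketch (illustrative names; `href` stands in for whatever attribute actually holds the reference):

```cpp
#include <cassert>
#include <list>
#include <set>
#include <string>

struct Ref { std::string href; };      // e.g. "url(#grad1)"

struct Obj {
    std::string id;
    std::list<Ref*> referrers;         // maintained by the smart hrefs
};

// Change the object's ID and update everything that points at it,
// keeping the document's ID map consistent.
void rename(Obj &obj, std::string const &new_id, std::set<std::string> &ids) {
    for (Ref *r : obj.referrers)
        r->href = "url(#" + new_id + ")"; // update each referrer first
    ids.erase(obj.id);
    obj.id = new_id;
    ids.insert(new_id);
}
```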
> Oops! No we're not. What if some of the things that were copied are dangling references? Hmmmm... what should we do about that? Oh, and what if we're copying from the same document? We might want a half-copy that will keep references to gradients, etc., and have those gradients also, just in case we're pasting into a different document.
The safe thing to do is to create new gradients even if pasting into the same document, and let vacuum defs clean up gradients that are identical.
> But... as I come to the end of this first pass analysis, I notice something quite interesting. The tri-pointer would only ever help with "ID changing" and not really with "clash prevention" itself.
Yes, because it's designed to simplify that part, and incidentally this is the part that causes problems. The other parts are easy because we already have this information available in another structure.
> Hmmm... and while we're at it, the proposed class does seem a bit heavyweight for the problem it wants to solve. Also I don't see anything addressing the problem of pinning. I would think we would need weak references in here. That's another thing to add to the consideration.
What is this 'pinning problem'? If you mean something like cyclic references keeping things in the document, then it's not an issue, because this should be handled by vacuum defs (that's why I mentioned a mark-and-sweep algorithm). Finally, I wouldn't consider 6 pointers "heavyweight". Maybe if ObjectRef were intended to be used in millions of copies it would make some difference, but realistically most documents will contain fewer than 100 references.
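A sketch of what I mean by mark-and-sweep here (illustrative only; `visible` marks the roots, i.e. objects actually drawn on canvas, and unmarked defs are swept, so reference cycles among unused objects are collected naturally):

```cpp
#include <cassert>
#include <list>
#include <vector>

struct Node {
    bool visible = false;   // a root: actually drawn on canvas
    bool marked = false;
    std::list<Node*> refs;  // objects this one references (via smart hrefs)
};

// Mark everything reachable from n by following references.
void mark(Node *n) {
    if (n->marked) return;
    n->marked = true;
    for (Node *r : n->refs) mark(r);
}

// "Vacuum defs" sketch: mark from visible roots, then count survivors.
// A real implementation would delete the unmarked nodes here.
int vacuum(std::vector<Node*> &defs) {
    for (Node *n : defs) n->marked = false;
    for (Node *n : defs) if (n->visible) mark(n);
    int kept = 0;
    for (Node *n : defs) if (n->marked) ++kept;
    return kept;
}
```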
Regards, Krzysztof