On Feb 23, 2010, at 9:48 AM, Krzysztof Kosiński wrote:
> 2010/2/22 Jon Cruz <jon@...18...>:
>> That proposed "fix" on the wiki really appears to be a work-around.
> I do not agree with your definitions of "fix" and "workaround". For me:
> Fix: something that removes the issue.
> Workaround: something that doesn't fix the issue but allows the program to function correctly.
> Given those definitions, adding ifdefs to use some different hash implementation is a workaround, while the user replacing his broken header with a working one is a fix.
Just look through the existing config code in Inkscape.
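To illustrate the kind of thing I mean, here is a minimal sketch in the style of our config code (the HAVE_* macros and the typedef name are invented for illustration, not our actual symbols):

    /* Pick whichever hash set the toolchain actually provides; the build
       system would define one of these macros after probing the headers. */
    #if defined(HAVE_TR1_UNORDERED_SET)
    # include <tr1/unordered_set>
    typedef std::tr1::unordered_set<int> IdSet;  /* TR1 name, O(1) average */
    #elif defined(HAVE_EXT_HASH_SET)
    # include <ext/hash_set>
    typedef __gnu_cxx::hash_set<int> IdSet;      /* older GNU extension */
    #else
    # include <set>
    typedef std::set<int> IdSet;                 /* always available, O(log n) */
    #endif

The rest of the code just uses IdSet and never mentions any specific header.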
Making our software build on many platforms is a goal and a realizable target. Telling all end users to just upgrade to the latest and greatest generally is not.
*If* the user has updated things, then we can fix it. But if not, it is fairly trivial to correct the code.
>> More important is that we should only use an optimized data structure when required.
> What is the difference between a normal data structure and an "optimized" data structure? In my book, the standard hash set and hash map are normal data structures.
Not in any of the books I have.
"set" is the base type. "hash set" or "unordered set" are specializations that are optimized for certain uses.
> Are you suggesting we should use structures that do not correctly represent the semantics of data (e.g. storing set elements in a vector) because it is somehow simpler, or that using the correct structure is premature optimization?
That is not the most important thing.
In software engineering it has generally been accepted (through experience, quantified studies, etc.) that simple is better. Occam's razor is the preferred tool of the experienced professional.
Oh, and far better than optimizing the code is just optimizing the algorithms. Not "How can I do this faster?" but "How can I not do this at all, so we will be faster?" The areas you are working in are actually such areas, and we do have some HUGE potential for end-user improvement. That is more where I would like to be able to focus attention, due in part to the usability we can gain. If you can take a bit of time and lay out why this data structure is better, when it helps, when it doesn't, and some rough numbers on things, then I think we can do some amazing work. A simple wiki page to gather it would be good.
I'd like to be doubly clear on this point. You have been working on some very low-level things, and have gotten some nice performance back. However, there are a few big-picture things that can be done to change the approach and gain an order of magnitude in performance. I'd like to help pin those down so you can move on to those big-win items.
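As a rough sketch of what I mean by "not doing this" (all names here are invented for illustration, not actual Inkscape code):

    /* Instead of making an expensive recomputation faster, skip it entirely
       until someone actually needs the result. */
    class BBoxCache {
    public:
        BBoxCache() : dirty_(true), width_(0.0), height_(0.0), cached_(0.0) {}

        void change_geometry(double w, double h) {
            width_ = w;
            height_ = h;
            dirty_ = true;            /* just mark stale; no work yet */
        }

        double area() {
            if (dirty_) {             /* recompute only on demand */
                cached_ = width_ * height_;
                dirty_ = false;
            }
            return cached_;           /* usually a cheap cached read */
        }

    private:
        bool dirty_;
        double width_, height_, cached_;
    };

A thousand geometry changes followed by one query costs one recomputation instead of a thousand, and no container choice can buy that back.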
>> More important is that we should only use an optimized data structure when required. This includes measuring performance before and after the switch.
> I replied to this before. I don't need any justification for using the theoretically optimal structure, but you do need benchmark data to demonstrate that some other theoretically suboptimal structure is better in practice.
Well, then you just prove that you are not a software engineer.
We *need* to do the right thing. If the "theoretically optimal" structure is better in practice, then a five-minute test is all it would take. However, every single book I have ever seen on software engineering stresses actually testing these theoretical issues. Start with simple, and *then* tune up those areas that benefit in the real world from it. And if such changes are not that complex, then testing them is not either.
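For example, a five-minute test could be as simple as this (illustrative only; a real measurement would use our real key types and access patterns):

    #include <cstdio>
    #include <ctime>
    #include <set>
    #include <tr1/unordered_set>

    template <typename Set>
    double time_inserts(int n) {
        Set s;
        std::clock_t start = std::clock();
        for (int i = 0; i < n; ++i)
            s.insert(i);              /* measure the actual workload */
        return double(std::clock() - start) / CLOCKS_PER_SEC;
    }

    int main() {
        const int N = 1000000;
        std::printf("std::set:           %.3f s\n",
                    time_inserts< std::set<int> >(N));
        std::printf("tr1::unordered_set: %.3f s\n",
                    time_inserts< std::tr1::unordered_set<int> >(N));
        return 0;
    }

If the numbers clearly favor the hash version at realistic sizes, great, the switch is justified; if they don't, we just saved ourselves a portability headache.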
>> But the *KEY* is that forcing an end user to patch their system is FAR UGLIER!!!
> If the system is obviously broken, I don't see how it's ugly for the user to fix it.
Just ask anyone who has ever dealt with customer support, either as a first-line person, as a support engineer, or as a software engineer trying to debug a customer issue. This is a real big ugly.
> Apple's toolchain is not a sacred cow. Fixing the bug at its source is trivial (overwrite 1 file), and somebody who succeeded in installing all the MacPorts libraries required to build Inkscape will be perfectly capable of doing it.
I'm not saying Apple's toolchain is a sacred cow. Not at all.
What I am saying is that Inkscape can and *has* taken the choice of trying to work for as many people as possible. Given a single engineer taking a single afternoon and changing things in one place, versus hundreds of non-engineer end users changing their local systems in ways that can and do break things (C++ is notorious for version issues, but at least we're not on Qt, where it is worse), the proper choice *should* be clear.
> Please do not introduce useless cruft to fix a bug that is already fixed where it should be (GCC).
Again, this is not useless cruft. It is a *GOAL* of the Inkscape project to be cross-platform and portable. This includes NOT REQUIRING END USERS TO GO TO THE LATEST UPDATES OF EACH AND EVERY COMPONENT. Yes, we do want to draw the line at some point, but keeping things portable is one key to maintaining a low barrier to entry. Keeping a low barrier to entry is one key thing that keeps our project growing and progressing.
You keep having issues with this. If it were not an ongoing issue, I'd not make such a big deal. But again, just because "it works for me, Krzysztof" does not mean it works for everyone.
Oh, and keep in mind... the template you want to use is not standard. That was the problem early on, and that is a problem now. It is not a required part of the STL. Now in the newest versions of GCC it is usually present, but that is not always the case. Keeping our codebase flexible to the point of not requiring an optional component is helpful for those using compilers other than GCC. Of course I wouldn't suggest people use MSVC instead, but there are some nice high-end compilers that some end users have been using for Inkscape. And among other things, that means that minimizing the patching they have to do on their end is just being a good citizen.
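Even without touching the build system, a purely compile-time guard is cheap insurance, complementary to a configure-time check (the version cutoff below is a placeholder, not a tested boundary):

    /* Reach for the optional TR1 header only where we can expect it, and
       fall back to the guaranteed std::set everywhere else. */
    #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
    # include <tr1/unordered_set>
    typedef std::tr1::unordered_set<int> FastIntSet;
    #else
    # include <set>
    typedef std::set<int> FastIntSet;  /* required by the standard */
    #endif

One guarded typedef in one header, and no end user ever has to patch a system file.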