On Fri, 2005-08-05 at 23:39 -0700, Bryce Harrington wrote:
On Fri, Aug 05, 2005 at 01:43:45PM -0400, mental@...3... wrote:
Quoting Alan Horkan <horkana@...44...>:
Presumably each community has a clear and specific leader to steer them in the right direction and make sure there is the necessary cross communication with other groups.
Yes, we will need people to take leadership roles. This tends to happen organically on -devel, although I don't see so much of it on -user yet. I had been hoping we'd see some power users (aside from bulia) emerge and start mentoring the others.
Actually, a few notables have emerged from the -user group. They've shown strong leadership in several projects related to documentation, establishment of an art community, and gaining us a pretty kick-ass About screen. :-)
I think this release was unusually rough because of the longer development time.
Unfortunately part of the problem was that many people decided to wait for the "final" release before trying it out.
Which is a consequence of normal human psychology, I think. I'm not sure how to encourage users to "take the grenade" of the prereleases. Maybe encouraging a testing community, as such, could help.
This is also my take on it.
Over the past few months I've been writing a technical paper for the Pacific Northwest Software Quality Conference about the NFSv4 testing experiences, and as part of this I've been reading a lot of research papers about Open Source, testing, and community organization.
It's well known among professional, commercial software testers that a "tester" and a "user" have very different approaches and motivations. The user basically just wants to get their work done, and builds on an assumption that the software is problem-free; if not, they report it. A person with a testing mindset, on the other hand, goes in with the assumption that the software is a little puzzle to figure out. How can you break it?
Now, a commercial software company would probably conclude that testers are the right way to do QA, and so focus exclusively on that type of testing. It's easy to control, can be planned out in detail, etc.
For Open Source, though, the "user testing" is the cheaper approach. It's particularly valuable in that it doesn't _need_ to be controlled, it just happens, and it scales directly with the size of the userbase.
However, I believe both of these approaches can work together in a complementary fashion, and compensate for each other's weaknesses.
With NFSv4, it's interesting because we have both the users (who randomly report problems as they're setting it up and using it), and folks that participate strictly in doing formalized testing, which we've taken to calling "synthetic testing".
I was presenting this work to some users in the film industry and showed some graphs of how one tester had pushed NFS by creating one directory and putting as many files as possible into it. This was quite an effective way to scare out bugs: as he went, he kept running into limits that exposed unusual problems within the kernel that the developers had never known about, but were quick to explore and solve. By the time of the presentation they'd reached 1.6 million files per directory, and I remarked that the testers and developers still weren't happy with that number. One member of the audience chuckled and said, "I think our usage may have millions of files, but certainly not in one directory! This is great that someone is testing it this way and solving all the bugs, because it's definitely not something we users would be worrying about for a long time."
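For anyone curious what that kind of synthetic test looks like in practice, here's a minimal sketch in Python. It's purely illustrative, not the actual harness used for the NFSv4 work: the function name, the batch behavior, and the demo target count are all my own assumptions. The idea is just to hammer one directory with file creates until something breaks or a target is reached.

```python
import os
import tempfile
import time

def stress_files_per_directory(target, dirpath=None):
    """Create up to `target` empty files in a single directory.

    A hypothetical sketch of a "files per directory" synthetic test;
    a real run would push `target` into the millions and watch for
    filesystem or kernel misbehavior. Returns the number of files
    actually created before hitting a limit.
    """
    dirpath = dirpath or tempfile.mkdtemp(prefix="fpd-stress-")
    created = 0
    start = time.monotonic()
    try:
        for i in range(target):
            # open/close each file so every create exercises the
            # directory-entry path on the filesystem under test
            with open(os.path.join(dirpath, "f%09d" % i), "x"):
                pass
            created += 1
    except OSError as err:
        print("hit a limit after %d files: %s" % (created, err))
    elapsed = time.monotonic() - start
    print("created %d files in %.2fs" % (created, elapsed))
    return created

if __name__ == "__main__":
    # Tiny demo target; a real synthetic test would keep raising this.
    stress_files_per_directory(1000)
```

Note that a user would never write this: no real workflow creates files this way. That's exactly the point of the tester mindset described above.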
It's very interesting to read what other researchers have written about testing in open source projects, particularly because the practices they highlight are mostly things Inkscape already does: nightly builds, a central bug tracker, a triaging process, an authority to declare when the release is ready, etc.
Research into synthetic testing coupled with standard open source user testing is scarcer and harder to find; however, the evidence that does exist shows that while it can be difficult to do, it can really pay off big time.
Hearing Mozilla talk about this in specifics, and how they had a focused testing team, really showed how much impact this sort of approach can have on the quality of a multi-platform open source application.
That is so great... you know what would be great is a white paper presenting a best-practice approach to testing for open source projects. I think that would be very valuable to the open source world.
Maybe OSDL could sponsor that...it would rule :)
Jon
_______________________________________________
Inkscape-devel mailing list
Inkscape-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/inkscape-devel