On 8 January 2010 at 17:58, Michal Suchanek <hramrach@...704...> wrote:
> I know autotools suck but my limited experience with waf so far is that it sucks even more.
> Care to share any potential advantages (or a pointer to a place where they are listed)?
Here are some of them.
1. No need to list every source file in the build scripts, since one can use globbing.* (See the wscript sketch after this list.)
2. Build scripts are written in Python and can use the full power of a general-purpose programming language. This is immensely important once you start doing non-standard things during the build.
3. On Windows it does not require emulation layers or shell replacements like Cygwin or MSYS.
4. Good support for parallel builds, even when the sources and build scripts are spread across multiple directories.
5. It can replace the current btool + automake combo, so we finally have one build system for all platforms. (This is actually the main reason.)
6. It takes about 100 kB, so it can be bundled with the source tarball. We can ship a version tweaked to our liking, for example without tool modules we don't use, or with extra features. (That's what I do now: I use r6828 of the waf-1.6 branch, though perhaps I could tweak the scripts to work with waf 1.5.)
7. It is easy to include generated .cpp code as a source of an executable or library (once you figure out the correct way; the docs are useless).
8. Everything created by the build system is kept in the "build" directory (the name can be changed), so distclean is as simple as deleting "build".
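To illustrate points 1 and 8, here is a minimal sketch of what a wscript can look like. The directory layout and the target name are made up, and I'm using waf 1.6-style names (the 1.5 API spells some of these differently), so treat it as an illustration rather than a drop-in script:

  # wscript -- minimal sketch; 'src/' and 'myprog' are placeholder names
  top = '.'
  out = 'build'   # everything generated lands here, so distclean == delete "build"

  def options(opt):
      opt.load('compiler_cxx')

  def configure(conf):
      conf.load('compiler_cxx')

  def build(bld):
      # no hand-maintained file list: glob the source tree instead
      sources = bld.path.ant_glob('src/**/*.cpp')
      bld(features='cxx cxxprogram', source=sources, target='myprog')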
There are also some downsides.
1. I do not understand the maintainer's development philosophy. For example, I refactored the portion of the function job_count() that retrieved the number of CPUs into a new function cpu_count(). He merged it back into job_count() and added a somewhat arbitrary limit of 1024 simultaneous jobs. Then there was an "ordered dictionary" that stored the insertion order in a list. When adding a key, it searched the list rather than the dictionary to see whether the key already existed. He declined to change it to search the dictionary, calling that an "unnecessary optimization", and said that "the list will never reach 1000 elements". (A sketch of the difference follows after this list.)
2. Since the maintainer started working on the waf-1.6 branch, he does not keep it in a working state, but rather uses the SVN repository as remote storage for his experiments. Not to mention that he uses the trunk for the stable version and the waf-1.6 branch for new development.
3. The API docs are non-existent. The only existing piece of documentation, the "Waf book", is in large part irrelevant to the end user and non-free (CC with a no-derivative-works clause). When asked why it is non-free and why others can't improve it, the maintainer said it is to prevent distributions from shipping a system-wide Waf package, which is discouraged. This reason is bogus, because there is a Waf package in Debian that simply doesn't include the non-free book.
4. Certain parts of the code rely on Python-specific tricks like replacing methods in classes or compiling code at runtime, which makes it very hard to see what's going on and is often unnecessary, because the same functionality can be obtained with more traditional techniques.
5. Support for multiple configurations (like VPATH builds in Autotools) is incomplete.
6. Checking for headers is sometimes annoying, because the check methods do not return True/False.
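To illustrate the ordered-dictionary issue from point 1: testing membership in a list scans every element, while the dictionary already has the key at hand. A toy sketch of the idea (not Waf's actual class):

  # toy illustration only, not the real Waf code
  class ordered_dict(dict):
      def __init__(self):
          dict.__init__(self)
          self.order = []      # remembers insertion order

      def __setitem__(self, key, value):
          # "if key not in self.order" is what Waf does: an O(n) scan of the list
          if key not in self:  # the proposed change: an O(1) dict lookup
              self.order.append(key)
          dict.__setitem__(self, key, value)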
To sum up, Waf is similar in principle to SCons (the scripts are Python), but its different architecture allows it to be faster. However, its code isn't great, it's mostly undocumented, and the maintainer isn't easy to work with. The best thing that could happen would be for someone to fork Waf, refactor it and document it, but that might not be realistic. I still think Waf 1.5 is less painful to use than Autotools.
*) I read somewhere that having to spell out every source file in Autotools was originally a design decision. The rationale ("people would forget to add files to source control") is dubious, and the requirement makes it easy to keep dead code in the tree.
Regards, Krzysztof