[PATCH 0/2] Compilation warning fixes

Felipe Contreras felipe.contreras at gmail.com
Tue May 13 14:03:35 EDT 2014

Tomasz Wasilczyk wrote:
> On 05/13/2014 05:11 AM, Felipe Contreras wrote:
> > On Mon, May 12, 2014 at 3:43 AM, Tomasz Wasilczyk <twasilczyk at pidgin.im> wrote:
> >> On 12 May 2014 08:35, "Felipe Contreras" <felipe.contreras at gmail.com>
> >> wrote:
> >>> I've no idea why people are interested in
> >>> libpurple-mini, but it seems there's quite a few.
> >>
> >> Isn't it enough to compile official Pidgin with configure switches that
> >> disable Pidgin and Finch? I'm just curious.
> >
> > That might work in Linux, but I'm not so sure it would be so easy to
> > compile for Windows.
> With autotools - it is as trivial as in Linux.

autotools does not come with Windows.

Moreover, I gave it a try by installing MinGW on Windows; autoconf failed
immediately, complaining that the dnl macro was not available.

> > Moreover, I haven't actually tried it, but I bet my Makefile is at
> > least an order of magnitude faster.
> For Linux (and cross-compilation for win32 too), most of build time is 
> compilation, which is not affected by Makefiles.

That is a common misconception (often held by autotools fans).

Here are some numbers:

cat > /tmp/install-purple << 'EOF'
./autogen.sh --prefix=/tmp/purple-2 --disable-gtkui \
--disable-consoleui --disable-meanwhile \
--disable-plugins --with-dynamic-prpls='' \
--disable-schemas-install &&
make install
EOF
chmod +x /tmp/install-purple

== autoconf ==

# clean
/tmp/install-purple  120.43s user 10.13s system 184% cpu 1:10.88 total
/tmp/install-purple  118.48s user 9.03s system 188% cpu 1:07.53 total
/tmp/install-purple  126.36s user 9.86s system 155% cpu 1:27.84 total

# ccache
/tmp/install-purple  55.71s user 6.43s system 133% cpu 46.664 total
/tmp/install-purple  37.49s user 7.23s system 133% cpu 33.395 total
/tmp/install-purple  52.82s user 6.20s system 162% cpu 36.347 total

# no changes
make install  1.26s user 0.29s system 93% cpu 1.660 total
make install  1.09s user 0.20s system 90% cpu 1.416 total
make install  1.39s user 0.38s system 96% cpu 1.829 total

== make ==

# clean
make prefix=/tmp/purple-1 install  52.72s user 3.10s system 274% cpu 20.338 total
make prefix=/tmp/purple-1 install  53.33s user 3.10s system 346% cpu 16.270 total
make prefix=/tmp/purple-1 install  53.58s user 3.15s system 333% cpu 16.997 total

# ccache
make prefix=/tmp/purple-1 install  1.85s user 0.35s system 291% cpu 0.756 total
make prefix=/tmp/purple-1 install  1.85s user 0.35s system 291% cpu 0.756 total
make prefix=/tmp/purple-1 install  1.83s user 0.38s system 290% cpu 0.760 total

# no changes
make prefix=/tmp/purple-1 install  0.41s user 0.10s system 155% cpu 0.329 total
make prefix=/tmp/purple-1 install  0.39s user 0.11s system 152% cpu 0.326 total
make prefix=/tmp/purple-1 install  0.42s user 0.11s system 155% cpu 0.339 total

So my Makefile is 4 to 50 times faster. Sure, maybe I'm missing a
few things, but I still bet you whatever you want that in a 1-to-1
build, the Makefile will *always* be faster.
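For reference, this is roughly the shape of such a plain-make build; a
minimal, hypothetical sketch (the directory layout, target name, and
flags are illustrative, not libpurple's actual ones):

```make
# make decides what to rebuild by stat()ing each target against its
# prerequisites -- no scripts are re-run -- which is why the no-change
# runs above finish in a fraction of a second.
CC ?= gcc
CFLAGS += -fPIC

objects := $(patsubst %.c,%.o,$(wildcard libpurple/*.c))

libpurple.so: $(objects)
	$(CC) -shared -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```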

Also, if most of the build time were spent on compilation, why are
people trying so hard to improve build systems? I'm not talking about
CMake and other high-level tools that live on top of 'make'; I'm talking
about tools that try to replace make entirely.

> >>> BTW. I saw in the archives some talk to use autotools for building on
> >>> Windows which seems the absolute worst thing to do.
> >>
> >> Some Pidgin devs also state that. But I still have no idea *why* it is the
> >> worst thing to do. If we provide a repository with mingw dependencies it
> >> will be repeatable. The only disadvantage I see is a longer build time on
> >> Cygwin. But Cygwin is already a disaster.
> >
> > Have you actually tried to run autotools in Windows? There's no perl,
> > there's no m4. I wouldn't dream of doing that, particularly when pure
> > make works just fine.
> The initial set of patches for win32-autotools build came from a person 
> who didn't even try to compile on Linux. Personally, I'm not going to 
> run any more builds on Windows, since cross-compilation finally works 
> for 3.0.0 branch.

So that's a no. The fact that *one person* was able to compile it on
Windows doesn't mean it's easy. Who knows how many hacks that person
had to resort to.

> > I've worked all my professional life
> > on embedded systems, and I can assure you; autotools are the worst for
> > cross-compiling.
> I still don't know *why*.

Because of the fundamental thing autotools do: run checks.

In a cross-compilation environment you can't run the target binaries.
Unless you use QEMU or Wine, and when you do, you end up with a huge
number of problems, as companies like Nokia learned the hard way.

It's just not worth it.
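With a plain Makefile, by contrast, cross-compiling is just a matter of
overriding the toolchain variables; a hedged sketch (the mingw triplet
below is an example, not Pidgin's actual setup):

```make
# No configure-time "compile and run a test program" step is involved;
# the cross tools are only ever asked to *produce* target binaries,
# never to execute them.
CROSS ?= i686-w64-mingw32-
CC := $(CROSS)gcc
AR := $(CROSS)ar
PKG_CONFIG := $(CROSS)pkg-config
```

A native build is then `make CROSS=`, and a win32 build is the default
(or any other triplet passed on the command line).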

> I *really* have no idea why autotools is the worst. I don't know any
> issue about it.

It provides absolutely no value; it only makes things slower and more
complicated.

> >> I looked at the one you mentioned and I'm not sure if it handles
> >> every quirk of every unix we build for.
> >
> > That's the whole reason you use GLib.
> GLib won't handle everything. For example optional external libs and 
> dependencies between them (gtk2/3 vs webkitgtk vs gst0.10/1.0).

That's what pkg-config is for.
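Probing an optional dependency takes one pkg-config call, no configure
script needed; a minimal sketch (glib-2.0 is just a stand-in package
name here):

```shell
#!/bin/sh
# Returns success if the package (and its dependencies) are available.
have_pkg() {
    pkg-config --exists "$1" 2>/dev/null
}

# Enable the feature only when the lib is there, and pick up its flags.
if have_pkg glib-2.0; then
    GLIB_CFLAGS=$(pkg-config --cflags glib-2.0)
    GLIB_LIBS=$(pkg-config --libs glib-2.0)
    echo "glib-2.0: enabled"
else
    echo "glib-2.0: disabled"
fi
```

The same probe works from a Makefile via `$(shell pkg-config ...)`, and
pkg-config itself resolves the dependency chains between the libs.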

> Or (I'm not sure about all of these)
> socklen_t/sockaddr.sa_len/fileno/inet_aton/etc routines, that may be
> missing on some systems.

GLib should handle all the network stuff.

> Please note, that I'm not trying to insist (yet ;)) on dropping the
> legacy win32 build. I'm trying to figure out the real issues behind
> the unified buildsystem based on autotools.

I actually don't care about the legacy win32 build, I never used it.

What I'm saying is that you can drop autotools completely and use
Makefiles for both Linux and Windows.

Builds would be *much* faster, there would be no cross-compilation
issues, and as a plus, it would be easier to build in Windows.

Felipe Contreras
