The Debian "ifupdown" package and a possible replacement

If you have ever configured a Debian server, chances are you've interacted with the "ifupdown" package many times (hint: it provides the "ifup" and "ifdown" commands).  For what it claims to do, it's simple to understand and configure, but it leaves a lot to be desired for more complicated system administration.

Long story short, I got tired of its limitations and tried to make some changes to it myself, only to discover that it is one of the least maintainable pieces of C code I have ever laid eyes on (possibly excepting a few IOCCC entries).  As a result, I've started a from-scratch rewrite, unimaginatively called "ifupdown-ng" and implemented in Python.  My #1 goal is bug-for-bug compatibility with ifupdown.  I eventually want to support a lot more features and make sysadmins' lives a lot easier, but first it needs to be a safe drop-in replacement that doesn't require changing a single line in your "/etc/network" directory.

If you've never dived into exactly how "ifupdown" really does its thing and are wondering why I'd go to all the trouble, well, let me give several examples of where it falls over for more complicated configurations:
  • Interfaces are brought down according to the _current_ configuration, not the configuration at the time they were brought up.  This means that if you change an interface from DHCP to static while it is still up, "ifdown" will not actually terminate the DHCP client when the interface is later brought down.
  • It's virtually impossible to safely make changes over a remote connection (such as SSH).  The best you can do is "ifdown eth0 && ifup eth0" but as mentioned above that breaks down with more than the simplest possible configuration changes.
  • Most DHCP configuration parameters, despite being a key part of the interface configuration, cannot be specified from /etc/network/interfaces but must be configured in other config files:
    • Extra options to be passed to the DHCP server (or requested from it) must be manually described in /etc/dhcp/dhclient.conf.
    • It's not possible to use a different "dhclient.conf" file for different interfaces.
    • If you want to run commands after bringing up the interface, the "up" option in the config won't do it reliably.  You instead need to add custom scripts in different locations depending on which DHCP client program you are using (and many of them don't support hooks at all).
    • The ISC DHCP client will always be told to "DHCPRELEASE" its addresses on termination, even if that isn't desired on the current network; if you configured the address for Wake-On-LAN you may not want the OS to blindly release its 2-week lease every day at 5PM.
  • Logical interfaces, such as 802.1Q VLANs, MAC-based VLANs, software bridges, tunnels, etc., are not supported except via shell-script hooks installed as part of other packages.  Furthermore, you can't extend the list of "methods" (dhcp, ppp, etc.) or "address families" (inet, inet6) supported by ifupdown without making code changes.  Overall, the functionality that is available is significantly more complicated than it really needs to be, and it suffers from all the limitations described above.
  • There is no dependency management and the default ordering is extremely problematic.  If I configure a VLAN 42 device on eth0 ("eth0.42") with both "inet" and "inet6" addresses, then I can only put the VLAN configuration options in one of the two interface declarations or "ifup -a" will try to create the device twice, and even then I have to put them in the first of the two.  Even worse, when I later run "ifdown -a" on the same configuration, it takes the interfaces down in the wrong order and reports errors for every command in the second stanza because the interface was already deleted by the config in the first one.
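
To make that last failure concrete, here is a minimal (hypothetical) "/etc/network/interfaces" fragment that triggers it; the "vlan-raw-device" option comes from the vlan package's hook scripts and can only live in one of the two stanzas:

```
auto eth0.42
iface eth0.42 inet static
    vlan-raw-device eth0
    address 192.0.2.10
    netmask 255.255.255.0

iface eth0.42 inet6 static
    address 2001:db8::10
    netmask 64
```

Running "ifup -a" creates the VLAN device while processing the first stanza; "ifdown -a" later deletes it while tearing down the first stanza and then fails on every command in the second.
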
So after I got tired of dealing with all of those issues in my network configuration and testing, I decided to do something about it and just fix "ifupdown" once and for all.  I figured I could extend the code to have some kind of decent plugin features and add some kind of dependency chaining.

<Silly narrative>

So I downloaded the source package and started poking around:

kyle@artemis:~/deb-src$ apt-get source ifupdown
Reading package lists... Done
Building dependency tree       
Reading state information... Done
NOTICE: 'ifupdown' packaging is maintained in the 'Hg' version control system at:
Need to get 106 kB of source archives.
Get:1 http://mirrors.us.kernel.org/debian/ testing/main ifupdown 0.7.5 (dsc) [1,588 B]
Get:2 http://mirrors.us.kernel.org/debian/ testing/main ifupdown 0.7.5 (tar) [104 kB]
Fetched 106 kB in 0s (371 kB/s)   
dpkg-source: info: extracting ifupdown in ifupdown-0.7.5
dpkg-source: info: unpacking ifupdown_0.7.5.tar.gz

kyle@artemis:~/deb-src$ cd ifupdown-0.7.5/

kyle@artemis:~/deb-src/ifupdown-0.7.5$ ls
biblio.bib  COPYING        ifup.8            Makefile      TODO.scripts
BUGS        debian         ifupdown.nw       makenwdep.sh
ChangeLog   examples       interfaces.5.pre  modules.dia
contrib     execution.dia  makecdep.sh       README

Hmm... the only things that look like source files are a couple of shell scripts and that one "ifupdown.nw" file.  I wonder what that is?  Let's open it in VIM:



\title{ Interface Tools\thanks{
Copyright \copyright\ 1999--2007 Anthony Towns. This program is free
software; you can redistribute it and/or modify it under the terms of
the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any
later version.

Whoops... nope, that's definitely LaTeX documentation.  I wonder where the code is?  Let's see if we can find a "main" function around here somewhere:

kyle@artemis:~/deb-src/ifupdown-0.7.5$ grep 'int main' -A3 -B3 -r .
./ifupdown.nw-querying /etc/network/interfaces (when called as [[ifquery]]).
./ifupdown.nw:int main(int argc, char **argv) {
./ifupdown.nw- <<variables local to main>>
./ifupdown.nw- <<ensure environment is sane>>

Oh god... it found the LaTeX file...  I may be in trouble now.  Even worse, I'm pretty sure that code snippets such as "<<variables local to main>>" are not valid ANSI C.

</Silly narrative>

Long story short, the code is written in a style called "literate programming", specifically using a tool called "noweb".  The gist of the idea is to write your code and comments in a special self-documenting form so you can process it one way to get a design document and a different way to get the actual compilable source code.  Unfortunately, while the idea isn't bad, the implementation is terrible and the result is completely unreadable spaghetti code.
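
For those who have never seen noweb: code lives in named chunks that reference one another, and the "notangle" tool expands the references into compilable source while "noweave" produces the LaTeX document.  A tiny sketch (my own example, not from ifupdown):

```
<<*>>=
#include <stdio.h>

int main(void) {
    <<print a greeting>>
    return 0;
}

<<print a greeting>>=
printf("Hello, noweb\n");
```

Running "notangle hello.nw > hello.c" extracts the C (the "<<*>>" chunk is the default root); "noweave hello.nw > hello.tex" generates the documentation.  Now picture that structure smeared across the whole of ifupdown.nw.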

Here's one representative sample:

\subsection{File Handling}

So, the first and most obvious thing to deal with is the file
handling. Nothing particularly imaginative here.

<<variables local to read interfaces>>=
FILE *f;
int line;

<<open file or [[return NULL]]>>=
f = fopen(filename, "r");
if ( f == NULL ) return NULL;
line = 0;

<<close file>>=
line = -1;

Each of these is used from precisely one place, inside the body of the read_interfaces() function.  Even worse, there are another 3 full blocks of <<variables local to read interfaces>> which are appended together during macro expansion.  To edit the code for this one relatively straightforward function, you end up scrolling across 500 lines of crap just to see the full list of local variables.

Even worse is that most of the comments are exactly as useful as the one above, "... Nothing particularly imaginative here" indeed!

At this point I gave up, installed noweb, and just built the package so I could have some C files to inspect and reverse-engineer.  This worked out OK, and some magic even managed to preserve indentation across all the macro-expansion, but the resulting file was only marginally better than the over-documented spaghetti-code.  Since the author put his full faith in his "self-documenting" noweb style, he didn't bother to put more than about 36 lines of real C comments in the code, and most of those aren't even full comment lines but just a few words at the end of a code line.

Having gotten entirely exhausted from wading through that (please pardon my language) crock of shit, I decided that what I really wanted to do was start from scratch and stretch my Python muscles a bit.  I've spent much of the last year learning to write Python pretty well for my work at Google, but this was the first personal project where it really seemed appropriate.

Anyways, the results are still in progress and it doesn't actually even do anything yet, but I'm interested in outside opinions on the overall approach.  If you missed the link above, I've hosted it here on GitHub:


Moving on to Google, finding a few old code gems from high-school

I posted on Google+ a short while back that I am going to work for Google as an SRE. As a result, I have been digging through and cleaning out some of my old computers and trying to generally get my digital life a bit more in order.

One of the entertaining things I found was a bit of code I wrote back in high-school (at TJHSST) as a project for my "Computer Architecture" programming class.  We were using the Structured Computer Organization textbook (although I don't remember what edition), which described a MIC-1 microarchitecture and used it to implement a MAC-1 virtual machine.  My project was about 1000 lines of Perl which implemented an assembler and interpreter for the microarchitecture code as well as a MAC-1 wrapper program with the microcode for that virtual machine.

Looking back at the code today, I'm actually kind of impressed with how well I managed to comment it at the time, and it's possible I have actually gone downhill a bit with some of my more recent Perl scripts.  The error handling and the high-level design are missing, but the basic functionality is sound and properly done.

Aside from code comments, the documentation is basically missing, in large part because I lost the project specification it was implementing somewhere during the last 8 years.

Since it's one of my better examples of code from my younger days, I've decided to publish the code on GitHub under kmoffett/perl-mic1, feel free to check it out and let me know what you think!

Kyle Moffett


A long-delayed Debian PowerPC SPE update

Unfortunately, several internal projects and some unexpected GCC and kernel issues have been blocking progress on the Debian PowerPC SPE port over the last few weeks.

Aside from some internal projects that I can't talk about on this blog, I spent the last few weeks rebasing our board support code onto Linux kernel v3.2-rc1 (now -rc3).  I also had to update several miscellaneous architecture code cleanup patches and fix a few minor bugs introduced upstream since my last rebase.

As far as the GCC compiler issues go, Alan Modra saved the day again and with the new non-crashing kernel I was able to test out his patches for GCC PR target/50906.  Several testcases fixed... Huzzah!

In the next few days I hope to actually get the buildd started up as I wanted to do a few weeks ago.

Kyle Moffett


Debian powerpcspe progress, new sbuild installed!

(UPDATE: added "omitdebsrc=true" to the "BD-*" multistrap config entries)

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port".

I have great news to report today!  I finally managed to get the finicky "gcc-4.6", "gcj-4.6", and "gcc-defaults" packages to all build and install, so now I have an actual working system with enough stuff to build most of the archive, including "build-essential" and "sbuild".

Simon McVittie wrote an excellent tutorial on this particular bit of setup on his blog: Space-efficient reproducible builds using schroot, sbuild and LVM.  Unfortunately, things are a bit more complicated for powerpcspe for a few reasons:
  • The core system packages will come from both the "unstable" and "unreleased" Debian-Ports repositories, so we need to use both.
  • Right now all of my hand-built packages have just been stuffed into my local package repository using "dput" and "mini-dinstall", which also needs to be included in the core packages list.
Once the archive is mostly up-to-date and the relevant patches merged into the official packaging, you will be able to use the same "cdebootstrap" commands from Simon's tutorial, but I will be using a variant of my old "multistrap" configuration for now.

NOTE: You will need an unused partition with a bare minimum of 40GB of free space in order to make sbuild work properly with LVM snapshots, preferably 80GB or more.  When you install your root filesystem with multistrap or cdebootstrap, make sure you leave enough room.  In particular, the "schroot" logical volume needs enough free space to install all the development packages required for any given package build (including things like TeX), typically around 6-8GB, though only for one build at a time since each build gets its own independent LVM snapshot.  Additionally, each snapshot taken will use 6-8GB depending on your LVM config, so leave some free space in the volume group.

Once you have a working bootable image, you will need to install a few packages to get this going:
  $ aptitude install build-essential sbuild lvm2

Now, set up your spare partition as an LVM volume group.  Since I am booting from NFS, I'm using an entire disk ("/dev/sda") and my volume group is called "nfs1u".  The "tune2fs" command can be omitted; I just pasted it in here because those are the defaults I generally tend to use for most servers.

WARNING: MAKE SURE YOU USE THE EMPTY VOLUME HERE OR YOU WILL DESTROY YOUR DATA!!!  If you do manage to erase your root filesystem or something, please feel free to give up here as it's only going to get more complicated from this point on.  Also, please send me a note so I can laugh at you mercilessly for 5 minutes before I accidentally do the same thing to myself.

Creating the volume group:
  # pvcreate -M 2 --dataalignment 4M /dev/sda
  # vgcreate -M 2 nfs1u /dev/sda

Creating the chroot volume; minimum size for a few chroots is 16GB, I used 32GB:
  # lvcreate -n schroot -L 32G nfs1u
  # mke2fs -t ext4 -L nfs1u:schroot /dev/mapper/nfs1u-schroot
  # tune2fs -e remount-ro -c 1 -i 0 -o user_xattr,acl,journal_data_ordered \

Creating the build scratch-space volume; you need up to 12GB to build some packages, but I used 32GB again to leave room just in case:
  # lvcreate -n build -L 32G nfs1u
  # mke2fs -t ext4 -L nfs1u:build /dev/mapper/nfs1u-build
  # tune2fs -e remount-ro -c 1 -i 0 -o user_xattr,acl,journal_data_ordered \

Mounting both volumes (you probably want to put them in "/etc/fstab" too): [EDIT: Changed "/srv/build" to "/var/lib/sbuild/build" and fixed permissions as per sbuild docs]
  # mkdir     /srv/schroot
  # chmod 000 /srv/schroot
  # mount /dev/mapper/nfs1u-schroot /srv/schroot
  # chmod 000 /srv/schroot/lost+found
  # chattr +i /srv/schroot/lost+found
  # chmod 000    /var/lib/sbuild/build
  # mount /dev/mapper/nfs1u-build /var/lib/sbuild/build
  # chown sbuild /var/lib/sbuild/build
  # chgrp sbuild /var/lib/sbuild/build
  # chmod 2770   /var/lib/sbuild/build
  # chmod 000    /var/lib/sbuild/build/lost+found
  # chattr +i    /var/lib/sbuild/build/lost+found
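
For reference, the matching "/etc/fstab" entries would look something like this (device names and mount points from the commands above; tweak the options to taste):

```
/dev/mapper/nfs1u-schroot  /srv/schroot           ext4  defaults  0  2
/dev/mapper/nfs1u-build    /var/lib/sbuild/build  ext4  defaults  0  2
```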

Then you will need to set up the "schroot" tool to work with the new logical volume setup, so create the file "/etc/schroot/chroot.d/sid-powerpcspe-sbuild":
  description=Debian unstable ("sid") for PowerPCSPE
  lvm-snapshot-options=-L 6G
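
Those are just the two interesting lines; a complete stanza using the schroot(5) LVM-snapshot keys would look something like the sketch below.  This assumes the chroot occupies its own logical volume, so double-check the device path against your own layout:

```
[sid-powerpcspe-sbuild]
description=Debian unstable ("sid") for PowerPCSPE
type=lvm-snapshot
device=/dev/mapper/nfs1u-schroot
lvm-snapshot-options=-L 6G
groups=sbuild
root-groups=root
source-root-groups=root
```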

NOTE: For my system I also temporarily added this line to "/etc/schroot/sbuild/fstab":
  /srv/local-pkgmirror /srv/local-pkgmirror none ro,bind 0 0

Now you need to reinstall a totally fresh clean system into the chroot.  If the archive is in good shape the following command should Just Work™:
  # cdebootstrap --flavour=build sid sid http://DEBIAN-PORTS-MIRROR

On the other hand, I didn't have the luxury of a working archive, so what I did was create a "multistrap" configuration file at "/srv/schroot/sid-powerpcspe-sbuild.conf":
  aptsources=DP-unstable DP-unreleased Local BD-unstable BD-unreleased
  bootstrap=DP-unstable DP-unreleased Local-Bootstrap
  packages=build-essential fakeroot
  source=copy:///srv/local-pkgmirror/ unstable/
  source=file:///srv/local-pkgmirror/ unstable/
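
The file above is abbreviated; each name listed in "aptsources" and "bootstrap" refers to a section of its own later in the file.  For reference, a Debian-Ports section would look roughly like this (the mirror URL is a placeholder, and "omitdebsrc=true" is the setting mentioned in the update note above):

```
[DP-unstable]
packages=
source=http://DEBIAN-PORTS-MIRROR/debian-ports
suite=unstable
omitdebsrc=true
```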

I installed the base system:
  # multistrap -f /srv/schroot/sid-powerpcspe-sbuild.conf

Then I forcibly disabled package init-scripts (to prevent daemons from starting in sbuild chroots).  Note that if you use the "sbuild" flavor of "cdebootstrap" then this should be done for you already.
  # cat >/srv/schroot/sid-powerpcspe-sbuild/usr/sbin/policy-rc.d <<'EOF'
  > #!/bin/sh
  > echo "****************************************" >&2
  > echo "All rc.d operations are denied by policy" >&2
  > echo "****************************************" >&2
  > exit 101
  > EOF
  # chmod 755 "/srv/schroot/sid-powerpcspe-sbuild/usr/sbin/policy-rc.d"
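
You can sanity-check the deny script without entering the chroot, since "invoke-rc.d" treats exit status 101 as "action forbidden by policy".  A quick test against a throwaway copy (the /tmp path is just for illustration):

```shell
# Write a throwaway copy of the policy-rc.d deny script and confirm
# that it exits with status 101 ("action forbidden by policy").
cat > /tmp/policy-rc.d <<'EOF'
#!/bin/sh
echo "All rc.d operations are denied by policy" >&2
exit 101
EOF
chmod 755 /tmp/policy-rc.d
/tmp/policy-rc.d 2>/dev/null
echo "exit status: $?"   # prints "exit status: 101"
```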

Go ahead and add a user account to the "sbuild" group so they can build packages:
  # adduser kmoffett sbuild

You can test that your new chroot works like this:
  kmoffett$ schroot -c sid-powerpcspe-sbuild echo "Hello world"

I'll try to walk through how to configure sbuild itself tomorrow, and then hopefully the buildd daemon some time next week.

Kyle Moffett

And Alan Modra saves the day again!

The last time we had GCC bugs affecting the Debian PowerPC SPE port, Alan Modra was extremely helpful at getting some of them solved: PR44169, PR44364, and PR44606.

Fortunately, when I managed to trigger yet another GCC bug a week or so ago, Alan Modra stepped up yet again to help us get it resolved (PR50906).  I owe him a case or two of his favorite beverage one of these days.

Seeing as the e500v2 chips are all Freescale parts, I would think that Freescale ought to take responsibility for fixing their own bugs, instead of having somebody from an IBM lab in Australia do all their work for them.

Anyways, if somebody knows Alan personally, please let him know we really appreciate the help he has given us with GCC.

Kyle Moffett


Google+ finally available for Google Apps users


I am finally on Google+ after months of waiting; here is my Profile (which is also conveniently linked from the Blogger sidebar).

Kyle Moffett


Debian multiarch is cool, except when it breaks GCC


The last few days have been frustrating.  GCC does not have a nice build system to play with, and encountering a build error 6 hours into a 7-hour build really sucks.  Even worse is GCJ (the GCC Java compiler and runtime) failing 6 hours into its 7-hour build with an error that you have to go back and redo the 8-hour GCC build to fix.

I'm afraid I'm a bit burned out from all that, so all I have for you today is a few links:

Kyle Moffett


Yet another GCC e500v2 bug bites the Debian powerpcspe port

I just spent probably 10 hours poring over the libffi assembly by hand and under GDB, completely sure that a segmentation fault issue was caused by a bug in there.  I finally figured out that the stack (or the unwind data or something) was being overwritten, and that it was happening inside the innermost function.

After literally single-stepping through about 8000 lines of ASM and not being able to find anything wrong with the libffi parts, I finally got fed up.  I ripped the testcase out and created a basic C++ function that sets up the same data-structures without using any assembly at all, and that failed too!!!

The most annoying thing I ran into while debugging was that GDB would tell me the stack was garbage, but if I actually followed the stack pointers by hand it all looked perfect.  IE: this is what GDB gave me under my stripped down testcase:

  (gdb) bt
  #0  closure_test_fn1 (cif=<value optimized out>, resp=0xbffff46c,
      args=<value optimized out>, userdata=<value optimized out>)
      at unwindtestfunc.cc:39
  #1  0x00000001 in ?? ()
  #2  0x00000001 in ?? ()
  Backtrace stopped: previous frame inner to this frame (corrupt stack?)

And yet printing by hand it looked like a valid stack:
  (gdb) print (void (*)(void))*($r1 + 4)
  $1 = (void (*)(void))
       0x10000970 <closure_test_fn1(ffi_cif*, void*, void**, void*)+320>
  (gdb) print (void (*)(void))*(*$r1 + 4)
  $2 = (void (*)(void)) 0x100006a0 <main()+324>

AAARRRGGGHHH!!!!  I hate GCC bugs!!!

The issue apparently crops up only when building with "-Os" (not with "-O2", which is almost the same), so there's probably a really stupid bug hanging around somewhere, but the stack itself looks fine and I don't understand the C++ unwind data-structures well enough to track it down.

So I filed GCC PR target/50906 for a bug that causes GCC to miscompile e500v2 floating-point code using exceptions.  It was causing the libffi testsuite to fail miserably with a SIGSEGV in "unwindtest.cc" when built with "-Os".

This is not exactly the first major issue that mainline GCC has had with this sub-architecture port: PR44169, PR44364, and PR44606.  It's further worth noting that despite our pleas to Freescale, it ended up being an IBM developer (Alan Modra from Australia) who helped us get those previous bugs solved.

I'm hopeful that this will be a relatively obvious bug and therefore very easily solved.

Kyle Moffett


Debian PowerPC e500v2 port, part 8

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

Sorry that it's been so long since the last post... I was able to get my board booted from NFS, but after that things got complicated.

I spent quite a while trying to re-cross-bootstrap more packages, but GCC 4.6 refused to build in a reverse-cross configuration (IE: building a compiler on amd64 that runs on powerpcspe and targets powerpcspe).

Unfortunately, there's a bit of a nasty dependency loop that prevented me from doing a native build either.  In order to build a new GCC 4.6 with multiarch support, I needed to install a new multiarch libc6... except that multiarch libc6 "Breaks" my old GCC 4.4 (which doesn't have multiarch).

I ended up doing a native build of a new multiarch libc6 on my NFS-booted system and forcibly installing that (which leaves my GCC 4.4 broken).  Thankfully libc6 creates a file in "/etc/ld.so.conf.d/" that GCC uses to help find libraries and I could work around the runtime crt*.o objects like this:

  # cd /usr/lib
  # for i in powerpc-linux-gnuspe/*; do ln -s "$i"; done
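
As an aside, "ln -s" with a single argument creates a link in the current directory named after the last path component, which is exactly what makes the loop above work.  A scratch-directory demonstration (paths and file names are illustrative):

```shell
# Recreate the symlink trick against a dummy crt1.o in /tmp.
rm -rf /tmp/linkdemo
mkdir -p /tmp/linkdemo/powerpc-linux-gnuspe
touch /tmp/linkdemo/powerpc-linux-gnuspe/crt1.o
cd /tmp/linkdemo
for i in powerpc-linux-gnuspe/*; do ln -s "$i"; done
ls -l crt1.o   # a symlink pointing at powerpc-linux-gnuspe/crt1.o
```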

With that I was able to build a new GCC 4.6.  Unfortunately, I still had an old gcc-defaults package (depending on gcc-4.4) and the new gcc-defaults package would not build unless I could install gcc-4.6 and gcj-jdk and a half-dozen other things I don't have.

Thankfully, with a couple more quick compatibility symlinks I could just remove "gcc" and "g++" and leave "build-essential" technically broken but have a working build environment:

  # cd /usr/bin
  # for i in *g{cc,++}-4.6; do ln -s "$i" "${i%-4.6}"; done
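
The "${i%-4.6}" expansion strips the shortest trailing match of "-4.6", so each versioned tool name gets an unversioned symlink.  For example (the tool name is illustrative):

```shell
# Demonstrate the suffix-strip used by the loop above.
i="powerpc-linux-gnuspe-gcc-4.6"
echo "${i%-4.6}"   # prints "powerpc-linux-gnuspe-gcc"
```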

After that, I had a new libc6 installed, along with "multiarch-support"!  Finally!  Woohoo!!!

I upgraded dpkg, apt, aptitude, and about a half-dozen other libraries and tools that all need multiarch these days...

Now I need to figure out what I need to install/build/rebuild in order to make sbuild work again, then I should be able to get a new "buildd" server going.

And of course once I have one going I have enough spare hardware to start about 18 of them.

I'll hopefully post again later today with more progress.

Kyle Moffett


Debian PowerPC e500v2 port, part 7

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

So last time I managed to get the old Debian archive to install onto a new NFS root filesystem, but I had not yet booted or configured that NFS root.  To do that, you need to be able to boot your board (with a kernel) and get that filesystem mounted as "/" somehow, whether using busybox or some other mechanism.

At this point "/etc/inittab" doesn't exist yet, so you will need to boot with "init=/bin/bash" in the kernel arguments.  Since you're probably booting with a serial console, you also want "console=ttyS0,115200n1".  Additionally, you probably need some root filesystem options like "root=/dev/hda1 rw" or "root= rw" or something.
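
Put together, a complete argument list for an NFS-root boot might look like the following; the server address and export path are examples for a setup like mine, so substitute your own ("root=/dev/nfs", "nfsroot=", and "ip=" are only meaningful for NFS roots):

```
console=ttyS0,115200n1 init=/bin/bash root=/dev/nfs rw nfsroot=192.168.1.1:/srv/nfsroot ip=dhcp
```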

Once the board boots and displays a shell, you're halfway there!  Before you can get packages to configure, though, you need to work around a "libc6" postinst bug.  Fake the existence of an actual "init" program by running these commands:
  $ mkfifo /dev/initctl
  $ cat </dev/initctl >/dev/null &

Next work around a "dash" postinst bug with these commands:
  $ dpkg-divert --package dash --divert /bin/sh.old --add /bin/sh
  $ dpkg-divert --package dash --divert /usr/share/man/man1/sh.1.gz.old \

Finally you can configure all the packages.  Run this command and reply to the interactive prompts as necessary:
  $ dpkg --configure -a

Once that finishes, you just have 2 steps remaining before you can reboot.  Set up a serial console and change root's password:
  $ echo 'T0:2345:respawn:/sbin/getty -L ttyS0 115200 xterm-color' \
  $ passwd root
  Enter new UNIX password:
  Retype new UNIX password:
  passwd: password updated successfully

Ok, you're done!  Sync and reboot!  This time you don't need to pass any special options.
  $ sync && reboot -f

Unfortunately, most of the last few days has been spent dealing with other problems, like merging our HWW-1U-1A U-Boot board support upstream and handling other internal projects, so this is it for today.

Kyle Moffett


Convenient remote in-system flashing with a BDI-3000

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

The hardware I'm using is based on Freescale's P2020 processor, with a 128MB NOR boot flash and 2GB of ECC DDR2 SDRAM.  To make it convenient to do in-system reflashing (for example, during U-Boot development), I have written a script "bditool" which does the grunt-work of generating config files and connecting to my Abatron BDI-3000 JTAG device.  Unfortunately the BDI-3000 itself is not exactly a cheap piece of hardware (a few thousand US dollars), and the Abatron software licenses for additional processor families cost about the same.

It's released under the GPLv2 here:

It requires an existing TFTP server that can be accessed by SSH and by your BDI-3000 at the same address.  Additionally, the TFTP server is expected to be able to telnet to the BDI.  To customize it for your hardware and network configuration, just edit the script directly.

The first few lines contain the default command-line values, but to adjust the SDRAM configuration for your hardware you will need to edit the list of initialization values much further down in the file.

The usage is pretty simple (assuming the defaults are set up right):
  $ bditool flash ./my-uboot.bin
  $ bditool exec "help"
  $ bditool exec "info"
  $ bditool exec "config"
  $ bditool boot

Please let me know if you have questions!

Kyle Moffett

Debian PowerPC e500v2 port, part 6

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

Well, after all that work building new packages I figured it was time to see how far I could get at building a chroot filesystem and upgrading it to be current.  I'll be doing all of my work in "/srv/stuff", but you can use any volume you would like with several gigs of free disk space.

The first step is building a chroot with the most recent historical snapshot of the Debian-Ports archive that actually still worked, taken on 2011-04-03, at 00:58:02 GMT.  Note that the "unstable" and "experimental" archives don't include sources (see "omitdebsrc=true" below) as those must only be binary builds of upstream Debian sources.  The "unreleased" archive does include sources, as that is intended for packages which need temporary patches to build.

NOTE: I spent a few hours trying various ways to get the multistrap to pull some packages from the current archives (including my fresh-built stuff), but so much stuff was uninstallable (especially Perl stuff) that I determined it was less work to install an old archive and try to upgrade by hand.

First I created "/srv/stuff/e500chroot.multistrap.conf":
  aptsources=Old-Unstable Old-Experimental Old-Unreleased
  bootstrap=Old-Unstable Old-Experimental Old-Unreleased

  packages=build-essential debhelper fakeroot sbuild masqmail





Then I wrote "/srv/stuff/setup-e500chroot.sh" to copy QEMU into the chroot before starting the multistrap process:
  #! /bin/sh

  set -e


  install -d "${CHROOT}/usr/bin"
  install -t "${CHROOT}/usr/bin" "/usr/bin/qemu-ppc-static"
  install -d "${CHROOT}/usr/local/bin"
  install -t "${CHROOT}/usr/local/bin" "/usr/local/bin/qemu-e500v2-static"

  exec multistrap -f "${MULTISTRAP_CONF}"

I ran my multistrap wrapper script and waited about 5-10 minutes for the download and file extraction to complete.

WARNING: The chroot is NOT DONE YET!!!  It has been unpacked, but none of the packages have been configured yet (with "dpkg --configure -a"), because that needs to be run on the target system.

I tried to do the testing entirely on my amd64 workstation using the QEMU procedure I described in "Using qemu-user-static to help the Debian e500 bootstrap".  Unfortunately I almost immediately started getting "Illegal instruction" exceptions out of QEMU; it looks like it might be fixed by this patch to qemu-ppc but I don't really feel like trying to rebuild all of QEMU for that.

So I need to get the files onto one of my actual HWW-1U-1A boards.  Fortunately for me, I have an old build of an initramfs (based on busybox and a few other things) which lets me do a very-basic NFS boot.  Unfortunately for anyone trying to do this over again, I don't have the original sources for that initramfs so I can't actually distribute it.  In that situation I strongly recommend formatting a plain old SATA disk with ext4 to do the multistrap onto; then just connect the disk to your hardware.

NOTE: This assumes that you already have an existing kernel you can boot on your hardware (probably from U-Boot) with everything you need built in.  Eventually you will need to switch to a Debian-standard kernel, don't worry about it for now.

Once I finish getting NFS and TFTP working I'll be back with another post.

Kyle Moffett


Debian PowerPC e500v2 port, part 5

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

I've been continuing work on cross-compiling a fresh base system, picking up from Part 4.  Since I got stuck on a GCC 4.6 bug that I don't know how to fix yet, I'm working off as many other packages as I can for the moment.

Additional packages (with notes and Debian bugs) listed below:

bzip2           - 1.0.5-7               - OK
xz-utils        - 5.1.1alpha+20110809-2 - DEB_BUILD_OPTIONS=nocheck
pcre3           - 8.12-4                - OK
grep            - 2.9-2                 - OK
gdbm            - 1.8.3-10              - OK
libsepol        - 2.1.0-1               - #638018
libselinux      - 2.1.0-1.1             - #645121 (and DEB_STAGE=stage1)
hostname        - 3.06                  - OK
make-dfsg       - 3.81-8.1              - OK
debconf         - 1.5.41                - N/A (Architecture: all)
build-essential - 11.5                  - OK
findutils       - 4.5.10-1              - OK (from experimental, see #645274)
patch           - 2.6.1-2               - OK (DEB_BUILD_OPTIONS=nocheck)
gpm             - 1.20.4-4              - #645278
ncurses         - 5.9-2                 - OK
lsb             - 3.2-28                - BAD (needs python-all-dev)

I'm slowly inching closer to being able to install a new root filesystem!

Kyle Moffett


Using "dput" and "mini-dinstall" to create a local Debian package repository

While working on bootstrapping a Debian port, I needed a place to keep a bunch of packages for later cross or native installation.  Even though I happen to have access to the debian-ports mirror for powerpcspe, I still need a place for the cross-compiled or bootstrap packages that are not appropriate for that repository.

So I created a local repository using the "mini-dinstall" command.  Configuration and setup were pretty easy: I just created a "~/.mini-dinstall.conf" file with the following contents.
  archivedir = /srv/stuff/local-pkgmirror
  mail_to =
  verify_sigs = false
  architectures = all, powerpcspe, amd64
  archive_style = flat
  generate_release = true
  mail_on_success = false
  release_codename = Local
  release_description = Local Packages
  release_label = kmoffett
  release_origin = kmoffett
  release_suite = local

Please note that "/srv/stuff" is just a convenient local partition with lots of free space; you can put the repository anywhere you would like.  I then created a "~/.dput.cf" file (make sure the repository location matches):
  fqdn = localhost
  method = local
  incoming = /srv/stuff/local-pkgmirror/mini-dinstall/incoming
  run_dinstall = 0
  post_upload_command = mini-dinstall -b

To get everything created and ready, I ran "mini-dinstall -b" once first.  Then, to "upload" packages into the repository, I just run "dput -u local $PACKAGE.changes".  Note that if that command fails you may need to delete the "$PACKAGE.local.upload" file in order to try again.
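A tiny helper makes that retry cleanup less error-prone.  This is my own sketch (the function name is made up), and it assumes the "local" host stanza from the ~/.dput.cf above:

```shell
# Remove the marker file dput leaves next to a .changes file after an
# upload attempt, so that "dput -u local foo.changes" can be re-run.
clean_upload_marker() {
    changes="$1"                                  # e.g. foo_1.0-1_powerpcspe.changes
    marker="${changes%.changes}.local.upload"     # dput's "already uploaded" marker
    rm -f "$marker"
}

# Hypothetical usage after a failed upload:
touch demo_1.0-1_powerpcspe.local.upload          # stand-in for dput's marker
clean_upload_marker demo_1.0-1_powerpcspe.changes
```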

Kyle Moffett


Debian PowerPC e500v2 port, part 4

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

So in a previous posting, I was discussing how to cross-compile packages.  For the moment, much of the existing powerpcspe archive is still perfectly usable, but several of the toolchain packages are so badly out of date it is easier to just re-cross from scratch.

Last time I built the "gzip" package as a demo, now I am trying to get the target build-dependencies for gcc-4.6 set up so I can build a new GCC for my target environment.  For the moment I have the QEMU setup disabled to prevent any accidents, but I may reenable it if I run into any stubborn packages.

As before, to satisfy build-dependencies you will need some mix of native packages (EG: "doxygen") and cross-compiled "dpkg-cross" packages (EG: "libelfg0-dev-powerpcspe-cross").  There's no truly automatic way to tell which is which, although if a package's name begins with "lib" you probably need to cross-compile it.
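That rule of thumb is easy to script as a first-pass triage.  The package names below are just examples, and the output is only a guess that still needs manual checking:

```shell
# Rough triage: "lib*" build-dependencies usually need cross-compiled
# dpkg-cross packages, everything else can usually stay native.
for pkg in doxygen texinfo libelfg0-dev libmpfr-dev; do
    case "$pkg" in
        lib*) echo "$pkg: probably needs a -${MYARCH:-powerpcspe}-cross package" ;;
        *)    echo "$pkg: native build-system package is probably fine" ;;
    esac
done >triage.txt
cat triage.txt
```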

Since most of these so far have been pretty uneventful, I'll just list the packages I completed and any notes or Debian bugs related to the crossbuilding:

gzip        - 1.4-1              - #644785
zlib        - 1:   - OK
gmp         - 2:5.0.2+dfsg-1     - OK
mpfr4       - 3.1.0-2            - OK
ppl         - 0.11.2-4           - #645003 (and DEB_STAGE=stage1)
cloog-ppl   - 0.15.9-3           - OK
libelf      - 0.8.13-3           - OK
libmpc      - 0.9-4              - DEB_BUILD_OPTIONS=nocheck
base-files  - 6.5                - OK
base-passwd - 3.5.23             - OK
binutils    - - OK
gcc-4.6     - 4.6.1-15           - #645018 #645021 STUCK HERE

After doing this I now have a much larger list of "-X" options that need to be passed to dpkg-cross when installing headers and libraries for cross-building:

-X libc-bin -X libc-dev-bin -X multiarch-support -X dpkg -X install-info -X ncurses-bin -X texinfo -X make -X python

Kyle Moffett


Using qemu-user-static to help the Debian e500 bootstrap

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

I was trying to figure out an easy way to do basic testing without directly having access to the hardware and I discovered that QEMU supports several MPC85xx processor variants for its user-mode emulation.  The standard "qemu-user-static" package even allows you to trivially run non-native programs (either in a chroot or on your normal system) without much complexity.

Unfortunately, I could not figure out any way to convince QEMU to pick the right CPU type by default, especially since there might not be any reasonable config files in the chroot.  The solution ended up being to add a second binary alongside the normal "/usr/bin/qemu-ppc-static" file.  I wrote the following short little C program:

  #include <stdlib.h>
  #include <unistd.h>

  #define QEMU_BIN "/usr/bin/qemu-ppc-static"
  #define CPU_TYPE "MPC8548E_v21"

  int main(int argc, char **argv, char **envp)
  {
          int i;

          /* Room for QEMU_BIN, "-cpu", CPU_TYPE, argv[1..argc-1], NULL */
          char **newargv = malloc(sizeof(*newargv) * (argc + 3));
          if (newargv) {
                  newargv[0] = QEMU_BIN;
                  newargv[1] = "-cpu";
                  newargv[2] = CPU_TYPE;
                  for (i = 1; i < argc; i++)
                          newargv[i + 2] = argv[i];
                  newargv[argc + 2] = NULL;
                  execve(QEMU_BIN, newargv, envp);
          }

          /* Only reached if malloc() or execve() failed */
          return 1;
  }

Then I compiled it like this and put it in /usr/local/bin: (UPDATE: Fixed options for static compile)
  $ gcc -static -Wall -Wextra -Werror -ggdb3 -o qemu-e500v2-static qemu-e500v2-static.c
  $ sudo install qemu-e500v2-static /usr/local/bin/

Next I installed the Debian package "qemu-user-static" on my dev system and disabled the previously-loaded PPC support: (UPDATE: Diverted somewhere the init script won't find it)
  $ sudo aptitude install qemu-user-static binfmt-support
  $ sudo update-binfmts --package qemu-user-static \
            --remove qemu-ppc /usr/bin/qemu-ppc-static
  $ sudo dpkg-divert --divert /usr/share/binfmt.diverted.qemu-ppc \
            --local --rename --add /usr/share/binfmts/qemu-ppc

Then I needed to modify the existing PPC handler to use e500v2 (via my custom C program):
  $ cp /usr/share/binfmt.diverted.qemu-ppc ~/qemu-e500v2.binfmt
  $ vim ~/qemu-e500v2.binfmt

The resulting file should look somewhat like this (the flags/offset/magic/mask are unmodified from the original):
  package <local>
  interpreter /usr/local/bin/qemu-e500v2-static
  flags: OC
  offset: 0
  magic \x7fELF......stuff....
  mask \xff\xff\xff\xff.........stuff.....

Then you just need to load your new version and off you go:
  $ sudo update-binfmts --import ~/qemu-e500v2.binfmt
  $ LD_LIBRARY_PATH=/usr/powerpc-linux-gnuspe/lib \

With the rest of the multiarch support slowly going into Debian, this approach seems extremely promising for avoiding a lot of the classic cross-build hassle by doing pretend-native builds with a cross-compiler and a bunch of native headers and other tools.  Eventually you should even be able to use a chroot containing only target-architecture packages plus those two binaries (qemu-e500v2-static and qemu-ppc-static).

Unfortunately not enough of multiarch is working yet to really make it 100% feasible, but it should at least allow me to work around many of the egregious cross-compiling bugs by actually being able to run the compiled binaries locally.

On the other hand, if my goal was instead to actually identify and fix such cross-compiling bugs, then I would probably need to disable the qemu binfmt lest I get falsely successful builds.

The more likely use-case is that I will use this to perform some basic testing of my freshly built packages when I don't have direct hardware access.

Kyle Moffett


Still waiting for Google Apps accounts to support "Google Profiles"

With all the hype about "Google+" I would love to give it a shot... but there's one teensy tiny problem:

The lack of a Google Profile makes it impossible to sign up for Google+, and it has been this way for months.  I already have waaay too many accounts to keep track of, so I refuse to create a separate Gmail account just for that one service.  Google keeps promising that it's coming, but it's been quite a while since they said:
We're actively working on making Profiles (and Google+) available for Google Apps - it should be available in the coming months  –  John Costigan @ Jun 28, 2011
Here's hoping that it will be sometime soon!

Kyle Moffett


Debian PowerPC e500v2 port, part 3

This is part of the Debian PowerPC e500 porting effort, the series on my blog starts at: "How to bootstrap a new Debian port"

Before we can actually start cross-compiling packages, we need to create a quick set of symlinks so that the build tools can find our compiler.  The "gcc-4.6" packages we built in the previous steps created a set of tools with names like "powerpc-linux-gnuspe-gcc-4.6".

Unfortunately, by default autoconf won't actually find those, and the "gcc-defaults" package does not have a cross-compiler mode (although it can be cross-"compiled", as we will need to do later).  So symlink the relevant binaries to versions without the "-4.6" at the end:
$ cd /usr/bin
$ for i in powerpc-linux-gnuspe-*-4.6; do sudo ln -s "$i" "${i%-4.6}"; done
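The "${i%-4.6}" expansion just strips the trailing "-4.6" suffix.  If you want to sanity-check the loop before touching /usr/bin, the same pattern can be exercised on dummy files in a scratch directory:

```shell
# Dry-run the symlink loop on dummy tool names in a scratch directory
# (nothing here touches the real /usr/bin toolchain).
mkdir -p /tmp/linkdemo && cd /tmp/linkdemo
touch powerpc-linux-gnuspe-gcc-4.6 powerpc-linux-gnuspe-cpp-4.6
for i in powerpc-linux-gnuspe-*-4.6; do ln -sf "$i" "${i%-4.6}"; done
ls -l powerpc-linux-gnuspe-gcc powerpc-linux-gnuspe-cpp
```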

So the next step is to make sure I have all of the Essential and Build-Essential packages up-to-date for installation into a usable root filesystem for building everything else.  That means I need to figure out a list of those packages first.

To make this easier, we can use the "grep-dctrl" tool to look through the APT package cache on our favorite Debian box.  MAKE SURE YOU USE A BOX WITH DEBIAN TESTING/UNSTABLE.  If you don't then you won't get the right package lists for your new port.

Let's first start with the "Essential: yes" packages:
$ sudo apt-get install grep-dctrl
$ grep-available -Xn -s Package -FEssential 'yes' >new-pkgs.txt
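In case "grep-dctrl" is not available, the same "Essential: yes" extraction can be approximated with plain awk over any dctrl-formatted package file.  This is a toy sketch on a made-up sample, not an exact replacement for grep-available:

```shell
# A tiny dctrl-format sample standing in for the APT "available" data:
cat >sample-available.txt <<'EOF'
Package: bash
Essential: yes

Package: vim
Essential: no

Package: coreutils
Essential: yes
EOF

# Paragraph mode (RS=""): print the Package field of every record that
# carries "Essential: yes", roughly what grep-available does above.
awk -v RS='' '/Essential: yes/ {
        for (i = 1; i <= NF; i++)
                if ($i == "Package:") print $(i + 1)
}' sample-available.txt >essential.txt
cat essential.txt
```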

Now add "build-essential" to the list and get ready to list dependencies:
$ echo 'build-essential' >>new-pkgs.txt
$ : >all-pkgs.txt

Then we need to recursively follow dependencies.  It's all basically done with the following scriptlet:
$ while [ -s new-pkgs.txt ]; do \
      cat new-pkgs.txt >>all-pkgs.txt; \
      for i in $(cat new-pkgs.txt); do \
          grep-available -PXn -s Depends,Pre-Depends "$i"; \
      done | sed -e 's/([^)]\+)//g' -e 's/, /\n/g' \
          -e 's/ *|[^\n]*//g' | grep . | sort | uniq >dep-pkgs.txt; \
      cat dep-pkgs.txt all-pkgs.txt all-pkgs.txt \
          | sort | uniq -u >new-pkgs.txt; \
  done
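The odd-looking "cat dep-pkgs.txt all-pkgs.txt all-pkgs.txt | sort | uniq -u" step is a set difference: because all-pkgs.txt is listed twice, anything already processed appears at least twice and is discarded by "uniq -u", leaving only never-before-seen dependencies.  A toy run in a scratch directory (the package names are arbitrary):

```shell
cd "$(mktemp -d)"   # scratch directory so the real lists are untouched
printf '%s\n' dpkg gzip tar         >all-pkgs.txt   # already processed
printf '%s\n' gzip tar zlib libbz2  >dep-pkgs.txt   # latest dependency sweep

# Lines from all-pkgs.txt appear twice and can never be unique, so
# uniq -u keeps only the dependencies we have not processed yet:
cat dep-pkgs.txt all-pkgs.txt all-pkgs.txt | sort | uniq -u >new-pkgs.txt
cat new-pkgs.txt
```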
Once that's done running (it should take ~30 seconds), you will have the complete binary package list in all-pkgs.txt.  Now we need to find all the source packages necessary to build those binary packages:
$ for i in $(cat all-pkgs.txt); do \
      grep-available -PXn -sSource:Package "$i"; \
  done | sed -e 's/ (.*//' | sort | uniq >all-srcs.txt
Go ahead and use the resulting all-srcs.txt file to make yourself a checklist, because the rest of this is going to be a painfully long and manual process.  I've included my list below, wrapped onto a single line to minimize wasted space:
base-files base-passwd bash binutils build-essential bzip2 coreutils dash db debconf debianutils diffutils dpkg e2fsprogs eglibc findutils gcc-4.6 gcc-defaults gdbm gmp grep gzip hostname insserv libselinux libsepol libtimedate-perl linux-2.6 lsb make-dfsg mpclib mpfr4 ncurses patch perl sed sensible-utils shadow sysvinit tar tzdata util-linux xz-utils zlib
Not too unreasonable, right?  What you will rapidly realize is that build-dependencies are the key, and we have not actually tried to list those out yet.  The biggest problem tends to be docs-generation tools (EG: LaTeX, Doxygen, etc) which are entirely unnecessary for bootstrap purposes but have lots of dependencies of their own (EG: xorg, qt4, etc).

So let's pick one of these and get started.  I've included the precise versions I used below, but you can just leave off the "=1.4-1" bit to get the latest unstable version assuming your APT is set up correctly.
$ mkdir gzip && cd gzip && apt-get source gzip=1.4-1
First you should check the "Build-Depends" field in the "debian/control" file, to make sure you have all of the necessary dependencies.  Note that some dependencies (automake, autoconf, texinfo, etc) should be satisfied by your build-system architecture (EG: amd64), while others (libc6, etc) should be satisfied by "-cross" packages for your target architecture.
$ grep Build-Depends: gzip-1.4/debian/control
Build-Depends: debhelper (>= 5), texinfo, autoconf, automake, autotools-dev
OK, everything looks good there.  Now check the "debian/rules" file for the following few issues:

  1. The rules should reference DEB_HOST_GNU_TYPE to talk about the target system.  If it seems to use DEB_BUILD_GNU_TYPE for that then it's probably broken.
  2. Any testsuites should be disabled when DEB_HOST_GNU_TYPE != DEB_BUILD_GNU_TYPE, otherwise the build will likely fail.  You may be able to work around these kinds of issues by setting DEB_BUILD_OPTIONS=nocheck.
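Point 1 can be partially automated with a crude grep.  The debian/rules fragment below is made up for illustration, and a match is only a hint that the package may be broken, not proof:

```shell
# Fake debian/rules fragment showing the suspicious pattern (feeding
# the *build* GNU type to configure where the *host* type belongs):
mkdir -p /tmp/rulesdemo/debian
cat >/tmp/rulesdemo/debian/rules <<'EOF'
DEB_BUILD_GNU_TYPE := $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE)
confflags = --host=$(DEB_BUILD_GNU_TYPE)
EOF

# Using DEB_BUILD_GNU_TYPE to describe the host (target) system is
# exactly the bug described in point 1 above:
grep -n -- '--host=.*DEB_BUILD_GNU_TYPE' /tmp/rulesdemo/debian/rules
```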
There are certainly lots of other ways that packages can fail to cross-compile correctly, but if it uses autoconf/automake and either cdbs or debhelper then it should be mostly OK.  The next step is to try a cross-compile:
$ ( cd gzip-1.4 && dpkg-buildpackage -a"${MYARCH}" -us -uc -B ) 2>&1 | tee build-gzip.log
Even if the build finishes successfully, you should use "lintian" and "dpkg-deb -c" to verify that the package files look correct.  If you have any errors or unexpected package contents then you have some serious work to do to figure out why and fix the package.  In the "gzip" case I identified a minor bug (Debian bug #644785) in the packaging which resulted in one particular setting coming from the build-system instead of from the host.

NOTE: If you have to make any changes at all, make sure you generate a nice patch and submit it back to Debian with the "reportbug" tool.

Once you have the package built, you should check to see if it resulted in any library packages (lib* and lib*-dev).  If so, you should try to get those installed with "dpkg-cross" the same way that the eglibc ones were installed previously, as you may need them to satisfy other cross-build dependencies later on.

That's it for tonight!

Kyle Moffett