Un-Xhiering MFCF

Ray Butterworth — 2013 August 13

Main Report


At MFCF's June 17 Management meeting, Wayne asked me to investigate getting rid of Xhier.

Does Xhier refer to the various infrastructure tools currently associated with that name, or to the software that currently makes use of it, or to the philosophy behind its design, or to all of the above?

People have varying ideas as to what the term Xhier refers to, and conflation is a constant problem, so it could be any or all of these.

(It would help if we were all familiar with the Xhier philosophy. If you haven't read it recently, please read Xhier Philosophy and Goals (http://www.math.uwaterloo.ca/mfcf/info/overviews/xhier-philosophy).)

Quick and Dirty Interpretation

If we take one extreme, that it means all of the above, the solution is reasonably simple. We already run some machines without use of Xhier, so we have some experience maintaining this kind of setup. It's simply a matter of going to each of our xhiered machines:

That could all be done in a few weeks, but I certainly wouldn't recommend this direction. [But see Minority Report in the Appendix.]

Gentler Interpretation

At the other extreme, it might mean retaining our current software and the xhier philosophy, eliminating only the infrastructure currently provided directly by xhier and replacing it with a combination of externally provided packages and, where necessary, some locally written software.

Some of the most common services provided by Xhier are:

Code Portability and Centralized Compilation
The various architectures we use often provide different and incompatible functions and structures. Xhier provides compatibility libraries (e.g. libcfix) and header files (e.g. <mfcf/libc/xxx.h>) that allow code to be written independently of the architecture on which it is to be built and run.
All source code is maintained on one machine. One can make a change to some code, then use Xhier-provided programs (e.g. xh-sdist) that distribute this source to the architecture masters, compile it using whatever programs and libraries are appropriate for each architecture, and install the update into the software package. Later, the updated files are automatically distributed to all client machines that request the package. All this can be done in the few seconds it takes to type one simple command.
Each week, Xhier automatically compares the build dates on the source master against those on the architecture master and sends each package maintainer a list of files that need to be (re)built (typically because the arch master was unavailable at the time the software was rebuilt).
Automated Distribution
Xhier automatically distributes software packages from the architecture masters to their client machines.
Xhiered packages can be requested at any level of the distribution hierarchy, and distribution restrictions (e.g. for licensed software) can be controlled by the higher levels.
Centralized and Non-centralized Administration and Configuration
Administration and configuration can be centralized at several levels, from shared configuration on the source master, through architecture-specific, administration-specific (e.g. MFCF or CSCF), and region-specific levels, down to the individual machine.
In particular, much software makes use of xhier regions, where several machines (often of different architecture) share a common set of home directories and user accounts (e.g. the student.math, general.math, and iqc.math regions).
Uniform Presentation
In addition to the uniform environment provided to the package maintainers for building and distributing their source, Xhier provides standard ways for packages to be configured. Regardless of the package, local system administrators know exactly where to look to determine each package's configuration, crontab entries, weekly installation scripts, etc.
Similarly, since packages can be built independently of which architecture they are installed on, the set of commands and services provided by each package is identical on all machines. E.g. users don't need to know that they must use one option flag on Linux and a different one on Solaris.
Other Benefits
Xhier allows multiple versions of the same package to be installed at the same time on the same machine, each with its own configuration. One version is considered the default, but users can easily put a different version into their personal search paths. Most common Linux and Solaris packages do not allow this.
Xhier installs as little as possible into the vendor space. In particular, such things as configuration, crontab, and boot-time scripts are not made available to the operating system until the package has confirmed that it was installed correctly. Most common Linux and Solaris packages do not treat system changes specially, and so can end up making themselves available even though parts of the package are absent, broken, or misconfigured.
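The multiple-versions benefit above can be sketched in shell. Everything here is invented for illustration (the package name "frob" and the directory layout are not the real xhier layout); the point is only the mechanism of prepending a personal search path:

```shell
#!/bin/sh
# Two versions of a hypothetical package installed side by side,
# each with its own bin directory.
base=$(mktemp -d)
mkdir -p "$base/frob-1.0/bin" "$base/frob-2.0/bin"
printf '#!/bin/sh\necho version 1.0\n' > "$base/frob-1.0/bin/frob"
printf '#!/bin/sh\necho version 2.0\n' > "$base/frob-2.0/bin/frob"
chmod +x "$base/frob-1.0/bin/frob" "$base/frob-2.0/bin/frob"

# Site default: version 1.0 comes first on the system-wide PATH.
PATH="$base/frob-1.0/bin:$PATH"
default=$(frob)            # "version 1.0"

# A user opts in to version 2.0 by prepending it to a personal PATH.
PATH="$base/frob-2.0/bin:$PATH"
hash -r 2>/dev/null        # forget the shell's cached lookup of frob
chosen=$(frob)             # "version 2.0"

echo "default: $default"
echo "chosen:  $chosen"
rm -rf "$base"
```

Note the `hash -r`: shells cache command lookups, so a PATH change alone is not always enough within a running shell.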

Replacing these (and other) aspects of Xhier, while retaining a similar level of service to maintainers and clients, would require considerable work:

We are aware of some external packages (e.g. Puppet, Bcfg2) that can provide a part of the above (configuration and, to some extent, distribution), but no existing package or collection of packages that can provide it all. (See Building From Source in the Appendix.) A large part of this task would effectively be the reinvention of Xhier under a different name.

Furthermore, unless we can convince CSCF, IST, and other Campus groups to follow our lead, we would lose support for many xhiered packages that they currently maintain. (Of the xhiered packages currently on our Solaris systems, 80 are maintained by MFCF, 50 by CSCF, 40 by IST, and 50 by others.)

A reasonable estimate of the time required to fully provide the current level of service to package maintainers and clients would be measured in man-years.

The benefit to such an undertaking would be nowhere near the cost. The effort would be much better spent teaching staff how to use Xhier, and perhaps even how to improve it.

Xhier hasn't outlived its usefulness; it has outlived the people who know how to use it.

Appendix 1 — Building From Source

Most xhiered source can be built with a one-line xh-imakefile for the package. Similarly, each program needs only a one-line file specifying the name of the program, its language, and its intended use (e.g. general users or system maintainers), often with additional lines specifying required libraries or other compile-time values.

These xh-imakefiles are completely portable across all supported architectures (currently Solaris, IRIX, several versions of Linux, and MacOS).
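I don't have an xh-imakefile in front of me, so the following is purely an invented illustration of the idea, not real xh-imakefile syntax; it simply renders the sentence above as a file:

```
# NOT actual xh-imakefile syntax -- an invented illustration only.
# One line names the program, its language, and its intended audience;
# optional lines add required libraries or other compile-time values.
program frob c general-users
libraries m
```

The significant point is what is absent: no per-OS compiler flags, no per-architecture conditionals, no install rules.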

I'm not aware of any open-source or commercial product that provides this capability. Web searches turn up similar requests, but the answers are generally "No; build by hand using the vendor's native environment." See the examples below (emphasis mine).

It is possible with existing tools, but for each individual package (and perhaps each command) one would have to write a very package-specific and OS-specific configuration file. And each of the thousands of source files might need to be tailored to handle the various OS-specific differences.

Even within a specific architecture, such as Linux, and even for pre-built software, there isn't any one standard method of performing installation and updating. E.g. RedHat has yum, Ubuntu has apt-get, SuSE has zypper.
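Any cross-distribution script ends up branching on which of those tools is present. A minimal sketch (the httpd/apache2 package names are the common ones for Apache, though they vary by release):

```shell
#!/bin/sh
# Pick the native package manager for this machine -- each Linux family
# ships a different one, with different package names for the same software.
if   command -v yum     >/dev/null 2>&1; then pm="yum install httpd"
elif command -v apt-get >/dev/null 2>&1; then pm="apt-get install apache2"
elif command -v zypper  >/dev/null 2>&1; then pm="zypper install apache2"
else pm="unknown"
fi
echo "would run: $pm"
```

Note that both the tool and the package name differ, so even this trivial case can't be written once and distributed everywhere.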

I know it's difficult to prove non-existence, but I'd say there simply is nothing approaching the simplicity or power of xhier's generic build-and-install xh-imakefile.

Below are typical responses to the question of using Puppet or CFEngine to maintain and build source code.


How can I use Puppet to build from source? (asked Nov 13 '12)

I have a webserver and I want to download, unpack, configure and compile and install apache. How can I get Puppet to do this for me only once?

answered Nov 28 '12

While you could have a series of exec{} resources that each check that the commands that need to be run have been run, and build relationships between them for ordering, you do not want to go down that road.

All software installed on a system should be done through that OS's packaging software, not through compilation. Then you can just use a package{} resource. This also gives you the benefit of leveraging the packaging software that acts as a source of truth for installed packages and generally knows the files on disk for each package.

answered Dec 18 '12

Using native packages is almost always the sanest way to go, as other answers suggest. That said, Puppet as a framework is capable of supporting build-from-source style application deployments.

A generic stub of what a defined type for building from source might look like: https://gist.github.com/2597027

The gist is unfinished and most notably does not include "unless" statements or refreshonly parameters, but gets the idea across in a simple way.
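For what it's worth, the package{}-based approach recommended in the first answer looks something like this in Puppet; the 'httpd' package and service names are the usual RedHat ones and are my assumption here (Debian would use 'apache2'):

```puppet
# Install Apache from the OS's native repository, then keep it running.
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}
```

Note that this delegates all building to the distribution's packagers; it does nothing for locally written, multi-architecture source.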


Can CFEngine track packages installed from source (RedHat)

So I have searched this group as well as google and trudged through a portion of the documentation but I am unclear if CFEngine can track and report packages which are installed from source

CFEngine leverages native packaging systems. Source code has literally no uniformity or anything even remotely resembling standardization.

That being said, CFEngine can very easily manage source based packaging systems (e.g., BSD ports, Homebrew). If you can find (or devise) a standard convention for source based packages then CFEngine can manage it for you.

However, I would strongly urge you not to. Creating native packages on most systems is not that hard. And there's always fpm.

Appendix 2 — IST

(From an August 1 meeting with Mike Borkowski, Jason Gorrie, and Shawn Winnington-Ball.)


Things are generally frozen as far as Xhier is concerned. Current packages and architectures will continue to be supported, but there is no development and little maintenance. Demand for the service is expected to continue to decline.

Configuration and Distribution

In the non-Xhiered world, IST mostly uses CFEngine, but they believe that Puppet is more useful and are moving toward using that instead.

Puppet is good at providing configuration and consistency checks on a large number of machines.

Rather than distributing software updates, Puppet reports where native software needs to be installed or updated. This process does not scale the way Xhier does, and so requires more individual attention for each machine. The number of supported machines is not expected to increase significantly, however, and in many cases massive updates can be made by periodic re-imaging, with the owners re-doing their own customization where necessary.


Unlike Xhier, Puppet does not provide an administration hierarchy. This is not a problem for IST itself, as it does not currently make use of the administration and region hierarchy that Xhier provides. Individual units, such as MFCF, or within it IQC, would have to run their own Puppet server, but could make use of modules provided by IST.


IST doesn't have much locally written software, and what they do have is distributed to only a small number of machines, all of the same architecture. Losing Xhier's ability to maintain portable source for multiple architectures will not be a significant issue for them.

Appendix 3 — CSCF

(From an August 2 meeting with Fraser Gunn.)


Like MFCF, CSCF cares about this issue, but has fuzzy plans, and isn't doing much about it yet.

They would miss Xhier's ability to provide multiple versions of the same package, independently configured, on the same machine. This can't be done easily with many Linux packages. Xhier is also much better at ensuring that packages are properly installed before it makes changes to the system configuration.


Unlike MFCF, CSCF is well on the way to a homogeneous environment with only one architecture. They are also arranging for their new packages to be created in Debian package format.

Having everything on Debian/Ubuntu eliminates the need for portable code, though it does mean that they may end up with a lot of architecture-specific code, which will eventually make moving to or incorporating any other system an expensive task.

Much of the need for many xhiered packages has gone away over the years due to improvements in vendor software. Previously it was very inconsistent from one architecture to another, was lacking in features (which had to be added locally), and was often buggy, with slow or nonexistent response to requests for fixes.

But in terms of xhiered software, they are still very dependent on some locally written packages, such as the accounts management system.

Configuration and Distribution

CSCF has experimented with CFEngine and Bcfg2, but doesn't really like either. Puppet seems like a much more promising system, especially if used with Git for content management.

Appendix 4 — Minority Report


Colin Powell tells a story from his Vietnam War experiences. An Army outpost was stationed in a very vulnerable location, but the resulting casualties were deemed necessary in order to defend an essential airfield nearby. Meanwhile, the Air Force's planes came under attack every time they landed or took off, but those casualties were deemed necessary in order to supply the essential Army base nearby. And no one knew why either the base or the airfield was needed in the first place, other than to defend and supply each other. It sounds like a sick joke, but is apparently true.

There are stories of corporations where, as the company grew, someone gathering statistics eventually became several people, and in time involved the operations of multiple departments. People would carefully screen data and gather information, ship it off to others who would organize it, ship off the results to others who would analyze it and prepare reports, which were sent to be filed away in a warehouse. Everyone worked hard at their jobs and understood how important and essential their work was. But because this particular work stream was only part of what each department did, no one could see the big picture of this one set of tasks. The person who had originally requested the statistics had died decades ago, but the project continued to thrive and grow long after anyone had any need for it.


The preceding examples of circular purpose were taken from something I wrote a few years ago. It's not obvious that the situation MFCF finds itself in is any different.

It's easy to justify the necessity of a system that makes it easy for us to support, distribute, and configure the large number of xhiered packages that are on our machines.

And it's easy to justify the large number of xhiered packages that are on our machines because of how easy they make the task of supporting the software on those machines.

Is it possible this is another example of circular purpose?

It might be a large task, certainly beyond the scope of this report, but perhaps we should investigate why each of the couple of hundred xhiered packages currently on mfcf.math is needed.

Perhaps it might turn out that what we really need for the long term is actually a lot smaller and simpler than we think.

General Direction

The world is moving away from communal environments. Serious computing tends to be done on individual machines (or clusters of machines), which are maintained and customized by their owners. Most other users tend to be happy using web-browsers and a few other common tools (e.g. document preparation) from their laptops. More and more services will be provided from central servers as web applications, not requiring any support on individual machines.

Once the goal of eliminating mail service from MFCF-supported machines is finally true in practice rather than only in theory, it will be much easier to measure the actual usage of our systems. After eliminating the effects of the large number of users who access these machines only for mail, the results might well indicate that our customer base is far different from the one we think we are serving.