gnomeos: We can now build gobject-introspection

Colin Walters 2012-01-03 19:09:12 -05:00
parent bd4fc401e4
commit 5b0084994e
22 changed files with 614 additions and 298 deletions


@ -19,21 +19,21 @@ ostbuild: src/ostbuild/ostbuild.in Makefile
sed -e s,@libdir\@,$(libdir), -e s,@datarootdir\@,$(datarootdir), -e s,@PYTHON\@,$(PYTHON), $< > $@.tmp && mv $@.tmp $@
bin_SCRIPTS += ostbuild
bin_SCRIPTS += \
src/ostbuild/ostbuild-nice-and-log-output \
$(NULL)
pyostbuilddir=$(libdir)/ostbuild/pyostbuild
pyostbuild_PYTHON = \
src/ostbuild/pyostbuild/__init__.py \
src/ostbuild/pyostbuild/builtins.py \
src/ostbuild/pyostbuild/main.py \
src/ostbuild/pyostbuild/ostbuildlog.py \
src/ostbuild/pyostbuild/subprocess_helpers.py \
src/ostbuild/pyostbuild/buildutil.py \
src/ostbuild/pyostbuild/builtin_autodiscover_meta.py \
src/ostbuild/pyostbuild/builtin_build.py \
src/ostbuild/pyostbuild/builtin_chroot_compile_one.py \
src/ostbuild/pyostbuild/builtin_commit_artifacts.py \
src/ostbuild/pyostbuild/builtin_compile_one.py \
src/ostbuild/pyostbuild/builtins.py \
src/ostbuild/pyostbuild/__init__.py \
src/ostbuild/pyostbuild/kvfile.py \
src/ostbuild/pyostbuild/main.py \
src/ostbuild/pyostbuild/ostbuildlog.py \
src/ostbuild/pyostbuild/ostbuildrc.py \
src/ostbuild/pyostbuild/subprocess_helpers.py \
$(NULL)
bin_PROGRAMS += src/ostbuild/ostbuild-user-chroot
@ -41,3 +41,5 @@ bin_PROGRAMS += src/ostbuild/ostbuild-user-chroot
ostbuild_user_chroot_SOURCES = src/ostbuild/ostbuild-user-chroot.c
ostbuild_user_chroot_CFLAGS = $(AM_CFLAGS)
bin_SCRIPTS += src/ostbuild/ostbuild-nice-and-log-output

gnomeos/3.4/glib.txt Normal file

@ -0,0 +1 @@
SRC=git:git://git.gnome.org/glib


@ -0,0 +1 @@
SRC=git:git://git.gnome.org/gobject-introspection


@ -0,0 +1,2 @@
SRC=git:git://git.gnome.org/gtk-doc-stub
COMPONENT=devel


@ -0,0 +1,2 @@
SRC=svn:http://libarchive.googlecode.com/svn/trunk/libarchive-read-only
CONFIGURE_OPTS=--disable-bsdtar --disable-bsdcpio

gnomeos/3.4/libxslt.txt Normal file

@ -0,0 +1,2 @@
SRC=git:git://git.gnome.org/libxslt
EXTRA_OECONF = "--disable-static"

gnomeos/3.4/manifest.json Normal file

@ -0,0 +1,11 @@
{
"name": "gnomeos-3.4",
"architectures": ["i686"],
"base": "yocto/gnomeos-3.4",
"components": [
"gtk-doc-stub",
"gobject-introspection",
"glib"
]
}

gnomeos/README Normal file

@ -0,0 +1,56 @@
Overview
--------
The build process is divided into two levels:
1) Yocto
2) ostbuild
Yocto is used as a reliable, well-maintained bootstrapping tool. It
provides the basic filesystem layout as well as binaries for core
build utilities like gcc and bash. This gets us out of circular
dependency problems.
At the end, the Yocto build process generates two tarballs: one for a
base "runtime", and one "devel" with all of the development tools like
gcc. We then import each of these into an OSTree branch,
e.g. "bases/gnomeos-3.4-yocto-i686-devel".
Now we also assume that you have ostree installed on the host build
system via e.g. jhbuild or RPM if doing a cross build. The core
ostbuild tool can then chroot into a checkout of the Yocto base, and
start generating artifacts.
Each generated artifact is committed to an OSTree branch like
"artifacts/gnomeos-3.4-i686-devel/libxslt/master/runtime". Then, a
"compose" process merges together the individual filesystem trees into
the final branches (e.g. gnomeos-3.4-i686-devel), and the process
repeats.
ostbuild details
----------------
The goal of ostbuild is simple: it takes as its only input a
"manifest", which is basically just a list of components to build. A
component is a pure metadata file that includes the git repository
URL and branch name, as well as ./configure flags (--enable-foo).
There is no support for building from "tarballs" - I want the ability
to review all of the code that goes in, and to efficiently store
source code updates.
For GNOME, tarballs are mostly pointless - it's easy enough to just
run autogen.sh. However, there are two challenges:
1) Tarballs for modules which self-build-depend may include
pre-generated files. For example - flex's tarball includes a
generated .c file for the parser. For these, we can either move
the module build to the Yocto level (thus giving a convenient way
to pull in host files), or possibly add the ability to
hardlink/copy in host binaries to ostbuild.
2) Tarballs which include translations pulled from a different
location. For example - bison. For these, we basically have to
maintain our own git repositories.


@ -1,104 +0,0 @@
Experimenting with multiple roots
---------------------------------
$ mkdir gnomeos-chroot
$ qemu-img create gnomeos.raw 2G
$ mkfs.ext2 -F gnomeos.raw
$ mount -o loop gnomeos.raw gnomeos-chroot
$ debootstrap --arch=amd64 squeeze gnomeos-chroot
<http://wiki.debian.org/QEMU#Setting_up_a_testing.2BAC8-unstable_system>
Follow the steps for making a disk image, downloading the business
card CD, booting it in QEMU and running through the installer. Note I
used the QCOW format, since it is more efficient. Here are the steps
I chose:
$ qemu-img create -f qcow2 debian.qcow 2G
$ qemu-kvm -hda debian.qcow -cdrom debian-testing-amd64-businesscard.iso -boot d -m 512
Test that the image works after installation too, before you start
modifying things below! Remember to remove the -cdrom and -boot
options from the installation QEMU command. It should just look like
this:
$ qemu-kvm -hda debian.qcow -m 512
Modifying the image
-------------------
You now have a disk image in debian.qcow, and the first partition
should be ext4.
The first thing I did was mount the image, and move the "read only"
parts of the OS to a new directory "r0".
$ mkdir /mnt/debian
$ modprobe nbd max_part=8
$ qemu-nbd --connect=/dev/nbd0 debian.qcow
$ mount /dev/nbd0p1 /mnt/debian/
$ cd /mnt/debian
$ mkdir r0
$ DIRS="bin dev etc lib lib32 lib64 media mnt opt proc root run sbin selinux srv sys tmp usr"
$ mv $DIRS r0
$ mkdir r0/{boot,var,home}
$ touch r0/{boot,var,home}/EMPTY
Note that /boot, /home and /var are left shared; we create empty
destination directories that will be mounted over. Now with it still
mounted, we need to move on to the next part - modifying the initrd.
Then I started hacking on the initrd, making it understand how to
chroot to "r0". I ended up with two patches - one to util-linux, and
one to the "init" script in Debian's initrd.
See:
0001-switch_root-Add-subroot-option.patch
0001-Add-support-for-subroot-option.patch
$ git clone --depth=1 git://github.com/karelzak/util-linux.git
$ cd util-linux
$ patch -p1 -i ../0001-switch_root-Add-subroot-option.patch
$ ./autogen.sh; ./configure ; make
Now you have a modified "sys-utils/switch_root" binary. Let's next
patch the initrd and rebuild it:
$ cd ..
Make a backup:
$ mkdir initrd
$ cp /mnt/debian/boot/initrd.img-3.0.0-1-amd64{,.orig}
Unpack, and patch:
$ zcat /mnt/debian/boot/initrd.img-3.0.0-1-amd64 | (cd initrd; cpio -d -i -v)
$ (cd initrd && patch -p1 -i ../0001-Add-support-for-subroot-option.patch)
Repack:
$ (cd initrd; find | cpio -o -H newc) | gzip > /mnt/debian/boot/initrd.img-3.0.0-1-amd64.new
$ mv /mnt/debian/boot/initrd.img-3.0.0-1-amd64{.new,}
Unmount:
$ umount /mnt/debian
Running hacktree inside the system
----------------------------------
This means that after booting, every process would be in /r0 -
including any hacktree process. Assuming objects live in, say,
/objects, we need some way for hacktree to switch things. I think
just chroot breakout would work. This has the advantage that the
daemon can continue to use libraries from the active host.
Note there is a self-reference here (as is present in Debian/Fedora
etc.) - the update system would at present be shipped with the system
itself. Should they be independent? That has advantages and
disadvantages. I think we should just try really really hard to avoid
breaking hacktree in updates.


@ -26,16 +26,21 @@ BRANCH=$1
test -n "$BRANCH" || usage
shift
ARCH=x86
YOCTO_ARCH=x86
MACHINE=i686
BUILDROOT="gnomeos-3.4-${MACHINE}-${BRANCH}"
BASE="bases/yocto/${BUILDROOT}"
OSTREE_VER=$(cd $SCRIPT_SRCDIR && git describe)
BUILDDIR=$WORKDIR/tmp-eglibc
OSTREE_REPO=$WORKDIR/repo
BUILD_TAR=$BUILDDIR/deploy/images/gnomeos-contents-$BRANCH-qemu${ARCH}.tar.gz
BUILD_TAR=$BUILDDIR/deploy/images/gnomeos-contents-$BRANCH-qemu${YOCTO_ARCH}.tar.gz
BUILD_TIME=$(date -r $BUILD_TAR)
ostree --repo=${OSTREE_REPO} commit --skip-if-unchanged -s "Build from OSTree ${OSTREE_VER}" -b "gnomeos-yocto-$ARCH-$BRANCH" --tree=tar=${BUILD_TAR}
ostree --repo=${OSTREE_REPO} diff "gnomeos-yocto-$ARCH-$BRANCH"^ "gnomeos-yocto-$ARCH-$BRANCH"
ostree --repo=${OSTREE_REPO} commit --skip-if-unchanged -s "Build from OSTree ${OSTREE_VER}" -b "${BASE}" --tree=tar=${BUILD_TAR}
ostree --repo=${OSTREE_REPO} diff "${BASE}"^ "${BASE}" || true
cp ${OSTREE_REPO}/refs/heads/${BASE} ${OSTREE_REPO}/refs/heads/${BUILDROOT}


@ -1,8 +0,0 @@
#!/usr/bin/python
#
# Copyright 2011 Colin Walters <walters@verbum.org>
# Licensed under the new-BSD license (http://www.opensource.org/licenses/bsd-license.php)
import os,sys,subprocess,tempfile,re
for


@ -0,0 +1,38 @@
# Copyright (C) 2011 Colin Walters <walters@verbum.org>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
import re
ARTIFACT_RE = re.compile(r'^artifact-([^,]+),([^,]+),([^,]+),([^,]+),(.+)-((?:runtime)|(?:devel))\.tar\.gz$')
def parse_artifact_name(artifact_basename):
match = ARTIFACT_RE.match(artifact_basename)
if match is None:
raise ValueError("Invalid artifact basename %s" % (artifact_basename))
return {'buildroot': match.group(1),
'buildroot_version': match.group(2),
'name': match.group(3),
'branch': match.group(4),
'version': match.group(5),
'type': match.group(6)}
def branch_name_for_artifact(a):
return 'artifacts/%s/%s/%s/%s' % (a['buildroot'],
a['name'],
a['branch'],
a['type'])
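
To make the naming scheme concrete, here is a small usage sketch. The artifact basename and versions below are invented, and the import assumes the pyostbuild package is on the Python path (it is installed under $(libdir)/ostbuild/pyostbuild):

from pyostbuild import buildutil

# Hypothetical artifact produced by "ostbuild compile-one" for glib
basename = 'artifact-gnomeos-3.4-i686-devel,deadbeef,glib,master,2.31.6-10-gabcdef1-runtime.tar.gz'
parsed = buildutil.parse_artifact_name(basename)
# parsed == {'buildroot': 'gnomeos-3.4-i686-devel',
#            'buildroot_version': 'deadbeef',
#            'name': 'glib',
#            'branch': 'master',
#            'version': '2.31.6-10-gabcdef1',
#            'type': 'runtime'}

# Branch that "ostbuild commit-artifacts" will commit it to
print buildutil.branch_name_for_artifact(parsed)
# artifacts/gnomeos-3.4-i686-devel/glib/master/runtime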


@ -72,10 +72,7 @@ class OstbuildAutodiscoverMeta(builtins.Builtin):
def _discover_version_from_git(self):
if os.path.isdir('.git'):
try:
version = subprocess.check_output(['git', 'describe'])
except subprocess.CalledProcessError, e:
version = subprocess.check_output(['git', 'rev-parse', 'HEAD'])
version = subprocess.check_output(['git', 'describe', '--long', '--abbrev=42', '--always'])
return version.strip()
return None


@ -0,0 +1,262 @@
# Copyright (C) 2011 Colin Walters <walters@verbum.org>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
import os,sys,subprocess,tempfile,re,shutil
import argparse
import json
from . import builtins
from .ostbuildlog import log, fatal
from .subprocess_helpers import run_sync, run_sync_get_output
from . import ostbuildrc
from . import buildutil
from . import kvfile
class BuildOptions(object):
pass
class OstbuildBuild(builtins.Builtin):
name = "build"
short_description = "Rebuild all artifacts from the given manifest"
def __init__(self):
builtins.Builtin.__init__(self)
def _ensure_vcs_mirror(self, name, keytype, uri, branch):
assert keytype == 'git'
mirror = os.path.join(self.srcdir, name)
tmp_mirror = mirror + '.tmp'
if os.path.isdir(tmp_mirror):
shutil.rmtree(tmp_mirror)
if not os.path.isdir(mirror):
run_sync(['git', 'clone', '--mirror', uri, tmp_mirror])
os.rename(tmp_mirror, mirror)
return mirror
def _get_vcs_checkout(self, name, keytype, mirrordir, branch):
checkoutdir = os.path.join(self.srcdir, '_checkouts')
if not os.path.isdir(checkoutdir):
os.makedirs(checkoutdir)
dest = os.path.join(checkoutdir, name)
tmp_dest = dest + '.tmp'
if os.path.isdir(dest):
shutil.rmtree(dest)
if os.path.isdir(tmp_dest):
shutil.rmtree(tmp_dest)
subprocess.check_call(['git', 'clone', '--depth=1', '-q', mirrordir, tmp_dest])
subprocess.check_call(['git', 'checkout', '-q', branch], cwd=tmp_dest)
subprocess.check_call(['git', 'submodule', 'update', '--init'], cwd=tmp_dest)
os.rename(tmp_dest, dest)
return dest
def _get_vcs_version_from_checkout(self, name):
vcsdir = os.path.join(self.srcdir, name)
return subprocess.check_output(['git', 'rev-parse', 'HEAD'], cwd=vcsdir)
def _parse_src_key(self, srckey):
idx = srckey.find(':')
if idx < 0:
raise ValueError("Invalid SRC uri=%s" % (srckey, ))
keytype = srckey[:idx]
if keytype not in ('git'):
raise ValueError("Unsupported SRC uri=%s" % (srckey, ))
uri = srckey[idx+1:]
idx = uri.rfind('#')
if idx < 0:
branch = "master"
else:
branch = uri[idx+1:]
uri = uri[0:idx]
return (keytype, uri, branch)
def _parse_artifact_vcs_version(self, ver):
idx = ver.rfind('-')
if idx > 0:
vcs_ver = ver[idx+1:]
else:
vcs_ver = ver
if not vcs_ver.startswith('g'):
raise ValueError("Invalid artifact version '%s'" % (ver, ))
return vcs_ver[1:]
def _get_ostbuild_chroot_args(self, architecture):
current_machine = os.uname()[4]
if current_machine != architecture:
args = ['setarch', architecture]
else:
args = []
args.extend(['ostbuild', 'chroot-compile-one',
'--repo=' + self.repo])
return args
def _launch_debug_shell(self, architecture, buildroot, cwd=None):
args = self._get_ostbuild_chroot_args(architecture)
args.extend(['--buildroot=' + buildroot,
'--workdir=' + self.workdir,
'--debug-shell'])
run_sync(args, cwd=cwd, fatal_on_error=False, keep_stdin=True)
fatal("Exiting after debug shell")
def _build_one_component(self, name, architecture, meta):
(keytype, uri, branch) = self._parse_src_key(meta['SRC'])
component_vcs_mirror = self._ensure_vcs_mirror(name, keytype, uri, branch)
component_src = self._get_vcs_checkout(name, keytype, component_vcs_mirror, branch)
buildroot = '%s-%s-devel' % (self.manifest['name'], architecture)
branchname = 'artifacts/%s/%s/%s' % (buildroot, name, branch)
current_buildroot_version = run_sync_get_output(['ostree', '--repo=' + self.repo,
'rev-parse', buildroot])
current_buildroot_version = current_buildroot_version.strip()
previous_commit_version = run_sync_get_output(['ostree', '--repo=' + self.repo,
'rev-parse', branchname],
stderr=open('/dev/null', 'w'),
none_on_error=True)
if previous_commit_version is not None:
log("Previous build of '%s' is %s" % (branchname, previous_commit_version))
previous_artifact_version = run_sync_get_output(['ostree', '--repo=' + self.repo,
'show', '--print-metadata-key=ostbuild-artifact-version', previous_commit_version])
previous_artifact_version = previous_artifact_version.strip()
previous_buildroot_version = run_sync_get_output(['ostree', '--repo=' + self.repo,
'show', '--print-metadata-key=ostbuild-buildroot-version', previous_commit_version])
previous_buildroot_version = previous_buildroot_version.strip()
previous_vcs_version = self._parse_artifact_vcs_version(previous_artifact_version)
current_vcs_version = self._get_vcs_version_from_checkout(name)
vcs_version_matches = False
if previous_vcs_version == current_vcs_version:
vcs_version_matches = True
log("VCS version is unchanged from '%s'" % (previous_vcs_version, ))
else:
log("VCS version is now '%s', was '%s'" % (current_vcs_version, previous_vcs_version))
buildroot_version_matches = False
if vcs_version_matches:
buildroot_version_matches = (current_buildroot_version == previous_buildroot_version)
if buildroot_version_matches:
log("Already have build '%s' of src commit '%s' for '%s' in buildroot '%s'" % (previous_commit_version, previous_vcs_version, branchname, buildroot))
return
else:
log("Buildroot is now '%s'" % (current_buildroot_version, ))
else:
log("No previous build for '%s' found" % (branchname, ))
component_resultdir = os.path.join(self.workdir, name, 'results')
if os.path.isdir(component_resultdir):
shutil.rmtree(component_resultdir)
os.makedirs(component_resultdir)
chroot_args = self._get_ostbuild_chroot_args(architecture)
chroot_args.extend(['--buildroot=' + buildroot,
'--workdir=' + self.workdir,
'--resultdir=' + component_resultdir])
if self.buildopts.shell_on_failure:
ecode = run_sync(chroot_args, cwd=component_src, fatal_on_error=False)
if ecode != 0:
self._launch_debug_shell(architecture, buildroot, cwd=component_src)
else:
run_sync(chroot_args, cwd=component_src, fatal_on_error=True)
artifact_files = []
for name in os.listdir(component_resultdir):
if name.startswith('artifact-'):
log("Generated artifact file: %s" % (name, ))
artifact_files.append(os.path.join(component_resultdir, name))
assert len(artifact_files) >= 1 and len(artifact_files) <= 2
run_sync(['ostbuild', 'commit-artifacts',
'--repo=' + self.repo] + artifact_files)
artifacts = []
for filename in artifact_files:
parsed = buildutil.parse_artifact_name(os.path.basename(filename))
artifacts.append(parsed)
def _sort_artifact(a, b):
if a['type'] == b['type']:
return 0
elif a['type'] == 'runtime':
return -1
return 1
artifacts.sort(_sort_artifact)
return artifacts
def _compose(self, suffix, artifacts):
compose_contents = ['bases/' + self.manifest['base'] + '-' + suffix]
compose_contents.extend(artifacts)
child_args = ['ostree', '--repo=' + self.repo, 'compose',
'-b', self.manifest['name'] + '-' + suffix, '-s', 'Compose']
child_args.extend(compose_contents)
run_sync(child_args)
def execute(self, argv):
parser = argparse.ArgumentParser(description=self.short_description)
parser.add_argument('--manifest', required=True)
parser.add_argument('--start-at')
parser.add_argument('--shell-on-failure', action='store_true')
parser.add_argument('--debug-shell', action='store_true')
args = parser.parse_args(argv)
self.parse_config()
self.buildopts = BuildOptions()
self.buildopts.shell_on_failure = args.shell_on_failure
self.manifest = json.load(open(args.manifest))
if args.debug_shell:
debug_shell_arch = self.manifest['architectures'][0]
debug_shell_buildroot = '%s-%s-devel' % (self.manifest['name'], debug_shell_arch)
self._launch_debug_shell(debug_shell_arch, debug_shell_buildroot)
dirname = os.path.dirname(args.manifest)
components = self.manifest['components']
runtime_components = []
devel_components = []
runtime_artifacts = []
devel_artifacts = []
if args.start_at:
start_at_index = -1
for i,component_name in enumerate(components):
if component_name == args.start_at:
start_at_index = i
break
if start_at_index == -1:
fatal("Unknown component '%s' for --start-at" % (args.start_at, ))
else:
start_at_index = 0
for component_name in components[start_at_index:]:
for architecture in self.manifest['architectures']:
path = os.path.join(dirname, component_name + '.txt')
f = open(path)
component_meta = kvfile.parse(f)
artifact_branches = self._build_one_component(component_name, architecture, component_meta)
target_component = component_meta.get('COMPONENT')
if target_component == 'devel':
devel_components.append(component_name)
else:
runtime_components.append(component_name)
for branch in artifact_branches:
if branch['type'] == 'runtime':
runtime_artifacts.append(branch)
devel_artifacts.extend(artifact_branches)
f.close()
devel_branches = map(buildutil.branch_name_for_artifact, devel_artifacts)
self._compose(architecture + '-devel', devel_branches)
runtime_branches = map(buildutil.branch_name_for_artifact, runtime_artifacts)
self._compose(architecture + '-runtime', runtime_branches)
builtins.register(OstbuildBuild)
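
As a sketch of how the SRC keys in the component files above are interpreted: _parse_src_key splits off the VCS type before the first ':' (only git is supported so far) and treats an optional '#fragment' as the branch, defaulting to master. The branch name below is invented for illustration, and the import again assumes pyostbuild is importable:

from pyostbuild.builtin_build import OstbuildBuild

build = OstbuildBuild()
print build._parse_src_key('git:git://git.gnome.org/glib')
# ('git', 'git://git.gnome.org/glib', 'master')
print build._parse_src_key('git:git://git.gnome.org/glib#glib-2-32')
# ('git', 'git://git.gnome.org/glib', 'glib-2-32')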


@ -39,13 +39,13 @@ class OstbuildChrootCompileOne(builtins.Builtin):
short_description = "Build artifacts from the current source directory in a chroot"
def execute(self, argv):
parser = argparse.ArgumentParser(description="Build a module in a given root")
parser = argparse.ArgumentParser(description=self.short_description)
parser.add_argument('--workdir')
parser.add_argument('--repo')
parser.add_argument('--repo', required=True)
parser.add_argument('--resultdir')
parser.add_argument('--branch')
parser.add_argument('--buildroot', required=True)
parser.add_argument('--meta')
parser.add_argument('--debug-shell', type=bool)
parser.add_argument('--debug-shell', action='store_true')
args = parser.parse_args(argv)
@ -67,7 +67,7 @@ class OstbuildChrootCompileOne(builtins.Builtin):
workdir_is_tmp = (args.workdir is None)
if workdir_is_tmp:
workdir = tempfile.mkdtemp(prefix='ostree-chroot-compile-')
workdir = tempfile.mkdtemp(prefix='ostbuild-chroot-compile-')
else:
workdir = args.workdir
@ -79,10 +79,10 @@ class OstbuildChrootCompileOne(builtins.Builtin):
shutil.rmtree(child_tmpdir)
os.mkdir(child_tmpdir)
rev = subprocess.check_output(['ostree', '--repo=' + args.repo, 'rev-parse', args.branch])
rev = subprocess.check_output(['ostree', '--repo=' + args.repo, 'rev-parse', args.buildroot])
rev=rev.strip()
metadata['BUILDROOT'] = args.branch
metadata['BUILDROOT'] = args.buildroot
metadata['BUILDROOT_VERSION'] = rev
rootdir = os.path.join(workdir, 'root-' + rev)
@ -130,14 +130,14 @@ class OstbuildChrootCompileOne(builtins.Builtin):
'--mount-proc', '/proc',
'--mount-bind', '/dev', '/dev',
'--mount-bind', child_tmpdir, '/tmp',
'--mount-bind', os.getcwd(), chroot_sourcedir,
'--mount-bind', args.resultdir, '/ostbuild/results',
rootdir,
'/bin/sh']
'--mount-bind', os.getcwd(), chroot_sourcedir]
if args.resultdir:
child_args.extend(['--mount-bind', args.resultdir, '/ostbuild/results'])
child_args.extend([rootdir, '/bin/sh'])
if not args.debug_shell:
child_args += ['-c',
'cd "%s" && ostbuild-compile-one-impl OSTBUILD_RESULTDIR=/ostbuild/results OSTBUILD_META=_ostbuild-meta' % (chroot_sourcedir, )
]
child_args.extend(['-c',
'cd "%s" && ostbuild compile-one --ostbuild-resultdir=/ostbuild/results --ostbuild-meta=_ostbuild-meta' % (chroot_sourcedir, )
])
run_sync(child_args, env=BUILD_ENV)
if workdir_is_tmp:


@ -26,13 +26,13 @@ import argparse
from . import builtins
from .ostbuildlog import log, fatal
from .subprocess_helpers import run_sync
from . import buildutil
class OstbuildCommitArtifacts(builtins.Builtin):
name = "commit-artifacts"
short_description = "Commit artifacts to their corresponding repository branches"
def execute(self, argv):
artifact_re = re.compile(r'^artifact-([^,]+),([^,]+),([^,]+),([^,]+),([^.]+)\.tar\.gz$')
parser = argparse.ArgumentParser(self.short_description)
parser.add_argument('--repo')
@ -45,21 +45,14 @@ class OstbuildCommitArtifacts(builtins.Builtin):
for arg in args.artifacts:
basename = os.path.basename(arg)
match = artifact_re.match(basename)
if match is None:
fatal("Invalid artifact name: %s" % (arg, ))
buildroot = match.group(1)
buildroot_version = match.group(2)
name = match.group(3)
branch = match.group(4)
version = match.group(5)
parsed = buildutil.parse_artifact_name(basename)
branch_name = 'artifacts/%s/%s/%s' % (buildroot, name, branch)
branch_name = buildutil.branch_name_for_artifact(parsed)
run_sync(['ostree', '--repo=' + args.repo,
'commit', '-b', branch_name, '-s', 'Build ' + version,
'--add-metadata-string=ostree-buildroot-version=' + buildroot_version,
'--add-metadata-string=ostree-artifact-version=' + version,
'commit', '-b', branch_name, '-s', 'Build ' + parsed['version'],
'--add-metadata-string=ostbuild-buildroot-version=' + parsed['buildroot_version'],
'--add-metadata-string=ostbuild-artifact-version=' + parsed['version'],
'--skip-if-unchanged', '--tar-autocreate-parents', '--tree=tar=' + arg])
builtins.register(OstbuildCommitArtifacts)
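
For illustration, the ostree invocation this assembles for the hypothetical runtime artifact from the buildutil example would look like the following (repository path and versions invented):

from pyostbuild import buildutil

basename = 'artifact-gnomeos-3.4-i686-devel,deadbeef,glib,master,2.31.6-10-gabcdef1-runtime.tar.gz'
parsed = buildutil.parse_artifact_name(basename)
args = ['ostree', '--repo=/path/to/repo',
        'commit', '-b', buildutil.branch_name_for_artifact(parsed),
        '-s', 'Build ' + parsed['version'],
        '--add-metadata-string=ostbuild-buildroot-version=' + parsed['buildroot_version'],
        '--add-metadata-string=ostbuild-artifact-version=' + parsed['version'],
        '--skip-if-unchanged', '--tar-autocreate-parents',
        '--tree=tar=' + basename]
print ' '.join(args)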


@ -36,43 +36,12 @@ _BLACKLIST_REGEXPS = map(re.compile,
_DEVEL_REGEXPS = map(re.compile,
[r'/usr/include/',
r'/usr/share/pkgconfig/',
r'/usr/share/aclocal/',
r'/(?:usr/)lib(?:|(?:32)|(?:64))/pkgconfig/.*\.pc$',
r'/(?:usr/)lib(?:|(?:32)|(?:64))/[^/]+\.so$'
r'/(?:usr/)lib(?:|(?:32)|(?:64))/[^/]+\.a$'
])
class BuildSystemScanner(object):
@classmethod
def _find_file(cls, names):
for name in names:
if os.path.exists(name):
return name
return None
@classmethod
def get_configure_source_script(cls):
return cls._find_file(('./configure.ac', './configure.in'))
@classmethod
def get_configure_script(cls):
return cls._find_file(('./configure', ))
@classmethod
def get_bootstrap_script(cls):
return cls._find_file(('./autogen.sh', ))
@classmethod
def get_silent_rules(cls):
src = cls.get_configure_source_script()
if not src:
return False
f = open(src)
for line in f:
if line.find('AM_SILENT_RULES') >= 0:
f.close()
return True
f.close()
return False
class OstbuildCompileOne(builtins.Builtin):
name = "compile-one"
short_description = "Build artifacts from the current source directory"
@ -81,25 +50,12 @@ class OstbuildCompileOne(builtins.Builtin):
builtins.Builtin.__init__(self)
self.tempfiles = []
def _search_file(self, filename, pattern):
f = open(filename)
for line in f:
if line.startswith(pattern):
f.close()
return line
f.close()
return None
def _find_buildapi_makevariable(self, name, builddir='.'):
var = '.%s:' % (name, )
line = None
path = os.path.join(builddir, 'Makefile.in')
if os.path.exists(path):
line = self._search_file(path, var)
path = os.path.join(builddir, 'Makefile')
if not line and os.path.exists(path):
line = self._search_file(path, var)
return line is not None
def _has_buildapi_configure_variable(self, name):
var = '#buildapi-variable-%s' % (name, )
for line in open('configure'):
if line.find(var) >= 0:
return True
return False
def execute(self, args):
self.default_buildapi_jobs = ['-j', '%d' % (cpu_count() * 2, )]
@ -134,9 +90,9 @@ class OstbuildCompileOne(builtins.Builtin):
for arg in args:
if arg.startswith('--ostbuild-resultdir='):
self.ostbuild_resultdir=arg[len('ostbuild-resultdir='):]
elif arg.startswith('ostbuild-meta='):
self.ostbuild_meta=arg[len('ostbuild-meta='):]
self.ostbuild_resultdir=arg[len('--ostbuild-resultdir='):]
elif arg.startswith('--ostbuild-meta='):
self.ostbuild_meta=arg[len('--ostbuild-meta='):]
elif arg.startswith('--'):
self.configargs.append(arg)
else:
@ -145,10 +101,10 @@ class OstbuildCompileOne(builtins.Builtin):
self.metadata = {}
if self.ostbuild_meta is None:
output = subprocess.check_output(['ostbuild-autodiscover-meta'])
output = subprocess.check_output(['ostbuild', 'autodiscover-meta'])
ostbuild_meta_f = StringIO(output)
else:
ostbuild_meta_f = open(ostbuild_meta)
ostbuild_meta_f = open(self.ostbuild_meta)
for line in ostbuild_meta_f:
(k,v) = line.split('=', 1)
@ -160,46 +116,33 @@ class OstbuildCompileOne(builtins.Builtin):
if k not in self.metadata:
fatal('Missing required key "%s" in metadata' % (k, ))
self.phase_bootstrap()
autogen_script = None
if not os.path.exists('configure'):
log("No 'configure' script found, looking for autogen/bootstrap")
for name in ['autogen', 'autogen.sh', 'bootstrap']:
if os.path.exists(name):
log("Using bootstrap script '%s'" % (name, ))
autogen_script = name
if autogen_script is None:
fatal("No configure or autogen script detected; unknown buildsystem")
def phase_bootstrap(self):
have_configure = BuildSystemScanner.get_configure_script()
have_configure_source = BuildSystemScanner.get_configure_source_script()
if not (have_configure or have_configure_source):
fatal("No configure or bootstrap script detected; unknown buildsystem")
return
need_v1 = BuildSystemScanner.get_silent_rules()
if need_v1:
log("Detected AM_SILENT_RULES, adding --disable-silent-rules to configure")
self.configargs.append('--disable-silent-rules')
if have_configure:
self.phase_configure()
if autogen_script is not None:
env = dict(os.environ)
env['NOCONFIGURE'] = '1'
run_sync(['./' + autogen_script], env=env)
else:
bootstrap = BuildSystemScanner.get_bootstrap_script()
if bootstrap:
log("Detected bootstrap script: %s, using it" % (bootstrap, ))
args = [bootstrap]
# Add NOCONFIGURE; GNOME style scripts use this
env = dict(os.environ)
env['NOCONFIGURE'] = '1'
run_sync(args, env=env)
else:
log("No bootstrap script found; using generic autoreconf")
run_sync(['autoreconf', '-f', '-i'])
self.phase_configure()
log("Using existing 'configure' script")
def phase_configure(self):
use_builddir = True
doesnot_support_builddir = self._find_buildapi_makevariable('buildapi-no-builddir')
doesnot_support_builddir = self._has_buildapi_configure_variable('no-builddir')
if doesnot_support_builddir:
log("Found .buildapi-no-builddir; copying source tree to _build")
shutil.rmtree('_build')
os.mkdir('_build')
log("Found no-builddir Build API variable; copying source tree to _build")
if os.path.isdir('_build'):
shutil.rmtree('_build')
shutil.copytree('.', '_build', symlinks=True,
ignore=shutil.ignore_patterns('_build'))
use_builddir = False
builddir = '.'
if use_builddir:
builddir = '_build'
@ -207,26 +150,20 @@ class OstbuildCompileOne(builtins.Builtin):
if not os.path.isdir(builddir):
os.mkdir(builddir)
configstatus = 'config.status'
if not os.path.exists(configstatus):
if use_builddir:
args = ['../configure']
else:
args = ['./configure']
args.extend(self.configargs)
if use_builddir:
run_sync(args, cwd=builddir)
else:
run_sync(args)
if use_builddir:
args = ['../configure']
else:
log("Found %s, skipping configure" % (configstatus, ))
self.phase_build(builddir=builddir)
build_status = False
def phase_build(self, builddir=None):
if not os.path.exists(os.path.join(builddir, 'Makefile')):
args = ['./configure']
args.extend(self.configargs)
if use_builddir:
run_sync(args, cwd=builddir)
else:
run_sync(args)
makefile_path = os.path.join(builddir, 'Makefile')
if not os.path.exists(makefile_path):
fatal("No Makefile found")
args = list(self.makeargs)
user_specified_jobs = False
for arg in args:
@ -234,35 +171,18 @@ class OstbuildCompileOne(builtins.Builtin):
user_specified_jobs = True
if not user_specified_jobs:
notparallel = self._find_buildapi_makevariable('NOTPARALLEL', builddir=builddir)
if not notparallel:
has_notparallel = False
for line in open(makefile_path):
if line.startswith('.NOTPARALLEL'):
has_notparallel = True
log("Found .NOTPARALLEL")
if not has_notparallel:
log("Didn't find NOTPARALLEL, using parallel make by default")
args.extend(self.default_buildapi_jobs)
run_sync(args, cwd=builddir)
self.phase_make_artifacts(builddir=builddir)
def make_artifact(self, name, from_files, tempdir=None, resultdir=None):
targz_name = name + '.tar.gz'
(fd,filelist_temp)=tempfile.mkstemp(prefix='ostree-filelist-%s' % (name, ))
os.close(fd)
self.tempfiles.append(filelist_temp)
f = open(filelist_temp, 'w')
for filename in from_files:
assert ('\n' not in filename)
f.write(filename)
f.write('\n')
f.close()
if resultdir:
result_path = os.path.join(resultdir, targz_name)
else:
result_path = targz_name
args = ['tar', '-c', '-z', '-C', tempdir, '-f', result_path, '-T', filelist_temp]
run_sync(args)
log("created: %s" % (os.path.abspath (result_path), ))
def phase_make_artifacts(self, builddir=None):
name = self.metadata['NAME']
assert ',' not in name
branch = self.metadata['BRANCH']
@ -280,7 +200,7 @@ class OstbuildCompileOne(builtins.Builtin):
artifact_prefix='artifact-%s,%s,%s,%s,%s' % (root_name, root_version, name, branch, version)
tempdir = tempfile.mkdtemp(prefix='ostree-build-%s-' % (name,))
tempdir = tempfile.mkdtemp(prefix='ostbuild-%s-' % (name,))
self.tempfiles.append(tempdir)
args = ['make', 'install', 'DESTDIR=' + tempdir]
run_sync(args, cwd=builddir)
@ -318,9 +238,6 @@ class OstbuildCompileOne(builtins.Builtin):
self.make_artifact(artifact_prefix + '-devel', devel_files, tempdir=tempdir, resultdir=self.ostbuild_resultdir)
self.make_artifact(artifact_prefix + '-runtime', runtime_files, tempdir=tempdir, resultdir=self.ostbuild_resultdir)
self.phase_complete()
def phase_complete(self):
for tmpname in self.tempfiles:
assert os.path.isabs(tmpname)
if os.path.isdir(tmpname):
@ -331,5 +248,24 @@ class OstbuildCompileOne(builtins.Builtin):
pass
except OSError, e:
pass
def make_artifact(self, name, from_files, tempdir=None, resultdir=None):
targz_name = name + '.tar.gz'
(fd,filelist_temp)=tempfile.mkstemp(prefix='ostbuild-filelist-%s' % (name, ))
os.close(fd)
self.tempfiles.append(filelist_temp)
f = open(filelist_temp, 'w')
for filename in from_files:
assert ('\n' not in filename)
f.write(filename)
f.write('\n')
f.close()
if resultdir:
result_path = os.path.join(resultdir, targz_name)
else:
result_path = targz_name
args = ['tar', '-c', '-z', '-C', tempdir, '-f', result_path, '-T', filelist_temp]
run_sync(args)
log("created: %s" % (os.path.abspath (result_path), ))
builtins.register(OstbuildCompileOne)


@ -21,12 +21,24 @@ import os
import sys
import argparse
from . import ostbuildrc
from .ostbuildlog import log, fatal
_all_builtins = {}
class Builtin(object):
name = None
short_description = None
def parse_config(self):
self.repo = ostbuildrc.get_key('repo')
self.srcdir = ostbuildrc.get_key('srcdir')
if not os.path.isdir(self.srcdir):
fatal("Specified srcdir '%s' is not a directory" % (self.srcdir, ))
self.workdir = ostbuildrc.get_key('workdir')
if not os.path.isdir(self.workdir):
fatal("Specified workdir '%s' is not a directory", (self.workdir, ))
def execute(self, args):
raise NotImplementedError()


@ -0,0 +1,23 @@
# Copyright (C) 2011 Colin Walters <walters@verbum.org>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
def parse(stream):
ret = {}
for line in stream:
(k,v) = line.split('=', 1)
ret[k.strip()] = v.strip()
return ret
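
A quick sketch of feeding one of the component files added in this commit through the parser; StringIO stands in for an open file, and pyostbuild is assumed to be importable:

from StringIO import StringIO
from pyostbuild import kvfile

# Contents of gnomeos/3.4/gtk-doc-stub.txt
meta = kvfile.parse(StringIO('SRC=git:git://git.gnome.org/gtk-doc-stub\nCOMPONENT=devel\n'))
print meta['SRC']            # git:git://git.gnome.org/gtk-doc-stub
print meta.get('COMPONENT')  # devel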


@ -23,6 +23,7 @@ import argparse
from . import builtins
from . import builtin_autodiscover_meta
from . import builtin_build
from . import builtin_chroot_compile_one
from . import builtin_commit_artifacts
from . import builtin_compile_one


@ -0,0 +1,41 @@
# Copyright (C) 2011 Colin Walters <walters@verbum.org>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
import os,sys,ConfigParser
_config = None
def get():
global _config
if _config is None:
configpath = os.path.expanduser('~/.config/ostbuild.cfg')
parser = ConfigParser.RawConfigParser()
parser.read([configpath])
_config = {}
for (k, v) in parser.items('global'):
_config[k.strip()] = v.strip()
return _config
def get_key(name, provided_args=None):
config = get()
if provided_args:
v = provided_args.get(name)
if v is not None:
return v
return config[name]
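
For reference, the configuration this module reads lives in ~/.config/ostbuild.cfg. A minimal sketch of its expected layout (the paths are hypothetical; repo, srcdir and workdir are the keys parse_config() in builtins.py looks up), parsed the same way get() does:

from StringIO import StringIO
import ConfigParser

sample = """[global]
repo = /srv/ostbuild/repo
srcdir = /srv/ostbuild/src
workdir = /srv/ostbuild/work
"""
parser = ConfigParser.RawConfigParser()
parser.readfp(StringIO(sample))
config = dict((k.strip(), v.strip()) for (k, v) in parser.items('global'))
print config['repo']    # /srv/ostbuild/repo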


@ -23,9 +23,7 @@ import subprocess
from .ostbuildlog import log, fatal
def run_sync(args, cwd=None, env=None):
log("running: %r" % (args,))
f = open('/dev/null', 'r')
def _get_env_for_cwd(cwd=None, env=None):
# This dance is necessary because we want to keep the PWD
# environment variable up to date. Not doing so is a recipe
# for triggering edge conditions in pwd lookup.
@ -40,12 +38,57 @@ def run_sync(args, cwd=None, env=None):
env_copy['PWD'] = cwd
else:
env_copy = env
proc = subprocess.Popen(args, stdin=f, stdout=sys.stdout, stderr=sys.stderr,
return env_copy
def run_sync_get_output(args, cwd=None, env=None, stderr=None, none_on_error=False):
log("running: %s" % (subprocess.list2cmdline(args),))
env_copy = _get_env_for_cwd(cwd, env)
f = open('/dev/null', 'r')
if stderr is None:
stderr_target = sys.stderr
else:
stderr_target = stderr
proc = subprocess.Popen(args, stdin=f, stdout=subprocess.PIPE, stderr=stderr_target,
close_fds=True, cwd=cwd, env=env_copy)
f.close()
output = proc.communicate()[0].strip()
if proc.returncode != 0 and not none_on_error:
logfn = fatal
else:
logfn = log
logfn("pid %d exited with code %d, %d bytes of output" % (proc.pid, proc.returncode, len(output)))
if proc.returncode == 0:
return output
return None
def run_sync(args, cwd=None, env=None, fatal_on_error=True, keep_stdin=False):
log("running: %s" % (subprocess.list2cmdline(args),))
# This dance is necessary because we want to keep the PWD
# environment variable up to date. Not doing so is a recipe
# for triggering edge conditions in pwd lookup.
if (cwd is not None) and (env is None or ('PWD' in env)):
if env is None:
env_copy = os.environ.copy()
else:
env_copy = env.copy()
if ('PWD' in env_copy) and (not cwd.startswith('/')):
env_copy['PWD'] = os.path.join(env_copy['PWD'], cwd)
else:
env_copy['PWD'] = cwd
else:
env_copy = env
if keep_stdin:
target_stdin = sys.stdin
else:
target_stdin = open('/dev/null', 'r')
proc = subprocess.Popen(args, stdin=target_stdin, stdout=sys.stdout, stderr=sys.stderr,
close_fds=True, cwd=cwd, env=env_copy)
if not keep_stdin:
target_stdin.close()
returncode = proc.wait()
if returncode != 0:
if fatal_on_error and returncode != 0:
logfn = fatal
else:
logfn = log
logfn("pid %d exited with code %d" % (proc.pid, returncode))
return returncode
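
A brief usage sketch of the two helpers as they now behave; the commands are placeholders and pyostbuild is assumed to be on the Python path:

from pyostbuild.subprocess_helpers import run_sync, run_sync_get_output

# Default behaviour: a non-zero exit status calls fatal() and exits
run_sync(['true'])

# Non-fatal: the exit status is returned so the caller can react,
# e.g. to offer a debug shell as "ostbuild build" does
status = run_sync(['false'], fatal_on_error=False)

# Capture output; none_on_error=True turns failure into a None return
# instead of exiting, as used when probing for a previous build
head = run_sync_get_output(['git', 'rev-parse', 'HEAD'], none_on_error=True)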