Update documentation

Signed-off-by: David Oberhollenzer <david.oberhollenzer@tele2.at>
David Oberhollenzer 2018-10-01 17:34:25 +02:00
parent 38a9b5865c
commit 05298e3363
4 changed files with 186 additions and 99 deletions


@@ -29,7 +29,8 @@ see [docs/build.md](docs/build.md).
For an overview of the available documentation see [docs/index.md](docs/index.md).
By the way, before you ask: the default root password for the demo setup is
*reindeerflotilla*.
*reindeerflotilla*. The baud rate on the ALIX board is set to *38400* and on
all other boards to *115200*.
The wireless network is called *Pygos Demo Net* and the password
is *righteous*.
@@ -37,42 +38,48 @@ is *righteous*.
## Target configuration
The Pygos build system is driven by toolchain and package configurations
that are divided into two categories:
that are split across multiple *layers*.
From the product name, a layer configuration file in the `product` sub
directory is read, specifying what configuration layers to use and in
what order (later layers can override earlier layers).
The actual configuration for the build system is in the corresponding sub
directories in `layer/<name>`.
For details on the configuration layers see [docs/layers.md](docs/layers.md).
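For illustration, such a layer file is simply an ordered list of layer names; a hypothetical `product/router-alix.layers` (file name from the directory overview, layer stack assumed by analogy with the Raspberry Pi router described in docs/layers.md) might read:

```
bsp-alix
pygos-cli
pygos-cli-net
router-base
router-alix
```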
- Board specific settings
- Mostly board specific cross toolchain and kernel configuration
- Can specify a minimal set of packages required for minimal operation
of the board
- Product specific settings
- Specifies a list of boards for which the product can be built
- Specifies and configures extra packages needed
- Can override board settings and provide configuration files
specific to *that* product running on *that* board
### Demo configuration
The system currently contains configurations that allow it to run on the
following boards:
The system currently contains configurations for the following products:
- Raspberry Pi 3 (ARM, 32 bit)
- pc-engines ALIX board (x86, 32 bit)
- `qemu-test` - A simple target for testing with Qemu (64 bit x86). The boot
partition is a directory that is exposed as a VFAT formatted virtio drive.
The overlay partition is a local directory mounted through virtio using 9PFS.
The following demo product configurations exist:
- "router" builds on all boards. On the Raspberry Pi 3, a wireless network
is created. A DHCP server serves IP addresses and configures the board as
default gateway and DNS server. DNS queries are resolved using a local
unbound resolver. The ethernet interface is configured via DHCP and packets
are NAT translated and forwarded. On the ALIX board, the DHCP server serves
on two interfaces and the third interface is used for upstream forwarding.
- `router-alix` - A pc-engines ALIX board based router (32 bit x86). A DHCP
server serves IP addresses on the first two interfaces and configures the
board as default gateway and DNS server. DNS queries are resolved using
a local unbound resolver. The remaining interface is configured via DHCP
and packets from the first two are NAT translated and forwarded. The two
ports have different IP subnets and are not allowed to talk to each other.
A local SSH server can be reached via the first two ports. A user has to
be added first, since root login is disabled.
- `router-rpi3` - A Raspberry Pi 3 based wireless access point (32 bit ARM).
A DHCP server serves IP addresses and configures the board as default
gateway and DNS server. DNS queries are resolved using a local unbound
resolver. The ethernet interface is configured via DHCP and packets
are NAT translated and forwarded. A local SSH server can be reached on the
WLAN interface. A user has to be added first, since root login is disabled.
## How to build the system
The system can be built by running the mk.sh script as follows:
mk.sh <board> <product>
mk.sh <product>
This will start to download and build the cross toolchain and system in the
current working directory. The command can (and should) be run from somewhere
@@ -81,25 +88,25 @@ outside the source tree.
Once the script is done, the final release package is stored in the following
location (relative to the build directory):
deploy/release-<board>/release-<board>.tar.gz
<product>/deploy/release-<board>/release-<board>.tar.gz
## Directory overview
The build system directory contains the following files and sub directories:
- board/
Contains the board specific configuration files. E.g. board/alix/ holds
files specific for the ALIX board.
- docs/
Contains further documentation that goes into more detailed aspects of
the build system and the final Pygos installation itself.
- layer/
Contains the layer specific configuration files. E.g. layer/bsp-alix/
contains a base configuration for building images for the ALIX board.
- pkg/
Contains package build scripts. Each package occupies a sub directory
with a script named "build" that is used to download, compile and
install the package.
- product/
Contains product specific configuration files. E.g. product/router/ holds
files for the "router" product.
Contains product configurations. E.g. product/router-alix.layers holds
the list of layers to apply to build the `router-alix` product.
- util/
Contains utility scripts used by mk.sh
- LICENSE


@@ -1,10 +1,7 @@
# The Pygos Build System
The Pygos system can be built by running the `mk.sh` shell script with
the following two arguments:
* the target board to build the system for
* the product to build
the desired product configuration as argument.
The shell script can be run from anywhere on the file system. All
@@ -22,18 +19,20 @@ all upstream package sources to check if newer versions are available.
The `mk.sh` creates a `download` and a `src` directory. In the former it stores
downloaded package tar balls, in the latter it extracts the tar balls.
downloaded package tar balls, in the latter it extracts the tar balls and
applies patches.
For target specific files, a `<BOARD>-<PRODUCT>` directory is created.
Throughout the build system, this directory is referred to as *build root*.
For all other files and directories, a sub directory named after the product
configuration is created. Throughout the build system, this directory is
referred to as *build root*.
Inside the build root a `deploy` directory is created. Build output for each
package is deployed to a sub directory named after the package.
The cross toolchain is stored in `<BOARD>-<PRODUCT>/toolchain`.
The cross toolchain is stored in `<build root>/toolchain`.
Outputs and diagnostic messages of the build processes are stored in
`<BOARD>-<PRODUCT>/toolchain/log/<package>-<stage>.log`.
`<build root>/toolchain/log/<package>-<stage>.log`.
## Package Build Scripts
@@ -76,10 +75,15 @@ The `build` script is also expected to implement the following functions:
* `deploy` is run after compilation to install the build output to the deploy
directory. Arguments and working directory are the same as for `build`. All
output and error messages from the script are piped to
`<packagename>-deploy.log`. Once the function returns, the `mk.sh` script
strips everything installed to `bin` and `lib`, so the implementation doesn't
have to do that. In fact `install-strip` Makefile targets should not be used
since many implementations are broken for cross compilation.
`<packagename>-deploy.log`. The `deploy` function is also expected to
generate a file named `rootfs_files.txt` that contains a listing of all files
in the deploy directory that should be included in the root filesystem and
what permissions should be set on them. Once the function returns,
the `mk.sh` script strips everything installed to `bin` and `lib`, so the
implementation doesn't have to do that. In fact `install-strip` Makefile
targets should not be used since many implementations are broken for cross
compilation. Further common steps are executed for packages that
produce `libtool` archives and `pkg-config` files.
* `check_update` is only used by the `check_update.sh` script. It is supposed
to find out if the package has a newer version available, and if so, echo it
to stdout.
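The exact `rootfs_files.txt` format is not specified here; purely as an illustration, a deploy-side helper could record each file together with its permission bits like this (the helper name, the output format and the reliance on GNU `find` are all assumptions, not the actual `mk.sh` behavior):

```shell
# Hypothetical helper, not part of mk.sh: record every regular file in the
# deploy directory together with its octal permission bits, one per line,
# in a file named rootfs_files.txt (the exact format is assumed here).
generate_rootfs_files() {
    deploydir="$1"
    # %P prints the path relative to the search root, %m the octal mode;
    # the listing file itself is excluded. Requires GNU find.
    (cd "$deploydir" && \
        find . -type f ! -name rootfs_files.txt -printf '%P %m\n' | sort) \
        > "$deploydir/rootfs_files.txt"
}
```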
@@ -106,7 +110,7 @@ The following variables describe the target system and the build environment:
And a number of variables containing special directories:
* `BUILDROOT` contains the absolute path to the build root directory, i.e. the
working directory in which the `mk.sh` script was executed.
output directory within the working directory of the `mk.sh` script.
* `SCRIPTDIR` contains the absolute path to the script directory, i.e. the git
tree with the build system in it.
* `TCDIR` contains the absolute path to the cross toolchain directory.
@@ -118,7 +122,7 @@ And a number of variables containing special directories:
* `PKGDOWNLOADDIR` holds the absolute path of the directory containing all
package tar balls
The toolchain bin directory containing the executables prefixed with `$TARGET-`
The cross toolchain directory containing the executables prefixed with `$TARGET-`
is also prepended to `PATH`.
### Utility Functions
@@ -135,59 +139,47 @@ Some utility functions are provided for common package build tasks:
* `version_find_greatest` can be used in `check_update` to find the largest
version number from a list. The list of version numbers is read from stdin.
Version numbers can have up to four dot separated numbers or characters.
* `run_configure` can be used to run `autoconf` generated `configure` scripts
with all the required options set for cross compilation. Extra options can
be appended to the options passed to `configure`.
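As a sketch of what the version helper can look like (this illustration relies on GNU sort's `-V` version comparison and is not the actual `mk.sh` implementation):

```shell
# Pick the greatest version number from a list supplied on stdin.
# Illustrative only; relies on GNU sort's built-in version ordering.
version_find_greatest() {
    sort -V | tail -n 1
}

# printf '1.2.9\n1.2.10\n1.1\n' | version_find_greatest  -> 1.2.10
```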
## Configuration Files
Generally, when the build system tries to access configuration files, it
checks the following three locations in order:
The configuration for the build system is organized in layers, stored in
the `layer` directory in the git tree.
* `product/<product>/<board>`
* `product/<product>/`
* `board/<board>/`
The configuration on how to build an image for a specific target is a file
in the `product` sub directory that specifies what configuration layers
to use and how to stack them on top of each other. Layers that are further
down in the file override the ones before them.
In most cases, if one location contains a file, searching stops. This means,
that a product configuration can *override* settings from the basic board
configuration and the product itself can contain *board specific* settings
that can override the *generic* product configuration.
In some cases, it makes more sense to merge the files from all three locations
to achieve the desired behavior. For files that contains shell variables,
merging is done in reverse order; this results in the same override behavior,
but on shell variable level.
The build system currently uses the following configuration files:
From the layer configuration, the build system itself merges (in layer
precedence order) and processes the following configuration files:
* `ROOTFS` contains a list of packages that should be built and installed to
the root filesystem. This file is merged from all three config locations.
the root filesystem.
* `TOOLCHAIN` contains shell variables for the cross compiler toolchain.
Merged from all three config locations. See below for more detail.
See below for more detail.
* `LDPATH` contains a list of directories where the loader should look
for dynamic libraries. Merged from all three config locations.
* `INIT` contains shell variables configuring the init system. Merged from
all three config locations. See below for more detail.
* `BOARDS` contains a list of supported boards. It is directly read from
the product directory to check if a product can be built for the specified
board.
for dynamic libraries.
* `INIT` contains shell variables configuring the init system. See below
for more detail.
### Utility Functions
For working with configuration files, the following utility functions
can be used:
* `file_path_override` takes a file name and looks for it in the standard
config locations. The absolute path of the first found file is returned.
* `cat_file_override` takes a file name and looks for it in the standard
config locations. The first file found is printed to stdout.
* `cat_file_merge` takes a file name and looks for it in the standard
config locations. Every found file is printed to stdout.
* `include_override` takes a file name and looks for it in the standard
config locations. The first file found is included using the `source`
shell builtin.
* `include_merge` takes a file name and looks for it in the standard
config locations. Every found file is included using the `source`
shell builtin. Locations are processed in reverse to get default override
behavior on shell variable and function level.
* `file_path_override` takes a file name and looks for the last layer that
contains it. The absolute path of that file is `echo`ed.
* `cat_file_override` looks for the last layer that contains a file and
prints it to standard output.
* `cat_file_merge` prints the content of a file to standard output, for every
layer that contains the file, in layer precedence order.
* `include_override` includes a file using the `source` builtin from the last
layer that contains the file.
* `include_merge` includes a file using the `source` builtin from every layer
that contains the file, in layer precedence order.
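A minimal sketch of how the override and merge lookups can work (the directory layout is taken from this document; the `LAYERS` variable and the function bodies are assumptions, not the real implementation):

```shell
# Hypothetical sketch -- LAYERS holds the active layers in precedence
# order, lowest layer first.
LAYERS="bsp-alix pygos-cli router-base"
SCRIPTDIR="${SCRIPTDIR:-.}"

# Print the file from every layer that contains it, in precedence order.
cat_file_merge() {
    for layer in $LAYERS; do
        [ -f "$SCRIPTDIR/layer/$layer/$1" ] && cat "$SCRIPTDIR/layer/$layer/$1"
    done
    return 0
}

# Echo the path of the file from the last (highest) layer that contains it.
file_path_override() {
    found=""
    for layer in $LAYERS; do
        [ -f "$SCRIPTDIR/layer/$layer/$1" ] && found="$SCRIPTDIR/layer/$layer/$1"
    done
    [ -n "$found" ] && echo "$found"
}
```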
### Toolchain Configuration
@@ -198,19 +190,25 @@ that need to know information about the target system.
Currently, the following variables are used:
* `RELEASEPKG` contains the name of the release package to build to trigger a
build of the entire system. Typically this package depends on the `rootfs`
package, which in turn pulls all configured packages as dependencies. It gets
built last and packages the root filesystem image and boot loader files in
some device specific way, so they can be installed easily on the target
hardware.
* `LINUXPKG` contains the name of the kernel package. There is a default
package called 'linux' that builds a standard, main line LTS kernel. Other
packages can be specified for building vendor kernels.
* `TARGET` specifies the target triplet for the cross toolchain, which is also
the host triplet for packages cross compiled with autotools.
* `GCC_CPU` specifies the target processor for GCC.
* `GCC_EXTRACFG` contains extra configure arguments passed to GCC. For
instance, this
may contain FPU configuration for ARM targets.
* `MUSL_CPU` contains the target CPU architecture for the Musl C library.
* `LINUXPKG` contains the name of the kernel package. There is a default
package called 'linux' that builds a standard, main line kernel. Other
packages can be specified for building vendor kernels.
* `LINUX_CPU` contains the value of the `ARCH` variable passed to the kernel
build system. Used by the generic main line kernel package.
* `LINUX_TGT` contains the make target for the generic main line kernel
package.
* `LINUX_TGT` contains the space separated make targets for the generic,
main line, LTS kernel package.
* `OPENSSL_TARGET` contains the target architecture for the OpenSSL package.
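As an illustration only, a `TOOLCHAIN` fragment for a 32 bit x86 board could look like this (every value below is a guess for the sketch, not taken from an actual BSP layer):

```shell
# Hypothetical TOOLCHAIN fragment -- all values are illustrative guesses,
# not the actual bsp-alix configuration.
TARGET="i486-linux-musl"
GCC_CPU="geode"
GCC_EXTRACFG=""
MUSL_CPU="i386"
LINUX_CPU="x86"
LINUX_TGT="bzImage modules"
OPENSSL_TARGET="linux-x86"
LINUXPKG="linux"
RELEASEPKG="release-alix"
```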
@@ -224,10 +222,14 @@ Currently, the following variables are used:
* `GETTY_TTY` contains a space separated list of ttys on which to start agetty
on system boot.
* `HWCLOCK` is set to yes if the system has a hardware clock that the time
should be synchronized with during system boot and shutdown.
should be synchronized with during system boot and shutdown. If set to
anything else, the init system is configured to keep track of time using
`ntpdate` and a file on persistent storage.
* `DHCP_PORTS` contains a space separated list of network interfaces on which
to operate a DHCP client for network auto configuration.
* `SERVICES` contains a space separated list of raw service names to enable.
* `MODULES` contains a space separated list of kernel modules that should be
loaded during system boot.
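A hypothetical `INIT` fragment tying these variables together (all values are assumptions for illustration, not from a real layer):

```shell
# Hypothetical INIT fragment -- values are illustrative only.
GETTY_TTY="ttyS0"
HWCLOCK="yes"
DHCP_PORTS="eth0"
SERVICES="sshd unbound"
MODULES=""
```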
For configuring network interfaces, a file `ifrename` exists that assigns


@@ -90,23 +90,21 @@ and if something should break, allows for a simple revert to the last known
good state.
Of course, if the overlay setup is not needed, it can be completely disabled in
which case bind mounts to `/cfg/preserve` are made during system boot instead
of overlay mounts. The filesystem then becomes completely read only, except for
the tmpfs mounts which are not persisted across reboots.
The overlay setup can also be disabled (resulting in bind mounts to
`/cfg/preserve`) or configured to use a tmpfs as backing store.
## Multiarch Directories
Some processors support executing op codes for slightly different architectures.
For instance, 64 bit x86 processors can be set into 32 bit mode and run
programs built for 32 bit x86. Such programs then require libraries also built
for 32 bit x86, creating the necessity for having two different versions of the
`/lib` directory. Shared libraries may have to be duplicated because some
32 bit programs need a 32 bit version and 64 bit programs need their version.
For instance, many 64 bit processors can be set into 32 bit mode and run 32 bit
programs. Such programs then require 32 bit versions of shared libraries that
already exist in 64 bit form, creating the necessity for having two different
versions of the `/lib` directory.
For the time being, it has been decided to not include multiarch support.
All packages are built for a single target architecture. This simplifies both
the build process and the final system as well as reducing the memory footprint
of the system image.
of the system image. A proposal exists for creating a separate `/system32` sub
hierarchy on 64 bit targets that require 32 bit binaries.

docs/layers.md Normal file

@@ -0,0 +1,80 @@
# Build System Layers
The layer configuration is currently organized into four different kinds
of layers:
* BSP layers that configure the toolchain and kernel build for a
specific board. All other layers are stacked on top.
* Mid-level program layers that simply add generic programs of some kind.
For instance `pygos-cli` configures packages for a simple, command line
based system on top of a BSP layer.
* Product base layers that add hardware independent configuration for a
specific kind of product. For instance `router-base` adds programs and
configurations for the `router` group of products that do not depend on
the specific board.
* Product and board specific layers that add the missing configuration for a
product running on a *specific* board. For instance `router-rpi3` adds the
final configurations to the `router-base` product for the Raspberry Pi 3.
As an example, the product `router-rpi3` currently uses the following layer
configuration:
bsp-rpi3
pygos-cli
pygos-cli-net
router-base
router-rpi3
## Layer Index
This section contains an overview of the currently available configuration
layers.
### BSP Layers
- `bsp-alix` configures the cross toolchain for the AMD Geode LX based
PC Engines brand ALIX 2d3 or 2d13 board (3 Ethernet ports, real time clock).
A main line LTS kernel is used. The kernel configuration is tuned for
use as a network appliance. The release package is
called `release-alix` and contains a shell script for installing on a CF
card or generating a disk image that can be dumped onto a CF card.
- `bsp-rpi3` configures the cross toolchain for the Raspberry Pi 3. The output
is 32 bit ARM code. The kernel is a recent vendor kernel supplied by the
Raspberry Pi foundation. The kernel config is mostly based on the upstream
default config with additional networking options enabled and many options
not used by the Pygos system disabled (filesystem drivers, etc.). The
release package is called `release-rpi3` and contains a shell script
for installing on a micro SD card or generating an image for a micro
SD card.
- `bsp-qemu64` configures the cross toolchain for a generic 64 bit x86 target.
A main line LTS kernel is used with a stripped down default and kvm config. The
release package is called `release-qemu` and contains a shell script for
running the system using direct kernel boot in a KVM accelerated Qemu with
a virtio GPU, virtio network card with user mode networking and overlay
partition on a 9PFS mounted host directory.
### Mid-level Program Layers
- `pygos-cli` is the base layer for command line interface based images.
It adds the `bash` shell, the init system and basic command line programs
like `coreutils`.
- `pygos-cli-net` adds command line programs for networked systems such as
`ldns`, `nftables` or `iproute2`.
### Product Specific Layers
- `router-base` contains basic configuration for the router class of products.
It adds `unbound`, `dnsmasq`, `openssh` and `tcpdump`. The kernel parameters
are configured to enable IP forwarding and `resolv.conf` is set to resolve
names through the local DNS resolver.
- `router-alix` extends `router-base` with interface configuration for the
ALIX board and appropriate nftables rules.
- `router-rpi3` adds `hostapd` and extends `router-base` with interface and
wireless configuration for the Raspberry Pi 3 and appropriate nftables rules.