Compare commits


6 Commits

Author SHA1 Message Date
Oliver Smith ab99fbd352
pmbootstrap install: add --ondev --cp
Allow copying one or more files to the install chroot. This will make
it possible to put a non-pmOS image into the generated installer OS.
(See the usage sketch after the commit list.)
2020-11-17 15:25:56 +01:00
Oliver Smith 89323a9afe
pmbootstrap install: add --ondev --no-rootfs
Skip building the postmarketOS rootfs, and allow installing either a
pre-built pmOS rootfs or even another operating system.
2020-11-17 15:25:56 +01:00
Oliver Smith 89d350bd4e
install: embed_firmware: use correct suffix
Embed the firmware from the right chroot suffix. Previously it would
always use the rootfs_{args.device} chroot, which does not work anymore
with upcoming 'pmbootstrap install --ondev --no-rootfs' as there will
only be the installer_{args.device} chroot.
2020-11-17 15:25:55 +01:00
Oliver Smith 9d3e75c1ff
pmbootstrap install: properly count install steps
Get rid of hardcoded step numbers, even for the currently common steps.
With the upcoming --ondev --no-rootfs, we will need to skip the
hardcoded step 2 (create device rootfs).
2020-11-17 15:25:55 +01:00
Oliver Smith 6772f64198
pmbootstrap install: new func create_device_rootfs
Move related code from pmb/install/_install.py:install() to a new
create_device_rootfs() function in the same file, so it can be skipped
with the upcoming --no-rootfs parameter.
2020-11-17 15:25:55 +01:00
Oliver Smith 2a56a75c4f
pmb/parse/arguments.py: refactor 'install' args
Move the numerous "install" arguments into their own function (as was
done for actions added later). Categorize the options and update the
help output, so the options are easier to understand.
2020-11-17 15:25:51 +01:00
206 changed files with 3875 additions and 8697 deletions
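
The new install flags described in the commits above can be sketched in the style of the README examples further down. The flag names come from the commit messages; the `HOST_SRC:CHROOT_DEST` format for `--cp` and all paths shown are assumptions, not confirmed by this compare view.
```
# Copy files into the install chroot while building an on-device
# installer (SRC:DEST format and both paths are assumed, illustrative only).
$ pmbootstrap install --ondev --cp /tmp/other-os.img:/var/lib/rootfs.img

# Skip building the postmarketOS rootfs, e.g. when the rootfs or another
# operating system image is supplied some other way.
$ pmbootstrap install --ondev --no-rootfs

# The reworked "install" argument parsing categorizes the options; the
# standard help output shows the new grouping.
$ pmbootstrap install -h
```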


@ -1,9 +0,0 @@
# Allow this repository to be used with the 'b4' tool. See
# https://postmarketos.org/patch-review for details.
[b4]
midmask = https://lists.sr.ht/~postmarketos/pmbootstrap-devel/%s
linkmask = https://lists.sr.ht/~postmarketos/pmbootstrap-devel/%%3C%s%%3E
send-series-to = ~postmarketos/pmbootstrap-devel@lists.sr.ht
send-endpoint-web = NONE
backend = sourcehut


@ -1,26 +0,0 @@
image: alpine/edge
packages:
- sudo
sources:
- https://git.sr.ht/~postmarketos/pmbootstrap
tasks:
- note: |
pmbootstrap/.ci/note.sh
- shellcheck: |
cd pmbootstrap
sudo .ci/shellcheck.sh
- ruff: |
cd pmbootstrap
sudo .ci/ruff.sh
- vermin: |
cd pmbootstrap
sudo .ci/vermin.sh
- codespell: |
cd pmbootstrap
sudo .ci/codespell.sh
- pytest: |
cd pmbootstrap
sudo .ci/pytest.sh
artifacts:
- ".local/var/pmbootstrap/log.txt"
- ".local/var/pmbootstrap/log_testsuite.txt"


@ -1,20 +0,0 @@
#!/bin/sh -ex
# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright 2023 Oliver Smith
# Description: find typos
# https://postmarketos.org/pmb-ci
if [ "$(id -u)" = 0 ]; then
set -x
apk -q add \
py3-codespell
exec su "${TESTUSER:-build}" -c "sh -e $0"
fi
set -x
# -L: words to ignore
codespell \
-L crate \
-L hda \
.


@ -1,6 +0,0 @@
#!/bin/sh -e
printf "\n"
printf "PROTIP: use"
printf " \e[1;32mpmbootstrap ci\e[0m"
printf " to run these scripts locally.\n"


@ -1,72 +0,0 @@
#!/bin/sh -e
# Description: run pmbootstrap python testsuite
# Options: native slow
# https://postmarketos.org/pmb-ci
if [ "$(id -u)" = 0 ]; then
set -x
apk -q add \
git \
openssl \
py3-pytest \
py3-pytest-cov \
sudo
exec su "${TESTUSER:-build}" -c "sh -e $0"
fi
# Require pytest to be installed on the host system
if [ -z "$(command -v pytest)" ]; then
echo "ERROR: pytest command not found, make sure it is in your PATH."
exit 1
fi
# Use pytest-cov if it is installed to display code coverage
cov_arg=""
if python -c "import pytest_cov" >/dev/null 2>&1; then
cov_arg="--cov=pmb"
fi
echo "Initializing pmbootstrap..."
if ! yes '' | ./pmbootstrap.py \
--details-to-stdout \
init \
>/tmp/pmb_init 2>&1; then
cat /tmp/pmb_init
exit 1
fi
# Make sure that the work folder format is up to date, and that there are no
# mounts from aborted test cases (#1595)
./pmbootstrap.py work_migrate
./pmbootstrap.py -q shutdown
# Make sure we have a valid device (#1128)
device="$(./pmbootstrap.py config device)"
pmaports="$(./pmbootstrap.py config aports)"
deviceinfo="$(ls -1 "$pmaports"/device/*/device-"$device"/deviceinfo)"
if ! [ -e "$deviceinfo" ]; then
echo "ERROR: Could not find deviceinfo file for selected device:" \
"$device"
echo "Expected path: $deviceinfo"
echo "Maybe you have switched to a branch where your device does not"
echo "exist? Use 'pmbootstrap config device qemu-amd64' to switch to"
echo "a valid device."
exit 1
fi
# Make sure pmaports is clean, some of the tests will fail otherwise
if [ -n "$(git -C "$pmaports" status --porcelain)" ]; then
echo "ERROR: pmaports dir is not clean"
exit 1
fi
echo "Running pytest..."
echo "NOTE: use 'pmbootstrap log' to see the detailed log if running locally."
pytest \
--color=yes \
-vv \
-x \
$cov_arg \
test \
-m "not skip_ci" \
"$@"


@ -1,19 +0,0 @@
#!/bin/sh -e
# Description: lint all python scripts
# https://postmarketos.org/pmb-ci
if [ "$(id -u)" = 0 ]; then
set -x
apk -q add ruff
exec su "${TESTUSER:-build}" -c "sh -e $0"
fi
set -x
# __init__.py with additional ignore:
# F401: imported, but not used
# shellcheck disable=SC2046
ruff --ignore "F401" $(find . -not -path '*/venv/*' -name '__init__.py')
# Check all other files
ruff --exclude=__init__.py .


@ -1,15 +0,0 @@
#!/bin/sh -e
# Description: lint all shell scripts
# https://postmarketos.org/pmb-ci
if [ "$(id -u)" = 0 ]; then
set -x
apk -q add shellcheck
exec su "${TESTUSER:-build}" -c "sh -e $0"
fi
find . -name '*.sh' |
while read -r file; do
echo "shellcheck: $file"
shellcheck "$file"
done


@ -1,25 +0,0 @@
#!/bin/sh -e
# Description: verify that we don't use too new python features
# https://postmarketos.org/pmb-ci
if [ "$(id -u)" = 0 ]; then
set -x
apk -q add vermin
exec su "${TESTUSER:-build}" -c "sh -e $0"
fi
# shellcheck disable=SC2046
vermin \
-t=3.7- \
--backport argparse \
--backport configparser \
--backport enum \
--backport typing \
--lint \
--no-parse-comments \
--eval-annotations \
$(find . -name '*.py' \
-a -not -path "./.venv/*" \
-a -not -path "./venv/*")
echo "vermin check passed"

.gitlab-ci.yml (Normal file, 73 lines)

@ -0,0 +1,73 @@
---
# Author: Clayton Craft <clayton@craftyguy.net>
image: python:3.7-slim-stretch
cache:
paths:
- venv
before_script:
- ./.gitlab/setup-pmos-environment.sh
# venv created in CI_PROJECT_DIR for caching
- "[[ ! -d venv ]] && virtualenv venv -p $(which python3)"
- "source venv/bin/activate"
- "pip3 install pytest-cov python-coveralls pytest"
- "python3 --version"
- "su pmos -c 'git config --global user.email postmarketos-ci@localhost' || true"
- "su pmos -c 'git config --global user.name postmarketOS_CI' || true"
stages:
- checks
- tests
# defaults for "only"
# We need to run the CI jobs in a "merge request specific context", if CI is
# running in a merge request. Otherwise the environment variable that holds the
# merge request ID is not available. This means, we must set the "only"
# variable accordingly - and if we only do it for one job, all other jobs will
# not get executed. So have the defaults here, and use them in all jobs that
# should run on both the master branch, and in merge requests.
# https://docs.gitlab.com/ee/ci/merge_request_pipelines/index.html#excluding-certain-jobs
.only-default: &only-default
only:
- master
- merge_requests
- tags
static-code-analysis:
stage: checks
<<: *only-default
script:
# Note: This script uses CI_PROJECT_DIR
- su pmos -c "CI_PROJECT_DIR=$CI_PROJECT_DIR .gitlab/shared-runner_test-pmbootstrap.sh --static-code-analysis"
# MR settings
# (Checks for "Allow commits from members who can merge to the target branch")
mr-settings:
stage: checks
only:
- merge_requests
script:
- .gitlab/check_mr_settings.py
test-pmbootstrap:
stage: tests
<<: *only-default
script:
# Note: This script uses CI_PROJECT_DIR
- su pmos -c "CI_PROJECT_DIR=$CI_PROJECT_DIR .gitlab/shared-runner_test-pmbootstrap.sh --testcases-fast"
after_script:
# Move logs so it can be saved as artifacts
- "[[ -f /home/pmos/.local/var/pmbootstrap/log.txt ]] && mv /home/pmos/.local/var/pmbootstrap/log.txt $CI_PROJECT_DIR/log.txt"
- "[[ -f /home/pmos/.local/var/pmbootstrap/log_testsuite.txt ]] && mv /home/pmos/.local/var/pmbootstrap/log_testsuite.txt $CI_PROJECT_DIR/log_testsuite.txt"
- "[[ -f /home/pmos/.config/pmbootstrap.cfg ]] && cp /home/pmos/.config/pmbootstrap.cfg $CI_PROJECT_DIR/pmbootstrap.cfg"
- "sudo dmesg > $CI_PROJECT_DIR/dmesg.txt"
artifacts:
when: always
paths:
- "log.txt"
- "log_testsuite.txt"
- "dmesg.txt"
- "pmbootstrap.cfg"
expire_in: 1 week

.gitlab/check_mr_settings.py (Executable file, 156 lines)

@ -0,0 +1,156 @@
#!/usr/bin/env python3
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import argparse
import json
import os
import sys
import urllib.parse
import urllib.request
def check_environment_variables():
""" Make sure that all environment variables from GitLab CI are present,
and exit right here when a variable is missing. """
keys = ["CI_MERGE_REQUEST_IID",
"CI_MERGE_REQUEST_PROJECT_PATH",
"CI_MERGE_REQUEST_SOURCE_PROJECT_URL"]
for key in keys:
if key in os.environ:
continue
print("ERROR: missing environment variable: " + key)
print("Reference: https://docs.gitlab.com/ee/ci/variables/")
exit(1)
def get_url_api():
""" :returns: single merge request API URL, as documented here:
https://docs.gitlab.com/ee/api/merge_requests.html#get-single-mr """
project_path = os.environ["CI_MERGE_REQUEST_PROJECT_PATH"]
project_path = urllib.parse.quote_plus(project_path)
mr_id = os.environ["CI_MERGE_REQUEST_IID"]
url = "https://gitlab.com/api/v4/projects/{}/merge_requests/{}"
return url.format(project_path, mr_id)
def get_url_mr_edit():
""" :returns: the link where the user can edit their own MR """
url = "https://gitlab.com/{}/merge_requests/{}/edit"
return url.format(os.environ["CI_MERGE_REQUEST_PROJECT_PATH"],
os.environ["CI_MERGE_REQUEST_IID"])
def get_url_repo_settings():
""" :returns: link to the user's forked pmaports project's settings """
prefix = os.environ["CI_MERGE_REQUEST_SOURCE_PROJECT_URL"]
return prefix + "/settings/repository"
def get_mr_settings(path):
""" Get the merge request API data and parse it as JSON. Print the whole
thing on failure.
:param path: to a local file with the saved API data (will download a
fresh copy when set to None)
:returns: dict of settings data (see GitLab API reference) """
content = ""
if path:
# Read file
with open(path, encoding="utf-8") as handle:
content = handle.read()
else:
# Download from GitLab API
url = get_url_api()
print("Download " + url)
content = urllib.request.urlopen(url).read().decode("utf-8")
# Parse JSON
try:
return json.loads(content)
except:
print("ERROR: failed to decode JSON. Here's the whole content for"
" debugging:")
print("---")
print(content)
exit(1)
def settings_read(settings, key):
if key not in settings:
print("ERROR: missing '" + key + "' key in settings!")
print("Here are the whole settings for debugging:")
print("---")
print(settings)
exit(1)
return settings[key]
def check_allow_push(settings):
""" :returns: True when maintainers are allowed to push to the branch
(what we want!), False otherwise """
# Check if source branch is in same repository
source = settings_read(settings, "source_project_id")
target = settings_read(settings, "target_project_id")
if source == target:
return True
return settings_read(settings, "allow_maintainer_to_push")
def main():
# Parse arguments
parser = argparse.ArgumentParser()
parser.add_argument("--path", help="use a local file instead of querying"
" the GitLab API", default=None)
args = parser.parse_args()
# Check the merge request
check_environment_variables()
settings = get_mr_settings(args.path)
# Make sure that squashing is disabled
if settings_read(settings, "squash"):
print("*** MR settings check failed!")
print('ERROR: Please turn off the "Squash commits when merge request'
' is accepted." option in the merge request settings.')
return 1
if check_allow_push(settings):
print("*** MR settings check successful!")
else:
print("*** MR settings check failed!")
print()
print("We need to be able to push changes to your merge request.")
print("So we can rebase it on master right before merging, add the")
print("MR-ID to the commit messages, etc.")
print()
print("How to fix it:")
print("1) Open the 'edit' page of your merge request:")
print(" " + get_url_mr_edit())
print("2) Enable this option and save:")
print(" 'Allow commits from members who can merge to the target"
" branch'")
print("3) Run these tests again with an empty commit in your MR:")
print(" $ git commit --allow-empty -m 'run mr-settings test again'")
print()
print("If that setting is disabled, then you have created the MR from")
print("a protected branch. When you had forked the repository from")
print("postmarketOS, the protected branch settings were copied to")
print("your fork.")
print()
print("Resolve this with:")
print("1) Open your repository settings:")
print(" " + get_url_repo_settings())
print("2) Scroll down to 'Protected Branches' and expand it")
print("3) Click 'unprotect' next to the branch from your MR")
print("4) Follow steps 1-3 from the first list again, the option")
print(" should not be disabled anymore.")
print()
print("Thank you and sorry for the inconvenience.")
return 1
return 0
sys.exit(main())
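
Since the script accepts `--path` to read a saved API response instead of querying GitLab, a local test run can be sketched as below. The variable values and the JSON file name are placeholders, not real IDs or URLs.
```
# Check a saved merge request settings JSON offline; the three CI_*
# variables are still required by check_environment_variables().
$ CI_MERGE_REQUEST_IID=1 \
  CI_MERGE_REQUEST_PROJECT_PATH=postmarketOS/pmbootstrap \
  CI_MERGE_REQUEST_SOURCE_PROJECT_URL=https://gitlab.com/example/pmbootstrap \
  ./.gitlab/check_mr_settings.py --path mr_settings.json
```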

.gitlab/config.toml (Normal file, 22 lines)

@ -0,0 +1,22 @@
# 'qemu' gitlab runner configuration file
# Author: Clayton Craft <clayton@craftyguy.net>
concurrent = 4
check_interval = 0
log_level = "debug"
[[runners]]
name = "corredor"
url = "https://gitlab.com/"
token = <REDACTED>
executor = "virtualbox"
builds_dir = "/home/pmos/builds"
[runners.ssh]
user = "pmos"
password = <REDACTED>
identity_file = "/home/pmos/.ssh/id_ecdsa"
[runners.virtualbox]
base_name = "pmbootstrap-vm"
base_snapshot = "ci-snapshot-python-3.6"
#disable_snapshots = false
disable_snapshots = true
[runners.cache]


@ -0,0 +1,23 @@
#!/bin/bash
#
# This script is meant for the gitlab CI shared runners, not for
# any specific runners. Specific runners are expected to provide
# all of these configurations to save time, at least for now.
# Author: Clayton Craft <clayton@craftyguy.net>
# skip non-shared runner
[[ -d "/home/pmos" ]] && echo "pmos user already exists, assume running on pre-configured runner" && exit
# mount binfmt_misc
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
# install dependencies (procps: /bin/kill)
apt update
apt install -q -y git sudo procps python3-pip
pip3 install virtualenv
# create pmos user
echo "Creating pmos user"
useradd pmos -m -s /bin/bash -b "/home"
chown -R pmos:pmos .
echo 'pmos ALL=(ALL) NOPASSWD: ALL' >> /etc/sudoers


@ -0,0 +1,40 @@
#!/bin/bash
#
# This script is meant to be executed by a non-root user, since pmbootstrap
# commands will fail otherwise. This is primarily used by the gitlab CI shared
# runners.
# This script also assumes, if run outside a gitlab CI runner, that cwd is
# the root of the pmbootstrap project. For gitlab CI runners, $CI_PROJECT_DIR
# is used.
# Author: Clayton Craft <clayton@craftyguy.net>
# Return failure on any failure of commands below
set -e
# Fail quickly if run as root, since other commands here will fail
[[ "$(id -u)" != "0" ]]
# These are specific to the gitlab CI
[[ $CI_PROJECT_DIR ]] && cd "$CI_PROJECT_DIR"
# shellcheck disable=SC1091
[[ -d venv ]] && source ./venv/bin/activate
# Init test (pipefail disabled so 'yes' doesn't fail test)
set +o pipefail
yes ''| ./pmbootstrap.py init
set -o pipefail
# this seems to be needed for some tests to pass
set +o pipefail
yes | ./pmbootstrap.py zap -m -p
set -o pipefail
case $1 in
--static-code-analysis )
./test/static_code_analysis.sh
;;
--testcases-fast )
# testcases_fast (qemu is omitted by not passing --all)
./test/testcases_fast.sh
;;
esac

README.md (122 lines changed)

@ -1,62 +1,23 @@
# pmbootstrap
[**Introduction**](https://postmarketos.org/blog/2017/05/26/intro/) | [**Security Warning**](https://ollieparanoid.github.io/post/security-warning/) | [**Devices**](https://wiki.postmarketos.org/wiki/Devices)
Sophisticated chroot/build/flash tool to develop and install
[postmarketOS](https://postmarketos.org).
Sophisticated chroot/build/flash tool to develop and install [postmarketOS](https://postmarketos.org).
## Development
pmbootstrap is being developed on SourceHut
([what](https://postmarketos.org/blog/2022/07/25/considering-sourcehut/)):
https://git.sr.ht/~postmarketos/pmbootstrap
Send patches via mail or web UI to
[pmbootstrap-devel](https://lists.sr.ht/~postmarketos/pmbootstrap-devel)
([subscribe](mailto:~postmarketos/pmbootstrap-devel+subscribe@lists.sr.ht)):
```
~postmarketos/pmbootstrap-devel@lists.sr.ht
```
You can set the default values for sending email in the git checkout
```
$ git config sendemail.to "~postmarketos/pmbootstrap-devel@lists.sr.ht"
$ git config format.subjectPrefix "PATCH pmbootstrap"
```
Run CI scripts locally with:
```
$ pmbootstrap ci
```
Run a single test file:
```
$ pytest -vv ./test/test_keys.py
```
## Issues
Issues are being tracked
[here](https://gitlab.com/postmarketOS/pmbootstrap/-/issues).
Package build scripts live in the [`pmaports`](https://gitlab.com/postmarketOS/pmaports) repository now.
## Requirements
* Linux distribution on the host system (`x86`, `x86_64`, `aarch64` or `armv7`)
* [Windows subsystem for Linux (WSL)](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux)
does **not** work! Please use [VirtualBox](https://www.virtualbox.org/) instead.
* 2 GB of RAM recommended for compiling
* Linux distribution on the host system (`x86`, `x86_64`, or `aarch64`)
* [Windows subsystem for Linux (WSL)](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux) does **not** work! Please use [VirtualBox](https://www.virtualbox.org/) instead.
* Kernels based on the grsec patchset [do **not** work](https://github.com/postmarketOS/pmbootstrap/issues/107) *(Alpine: use linux-vanilla instead of linux-hardened, Arch: linux-hardened [is not based on grsec](https://www.reddit.com/r/archlinux/comments/68b2jn/linuxhardened_in_community_repo_a_grsecurity/))*
* On Alpine Linux only: `apk add coreutils procps`
* [Linux kernel 3.17 or higher](https://postmarketos.org/oldkernel)
* Python 3.7+
* Python 3.6+
* OpenSSL
* git
* ps
* tar
## Usage Examples
Please refer to the [postmarketOS wiki](https://wiki.postmarketos.org) for
in-depth coverage of topics such as
[porting to a new device](https://wiki.postmarketos.org/wiki/Porting_to_a_new_device)
or [installation](https://wiki.postmarketos.org/wiki/Installation_guide). The
help output (`pmbootstrap -h`) has detailed usage instructions for every
command. Read on for some generic examples of what can be done with
`pmbootstrap`.
Please refer to the [postmarketOS wiki](https://wiki.postmarketos.org) for in-depth coverage of topics such as [porting to a new device](https://wiki.postmarketos.org/wiki/Porting_to_a_new_device) or [installation](https://wiki.postmarketos.org/wiki/Installation_guide). The help output (`pmbootstrap -h`) has detailed usage instructions for every command. Read on for some generic examples of what can be done with `pmbootstrap`.
### Installing pmbootstrap
<https://wiki.postmarketos.org/wiki/Installing_pmbootstrap>
@ -103,27 +64,6 @@ Generate a template for a new package:
$ pmbootstrap newapkbuild "https://gitlab.com/postmarketOS/osk-sdl/-/archive/0.52/osk-sdl-0.52.tar.bz2"
```
#### Default architecture
Packages will be compiled for the architecture of the device running
pmbootstrap by default. For example, if your `x86_64` PC runs pmbootstrap, it
would build a package for `x86_64` with this command:
```
$ pmbootstrap build hello-world
```
If you would rather build for the target device selected in `pmbootstrap init`
by default, then use the `build_default_device_arch` option:
```
$ pmbootstrap config build_default_device_arch True
```
If your target device is `pine64-pinephone` for example, pmbootstrap will now
build this package for `aarch64`:
```
$ pmbootstrap build hello-world
```
### Chroots
Enter the `armhf` building chroot:
```
@ -141,9 +81,7 @@ $ pmbootstrap zap
```
### Device Porting Assistance
Analyze Android
[`boot.img`](https://wiki.postmarketos.org/wiki/Glossary#boot.img) files (also
works with recovery OS images like TWRP):
Analyze Android [`boot.img`](https://wiki.postmarketos.org/wiki/Glossary#boot.img) files (also works with recovery OS images like TWRP):
```
$ pmbootstrap bootimg_analyze ~/Downloads/twrp-3.2.1-0-fp2.img
```
@ -207,14 +145,12 @@ List pmaports that don't have a binary package:
$ pmbootstrap repo_missing --arch=armhf --overview
```
Increase the `pkgrel` for each aport where the binary package has outdated
dependencies (e.g. after soname bumps):
Increase the `pkgrel` for each aport where the binary package has outdated dependencies (e.g. after soname bumps):
```
$ pmbootstrap pkgrel_bump --auto
```
Generate cross-compiler aports based on the latest version from Alpine's
aports:
Generate cross-compiler aports based on the latest version from Alpine's aports:
```
$ pmbootstrap aportgen binutils-armhf gcc-armhf
```
@ -255,32 +191,14 @@ $ pmbootstrap apkindex_parse $WORK/cache_apk_x86_64/APKINDEX.8b865e19.tar.gz hel
$ pmbootstrap stats --arch=armhf
```
### Use alternative sudo
`distccd` log:
```
$ pmbootstrap log_distccd
```
pmbootstrap supports `doas` and `sudo`.
If multiple sudo implementations are installed, pmbootstrap will use `doas`.
You can set the `PMB_SUDO` environmental variable to define the sudo
implementation you want to use.
### Select SSH keys to include and make authorized in new images
If the config file option `ssh_keys` is set to `True` (it defaults to `False`),
then all files matching the glob `~/.ssh/id_*.pub` will be placed in
`~/.ssh/authorized_keys` in the user's home directory in newly-built images.
Sometimes, for example if you have a large number of SSH keys, you may wish to
select a different set of public keys to include in an image. To do this, set
the `ssh_key_glob` configuration parameter in the pmbootstrap config file to a
string containing a glob that is to match the file or files you wish to
include.
For example, a `~/.config/pmbootstrap.cfg` may contain:
[pmbootstrap]
# ...
ssh_keys = True
ssh_key_glob = ~/.ssh/postmarketos-dev.pub
# ...
## Development
### Testing
Install `pytest` (via your package manager or pip) and run it inside the pmbootstrap folder.
## License
[GPLv3](LICENSE)
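
Relating to the "Use alternative sudo" section above: a minimal sketch of selecting the privilege-escalation tool via the `PMB_SUDO` environment variable described there (the chosen value and command are just examples).
```
# Force pmbootstrap to use sudo even if doas is installed.
$ PMB_SUDO=sudo pmbootstrap init
```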


@ -14,9 +14,6 @@ check_kernel_folder() {
clean_kernel_src_dir() {
# Prevent Linux from appending Git version information to kernel version
# This will cause kernels to be packaged incorrectly.
touch .scmversion
if [ -f ".config" ] || [ -d "include/config" ]; then
echo "Source directory is not clean, running 'make mrproper'."
@ -47,16 +44,16 @@ export_pmbootstrap_dir() {
# Get pmbootstrap dir based on this script's location
# See also: <https://stackoverflow.com/a/29835459>
# shellcheck disable=SC3054
# shellcheck disable=SC2039
if [ -n "${BASH_SOURCE[0]}" ]; then
script_dir="$(dirname "$(realpath "$BASH_SOURCE")")"
script_dir="$(dirname "${BASH_SOURCE[0]}")"
else
script_dir="$(dirname "$1")"
fi
# Fail with debug information
# shellcheck disable=SC2155
export pmbootstrap_dir="$(realpath "$script_dir/..")"
export pmbootstrap_dir=$(realpath "$script_dir/..")
if ! [ -e "$pmbootstrap_dir/pmbootstrap.py" ]; then
echo "ERROR: Failed to get the script's location with your shell."
echo "Please adjust export_pmbootstrap_dir in envkernel.sh. Debug info:"
@ -83,7 +80,7 @@ set_alias_pmbootstrap() {
export_chroot_device_deviceinfo() {
chroot="$("$pmbootstrap" config work)/chroot_native"
device="$("$pmbootstrap" config device)"
deviceinfo="$(echo "$("$pmbootstrap" config aports)"/device/*/device-"$device"/deviceinfo)"
deviceinfo="$(echo "$pmbootstrap_dir"/aports/device/*/device-"$device"/deviceinfo)"
export chroot device deviceinfo
}
@ -117,12 +114,10 @@ initialize_chroot() {
host_arch="$(uname -m)"
need_cross_compiler=1
# Match arm* architectures
# shellcheck disable=SC3057
# shellcheck disable=SC2039
arch_substr="${host_arch:0:3}"
if [ "$arch" = "$host_arch" ] || \
{ [ "$arch_substr" = "arm" ] && [ "$arch_substr" = "$arch" ]; } || \
{ [ "$arch" = "arm64" ] && [ "$host_arch" = "aarch64" ]; } || \
{ [ "$arch" = "x86" ] && [ "$host_arch" = "x86_64" ]; }; then
{ [ "$arch_substr" = "arm" ] && [ "$arch_substr" = "$arch" ]; }; then
need_cross_compiler=0
fi
@ -158,7 +153,6 @@ initialize_chroot() {
elfutils-dev \
findutils \
flex \
g++ \
"$gcc_pkgname" \
gmp-dev \
linux-headers \
@ -169,10 +163,7 @@ initialize_chroot() {
musl-dev \
ncurses-dev \
perl \
py3-dt-schema \
sed \
yamllint \
yaml-dev \
xz || return 1
# Create /mnt/linux
@ -201,7 +192,7 @@ create_output_folder() {
set_alias_make() {
# Cross compiler prefix
# shellcheck disable=SC1091
# shellcheck disable=SC1090
prefix="$(CBUILD="$deviceinfo_arch" . "$chroot/usr/share/abuild/functions.sh";
arch_to_hostspec "$deviceinfo_arch")"
@ -219,14 +210,9 @@ set_alias_make() {
cross_compiler="/usr/bin/$prefix-"
fi
if [ "$arch" = "x86" ] && [ "$host_arch" = "x86_64" ]; then
cc=$hostcc
fi
# Build make command
cmd="echo '*** pmbootstrap envkernel.sh active for $PWD! ***';"
cmd="$cmd pmbootstrap -q chroot --user --"
cmd="$cmd CCACHE_DISABLE=1"
cmd="$cmd ARCH=$arch"
if [ "$need_cross_compiler" = 1 ]; then
cmd="$cmd CROSS_COMPILE=$cross_compiler"
@ -234,6 +220,11 @@ set_alias_make() {
cmd="$cmd make -C /mnt/linux O=/mnt/linux/.output"
cmd="$cmd CC=$cc HOSTCC=$hostcc"
# Avoid "+" suffix in kernel version if the working directory is dirty.
# (Otherwise we will generate a package that uses different paths...)
# Note: Set CONFIG_LOCALVERSION_AUTO=n in kernel config additionally
cmd="$cmd LOCALVERSION="
# shellcheck disable=SC2139
alias make="$cmd"
unset cmd
@ -248,7 +239,7 @@ set_alias_make() {
cmd="$cmd srcdir=/mnt/linux builddir=/mnt/linux/.output tmpdir=/tmp/envkernel"
cmd="$cmd ./\"\$_script\"\";"
cmd="$cmd else"
cmd="$cmd echo \"ERROR: \$_script not found.\";"
cmd="$cmd echo \"Error: \$_script not found.\";"
cmd="$cmd fi;"
cmd="$cmd };"
cmd="$cmd _run_script \"\$@\""
@ -268,7 +259,7 @@ set_alias_pmbroot_kernelroot() {
cross_compiler_version() {
if [ "$need_cross_compiler" = 1 ]; then
"$pmbootstrap" chroot --user -- "${cross_compiler}gcc" --version \
pmbootstrap chroot --user -- "${cross_compiler}gcc" --version \
2> /dev/null | grep "^.*gcc " | \
awk -F'[()]' '{ print $1 "("$2")" }'
else
@ -314,17 +305,19 @@ set_reactivate() {
}
check_and_deactivate() {
if [ "$POSTMARKETOS_ENVKERNEL_ENABLED" = 1 ]; then
# we already are running in envkernel
if [ "$POSTMARKETOS_ENVKERNEL_ENABLED" -eq 1 ]; then
# we already are running in envkernel
deactivate
fi
}
print_usage() {
# shellcheck disable=SC3054
# shellcheck disable=SC2039
if [ -n "${BASH_SOURCE[0]}" ]; then
echo "usage: source $(basename "$(realpath "$BASH_SOURCE")")"
echo "usage: source $(basename "${BASH_SOURCE[0]}")"
else
echo "usage: source $(basename "$1")"
fi
echo "optional arguments:"
echo " --fish Print fish alias syntax (internally used)"
@ -410,7 +403,7 @@ main() {
# Print fish alias syntax (when called from envkernel.fish)
fish_compat() {
[ "$1" = "--fish" ] || return 0
[ "$1" = "--fish" ] || return
for name in make kernelroot pmbootstrap pmbroot; do
echo "alias $(alias $name | sed 's/=/ /')"
done


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
# PYTHON_ARGCOMPLETE_OK
import sys
@ -14,17 +14,6 @@ from .helpers import logging as pmb_logging
from .helpers import mount
from .helpers import other
# pmbootstrap version
__version__ = "2.0.0"
# Python version check
version = sys.version_info
if version < (3, 7):
print("You need at least Python 3.7 to run pmbootstrap")
print("(You are running it with Python " + str(version.major) +
"." + str(version.minor) + ")")
sys.exit()
def main():
# Wrap everything to display nice error messages
@ -34,11 +23,8 @@ def main():
args = parse.arguments()
os.umask(0o22)
# Store script invocation command
os.environ["PMBOOTSTRAP_CMD"] = sys.argv[0]
# Sanity checks
other.check_grsec()
other.check_grsec(args)
if not args.as_root and os.geteuid() == 0:
raise RuntimeError("Do not run pmbootstrap as root!")
@ -68,11 +54,7 @@ def main():
if mount.ismount(args.work + "/chroot_native/dev"):
logging.info("NOTE: chroot is still active (use 'pmbootstrap"
" shutdown' as necessary)")
logging.info("DONE!")
except KeyboardInterrupt:
print("\nCaught KeyboardInterrupt, exiting …")
sys.exit(130) # SIGINT(2) + 128
logging.info("Done")
except Exception as e:
# Dump log to stdout when args (and therefore logging) init failed
@ -87,16 +69,9 @@ def main():
log_hint = "Run 'pmbootstrap log' for details."
if not args or not os.path.exists(args.log):
log_hint += (" Alternatively you can use '--details-to-stdout' to"
" get more output, e.g. 'pmbootstrap"
" --details-to-stdout init'.")
print()
" get more output, e.g. 'pmbootstrap --details-to-stdout"
" init'.")
print(log_hint)
print()
print("Before you report this error, ensure that pmbootstrap is "
"up to date.")
print("Find the latest version here:"
" https://git.sr.ht/~postmarketos/pmbootstrap/refs")
print(f"Your version: {__version__}")
return 1


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@ -37,16 +37,14 @@ def properties(pkgname):
def generate(args, pkgname):
if args.fork_alpine:
prefix, folder, options = (pkgname, "temp",
{"confirm_overwrite": True})
prefix, folder, options = (pkgname, "temp", {"confirm_overwrite": True})
else:
prefix, folder, options = properties(pkgname)
path_target = args.aports + "/" + folder + "/" + pkgname
# Confirm overwrite
if options["confirm_overwrite"] and os.path.exists(path_target):
logging.warning("WARNING: Target folder already exists: "
f"{path_target}")
logging.warning("WARNING: Target folder already exists: " + path_target)
if not pmb.helpers.cli.confirm(args, "Continue and overwrite?"):
raise RuntimeError("Aborted.")
@ -54,10 +52,8 @@ def generate(args, pkgname):
pmb.helpers.run.user(args, ["rm", "-r", args.work + "/aportgen"])
if args.fork_alpine:
upstream = pmb.aportgen.core.get_upstream_aport(args, pkgname)
pmb.helpers.run.user(args, ["cp", "-r", upstream,
f"{args.work}/aportgen"])
pmb.aportgen.core.rewrite(args, pkgname, replace_simple={
"# Contributor:*": None, "# Maintainer:*": None})
pmb.helpers.run.user(args, ["cp", "-r", upstream, args.work + "/aportgen"])
pmb.aportgen.core.rewrite(args, pkgname, replace_simple={"# Contributor:*": None, "# Maintainer:*": None})
else:
# Run pmb.aportgen.PREFIX.generate()
getattr(pmb.aportgen, prefix.replace("-", "_")).generate(args, pkgname)


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.aportgen.core
import pmb.helpers.git
@ -13,21 +13,42 @@ def generate(args, pkgname):
# Rewrite APKBUILD
fields = {
"arch": pmb.config.arch_native,
"makedepends_host": "zlib-dev jansson-dev zstd-dev",
"pkgdesc": f"Tools necessary to build programs for {arch} targets",
"pkgname": pkgname,
"pkgdesc": "Tools necessary to build programs for " + arch + " targets",
"arch": args.arch_native,
"makedepends_build": "",
"makedepends_host": "",
"makedepends": "gettext libtool autoconf automake bison texinfo",
"subpackages": "",
}
replace_simple = {
"*--with-bugurl=*": "\t\t--with-bugurl=\"https://postmarketos.org/issues\" \\"
}
replace_functions = {
"build": """
_target="$(arch_to_hostspec """ + arch + """)"
"$builddir"/configure \\
--build="$CBUILD" \\
--target=$_target \\
--with-lib-path=/usr/lib \\
--prefix=/usr \\
--with-sysroot=/usr/$_target \\
--enable-ld=default \\
--enable-gold=yes \\
--enable-plugins \\
--enable-deterministic-archives \\
--disable-multilib \\
--disable-werror \\
--disable-nls
make
""",
"package": """
make install DESTDIR="$pkgdir"
below_header = """
CTARGET_ARCH=""" + arch + """
CTARGET="$(arch_to_hostspec $CTARGET_ARCH)"
"""
# remove man, info folders
rm -rf "$pkgdir"/usr/share
""",
"libs": None,
"gold": None,
}
pmb.aportgen.core.rewrite(args, pkgname, "main/binutils", fields,
"binutils", replace_simple=replace_simple,
below_header=below_header)
"binutils", replace_functions, remove_indent=8)


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.aportgen.core
import pmb.build
@ -27,7 +27,6 @@ def generate(args, pkgname):
channel_cfg = pmb.config.pmaports.read_config_channel(args)
mirrordir = channel_cfg["mirrordir_alpine"]
apkbuild_path = f"{args.work}/chroot_native/{tempdir}/APKBUILD"
apk_name = f"busybox-static-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk"
with open(apkbuild_path, "w", encoding="utf-8") as handle:
apkbuild = f"""\
# Automatically generated aport, do not edit!
@ -47,7 +46,7 @@ def generate(args, pkgname):
url="http://busybox.net"
license="GPL2"
arch="{pmb.config.arch_native}"
arch="{args.arch_native}"
options="!check !strip"
pkgdesc="Statically linked Busybox for $_arch"
_target="$(arch_to_hostspec $_arch)"
@ -59,7 +58,7 @@ def generate(args, pkgname):
package() {{
mkdir -p "$pkgdir/usr/$_target"
cd "$pkgdir/usr/$_target"
tar -xf $srcdir/{apk_name}
tar -xf $srcdir/busybox-static-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk
rm .PKGINFO .SIGN.*
}}
"""
@ -67,7 +66,7 @@ def generate(args, pkgname):
handle.write(line[12:].replace(" " * 4, "\t") + "\n")
# Generate checksums
pmb.build.init_abuild_minimal(args)
pmb.build.init(args)
pmb.chroot.root(args, ["chown", "-R", "pmos:pmos", tempdir])
pmb.chroot.user(args, ["abuild", "checksum"], working_dir=tempdir)
pmb.helpers.run.user(args, ["cp", apkbuild_path, f"{args.work}/aportgen"])


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import fnmatch
import logging
@ -54,20 +54,19 @@ def rewrite(args, pkgname, path_original="", fields={}, replace_pkgname=None,
lines (so they won't be bugged with issues regarding our generated aports),
and add reference to the original aport.
:param path_original: The original path of the automatically generated
aport.
:param fields: key-value pairs of fields that shall be changed in the
:param path_original: The original path of the automatically generated aport
:param fields: key-value pairs of fields, that shall be changed in the
APKBUILD. For example: {"pkgdesc": "my new package", "subpkgs": ""}
:param replace_pkgname: When set, $pkgname gets replaced with that string
in every line.
:param replace_pkgname: When set, $pkgname gets replaced with that string in
every line.
:param replace_functions: Function names and new bodies, for example:
{"build": "return 0"}
The body can also be None (deletes the function)
:param replace_simple: Lines that fnmatch the pattern, get
:param replace_simple: Lines, that fnmatch the pattern, get
replaced/deleted. Example: {"*test*": "# test", "*mv test.bin*": None}
:param below_header: String that gets directly placed below the header.
:param remove_indent: Number of spaces to remove from function body
provided to replace_functions.
:param below_header: String, that gets directly placed below the header.
:param remove_indent: Number of spaces to remove from function body provided
to replace_functions.
"""
# Header
@ -110,8 +109,8 @@ def rewrite(args, pkgname, path_original="", fields={}, replace_pkgname=None,
if line.startswith(func + "() {"):
skip_in_func = True
if body:
lines_new += format_function(
func, body, remove_indent=remove_indent)
lines_new += format_function(func, body,
remove_indent=remove_indent)
break
if skip_in_func:
continue
@ -152,12 +151,11 @@ def rewrite(args, pkgname, path_original="", fields={}, replace_pkgname=None,
handle.truncate()
def get_upstream_aport(args, pkgname, arch=None):
def get_upstream_aport(args, pkgname):
"""
Perform a git checkout of Alpine's aports and get the path to the aport.
:param pkgname: package name
:param arch: Alpine architecture (e.g. "armhf"), defaults to native arch
:returns: absolute path on disk where the Alpine aport is checked out
example: /opt/pmbootstrap_work/cache_git/aports/upstream/main/gcc
"""
@ -189,7 +187,7 @@ def get_upstream_aport(args, pkgname, arch=None):
aport_path = paths[0]
# Parse APKBUILD
apkbuild = pmb.parse.apkbuild(f"{aport_path}/APKBUILD",
apkbuild = pmb.parse.apkbuild(args, aport_path + "/APKBUILD",
check_pkgname=False)
apkbuild_version = apkbuild["pkgver"] + "-r" + apkbuild["pkgrel"]
@ -197,7 +195,7 @@ def get_upstream_aport(args, pkgname, arch=None):
split = aport_path.split("/")
repo = split[-2]
pkgname = split[-1]
index_path = pmb.helpers.repo.alpine_apkindex_path(args, repo, arch)
index_path = pmb.helpers.repo.alpine_apkindex_path(args, repo)
package = pmb.parse.apkindex.package(args, pkgname, indexes=[index_path])
# Compare version (return when equal)
@ -205,18 +203,16 @@ def get_upstream_aport(args, pkgname, arch=None):
if compare == 0:
return aport_path
# APKBUILD > binary: this is fine
if compare == 1:
logging.info(f"NOTE: {pkgname} {arch} binary package has a lower"
f" version {package['version']} than the APKBUILD"
f" {apkbuild_version}")
return aport_path
# APKBUILD < binary: aports.git is outdated
logging.error("ERROR: Package '" + pkgname + "' has a lower version in"
# Different version message
logging.error("ERROR: Package '" + pkgname + "' has a different version in"
" local checkout of Alpine's aports (" + apkbuild_version +
") compared to Alpine's binary package (" +
package["version"] + ")!")
raise RuntimeError("You can update your local checkout with: "
"'pmbootstrap pull'")
# APKBUILD < binary
if compare == -1:
raise RuntimeError("You can update your local checkout with: "
"'pmbootstrap pull'")
# APKBUILD > binary
raise RuntimeError("You can force an update of your binary package"
" APKINDEX files with: 'pmbootstrap update'")


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -9,30 +9,26 @@ import pmb.parse.apkindex
import pmb.parse.bootimg
def ask_for_architecture():
def ask_for_architecture(args):
architectures = pmb.config.build_device_architectures
# Don't show armhf, new ports shouldn't use this architecture
if "armhf" in architectures:
architectures.remove("armhf")
while True:
ret = pmb.helpers.cli.ask("Device architecture", architectures,
"aarch64", complete=architectures)
ret = pmb.helpers.cli.ask(args, "Device architecture", architectures,
architectures[0])
if ret in architectures:
return ret
logging.fatal("ERROR: Invalid architecture specified. If you want to"
" add a new architecture, edit"
" build_device_architectures in"
" pmb/config/__init__.py.")
" add a new architecture, edit build_device_architectures"
" in pmb/config/__init__.py.")
def ask_for_manufacturer():
def ask_for_manufacturer(args):
logging.info("Who produced the device (e.g. LG)?")
return pmb.helpers.cli.ask("Manufacturer", None, None, False)
return pmb.helpers.cli.ask(args, "Manufacturer", None, None, False)
def ask_for_name(manufacturer):
def ask_for_name(args, manufacturer):
logging.info("What is the official name (e.g. Google Nexus 5)?")
ret = pmb.helpers.cli.ask("Name", None, None, False)
ret = pmb.helpers.cli.ask(args, "Name", None, None, False)
# Always add the manufacturer
if not ret.startswith(manufacturer) and \
@ -41,52 +37,44 @@ def ask_for_name(manufacturer):
return ret
def ask_for_year():
def ask_for_year(args):
# Regex from https://stackoverflow.com/a/12240826
logging.info("In what year was the device released (e.g. 2012)?")
return pmb.helpers.cli.ask("Year", None, None, False,
return pmb.helpers.cli.ask(args, "Year", None, None, False,
validation_regex=r'^[1-9]\d{3,}$')
def ask_for_chassis():
def ask_for_chassis(args):
types = pmb.config.deviceinfo_chassis_types
logging.info("What type of device is it?")
logging.info("Valid types are: " + ", ".join(types))
return pmb.helpers.cli.ask("Chassis", None, None, True,
validation_regex='|'.join(types),
complete=types)
return pmb.helpers.cli.ask(args, "Chassis", None, None, True,
validation_regex='|'.join(types), complete=types)
def ask_for_keyboard(args):
return pmb.helpers.cli.confirm(args, "Does the device have a hardware"
" keyboard?")
return pmb.helpers.cli.confirm(args, "Does the device have a hardware keyboard?")
def ask_for_external_storage(args):
return pmb.helpers.cli.confirm(args, "Does the device have a sdcard or"
" other external storage medium?")
return pmb.helpers.cli.confirm(args, "Does the device have a sdcard or other"
" external storage medium?")
def ask_for_flash_method():
def ask_for_flash_method(args):
while True:
logging.info("Which flash method does the device support?")
method = pmb.helpers.cli.ask("Flash method",
pmb.config.flash_methods,
"none",
complete=pmb.config.flash_methods)
method = pmb.helpers.cli.ask(args, "Flash method", pmb.config.flash_methods,
pmb.config.flash_methods[0])
if method in pmb.config.flash_methods:
if method == "heimdall":
heimdall_types = ["isorec", "bootimg"]
while True:
logging.info("Does the device use the \"isolated"
" recovery\" or boot.img?")
logging.info("<https://wiki.postmarketos.org/wiki"
"/Deviceinfo_flash_methods#Isorec_or_bootimg"
".3F>")
heimdall_type = pmb.helpers.cli.ask("Type",
heimdall_types,
logging.info("Does the device use the \"isolated recovery\" or boot.img?")
logging.info("<https://wiki.postmarketos.org/wiki/Deviceinfo_flash_methods#Isorec_or_bootimg.3F>")
heimdall_type = pmb.helpers.cli.ask(args, "Type", heimdall_types,
heimdall_types[0])
if heimdall_type in heimdall_types:
method += "-" + heimdall_type
@ -100,15 +88,13 @@ def ask_for_flash_method():
def ask_for_bootimg(args):
logging.info("You can analyze a known working boot.img file to"
" automatically fill out the flasher information for your"
" deviceinfo file. Either specify the path to an image or"
" press return to skip this step (you can do it later with"
" 'pmbootstrap bootimg_analyze').")
logging.info("You can analyze a known working boot.img file to automatically fill"
" out the flasher information for your deviceinfo file. Either specify"
" the path to an image or press return to skip this step (you can do"
" it later with 'pmbootstrap bootimg_analyze').")
while True:
response = pmb.helpers.cli.ask("Path", None, "", False)
path = os.path.expanduser(response)
path = os.path.expanduser(pmb.helpers.cli.ask(args, "Path", None, "", False))
if not path:
return None
try:
@ -117,7 +103,7 @@ def ask_for_bootimg(args):
logging.fatal("ERROR: " + str(e) + ". Please try again.")
def generate_deviceinfo_fastboot_content(bootimg=None):
def generate_deviceinfo_fastboot_content(args, bootimg=None):
if bootimg is None:
bootimg = {"cmdline": "",
"qcdt": "false",
@ -129,49 +115,29 @@ def generate_deviceinfo_fastboot_content(bootimg=None):
"second_offset": "",
"tags_offset": "",
"pagesize": "2048"}
content = f"""\
return f"""\
deviceinfo_kernel_cmdline="{bootimg["cmdline"]}"
deviceinfo_generate_bootimg="true"
deviceinfo_bootimg_qcdt="{bootimg["qcdt"]}"
deviceinfo_bootimg_mtk_mkimage="{bootimg["mtk_mkimage"]}"
deviceinfo_bootimg_dtb_second="{bootimg["dtb_second"]}"
deviceinfo_flash_pagesize="{bootimg["pagesize"]}"
"""
if "header_version" in bootimg.keys():
content += f"""\
deviceinfo_header_version="{bootimg["header_version"]}"
"""
if bootimg["header_version"] == "2":
content += f"""\
deviceinfo_append_dtb="false"
deviceinfo_flash_offset_dtb="{bootimg["dtb_offset"]}"
"""
if "base" in bootimg.keys():
content += f"""\
deviceinfo_flash_offset_base="{bootimg["base"]}"
deviceinfo_flash_offset_kernel="{bootimg["kernel_offset"]}"
deviceinfo_flash_offset_ramdisk="{bootimg["ramdisk_offset"]}"
deviceinfo_flash_offset_second="{bootimg["second_offset"]}"
deviceinfo_flash_offset_tags="{bootimg["tags_offset"]}"
deviceinfo_flash_pagesize="{bootimg["pagesize"]}"
"""
return content
def generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
chassis, has_keyboard, has_external_storage,
flash_method, bootimg=None):
codename = "-".join(pkgname.split("-")[1:])
external_storage = "true" if has_external_storage else "false"
# Note: New variables must be added to pmb/config/__init__.py as well
content = f"""\
# Reference: <https://postmarketos.org/deviceinfo>
# Please use double quotes only. You can source this file in shell
# scripts.
# Please use double quotes only. You can source this file in shell scripts.
deviceinfo_format_version="0"
deviceinfo_name="{name}"
@ -179,12 +145,13 @@ def generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
deviceinfo_codename="{codename}"
deviceinfo_year="{year}"
deviceinfo_dtb=""
deviceinfo_modules_initfs=""
deviceinfo_arch="{arch}"
# Device related
deviceinfo_chassis="{chassis}"
deviceinfo_keyboard="{"true" if has_keyboard else "false"}"
deviceinfo_external_storage="{external_storage}"
deviceinfo_external_storage="{"true" if has_external_storage else "false"}"
deviceinfo_screen_width="800"
deviceinfo_screen_height="600"
@ -194,13 +161,13 @@ def generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
content_heimdall_bootimg = """\
deviceinfo_flash_heimdall_partition_kernel=""
deviceinfo_flash_heimdall_partition_rootfs=""
deviceinfo_flash_heimdall_partition_system=""
"""
content_heimdall_isorec = """\
deviceinfo_flash_heimdall_partition_kernel=""
deviceinfo_flash_heimdall_partition_initfs=""
deviceinfo_flash_heimdall_partition_rootfs=""
deviceinfo_flash_heimdall_partition_system=""
"""
content_0xffff = """\
@ -212,9 +179,9 @@ def generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
"""
if flash_method == "fastboot":
content += generate_deviceinfo_fastboot_content(bootimg)
content += generate_deviceinfo_fastboot_content(args, bootimg)
elif flash_method == "heimdall-bootimg":
content += generate_deviceinfo_fastboot_content(bootimg)
content += generate_deviceinfo_fastboot_content(args, bootimg)
content += content_heimdall_bootimg
elif flash_method == "heimdall-isorec":
content += content_heimdall_isorec
@ -225,42 +192,21 @@ def generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
# Write to file
pmb.helpers.run.user(args, ["mkdir", "-p", args.work + "/aportgen"])
path = args.work + "/aportgen/deviceinfo"
with open(path, "w", encoding="utf-8") as handle:
for line in content.rstrip().split("\n"):
handle.write(line.lstrip() + "\n")
def generate_modules_initfs(args):
content = """\
# Remove this comment after reading, or the file if unnecessary (CHANGEME!)
# This file can contain a list of modules to be included in the initramfs,
# so that they are available in early boot stages. It should have one
# module name per line. If there are multiple kernel variants with different
# requirements for modules into the initramfs, one modules-initfs.$variant
# file should be created for each of them.
"""
# Write to file
pmb.helpers.run.user(args, ["mkdir", "-p", args.work + "/aportgen"])
path = args.work + "/aportgen/modules-initfs"
with open(path, "w", encoding="utf-8") as handle:
with open(args.work + "/aportgen/deviceinfo", "w", encoding="utf-8") as handle:
for line in content.rstrip().split("\n"):
handle.write(line.lstrip() + "\n")
def generate_apkbuild(args, pkgname, name, arch, flash_method):
# Dependencies
depends = ["postmarketos-base",
"linux-" + "-".join(pkgname.split("-")[1:])]
depends = "postmarketos-base linux-" + "-".join(pkgname.split("-")[1:])
if flash_method in ["fastboot", "heimdall-bootimg"]:
depends.append("mkbootimg")
depends += " mkbootimg"
if flash_method == "0xffff":
depends.append("uboot-tools")
depends += " uboot-tools"
depends += " mesa-dri-gallium"
# Whole APKBUILD
depends.sort()
depends = ("\n" + " " * 12).join(depends)
content = f"""\
# Reference: <https://postmarketos.org/devicepkg>
pkgname={pkgname}
@ -271,14 +217,9 @@ def generate_apkbuild(args, pkgname, name, arch, flash_method):
license="MIT"
arch="{arch}"
options="!check !archcheck"
depends="
{depends}
"
depends="{depends}"
makedepends="devicepkg-dev"
source="
deviceinfo
modules-initfs
"
source="deviceinfo"
build() {{
devicepkg_build $startdir $pkgname
@ -293,21 +234,20 @@ def generate_apkbuild(args, pkgname, name, arch, flash_method):
# Write the file
pmb.helpers.run.user(args, ["mkdir", "-p", args.work + "/aportgen"])
path = args.work + "/aportgen/APKBUILD"
with open(path, "w", encoding="utf-8") as handle:
with open(args.work + "/aportgen/APKBUILD", "w", encoding="utf-8") as handle:
for line in content.rstrip().split("\n"):
handle.write(line[8:].replace(" " * 4, "\t") + "\n")
def generate(args, pkgname):
arch = ask_for_architecture()
manufacturer = ask_for_manufacturer()
name = ask_for_name(manufacturer)
year = ask_for_year()
chassis = ask_for_chassis()
arch = ask_for_architecture(args)
manufacturer = ask_for_manufacturer(args)
name = ask_for_name(args, manufacturer)
year = ask_for_year(args)
chassis = ask_for_chassis(args)
has_keyboard = ask_for_keyboard(args)
has_external_storage = ask_for_external_storage(args)
flash_method = ask_for_flash_method()
flash_method = ask_for_flash_method(args)
bootimg = None
if flash_method in ["fastboot", "heimdall-bootimg"]:
bootimg = ask_for_bootimg(args)
@ -315,5 +255,4 @@ def generate(args, pkgname):
generate_deviceinfo(args, pkgname, name, manufacturer, year, arch,
chassis, has_keyboard, has_external_storage,
flash_method, bootimg)
generate_modules_initfs(args)
generate_apkbuild(args, pkgname, name, arch, flash_method)


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.aportgen.core
import pmb.helpers.git
@ -10,35 +10,33 @@ def generate(args, pkgname):
prefix = pkgname.split("-")[0]
arch = pkgname.split("-")[1]
if prefix == "gcc":
upstream = pmb.aportgen.core.get_upstream_aport(args, "gcc", arch)
upstream = pmb.aportgen.core.get_upstream_aport(args, "gcc")
based_on = "main/gcc (from Alpine)"
elif prefix == "gcc4":
upstream = f"{args.aports}/main/gcc4"
upstream = args.aports + "/main/gcc4"
based_on = "main/gcc4 (from postmarketOS)"
elif prefix == "gcc6":
upstream = f"{args.aports}/main/gcc6"
upstream = args.aports + "/main/gcc6"
based_on = "main/gcc6 (from postmarketOS)"
else:
raise ValueError(f"Invalid prefix '{prefix}', expected gcc, gcc4 or"
raise ValueError("Invalid prefix '" + prefix + "', expected gcc, gcc4 or"
" gcc6.")
pmb.helpers.run.user(args, ["cp", "-r", upstream, f"{args.work}/aportgen"])
pmb.helpers.run.user(args, ["cp", "-r", upstream, args.work + "/aportgen"])
# Rewrite APKBUILD (only building for native covers most use cases and
# saves a lot of build time, can be changed on demand)
fields = {
"pkgname": pkgname,
"pkgdesc": f"Stage2 cross-compiler for {arch}",
"arch": pmb.config.arch_native,
"depends": f"binutils-{arch} mpc1",
"makedepends_build": "gcc g++ bison flex texinfo gawk zip"
" gmp-dev mpfr-dev mpc1-dev zlib-dev",
"makedepends_host": "linux-headers gmp-dev mpfr-dev mpc1-dev isl-dev"
f" zlib-dev musl-dev-{arch} binutils-{arch}",
"subpackages": "",
"pkgdesc": "Stage2 cross-compiler for " + arch,
"arch": args.arch_native,
"depends": "isl binutils-" + arch + " mpc1",
"makedepends_build": "gcc g++ paxmark bison flex texinfo gawk zip gmp-dev mpfr-dev mpc1-dev zlib-dev",
"makedepends_host": "linux-headers gmp-dev mpfr-dev mpc1-dev isl-dev zlib-dev musl-dev-" + arch + " binutils-" + arch,
"subpackages": "g++-" + arch + ":gpp" if prefix == "gcc" else "",
# gcc6: options is already there, so we need to replace it and not only
# set it below the header like done below.
"options": "!strip",
"options": "!strip !tracedeps",
"LIBGOMP": "false",
"LIBGCC": "false",
@ -46,11 +44,6 @@ def generate(args, pkgname):
"LIBITM": "false",
}
# Latest gcc only, not gcc4 and gcc6
if prefix == "gcc":
fields["subpackages"] = f"g++-{arch}:gpp" \
f" libstdc++-dev-{arch}:libcxx_dev"
below_header = "CTARGET_ARCH=" + arch + """
CTARGET="$(arch_to_hostspec ${CTARGET_ARCH})"
LANG_D=false
@ -59,7 +52,7 @@ def generate(args, pkgname):
LANG_GO=false
LANG_FORTRAN=false
LANG_ADA=false
options="!strip"
options="!strip !tracedeps"
# abuild doesn't try to install "build-base-$CTARGET_ARCH"
# when this variable matches "no*"
@ -83,9 +76,6 @@ def generate(args, pkgname):
# use CBUILDROOT as sysroot. In the original APKBUILD this is a local
# variable, but we make it a global one.
'*_cross_configure=*': None,
# Do not build foreign arch libgcc, we use the one from Alpine (#2168)
'_libgcc=true*': '_libgcc=false',
}
pmb.aportgen.core.rewrite(args, pkgname, based_on, fields,


@ -1,4 +1,4 @@
# Copyright 2023 Nick Reitemeyer, Oliver Smith
# Copyright 2020 Nick Reitemeyer, Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.aportgen.core
import pmb.build
@ -27,7 +27,6 @@ def generate(args, pkgname):
channel_cfg = pmb.config.pmaports.read_config_channel(args)
mirrordir = channel_cfg["mirrordir_alpine"]
apkbuild_path = f"{args.work}/chroot_native/{tempdir}/APKBUILD"
apk_name = f'"$srcdir/grub-efi-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk"'
with open(apkbuild_path, "w", encoding="utf-8") as handle:
apkbuild = f"""\
# Automatically generated aport, do not edit!
@ -43,13 +42,13 @@ def generate(args, pkgname):
pkgdesc="GRUB $_arch EFI files for every architecture"
url="https://www.gnu.org/software/grub/"
license="GPL-3.0-or-later"
arch="{pmb.config.arch_native}"
arch="{args.arch_native}"
source="grub-efi-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk::$_mirror/{mirrordir}/main/$_arch/grub-efi-$pkgver-r$pkgrel.apk"
package() {{
mkdir -p "$pkgdir"
cd "$pkgdir"
tar -xf {apk_name}
tar -xf "$srcdir/grub-efi-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk"
rm .PKGINFO .SIGN.*
}}
"""
@ -57,7 +56,7 @@ def generate(args, pkgname):
handle.write(line[12:].replace(" " * 4, "\t") + "\n")
# Generate checksums
pmb.build.init_abuild_minimal(args)
pmb.build.init(args)
pmb.chroot.root(args, ["chown", "-R", "pmos:pmos", tempdir])
pmb.chroot.user(args, ["abuild", "checksum"], working_dir=tempdir)
pmb.helpers.run.user(args, ["cp", apkbuild_path, f"{args.work}/aportgen"])


@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.helpers.run
import pmb.aportgen.core
@ -10,54 +10,19 @@ def generate_apkbuild(args, pkgname, deviceinfo, patches):
device = "-".join(pkgname.split("-")[1:])
carch = pmb.parse.arch.alpine_to_kernel(deviceinfo["arch"])
makedepends = ["bash", "bc", "bison", "devicepkg-dev", "findutils", "flex",
"openssl-dev", "perl"]
build = """
unset LDFLAGS
make O="$_outdir" ARCH="$_carch" CC="${CC:-gcc}" \\
KBUILD_BUILD_VERSION="$((pkgrel + 1 ))-postmarketOS\""""
makedepends = "bash bc bison devicepkg-dev flex openssl-dev perl"
package = """
downstreamkernel_package "$builddir" "$pkgdir" "$_carch\" \\
"$_flavor" "$_outdir\""""
if deviceinfo.get("header_version") == "2":
package += """
make dtbs_install O="$_outdir" ARCH="$_carch" \\
INSTALL_DTBS_PATH="$pkgdir\"/boot/dtbs"""
downstreamkernel_package "$builddir" "$pkgdir" "$_carch" "$_flavor" "$_outdir\""""
if deviceinfo["bootimg_qcdt"] == "true":
build += """\n
# Master DTB (deviceinfo_bootimg_qcdt)"""
vendors = ["spreadtrum", "exynos", "other"]
soc_vendor = pmb.helpers.cli.ask("SoC vendor", vendors,
vendors[-1], complete=vendors)
if soc_vendor == "spreadtrum":
makedepends.append("dtbtool-sprd")
build += """
dtbTool-sprd -p "$_outdir/scripts/dtc/" \\
-o "$_outdir/arch/$_carch/boot"/dt.img \\
"$_outdir/arch/$_carch/boot/dts/\""""
elif soc_vendor == "exynos":
codename = "-".join(pkgname.split("-")[2:])
makedepends.append("dtbtool-exynos")
build += """
dtbTool-exynos -o "$_outdir/arch/$_carch/boot"/dt.img \\
$(find "$_outdir/arch/$_carch/boot/dts/\""""
build += f" -name *{codename}*.dtb)"
else:
makedepends.append("dtbtool")
build += """
dtbTool -o "$_outdir/arch/$_carch/boot"/dt.img \\
"$_outdir/arch/$_carch/boot/\""""
package += """
install -Dm644 "$_outdir/arch/$_carch/boot"/dt.img \\
"$pkgdir"/boot/dt.img"""
makedepends += " dtbtool"
package += """\n
# Master DTB (deviceinfo_bootimg_qcdt)
dtbTool -p scripts/dtc/ -o "$_outdir/arch/$_carch/boot"/dt.img "$_outdir/arch/$_carch/boot/"
install -Dm644 "$_outdir/arch/$_carch/boot"/dt.img "$pkgdir"/boot/dt.img"""
makedepends.sort()
makedepends = ("\n" + " " * 12).join(makedepends)
patches = ("\n" + " " * 12).join(patches)
content = f"""\
# Reference: <https://postmarketos.org/vendorkernel>
@ -73,9 +38,7 @@ def generate_apkbuild(args, pkgname, deviceinfo, patches):
url="https://kernel.org"
license="GPL-2.0-only"
options="!strip !check !tracedeps pmb:cross-native"
makedepends="
{makedepends}
"
makedepends="{makedepends}"
# Source
_repository="(CHANGEME!)"
@ -94,7 +57,10 @@ def generate_apkbuild(args, pkgname, deviceinfo, patches):
. downstreamkernel_prepare
}}
build() {{{build}
build() {{
unset LDFLAGS
make O="$_outdir" ARCH="$_carch" CC="${{CC:-gcc}}" \\
KBUILD_BUILD_VERSION="$((pkgrel + 1 ))-postmarketOS"
}}
package() {{{package}
@ -104,9 +70,9 @@ def generate_apkbuild(args, pkgname, deviceinfo, patches):
"""
# Write the file
with open(f"{args.work}/aportgen/APKBUILD", "w", encoding="utf-8") as hndl:
with open(args.work + "/aportgen/APKBUILD", "w", encoding="utf-8") as handle:
for line in content.rstrip().split("\n"):
hndl.write(line[8:].replace(" " * 4, "\t") + "\n")
handle.write(line[8:].replace(" " * 4, "\t") + "\n")
def generate(args, pkgname):

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.aportgen.core
import pmb.build
@ -27,8 +27,6 @@ def generate(args, pkgname):
channel_cfg = pmb.config.pmaports.read_config_channel(args)
mirrordir = channel_cfg["mirrordir_alpine"]
apkbuild_path = f"{args.work}/chroot_native/{tempdir}/APKBUILD"
apk_name = f"$srcdir/musl-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk"
apk_dev_name = f"$srcdir/musl-dev-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk"
with open(apkbuild_path, "w", encoding="utf-8") as handle:
apkbuild = f"""\
# Automatically generated aport, do not edit!
@ -42,7 +40,7 @@ def generate(args, pkgname):
pkgname={pkgname}
pkgver={pkgver}
pkgrel={pkgrel}
arch="{pmb.config.arch_native}"
arch="{args.arch_native}"
subpackages="musl-dev-{arch}:package_dev"
_arch="{arch}"
@ -65,7 +63,7 @@ def generate(args, pkgname):
cd "$pkgdir/usr/$_target"
# Use 'busybox tar' to avoid 'tar: Child returned status 141'
# on some machines (builds.sr.ht, gitlab-ci). See pmaports#26.
busybox tar -xf {apk_name}
busybox tar -xf $srcdir/musl-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk
rm .PKGINFO .SIGN.*
}}
package_dev() {{
@ -73,12 +71,11 @@ def generate(args, pkgname):
cd "$subpkgdir/usr/$_target"
# Use 'busybox tar' to avoid 'tar: Child returned status 141'
# on some machines (builds.sr.ht, gitlab-ci). See pmaports#26.
busybox tar -xf {apk_dev_name}
busybox tar -xf $srcdir/musl-dev-$pkgver-r$pkgrel-$_arch-{mirrordir}.apk
rm .PKGINFO .SIGN.*
# symlink everything from /usr/$_target/usr/*
# to /usr/$_target/* so the cross-compiler gcc does not fail
# to build.
# symlink everything from /usr/$_target/usr/* to /usr/$_target/*
# so the cross-compiler gcc does not fail to build.
for _dir in include lib; do
mkdir -p "$subpkgdir/usr/$_target/$_dir"
cd "$subpkgdir/usr/$_target/usr/$_dir"
@ -93,7 +90,7 @@ def generate(args, pkgname):
handle.write(line[12:].replace(" " * 4, "\t") + "\n")
# Generate checksums
pmb.build.init_abuild_minimal(args)
pmb.build.init(args)
pmb.chroot.root(args, ["chown", "-R", "pmos:pmos", tempdir])
pmb.chroot.user(args, ["abuild", "checksum"], working_dir=tempdir)
pmb.helpers.run.user(args, ["cp", apkbuild_path, f"{args.work}/aportgen"])

View File

@ -1,9 +1,9 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
from pmb.build.init import init, init_abuild_minimal, init_compiler
from pmb.build.init import init
from pmb.build.envkernel import package_kernel
from pmb.build.kconfig import menuconfig
from pmb.build.menuconfig import menuconfig
from pmb.build.newapkbuild import newapkbuild
from pmb.build.other import copy_to_buildpath, is_necessary, \
index_repo
from pmb.build._package import mount_pmaports, package
from pmb.build._package import package

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import datetime
import logging
@ -8,26 +8,27 @@ import pmb.build
import pmb.build.autodetect
import pmb.chroot
import pmb.chroot.apk
import pmb.chroot.distccd
import pmb.helpers.pmaports
import pmb.helpers.repo
import pmb.parse
import pmb.parse.arch
def skip_already_built(pkgname, arch):
def skip_already_built(args, pkgname, arch):
"""
Check if the package was already built in this session, and add it
to the cache in case it was not built yet.
:returns: True when it can be skipped or False
"""
if arch not in pmb.helpers.other.cache["built"]:
pmb.helpers.other.cache["built"][arch] = []
if pkgname in pmb.helpers.other.cache["built"][arch]:
if arch not in args.cache["built"]:
args.cache["built"][arch] = []
if pkgname in args.cache["built"][arch]:
logging.verbose(pkgname + ": already checked this session,"
" no need to build it or its dependencies")
return True
pmb.helpers.other.cache["built"][arch].append(pkgname)
args.cache["built"][arch].append(pkgname)
return False
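
The per-session "built" bookkeeping consulted above is just a nested dict keyed by arch. A minimal standalone sketch of the same behaviour (names are illustrative, not the real pmbootstrap module layout):

# Minimal standalone sketch of the per-session cache consulted above
# (illustrative names, not the real pmbootstrap API).
cache = {"built": {}}

def skip_already_built_sketch(pkgname, arch):
    """Return True when pkgname was already handled for arch this session."""
    built = cache["built"].setdefault(arch, [])
    if pkgname in built:
        return True
    built.append(pkgname)
    return False

assert skip_already_built_sketch("hello-world", "aarch64") is False  # first call: record
assert skip_already_built_sketch("hello-world", "aarch64") is True   # second call: skip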
@ -103,8 +104,7 @@ def get_depends(args, apkbuild):
ret = sorted(set(ret))
# Don't recurse forever when a package depends on itself (#948)
for pkgname in ([apkbuild["pkgname"]] +
list(apkbuild["subpackages"].keys())):
for pkgname in [apkbuild["pkgname"]] + list(apkbuild["subpackages"].keys()):
if pkgname in ret:
logging.verbose(apkbuild["pkgname"] + ": ignoring dependency on"
" itself: " + pkgname)
@ -129,9 +129,6 @@ def build_depends(args, apkbuild, arch, strict):
if "no_depends" in args and args.no_depends:
pmb.helpers.repo.update(args, arch)
for depend in depends:
# Ignore conflicting dependencies
if depend.startswith("!"):
continue
# Check if binary package is missing
if not pmb.parse.apkindex.package(args, depend, arch, False):
raise RuntimeError("Missing binary package for dependency '" +
@ -140,8 +137,7 @@ def build_depends(args, apkbuild, arch, strict):
" it was started with --no-depends.")
# Check if binary package is outdated
apkbuild_dep = get_apkbuild(args, depend, arch)
if apkbuild_dep and \
pmb.build.is_necessary(args, arch, apkbuild_dep):
if apkbuild_dep and pmb.build.is_necessary(args, arch, apkbuild_dep):
raise RuntimeError(f"Binary package for dependency '{depend}'"
f" of '{pkgname}' is outdated, but"
f" pmbootstrap won't build any depends"
@ -149,8 +145,6 @@ def build_depends(args, apkbuild, arch, strict):
else:
# Build the dependencies
for depend in depends:
if depend.startswith("!"):
continue
if package(args, depend, arch, strict=strict):
depends_built += [depend]
logging.verbose(pkgname + ": build dependencies: done, built: " +
@ -175,8 +169,11 @@ def is_necessary_warn_depends(args, apkbuild, arch, force, depends_built):
ret = True
if not ret and len(depends_built):
logging.verbose(f"{pkgname}: depends on rebuilt package(s): "
f" {', '.join(depends_built)}")
# Warn of potentially outdated package
logging.warning("WARNING: " + pkgname + " depends on rebuilt" +
" package(s) " + ",".join(depends_built) + " (use" +
" 'pmbootstrap build " + pkgname + " --force' if" +
" necessary!)")
logging.verbose(pkgname + ": build necessary: " + str(ret))
return ret
@ -189,7 +186,7 @@ def init_buildenv(args, apkbuild, arch, strict=False, force=False, cross=None,
just initialized the build environment for nothing) and then setup the
whole build environment (abuild, gcc, dependencies, cross-compiler).
:param cross: None, "native", or "crossdirect"
:param cross: None, "native", "distcc", or "crossdirect"
:param skip_init_buildenv: can be set to False to avoid initializing the
build environment. Use this when building
something during initialization of the build
@ -200,7 +197,7 @@ def init_buildenv(args, apkbuild, arch, strict=False, force=False, cross=None,
depends_arch = arch
if cross == "native":
depends_arch = pmb.config.arch_native
depends_arch = args.arch_native
# Build dependencies
depends, built = build_depends(args, apkbuild, depends_arch, strict)
@ -213,10 +210,7 @@ def init_buildenv(args, apkbuild, arch, strict=False, force=False, cross=None,
if not skip_init_buildenv:
pmb.build.init(args, suffix)
pmb.build.other.configure_abuild(args, suffix)
if args.ccache:
pmb.build.other.configure_ccache(args, suffix)
if "rust" in depends or "cargo" in depends:
pmb.chroot.apk.install(args, ["sccache"], suffix)
pmb.build.other.configure_ccache(args, suffix)
if not strict and "pmb:strict" not in apkbuild["options"] and len(depends):
pmb.chroot.apk.install(args, depends, suffix)
if src:
@ -224,13 +218,41 @@ def init_buildenv(args, apkbuild, arch, strict=False, force=False, cross=None,
# Cross-compiler init
if cross:
pmb.build.init_compiler(args, depends, cross, arch)
cross_pkgs = ["ccache-cross-symlinks"]
if "gcc4" in depends:
cross_pkgs += ["gcc4-" + arch]
elif "gcc6" in depends:
cross_pkgs += ["gcc6-" + arch]
else:
cross_pkgs += ["gcc-" + arch, "g++-" + arch]
if "clang" in depends or "clang-dev" in depends:
cross_pkgs += ["clang"]
if cross == "crossdirect":
cross_pkgs += ["crossdirect"]
if "rust" in depends or "cargo" in depends:
cross_pkgs += ["rust"]
pmb.chroot.apk.install(args, cross_pkgs)
if cross == "distcc":
pmb.chroot.distccd.start(args, arch)
if cross == "crossdirect":
pmb.chroot.mount_native_into_foreign(args, suffix)
return True
def get_gcc_version(args, arch):
"""
Get the GCC version for a specific arch from parsing the right APKINDEX.
We feed this to ccache, so it knows the right GCC version, when
cross-compiling in a foreign arch chroot with distcc. See the "using
ccache with other compiler wrappers" section of their man page:
<https://linux.die.net/man/1/ccache>
:returns: a string like "6.4.0-r5"
"""
return pmb.parse.apkindex.package(args, "gcc-" + arch,
args.arch_native)["version"]
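
get_gcc_version() exists only to hand ccache a stable compiler identifier; a hedged illustration of how that value is consumed (it mirrors the CCACHE_COMPILERCHECK assignment further down in run_abuild(); the version string is an example):

# Illustrative only: ccache invalidates its cache when this string changes,
# so the cross gcc package version is a good identifier.
gcc_version = "6.4.0-r5"                     # e.g. get_gcc_version(args, "armhf")
env = {"CCACHE_COMPILERCHECK": "string:" + gcc_version}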
def get_pkgver(original_pkgver, original_source=False, now=None):
"""
Get the original pkgver when using the original source. Otherwise, get the
@ -263,7 +285,7 @@ def override_source(args, apkbuild, pkgver, src, suffix="native"):
return
# Mount source in chroot
mount_path = "/mnt/pmbootstrap/source-override/"
mount_path = "/mnt/pmbootstrap-source-override/"
mount_path_outside = args.work + "/chroot_" + suffix + mount_path
pmb.helpers.mount.bind(args, src, mount_path_outside, umount=True)
@ -322,16 +344,6 @@ def override_source(args, apkbuild, pkgver, src, suffix="native"):
pmb.chroot.user(args, ["mv", append_path + "_", apkbuild_path], suffix)
def mount_pmaports(args, destination, suffix="native"):
"""
Mount pmaports.git in chroot.
:param destination: mount point inside the chroot
"""
outside_destination = args.work + "/chroot_" + suffix + destination
pmb.helpers.mount.bind(args, args.aports, outside_destination, umount=True)
def link_to_git_dir(args, suffix):
"""
Make /home/pmos/build/.git point to the .git dir from pmaports.git, with a
@ -352,12 +364,13 @@ def link_to_git_dir(args, suffix):
# initialization of the chroot, because the pmaports dir may not exist yet
# at that point. Use umount=True, so we don't have an old path mounted
# (some tests change the pmaports dir).
destination = "/mnt/pmaports"
mount_pmaports(args, destination, suffix)
inside_destination = "/mnt/pmaports"
outside_destination = args.work + "/chroot_" + suffix + inside_destination
pmb.helpers.mount.bind(args, args.aports, outside_destination, umount=True)
# Create .git symlink
pmb.chroot.user(args, ["mkdir", "-p", "/home/pmos/build"], suffix)
pmb.chroot.user(args, ["ln", "-sf", destination + "/.git",
pmb.chroot.user(args, ["ln", "-sf", inside_destination + "/.git",
"/home/pmos/build/.git"], suffix)
@ -368,7 +381,7 @@ def run_abuild(args, apkbuild, arch, strict=False, force=False, cross=None,
depending on the cross-compiler method and target architecture), copy
the aport to the chroot and execute abuild.
:param cross: None, "native", or "crossdirect"
:param cross: None, "native", "distcc", or "crossdirect"
:param src: override source used to build the package with a local folder
:returns: (output, cmd, env), output is the destination apk path relative
to the package folder ("x86_64/hello-1-r2.apk"). cmd and env are
@ -397,28 +410,24 @@ def run_abuild(args, apkbuild, arch, strict=False, force=False, cross=None,
hostspec = pmb.parse.arch.alpine_to_hostspec(arch)
env["CROSS_COMPILE"] = hostspec + "-"
env["CC"] = hostspec + "-gcc"
if cross == "distcc":
env["CCACHE_PREFIX"] = "distcc"
env["CCACHE_PATH"] = "/usr/lib/arch-bin-masquerade/" + arch + ":/usr/bin"
env["CCACHE_COMPILERCHECK"] = "string:" + get_gcc_version(args, arch)
env["DISTCC_HOSTS"] = "@127.0.0.1:/home/pmos/.distcc-sshd/distccd"
env["DISTCC_SSH"] = ("ssh -o StrictHostKeyChecking=no -p" +
args.port_distccd)
env["DISTCC_BACKOFF_PERIOD"] = "0"
if not args.distcc_fallback:
env["DISTCC_FALLBACK"] = "0"
if args.verbose:
env["DISTCC_VERBOSE"] = "1"
if cross == "crossdirect":
env["PATH"] = ":".join(["/native/usr/lib/crossdirect/" + arch,
pmb.config.chroot_path])
if not args.ccache:
env["CCACHE_DISABLE"] = "1"
# Use sccache without crossdirect (crossdirect uses it via rustc.sh)
if args.ccache and cross != "crossdirect":
env["RUSTC_WRAPPER"] = "/usr/bin/sccache"
# Cache binary objects from go in this path (like ccache)
env["GOCACHE"] = "/home/pmos/.cache/go-build"
# Cache go modules (git repositories). Usually these should be bundled and
# it should not be required to download them at build time, in that case
# the APKBUILD sets the GOPATH (and therefore indirectly GOMODCACHE). But
# e.g. when using --src they are not bundled, in that case it makes sense
# to point GOMODCACHE at pmbootstrap's work dir so the modules are only
# downloaded once.
if args.go_mod_cache:
env["GOMODCACHE"] = "/home/pmos/go/pkg/mod"
# Build the abuild command
cmd = ["abuild", "-D", "postmarketOS"]
if strict or "pmb:strict" in apkbuild["options"]:
@ -452,17 +461,14 @@ def finish(args, apkbuild, arch, output, strict=False, suffix="native"):
# Clear APKINDEX cache (we only parse APKINDEX files once per session and
# cache the result for faster dependency resolving, but after we built a
# package we need to parse it again)
pmb.parse.apkindex.clear_cache(f"{args.work}/packages/{channel}"
f"/{arch}/APKINDEX.tar.gz")
pmb.parse.apkindex.clear_cache(args, f"{args.work}/packages/{channel}"
f"/{arch}/APKINDEX.tar.gz")
# Uninstall build dependencies (strict mode)
if strict or "pmb:strict" in apkbuild["options"]:
logging.info("(" + suffix + ") uninstall build dependencies")
pmb.chroot.user(args, ["abuild", "undeps"], suffix, "/home/pmos/build",
env={"SUDO_APK": "abuild-apk --no-progress"})
# If the build depends contain postmarketos-keys or postmarketos-base,
# abuild will have removed the postmarketOS repository key (pma#1230)
pmb.chroot.init_keys(args)
def package(args, pkgname, arch=None, force=False, strict=False,
@ -470,14 +476,9 @@ def package(args, pkgname, arch=None, force=False, strict=False,
"""
Build a package and its dependencies with Alpine Linux' abuild.
If this function is called multiple times on the same pkgname but first
with force=False and then force=True the force argument will be ignored due
to the package cache.
See the skip_already_built() call below.
:param pkgname: package name to be built, as specified in the APKBUILD
:param arch: architecture we're building for (default: native)
:param force: always build, even if not necessary
:param force: even build, if not necessary
:param strict: avoid building with irrelevant dependencies installed by
letting abuild install and uninstall all dependencies.
:param skip_init_buildenv: can be set to False to avoid initializing the
@ -489,8 +490,8 @@ def package(args, pkgname, arch=None, force=False, strict=False,
output path relative to the packages folder ("armhf/ab-1-r2.apk")
"""
# Once per session is enough
arch = arch or pmb.config.arch_native
if skip_already_built(pkgname, arch):
arch = arch or args.arch_native
if skip_already_built(args, pkgname, arch):
return
# Only build when APKBUILD exists
@ -501,7 +502,7 @@ def package(args, pkgname, arch=None, force=False, strict=False,
# Detect the build environment (skip unnecessary builds)
if not check_build_for_arch(args, pkgname, arch):
return
suffix = pmb.build.autodetect.suffix(apkbuild, arch)
suffix = pmb.build.autodetect.suffix(args, apkbuild, arch)
cross = pmb.build.autodetect.crosscompile(args, apkbuild, arch, suffix)
if not init_buildenv(args, apkbuild, arch, strict, force, cross, suffix,
skip_init_buildenv, src):

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -40,7 +40,7 @@ def arch(args, pkgname):
:returns: arch string like "x86_64" or "armhf". Preferred order, depending
on what is supported by the APKBUILD:
* native arch
* device arch (this will be preferred instead if build_default_device_arch is true)
* device arch
* first arch in the APKBUILD
"""
aport = pmb.helpers.pmaports.find(args, pkgname)
@ -48,21 +48,14 @@ def arch(args, pkgname):
if ret:
return ret
apkbuild = pmb.parse.apkbuild(f"{aport}/APKBUILD")
apkbuild = pmb.parse.apkbuild(args, aport + "/APKBUILD")
arches = apkbuild["arch"]
if "noarch" in arches or "all" in arches or args.arch_native in arches:
return args.arch_native
if args.build_default_device_arch:
preferred_arch = args.deviceinfo["arch"]
preferred_arch_2nd = pmb.config.arch_native
else:
preferred_arch = pmb.config.arch_native
preferred_arch_2nd = args.deviceinfo["arch"]
if "noarch" in arches or "all" in arches or preferred_arch in arches:
return preferred_arch
if preferred_arch_2nd in arches:
return preferred_arch_2nd
arch_device = args.deviceinfo["arch"]
if arch_device in arches:
return arch_device
try:
return apkbuild["arch"][0]
@ -70,8 +63,8 @@ def arch(args, pkgname):
return None
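
The arch preference described in the docstring above can be restated compactly; a sketch assuming build_default_device_arch simply swaps the first two preferences, as the branch above suggests:

# Sketch of the preference order implemented above (illustrative only).
def pick_arch_sketch(arches, arch_native, arch_device, prefer_device=False):
    preferred = ([arch_device, arch_native] if prefer_device
                 else [arch_native, arch_device])
    if "noarch" in arches or "all" in arches or preferred[0] in arches:
        return preferred[0]
    if preferred[1] in arches:
        return preferred[1]
    return arches[0] if arches else None

print(pick_arch_sketch(["armv7", "aarch64"], "x86_64", "aarch64"))  # aarch64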
def suffix(apkbuild, arch):
if arch == pmb.config.arch_native:
def suffix(args, apkbuild, arch):
if arch == args.arch_native:
return "native"
if "pmb:cross-native" in apkbuild["options"]:
@ -82,14 +75,14 @@ def suffix(apkbuild, arch):
def crosscompile(args, apkbuild, arch, suffix):
"""
:returns: None, "native", "crossdirect"
:returns: None, "native", "crossdirect" or "distcc"
"""
if not args.cross:
return None
if not pmb.parse.arch.cpu_emulation_required(arch):
if not pmb.parse.arch.cpu_emulation_required(args, arch):
return None
if suffix == "native":
return "native"
if "!pmb:crossdirect" in apkbuild["options"]:
return None
if args.no_crossdirect or "!pmb:crossdirect" in apkbuild["options"]:
return "distcc"
return "crossdirect"

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
@ -10,7 +10,7 @@ import pmb.helpers.pmaports
def update(args, pkgname):
""" Fetch all sources and update the checksums in the APKBUILD. """
pmb.build.init_abuild_minimal(args)
pmb.build.init(args)
pmb.build.copy_to_buildpath(args, pkgname)
logging.info("(native) generate checksums for " + pkgname)
pmb.chroot.user(args, ["abuild", "checksum"],
@ -24,7 +24,7 @@ def update(args, pkgname):
def verify(args, pkgname):
""" Fetch all sources and verify their checksums. """
pmb.build.init_abuild_minimal(args)
pmb.build.init(args)
pmb.build.copy_to_buildpath(args, pkgname)
logging.info("(native) verify checksums for " + pkgname)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Robert Yang
# Copyright 2020 Robert Yang
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -12,14 +12,14 @@ import pmb.helpers.pmaports
import pmb.parse
def match_kbuild_out(word):
def match_kbuild_out(args, word):
"""
Look for paths in the following formats:
"<prefix>/<kbuild_out>/arch/<arch>/boot"
"<prefix>/<kbuild_out>/include/config/kernel.release"
:param word: space separated string cut out from a line from an APKBUILD
function body that might be the kbuild output path
function body, that might be the kbuild output path
:returns: kernel build output directory.
empty string when a separate build output directory isn't used.
None, when no output directory is found.
@ -47,7 +47,7 @@ def match_kbuild_out(word):
return "" if out_dir is None else out_dir.strip("/")
def find_kbuild_output_dir(function_body):
def find_kbuild_output_dir(args, function_body):
"""
Guess what the kernel build output directory is. Parses each line of the
function word by word, looking for paths which contain the kbuild output
@ -61,7 +61,7 @@ def find_kbuild_output_dir(function_body):
guesses = []
for line in function_body:
for item in line.split():
kbuild_out = match_kbuild_out(item)
kbuild_out = match_kbuild_out(args, item)
if kbuild_out is not None:
guesses.append(kbuild_out)
break
@ -86,7 +86,7 @@ def modify_apkbuild(args, pkgname, aport):
Modify kernel APKBUILD to package build output from envkernel.sh
"""
apkbuild_path = aport + "/APKBUILD"
apkbuild = pmb.parse.apkbuild(apkbuild_path)
apkbuild = pmb.parse.apkbuild(args, apkbuild_path)
if os.path.exists(args.work + "/aportgen"):
pmb.helpers.run.user(args, ["rm", "-r", args.work + "/aportgen"])
@ -104,23 +104,6 @@ def modify_apkbuild(args, pkgname, aport):
pmb.aportgen.core.rewrite(args, pkgname, apkbuild_path, fields=fields)
def host_build_bindmount(args, chroot, flag_file, mount=False):
"""
Check if the bind mount already exists and unmount it.
Then bindmount the current directory into the chroot as
/mnt/linux so it can be used by the envkernel abuild wrapper
"""
flag_path = f"{chroot}/{flag_file}"
if os.path.exists(flag_path):
logging.info("Cleaning up kernel sources bind-mount")
pmb.helpers.run.root(args, ["umount", chroot + "/mnt/linux"], check=False)
pmb.helpers.run.root(args, ["rm", flag_path])
if mount:
pmb.helpers.mount.bind(args, ".", f"{chroot}/mnt/linux")
pmb.helpers.run.root(args, ["touch", flag_path])
def run_abuild(args, pkgname, arch, apkbuild_path, kbuild_out):
"""
Prepare build environment and run abuild.
@ -134,30 +117,10 @@ def run_abuild(args, pkgname, arch, apkbuild_path, kbuild_out):
build_path = "/home/pmos/build"
kbuild_out_source = "/mnt/linux/.output"
# If the kernel was cross-compiled on the host rather than with the envkernel
# helper, we can still use the envkernel logic to package the artifacts for
# development, making it easy to quickly sideload a new kernel or pmbootstrap
# to create a boot image
# This handles bind mounting the current directory (assumed to be kernel sources)
# into the chroot so we can run abuild against it for the currently selected
# devices kernel package.
flag_file = "envkernel-bind-mounted"
host_build = False
if not pmb.helpers.mount.ismount(chroot + "/mnt/linux"):
logging.info("envkernel.sh hasn't run, assuming the kernel was cross compiled"
"on host and using current dir as source")
host_build = True
host_build_bindmount(args, chroot, flag_file, mount=host_build)
if not os.path.exists(chroot + kbuild_out_source):
raise RuntimeError("No '.output' dir found in your kernel source dir. "
"Compile the " + args.device + " kernel first and "
"then try again. See https://postmarketos.org/envkernel"
"for details. If building on your host and only using "
"--envkernel for packaging, make sure you have O=.output "
"as an argument to make.")
raise RuntimeError("No '.output' dir found in your kernel source dir."
"Compile the " + args.device + " kernel with "
"envkernel.sh first, then try again.")
# Create working directory for abuild
pmb.build.copy_to_buildpath(args, pkgname)
@ -179,14 +142,11 @@ def run_abuild(args, pkgname, arch, apkbuild_path, kbuild_out):
# Create the apk package
env = {"CARCH": arch,
"CHOST": arch,
"CBUILD": pmb.config.arch_native,
"CBUILD": args.arch_native,
"SUDO_APK": "abuild-apk --no-progress"}
cmd = ["abuild", "rootpkg"]
pmb.chroot.user(args, cmd, working_dir=build_path, env=env)
# Clean up bindmount if needed
host_build_bindmount(args, chroot, flag_file)
# Clean up symlinks
if build_output != "":
if os.path.islink(chroot + "/mnt/linux/" + build_output) and \
@ -206,25 +166,19 @@ def package_kernel(args):
"argument.")
aport = pmb.helpers.pmaports.find(args, pkgname)
function_body = pmb.parse.function_body(aport + "/APKBUILD", "package")
kbuild_out = find_kbuild_output_dir(args, function_body)
modify_apkbuild(args, pkgname, aport)
apkbuild_path = args.work + "/aportgen/APKBUILD"
arch = args.deviceinfo["arch"]
apkbuild = pmb.parse.apkbuild(apkbuild_path, check_pkgname=False)
if apkbuild["_outdir"]:
kbuild_out = apkbuild["_outdir"]
else:
function_body = pmb.parse.function_body(aport + "/APKBUILD", "package")
kbuild_out = find_kbuild_output_dir(function_body)
suffix = pmb.build.autodetect.suffix(apkbuild, arch)
apkbuild = pmb.parse.apkbuild(args, apkbuild_path, check_pkgname=False)
suffix = pmb.build.autodetect.suffix(args, apkbuild, arch)
# Install package dependencies
depends, _ = pmb.build._package.build_depends(
args, apkbuild, pmb.config.arch_native, strict=False)
depends, _ = pmb.build._package.build_depends(args, apkbuild, args.arch_native, strict=False)
pmb.build.init(args, suffix)
if pmb.parse.arch.cpu_emulation_required(arch):
depends.append("binutils-" + arch)
pmb.chroot.apk.install(args, depends, suffix)
output = (arch + "/" + apkbuild["pkgname"] + "-" + apkbuild["pkgver"] +

View File

@ -1,9 +1,8 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import logging
import os
import pathlib
import logging
import glob
import pmb.build
import pmb.config
@ -12,14 +11,15 @@ import pmb.chroot.apk
import pmb.helpers.run
def init_abuild_minimal(args, suffix="native"):
""" Initialize a minimal chroot with abuild where one can do
'abuild checksum'. """
marker = f"{args.work}/chroot_{suffix}/tmp/pmb_chroot_abuild_init_done"
if os.path.exists(marker):
def init(args, suffix="native"):
# Check if already initialized
marker = "/var/local/pmbootstrap_chroot_build_init_done"
if os.path.exists(args.work + "/chroot_" + suffix + marker):
return
pmb.chroot.apk.install(args, ["abuild"], suffix, build=False)
# Initialize chroot, install packages
pmb.chroot.apk.install(args, pmb.config.build_packages, suffix,
build=False)
# Fix permissions
pmb.chroot.root(args, ["chown", "root:abuild",
@ -27,45 +27,26 @@ def init_abuild_minimal(args, suffix="native"):
pmb.chroot.root(args, ["chmod", "g+w",
"/var/cache/distfiles"], suffix)
# Add user to group abuild
pmb.chroot.root(args, ["adduser", "pmos", "abuild"], suffix)
pathlib.Path(marker).touch()
def init(args, suffix="native"):
""" Initialize a chroot for building packages with abuild. """
marker = f"{args.work}/chroot_{suffix}/tmp/pmb_chroot_build_init_done"
if os.path.exists(marker):
return
init_abuild_minimal(args, suffix)
# Initialize chroot, install packages
pmb.chroot.apk.install(args, pmb.config.build_packages, suffix,
build=False)
# Generate package signing keys
chroot = args.work + "/chroot_" + suffix
if not os.path.exists(args.work + "/config_abuild/abuild.conf"):
logging.info("(" + suffix + ") generate abuild keys")
pmb.chroot.user(args, ["abuild-keygen", "-n", "-q", "-a"],
suffix, env={"PACKAGER": "pmos <pmos@local>"})
suffix)
# Copy package signing key to /etc/apk/keys
for key in glob.glob(chroot +
"/mnt/pmbootstrap/abuild-config/*.pub"):
"/mnt/pmbootstrap-abuild-config/*.pub"):
key = key[len(chroot):]
pmb.chroot.root(args, ["cp", key, "/etc/apk/keys/"], suffix)
# Add gzip wrapper that converts '-9' to '-1'
# Add gzip wrapper, that converts '-9' to '-1'
if not os.path.exists(chroot + "/usr/local/bin/gzip"):
with open(chroot + "/tmp/gzip_wrapper.sh", "w") as handle:
content = """
#!/bin/sh
# Simple wrapper that converts -9 flag for gzip to -1 for
# speed improvement with abuild. FIXME: upstream to abuild
# with a flag!
# Simple wrapper, that converts -9 flag for gzip to -1 for speed
# improvement with abuild. FIXME: upstream to abuild with a flag!
args=""
for arg in "$@"; do
[ "$arg" == "-9" ] && arg="-1"
@ -77,10 +58,13 @@ def init(args, suffix="native"):
for i in range(len(lines)):
lines[i] = lines[i][16:]
handle.write("\n".join(lines))
pmb.chroot.root(args, ["cp", "/tmp/gzip_wrapper.sh",
"/usr/local/bin/gzip"], suffix)
pmb.chroot.root(args, ["cp", "/tmp/gzip_wrapper.sh", "/usr/local/bin/gzip"],
suffix)
pmb.chroot.root(args, ["chmod", "+x", "/usr/local/bin/gzip"], suffix)
# Add user to group abuild
pmb.chroot.root(args, ["adduser", "pmos", "abuild"], suffix)
# abuild.conf: Don't clean the build folder after building, so we can
# inspect it afterwards for debugging
pmb.chroot.root(args, ["sed", "-i", "-e", "s/^CLEANUP=.*/CLEANUP=''/",
@ -92,27 +76,5 @@ def init(args, suffix="native"):
"s/^ERROR_CLEANUP=.*/ERROR_CLEANUP=''/",
"/etc/abuild.conf"], suffix)
pathlib.Path(marker).touch()
def init_compiler(args, depends, cross, arch):
cross_pkgs = ["ccache-cross-symlinks"]
if "gcc4" in depends:
cross_pkgs += ["gcc4-" + arch]
elif "gcc6" in depends:
cross_pkgs += ["gcc6-" + arch]
else:
cross_pkgs += ["gcc-" + arch, "g++-" + arch]
if "clang" in depends or "clang-dev" in depends:
cross_pkgs += ["clang"]
if cross == "crossdirect":
cross_pkgs += ["crossdirect"]
if "rust" in depends or "cargo" in depends:
if args.ccache:
cross_pkgs += ["sccache"]
# crossdirect for rust installs all build dependencies in the
# native chroot too, as some of them can be required for building
# native macros / build scripts
cross_pkgs += depends
pmb.chroot.apk.install(args, cross_pkgs)
# Mark the chroot as initialized
pmb.chroot.root(args, ["touch", marker], suffix)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@ -14,7 +14,7 @@ import pmb.helpers.run
import pmb.parse
def get_arch(apkbuild):
def get_arch(args, apkbuild):
"""
Take the architecture from the APKBUILD or complain if it's ambiguous. This
function only gets called if --arch is not set.
@ -78,7 +78,43 @@ def get_outputdir(args, pkgname, apkbuild):
" template with: pmbootstrap aportgen " + pkgname)
def extract_and_patch_sources(args, pkgname, arch):
def menuconfig(args, pkgname):
# Pkgname: allow omitting "linux-" prefix
if pkgname.startswith("linux-"):
pkgname_ = pkgname.split("linux-")[1]
logging.info("PROTIP: You can simply do 'pmbootstrap kconfig edit " +
pkgname_ + "'")
else:
pkgname = "linux-" + pkgname
# Read apkbuild
aport = pmb.helpers.pmaports.find(args, pkgname)
apkbuild = pmb.parse.apkbuild(args, aport + "/APKBUILD")
arch = args.arch or get_arch(args, apkbuild)
kopt = "menuconfig"
# Set up build tools and makedepends
pmb.build.init(args)
depends = apkbuild["makedepends"]
kopt = "menuconfig"
copy_xauth = False
if args.xconfig:
depends += ["qt-dev", "font-noto"]
kopt = "xconfig"
copy_xauth = True
elif args.gconfig:
depends += ["gtk+2.0-dev", "glib-dev", "libglade-dev", "font-noto"]
kopt = "gconfig"
copy_xauth = True
else:
depends += ["ncurses-dev"]
pmb.chroot.apk.install(args, depends)
# Copy host's .xauthority into native
if copy_xauth:
pmb.chroot.other.copy_xauthority(args)
# Patch and extract sources
pmb.build.copy_to_buildpath(args, pkgname)
logging.info("(native) extract kernel source")
pmb.chroot.user(args, ["abuild", "unpack"], "native", "/home/pmos/build")
@ -87,66 +123,14 @@ def extract_and_patch_sources(args, pkgname, arch):
"/home/pmos/build", output="interactive",
env={"CARCH": arch})
def menuconfig(args, pkgname, use_oldconfig):
# Pkgname: allow omitting "linux-" prefix
if not pkgname.startswith("linux-"):
pkgname = "linux-" + pkgname
# Read apkbuild
aport = pmb.helpers.pmaports.find(args, pkgname)
apkbuild = pmb.parse.apkbuild(f"{aport}/APKBUILD")
arch = args.arch or get_arch(apkbuild)
suffix = pmb.build.autodetect.suffix(apkbuild, arch)
cross = pmb.build.autodetect.crosscompile(args, apkbuild, arch, suffix)
hostspec = pmb.parse.arch.alpine_to_hostspec(arch)
# Set up build tools and makedepends
pmb.build.init(args, suffix)
if cross:
pmb.build.init_compiler(args, [], cross, arch)
depends = apkbuild["makedepends"]
copy_xauth = False
if use_oldconfig:
kopt = "oldconfig"
else:
kopt = "menuconfig"
if args.xconfig:
depends += ["qt5-qtbase-dev", "font-noto"]
kopt = "xconfig"
copy_xauth = True
elif args.nconfig:
kopt = "nconfig"
depends += ["ncurses-dev"]
else:
depends += ["ncurses-dev"]
pmb.chroot.apk.install(args, depends)
# Copy host's .xauthority into native
if copy_xauth:
pmb.chroot.other.copy_xauthority(args)
extract_and_patch_sources(args, pkgname, arch)
# Check for background color variable
color = os.environ.get("MENUCONFIG_COLOR")
# Run make menuconfig
outputdir = get_outputdir(args, pkgname, apkbuild)
logging.info("(native) make " + kopt)
env = {"ARCH": pmb.parse.arch.alpine_to_kernel(arch),
"DISPLAY": os.environ.get("DISPLAY"),
"XAUTHORITY": "/home/pmos/.Xauthority"}
if cross:
env["CROSS_COMPILE"] = f"{hostspec}-"
env["CC"] = f"{hostspec}-gcc"
if color:
env["MENUCONFIG_COLOR"] = color
pmb.chroot.user(args, ["make", kopt], "native",
outputdir, output="tui", env=env)
outputdir, output="tui",
env={"ARCH": pmb.parse.arch.alpine_to_kernel(arch),
"DISPLAY": os.environ.get("DISPLAY"),
"XAUTHORITY": "/home/pmos/.Xauthority"})
# Find the updated config
source = args.work + "/chroot_native" + outputdir + "/.config"
@ -161,4 +145,5 @@ def menuconfig(args, pkgname, use_oldconfig):
pmb.build.checksum.update(args, pkgname)
# Check config
pmb.parse.kconfig.check(args, apkbuild["_flavor"], details=True)
pmb.parse.kconfig.check(args, apkbuild["_flavor"], force_anbox_check=False,
details=True)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import os
@ -25,7 +25,7 @@ def newapkbuild(args, folder, args_passed, force=False):
# Paths for copying
source_apkbuild = glob_result[0]
pkgname = pmb.parse.apkbuild(source_apkbuild, False)["pkgname"]
pkgname = pmb.parse.apkbuild(args, source_apkbuild, False)["pkgname"]
target = args.aports + "/" + folder + "/" + pkgname
# Move /home/pmos/build/$pkgname/* to /home/pmos/build/*

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import logging
@ -28,16 +28,7 @@ def copy_to_buildpath(args, package, suffix="native"):
pmb.chroot.root(args, ["rm", "-rf", "/home/pmos/build"], suffix)
# Copy aport contents with resolved symlinks
pmb.helpers.run.root(args, ["mkdir", "-p", build])
for entry in os.listdir(aport):
# Don't copy those dirs, as those have probably been generated by running `abuild`
# on the host system directly and not cleaning up after itself.
# Those dirs might contain broken symlinks and cp fails resolving them.
if entry in ["src", "pkg"]:
logging.warn(f"WARNING: Not copying {entry}, looks like a leftover from abuild")
continue
pmb.helpers.run.root(args, ["cp", "-rL", f"{aport}/{entry}", f"{build}/{entry}"])
pmb.helpers.run.root(args, ["cp", "-rL", aport + "/", build])
pmb.chroot.root(args, ["chown", "-R", "pmos:pmos",
"/home/pmos/build"], suffix)
@ -84,8 +75,8 @@ def is_necessary(args, arch, apkbuild, indexes=None):
# b) Aports folder has a newer version
if version_new != version_old:
logging.debug(f"{msg}Binary package out of date (binary: "
f"{version_old}, aport: {version_new})")
logging.debug(msg + "Binary package out of date (binary: " + version_old +
", aport: " + version_new + ")")
return True
# Aports and binary repo have the same version.
@ -126,7 +117,7 @@ def index_repo(args, arch=None):
pmb.chroot.user(args, command, working_dir=path_repo_chroot)
else:
logging.debug("NOTE: Can't build index for: " + path)
pmb.parse.apkindex.clear_cache(f"{path}/APKINDEX.tar.gz")
pmb.parse.apkindex.clear_cache(args, path + "/APKINDEX.tar.gz")
def configure_abuild(args, suffix, verify=False):
@ -143,16 +134,14 @@ def configure_abuild(args, suffix, verify=False):
continue
if line != (prefix + args.jobs + "\n"):
if verify:
raise RuntimeError(f"Failed to configure abuild: {path}"
"\nTry to delete the file"
"(or zap the chroot).")
raise RuntimeError("Failed to configure abuild: " + path +
"\nTry to delete the file (or zap the chroot).")
pmb.chroot.root(args, ["sed", "-i", "-e",
f"s/^{prefix}.*/{prefix}{args.jobs}/",
"/etc/abuild.conf"],
suffix)
"s/^" + prefix + ".*/" + prefix + args.jobs + "/",
"/etc/abuild.conf"], suffix)
configure_abuild(args, suffix, True)
return
pmb.chroot.root(args, ["sed", "-i", f"$ a\\{prefix}{args.jobs}", "/etc/abuild.conf"], suffix)
raise RuntimeError("Could not find " + prefix + " line in " + path)
def configure_ccache(args, suffix="native", verify=False):

View File

@ -1,7 +1,7 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
from pmb.chroot.init import init, init_keys
from pmb.chroot.mount import mount, mount_native_into_foreign, remove_mnt_pmbootstrap
from pmb.chroot.init import init
from pmb.chroot.mount import mount, mount_native_into_foreign
from pmb.chroot.root import root
from pmb.chroot.user import user
from pmb.chroot.user import exists as user_exists

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@ -6,7 +6,6 @@ import shlex
import pmb.chroot
import pmb.config
import pmb.helpers.apk
import pmb.helpers.pmaports
import pmb.parse.apkindex
import pmb.parse.arch
@ -25,11 +24,11 @@ def update_repository_list(args, suffix="native", check=False):
True.
"""
# Skip if we already did this
if suffix in pmb.helpers.other.cache["apk_repository_list_updated"]:
if suffix in args.cache["apk_repository_list_updated"]:
return
# Read old entries or create folder structure
path = f"{args.work}/chroot_{suffix}/etc/apk/repositories"
path = args.work + "/chroot_" + suffix + "/etc/apk/repositories"
lines_old = []
if os.path.exists(path):
# Read all old lines
@ -43,20 +42,20 @@ def update_repository_list(args, suffix="native", check=False):
# Up to date: Save cache, return
lines_new = pmb.helpers.repo.urls(args)
if lines_old == lines_new:
pmb.helpers.other.cache["apk_repository_list_updated"].append(suffix)
args.cache["apk_repository_list_updated"].append(suffix)
return
# Check phase: raise error when still outdated
if check:
raise RuntimeError(f"Failed to update: {path}")
raise RuntimeError("Failed to update: " + path)
# Update the file
logging.debug(f"({suffix}) update /etc/apk/repositories")
logging.debug("(" + suffix + ") update /etc/apk/repositories")
if os.path.exists(path):
pmb.helpers.run.root(args, ["rm", path])
for line in lines_new:
pmb.helpers.run.root(args, ["sh", "-c", "echo "
f"{shlex.quote(line)} >> {path}"])
pmb.helpers.run.root(args, ["sh", "-c", "echo " +
shlex.quote(line) + " >> " + path])
update_repository_list(args, suffix, True)
@ -67,180 +66,175 @@ def check_min_version(args, suffix="native"):
"""
# Skip if we already did this
if suffix in pmb.helpers.other.cache["apk_min_version_checked"]:
if suffix in args.cache["apk_min_version_checked"]:
return
# Skip if apk is not installed yet
if not os.path.exists(f"{args.work}/chroot_{suffix}/sbin/apk"):
logging.debug(f"NOTE: Skipped apk version check for chroot '{suffix}'"
", because it is not installed yet!")
if not os.path.exists(args.work + "/chroot_" + suffix + "/sbin/apk"):
logging.debug("NOTE: Skipped apk version check for chroot '" + suffix +
"', because it is not installed yet!")
return
# Compare
version_installed = installed(args, suffix)["apk-tools"]["version"]
pmb.helpers.apk.check_outdated(
args, version_installed,
"Delete your http cache and zap all chroots, then try again:"
" 'pmbootstrap zap -hc'")
version_min = pmb.config.apk_tools_static_min_version
if pmb.parse.version.compare(version_installed, version_min) == -1:
raise RuntimeError("You have an outdated version of the 'apk' package"
" manager installed (your version: " + version_installed +
", expected at least: " + version_min + "). Delete"
" your http cache and zap all chroots, then try again:"
" 'pmbootstrap zap -hc'")
# Mark this suffix as checked
pmb.helpers.other.cache["apk_min_version_checked"].append(suffix)
args.cache["apk_min_version_checked"].append(suffix)
def install_build(args, package, arch):
def install_is_necessary(args, build, arch, package, packages_installed):
"""
Build an outdated package unless pmbootstrap was invoked with
"pmbootstrap install" and the option to build packages during pmb install
is disabled.
:param package: name of the package to build
:param arch: architecture of the package to build
This function optionally builds an out of date package, and checks if the
version installed inside a chroot is up to date.
:param build: Set to true to build the package, if the binary packages are
out of date, and it is in the aports folder.
:param packages_installed: Return value from installed().
:returns: True if the package needs to be installed/updated, False otherwise.
"""
# User may have disabled building packages during "pmbootstrap install"
# User may have disabled buiding packages during "pmbootstrap install"
build_disabled = False
if args.action == "install" and not args.build_pkgs_on_install:
if not pmb.parse.apkindex.package(args, package, arch, False):
build_disabled = True
# Build package
if build and not build_disabled:
pmb.build.package(args, package, arch)
# No further checks when not installed
if package not in packages_installed:
return True
# Make sure, that we really have a binary package
data_repo = pmb.parse.apkindex.package(args, package, arch, False)
if not data_repo:
if build_disabled:
raise RuntimeError(f"{package}: no binary package found for"
f" {arch}, and compiling packages during"
" 'pmbootstrap install' has been disabled."
" Consider changing this option in"
" 'pmbootstrap init'.")
# Use the existing binary package
return
logging.warning("WARNING: Internal error in pmbootstrap," +
" package '" + package + "' for " + arch +
" has not been built yet, but it should have"
" been. Rebuilding it with force. Please "
" report this, if there is no ticket about this"
" yet!")
pmb.build.package(args, package, arch, True)
return install_is_necessary(args, build, arch, package,
packages_installed)
# Build the package if it's in pmaports and there is no binary package
# with the same pkgver and pkgrel. This check is done in
# pmb.build.is_necessary, which gets called in pmb.build.package.
return pmb.build.package(args, package, arch)
# Compare the installed version vs. the version in the repos
data_installed = packages_installed[package]
compare = pmb.parse.version.compare(data_installed["version"],
data_repo["version"])
# a) Installed newer (should not happen normally)
if compare == 1:
logging.info("WARNING: " + arch + " package '" + package +
"' installed version " + data_installed["version"] +
" is newer, than the version in the repositories: " +
data_repo["version"] +
" See also: <https://postmarketos.org/warning-repo>")
return False
# b) Repo newer
elif compare == -1:
return True
# c) Same version, look at last modified
elif compare == 0:
time_installed = float(data_installed["timestamp"])
time_repo = float(data_repo["timestamp"])
return time_repo > time_installed
def packages_split_to_add_del(packages):
def replace_aports_packages_with_path(args, packages, suffix, arch):
"""
Sort packages into "to_add" and "to_del" lists depending on their pkgname
starting with an exclamation mark.
:param packages: list of pkgnames
:returns: (to_add, to_del) - tuple of lists of pkgnames, e.g.
(["hello-world", ...], ["some-conflict-pkg", ...])
apk will only re-install packages with the same pkgname, pkgver and pkgrel,
when you give it the absolute path to the package. This function replaces
all packages, that were built locally, with the absolute path to the package.
"""
to_add = []
to_del = []
for package in packages:
if package.startswith("!"):
to_del.append(package.lstrip("!"))
else:
to_add.append(package)
return (to_add, to_del)
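
Usage of the helper above is straightforward; an illustrative call:

# Packages prefixed with "!" are conflicts to remove, the rest gets installed.
to_add, to_del = packages_split_to_add_del(["hello-world", "!osk-sdl"])
assert to_add == ["hello-world"]
assert to_del == ["osk-sdl"]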
def packages_get_locally_built_apks(args, packages, arch):
"""
Iterate over packages and if existing, get paths to locally built packages.
This is used to force apk to upgrade packages to newer local versions, even
if the pkgver and pkgrel did not change.
:param packages: list of pkgnames
:param arch: architecture that the locally built packages should have
:returns: list of apk file paths that are valid inside the chroots, e.g.
["/mnt/pmbootstrap/packages/x86_64/hello-world-1-r6.apk", ...]
"""
channel = pmb.config.pmaports.read_config(args)["channel"]
ret = []
for package in packages:
data_repo = pmb.parse.apkindex.package(args, package, arch, False)
if not data_repo:
continue
apk_file = f"{package}-{data_repo['version']}.apk"
if not os.path.exists(f"{args.work}/packages/{channel}/{arch}/{apk_file}"):
continue
ret.append(f"/mnt/pmbootstrap/packages/{arch}/{apk_file}")
aport = pmb.helpers.pmaports.find(args, package, False)
if aport:
data_repo = pmb.parse.apkindex.package(args, package, arch, False)
if not data_repo:
raise RuntimeError(package + ": could not find binary"
" package, although it should exist for"
" sure at this point in the code."
" Probably an APKBUILD subpackage parsing"
" bug. Related: https://gitlab.com/"
"postmarketOS/build.postmarketos.org/"
"issues/61")
apk_path = ("/mnt/pmbootstrap-packages/" + arch + "/" +
package + "-" + data_repo["version"] + ".apk")
if os.path.exists(args.work + "/chroot_" + suffix + apk_path):
package = apk_path
ret.append(package)
return ret
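
For illustration, the chroot-internal path assembled above matches the example in the docstring; a hedged sketch with made-up values:

# Hypothetical values, mirroring how the chroot-internal apk path is built above.
package, version, arch = "hello-world", "1-r6", "aarch64"
apk_file = f"{package}-{version}.apk"
chroot_path = f"/mnt/pmbootstrap/packages/{arch}/{apk_file}"
assert chroot_path == "/mnt/pmbootstrap/packages/aarch64/hello-world-1-r6.apk"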
def install_run_apk(args, to_add, to_add_local, to_del, suffix):
"""
Run apk to add packages, and ensure only the desired packages get
explicitly marked as installed.
:param to_add: list of pkgnames to install, without their dependencies
:param to_add_local: return of packages_get_locally_built_apks()
:param to_del: list of pkgnames to be deleted, this should be set to
conflicting dependencies in any of the packages to be
installed or their dependencies (e.g. ["osk-sdl"])
:param suffix: the chroot suffix, e.g. "native" or "rootfs_qemu-amd64"
"""
# Sanitize packages: don't allow '--allow-untrusted' and other options
# to be passed to apk!
for package in to_add + to_add_local + to_del:
if package.startswith("-"):
raise ValueError(f"Invalid package name: {package}")
commands = [["add"] + to_add]
# Use a virtual package to mark only the explicitly requested packages as
# explicitly installed, not the ones in to_add_local
if to_add_local:
commands += [["add", "-u", "--virtual", ".pmbootstrap"] + to_add_local,
["del", ".pmbootstrap"]]
if to_del:
commands += [["del"] + to_del]
for (i, command) in enumerate(commands):
if args.offline:
command = ["--no-network"] + command
if i == 0:
pmb.helpers.apk.apk_with_progress(args, ["apk"] + command,
chroot=True, suffix=suffix)
else:
# Virtual package related commands don't actually install or remove
# packages, but only mark the right ones as explicitly installed.
# They finish up almost instantly, so don't display a progress bar.
pmb.chroot.root(args, ["apk", "--no-progress"] + command,
suffix=suffix)
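
To make the virtual-package handling above concrete, this is roughly the command sequence the loop executes for one requested package, one locally built apk and one conflict (all values hypothetical):

# Roughly the apk invocations produced by install_run_apk() for
# to_add=["device-pine64-pinephone"], one locally built apk and
# to_del=["osk-sdl"] (hypothetical values):
commands = [
    ["add", "device-pine64-pinephone"],
    ["add", "-u", "--virtual", ".pmbootstrap",
     "/mnt/pmbootstrap/packages/aarch64/hello-world-1-r6.apk"],
    ["del", ".pmbootstrap"],
    ["del", "osk-sdl"],
]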
def install(args, packages, suffix="native", build=True):
"""
Install packages from pmbootstrap's local package index or the pmOS/Alpine
binary package mirrors. Iterate over all dependencies recursively, and
build missing packages as necessary.
:param packages: list of pkgnames to be installed
:param suffix: the chroot suffix, e.g. "native" or "rootfs_qemu-amd64"
:param build: automatically build the package, when it does not exist yet
or needs to be updated, and it is inside pmaports. For the
special case that all packages are expected to be in Alpine's
repositories, set this to False for performance optimization.
or needs to be updated, and it is inside the pm-aports
folder. Checking this is expensive - if you know, that all
packages are provides by upstream repos, set this to False!
"""
arch = pmb.parse.arch.from_chroot_suffix(args, suffix)
if not packages:
logging.verbose("pmb.chroot.apk.install called with empty packages list,"
" ignoring")
return
# Initialize chroot
check_min_version(args, suffix)
pmb.chroot.init(args, suffix)
# Add depends to packages
arch = pmb.parse.arch.from_chroot_suffix(args, suffix)
packages_with_depends = pmb.parse.depends.recurse(args, packages, suffix)
to_add, to_del = packages_split_to_add_del(packages_with_depends)
if build:
for package in to_add:
install_build(args, package, arch)
# Filter outdated packages (build them if required)
packages_installed = installed(args, suffix)
packages_todo = []
for package in packages_with_depends:
if install_is_necessary(
args, build, arch, package, packages_installed):
packages_todo.append(package)
if not len(packages_todo):
return
to_add_local = packages_get_locally_built_apks(args, to_add, arch)
to_add_no_deps, _ = packages_split_to_add_del(packages)
# Sanitize packages: don't allow '--allow-untrusted' and other options
# to be passed to apk!
for package in packages_todo:
if package.startswith("-"):
raise ValueError("Invalid package name: " + package)
logging.info(f"({suffix}) install {' '.join(to_add_no_deps)}")
install_run_apk(args, to_add_no_deps, to_add_local, to_del, suffix)
# Readable install message without dependencies
message = "(" + suffix + ") install"
for pkgname in packages:
if pkgname not in packages_installed:
message += " " + pkgname
logging.info(message)
# Local packages: Using the path instead of pkgname makes apk update
# packages of the same version if the build date is different
packages_todo = replace_aports_packages_with_path(args, packages_todo,
suffix, arch)
# Use a virtual package to mark only the explicitly requested packages as
# explicitly installed, not their dependencies or specific paths (#1212)
commands = [["add"] + packages]
if packages != packages_todo:
commands = [["add", "-u", "--virtual", ".pmbootstrap"] + packages_todo,
["add"] + packages,
["del", ".pmbootstrap"]]
for command in commands:
if args.offline:
command = ["--no-network"] + command
pmb.chroot.root(args, ["apk", "--no-progress"] + command, suffix=suffix, disable_timeout=True)
def installed(args, suffix="native"):
@ -258,5 +252,5 @@ def installed(args, suffix="native"):
}, ...
}
"""
path = f"{args.work}/chroot_{suffix}/lib/apk/db/installed"
return pmb.parse.apkindex.parse(path, False)
path = args.work + "/chroot_" + suffix + "/lib/apk/db/installed"
return pmb.parse.apkindex.parse(args, path, False)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@ -7,7 +7,6 @@ import tarfile
import tempfile
import stat
import pmb.helpers.apk
import pmb.helpers.run
import pmb.config
import pmb.config.load
@ -18,7 +17,7 @@ import pmb.parse.version
def read_signature_info(tar):
"""
Find various information about the signature that was used to sign
Find various information about the signature, that was used to sign
/sbin/apk.static inside the archive (not to be confused with the normal apk
archive signature!)
@ -32,19 +31,19 @@ def read_signature_info(tar):
sigfilename = filename
break
if not sigfilename:
raise RuntimeError("Could not find signature filename in apk."
" This means that your apk file is damaged."
" Delete it and try again."
" If the problem persists, fill out a bug report.")
raise RuntimeError("Could not find signature filename in apk." +
" This means, that your apk file is damaged. Delete it" +
" and try again. If the problem persists, fill out a bug" +
" report.")
sigkey = sigfilename[len(prefix):]
logging.debug(f"sigfilename: {sigfilename}")
logging.debug(f"sigkey: {sigkey}")
logging.debug("sigfilename: " + sigfilename)
logging.debug("sigkey: " + sigkey)
# Get path to keyfile on disk
sigkey_path = f"{pmb.config.apk_keys_path}/{sigkey}"
sigkey_path = pmb.config.apk_keys_path + "/" + sigkey
if "/" in sigkey or not os.path.exists(sigkey_path):
logging.debug(f"sigkey_path: {sigkey_path}")
raise RuntimeError(f"Invalid signature key: {sigkey}")
logging.debug("sigkey_path: " + sigkey_path)
raise RuntimeError("Invalid signature key: " + sigkey)
return (sigfilename, sigkey_path)
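
As a hedged illustration of the string handling above: apk signature members follow a naming convention like ".SIGN.RSA.<keyname>", and the key name is what gets looked up under pmb.config.apk_keys_path (the prefix reflects apk's signing convention; the key name below is made up):

# Illustrative only: derive the key file name from a signature member name.
prefix = ".SIGN.RSA."                                      # apk signing convention
sigfilename = ".SIGN.RSA.build.postmarketos.org.rsa.pub"   # made-up example
sigkey = sigfilename[len(prefix):]                         # "build.postmarketos.org.rsa.pub"
sigkey_path = "/path/to/keys/" + sigkey                    # pmb.config.apk_keys_path in reality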
@ -71,7 +70,7 @@ def extract_temp(tar, sigfilename):
ret[ftype]["temp_path"] = path
shutil.copyfileobj(tar.extractfile(member), handle)
logging.debug(f"extracted: {path}")
logging.debug("extracted: " + path)
handle.close()
return ret
@ -83,7 +82,7 @@ def verify_signature(args, files, sigkey_path):
:param files: return value from extract_temp()
:raises RuntimeError: when verification failed and removes temp files
"""
logging.debug(f"Verify apk.static signature with {sigkey_path}")
logging.debug("Verify apk.static signature with " + sigkey_path)
try:
pmb.helpers.run.user(args, ["openssl", "dgst", "-sha1", "-verify",
sigkey_path, "-signature", files[
@ -114,24 +113,22 @@ def extract(args, version, apk_path):
os.unlink(files["sig"]["temp_path"])
temp_path = files["apk"]["temp_path"]
# Verify the version that the extracted binary reports
logging.debug("Verify the version reported by the apk.static binary"
f" (must match the package version {version})")
# Verify the version, that the extracted binary reports
logging.debug("Verify the version reported by the apk.static binary" +
" (must match the package version " + version + ")")
os.chmod(temp_path, os.stat(temp_path).st_mode | stat.S_IEXEC)
version_bin = pmb.helpers.run.user(args, [temp_path, "--version"],
output_return=True)
version_bin = version_bin.split(" ")[1].split(",")[0]
if not version.startswith(f"{version_bin}-r"):
if not version.startswith(version_bin + "-r"):
os.unlink(temp_path)
raise RuntimeError(f"Downloaded apk-tools-static-{version}.apk,"
" but the apk binary inside that package reports"
f" to be version: {version_bin}!"
" Looks like a downgrade attack"
" from a malicious server! Switch the server (-m)"
" and try again.")
raise RuntimeError("Downloaded apk-tools-static-" + version + ".apk,"
" but the apk binary inside that package reports to be"
" version: " + version_bin + "! Looks like a downgrade attack"
" from a malicious server! Switch the server (-m) and try again.")
# Move it to the right path
target_path = f"{args.work}/apk.static"
target_path = args.work + "/apk.static"
shutil.move(temp_path, target_path)
@ -141,8 +138,8 @@ def download(args, file):
"""
channel_cfg = pmb.config.pmaports.read_config_channel(args)
mirrordir = channel_cfg["mirrordir_alpine"]
base_url = f"{args.mirror_alpine}{mirrordir}/main/{pmb.config.arch_native}"
return pmb.helpers.http.download(args, f"{base_url}/{file}", file)
base_url = f"{args.mirror_alpine}{mirrordir}/main/{args.arch_native}"
return pmb.helpers.http.download(args, base_url + "/" + file, file)
def init(args):
@ -155,12 +152,16 @@ def init(args):
indexes=[apkindex])
version = index_data["version"]
# Verify the apk-tools-static version
pmb.helpers.apk.check_outdated(
args, version, "Run 'pmbootstrap update', then try again.")
# Extract and verify the apk-tools-static version
version_min = pmb.config.apk_tools_static_min_version
apk_name = "apk-tools-static-" + version + ".apk"
if pmb.parse.version.compare(version, version_min) == -1:
raise RuntimeError("Your APKINDEX has an outdated version of"
" apk-tools-static (your version: " + version +
", expected at least:" + version_min + "). Please" +
" run 'pmbootstrap update'.")
# Download, extract, verify apk-tools-static
apk_name = f"apk-tools-static-{version}.apk"
apk_static = download(args, apk_name)
extract(args, version, apk_static)
@ -168,5 +169,4 @@ def init(args):
def run(args, parameters):
if args.offline:
parameters = ["--no-network"] + parameters
pmb.helpers.apk.apk_with_progress(
args, [f"{args.work}/apk.static"] + parameters, chroot=False)
pmb.helpers.run.root(args, [args.work + "/apk.static"] + parameters)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@ -18,22 +18,11 @@ def register(args, arch):
Get arch, magic, mask.
"""
arch_qemu = pmb.parse.arch.alpine_to_qemu(arch)
# always make sure the qemu-<arch> binary is installed, since registering
# may happen outside of this method (e.g. by OS)
if f"qemu-{arch_qemu}" not in pmb.chroot.apk.installed(args):
pmb.chroot.apk.install(args, ["qemu-" + arch_qemu])
if is_registered(arch_qemu):
return
pmb.helpers.other.check_binfmt_misc(args)
# Don't continue if the actions from check_binfmt_misc caused the OS to
# automatically register the target arch
if is_registered(arch_qemu):
return
info = pmb.parse.binfmt_info(arch_qemu)
pmb.chroot.apk.install(args, ["qemu-" + arch_qemu])
info = pmb.parse.binfmt_info(args, arch_qemu)
# Build registration string
# https://en.wikipedia.org/wiki/Binfmt_misc
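
For reference, the registration string built from arch, magic and mask follows the binfmt_misc format linked above; a hedged sketch with shortened, illustrative magic/mask values (the real ones come from pmb.parse.binfmt_info()):

# Illustrative only: binfmt_misc registration strings have the form
#   :name:type:offset:magic:mask:interpreter:flags
# and are written verbatim (with literal \x escapes) to
# /proc/sys/fs/binfmt_misc/register.
name = "qemu-aarch64"
interpreter = "/usr/bin/qemu-aarch64"
magic = "\\x7fELF\\x02\\x01\\x01\\x00"                 # shortened, illustrative
mask = "\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x00"      # shortened, illustrative
register = f":{name}:M::{magic}:{mask}:{interpreter}:"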

pmb/chroot/distccd.py (new file, 250 lines)
View File

@ -0,0 +1,250 @@
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import errno
import json
import logging
import os
import pmb.chroot
import pmb.config
import pmb.chroot.apk
""" Packages for foreign architectures (e.g. armhf) get built in chroots
running with QEMU. While this works, it is painfully slow. So we speed it
up by using distcc to let cross compilers running in the native chroots do
the heavy lifting.
This file sets up an SSH server in the native chroot, which will then be
used by the foreign arch chroot to communicate with the distcc daemon. We
make sure that only the foreign arch chroot can connect to the sshd by only
listening on localhost, as well as generating dedicated ssh keys.
Using the SSH server instead of running distccd directly is a security
measure. Distccd does not authenticate its clients and would therefore
allow any process of the host system (not related to pmbootstrap) to
execute compilers in the native chroot. By modifying the compiler's options
or sending malicious data to the compiler, it is likely that the process
can gain remote code execution [1]. That way, a compromised, but sandboxed
process could gain privilege escalation.
[1]: <https://github.com/distcc/distcc/issues/155#issuecomment-374014645>
"""
def init_server(args):
"""
Install dependencies and generate keys for the server.
"""
# Install dependencies
pmb.chroot.apk.install(args, ["arch-bin-masquerade", "distcc",
"openssh-server"])
# Config folder (nothing to do if existing)
dir = "/home/pmos/.distcc-sshd"
dir_outside = args.work + "/chroot_native" + dir
if os.path.exists(dir_outside):
return
# Generate keys
logging.info("(native) generate distcc-sshd server keys")
pmb.chroot.user(args, ["mkdir", "-p", dir + "/etc/ssh"])
pmb.chroot.user(args, ["ssh-keygen", "-A", "-f", dir])
def init_client(args, suffix):
"""
Install dependencies and generate keys for the client.
"""
# Install dependencies
pmb.chroot.apk.install(args, ["arch-bin-masquerade", "distcc",
"openssh-client"], suffix)
# Public key path (nothing to do if existing)
pub = "/home/pmos/id_ed25519.pub"
pub_outside = args.work + "/chroot_" + suffix + pub
if os.path.exists(pub_outside):
return
# Generate keys
logging.info("(" + suffix + ") generate distcc-sshd client keys")
pmb.chroot.user(args, ["ssh-keygen", "-t", "ed25519", "-N", "",
"-f", "/home/pmos/.ssh/id_ed25519"], suffix)
pmb.chroot.user(args, ["cp", "/home/pmos/.ssh/id_ed25519.pub", pub],
suffix)
def configure_authorized_keys(args, suffix):
"""
Exclusively allow one foreign arch chroot to access the sshd.
"""
auth = "/home/pmos/.distcc-sshd/authorized_keys"
auth_outside = args.work + "/chroot_native/" + auth
pub = "/home/pmos/id_ed25519.pub"
pub_outside = args.work + "/chroot_" + suffix + pub
pmb.helpers.run.root(args, ["cp", pub_outside, auth_outside])
def configure_cmdlist(args, arch):
"""
Create a whitelist of all the cross compiler wrappers.
Distcc 3.3 and above requires such a whitelist, or else it will only run
with the --make-me-a-botnet parameter (even in ssh mode).
"""
dir = "/home/pmos/.distcc-sshd"
with open(args.work + "/chroot_native/tmp/cmdlist", "w") as handle:
for cmd in ["c++", "cc", "cpp", "g++", "gcc"]:
cmd_full = "/usr/lib/arch-bin-masquerade/" + arch + "/" + cmd
handle.write(cmd_full + "\n")
pmb.chroot.root(args, ["mv", "/tmp/cmdlist", dir + "/cmdlist"])
pmb.chroot.user(args, ["cat", dir + "/cmdlist"])
def configure_distccd_wrapper(args):
"""
Wrap distccd in a shell script, so we can pass the compiler whitelist and
set the verbose flag (when pmbootstrap is running with --verbose).
"""
dir = "/home/pmos/.distcc-sshd"
with open(args.work + "/chroot_native/tmp/wrapper", "w") as handle:
handle.write("#!/bin/sh\n"
"export DISTCC_CMDLIST='" + dir + "/cmdlist'\n"
"distccd --log-file /home/pmos/distccd.log --nice 19")
if args.verbose:
handle.write(" --verbose")
handle.write(" \"$@\"\n")
pmb.chroot.root(args, ["mv", "/tmp/wrapper", dir + "/distccd"])
pmb.chroot.user(args, ["cat", dir + "/distccd"])
pmb.chroot.root(args, ["chmod", "+x", dir + "/distccd"])
def configure_sshd(args):
"""
Configure the SSH daemon in the native chroot.
"""
dir = "/home/pmos/.distcc-sshd"
config = """AllowAgentForwarding no
AllowTcpForwarding no
AuthorizedKeysFile /home/pmos/.distcc-sshd/authorized_keys
HostKey /home/pmos/.distcc-sshd/etc/ssh/ssh_host_ed25519_key
ListenAddress 127.0.0.1
PasswordAuthentication no
PidFile /home/pmos/.distcc-sshd/sshd.pid
Port """ + args.port_distccd + """
X11Forwarding no"""
with open(args.work + "/chroot_native/tmp/cfg", "w") as handle:
for line in config.split("\n"):
handle.write(line.lstrip() + "\n")
pmb.chroot.root(args, ["mv", "/tmp/cfg", dir + "/sshd_config"])
pmb.chroot.user(args, ["cat", dir + "/sshd_config"])
def get_running_pid(args):
"""
:returns: the running distcc-sshd's pid as integer or None
"""
# PID file must exist
pidfile = "/home/pmos/.distcc-sshd/sshd.pid"
pidfile_outside = args.work + "/chroot_native" + pidfile
if not os.path.exists(pidfile_outside):
return None
# Verify, if it still exists by sending a kill signal
with open(pidfile_outside, "r") as handle:
pid = int(handle.read()[:-1])
try:
os.kill(pid, 0)
except OSError as err:
if err.errno == errno.ESRCH: # no such process
pmb.helpers.run.root(args, ["rm", pidfile_outside])
return None
return pid
def get_running_parameters(args):
"""
Get the parameters of the currently running distcc-sshd instance.
:returns: a dictionary in the form of
{"arch": "armhf", "port": 1234, "verbose": False}
If the information can not be read, "arch" is set to "unknown"
"""
# Return defaults
path = args.work + "/chroot_native/tmp/distcc_sshd_parameters"
if not os.path.exists(path):
return {"arch": "unknown", "port": 0, "verbose": False}
# Parse the file as JSON
with open(path, "r") as handle:
return json.loads(handle.read())
def set_running_parameters(args, arch):
"""
Set the parameters of the currently running distcc-sshd instance.
"""
parameters = {"arch": arch,
"port": args.port_distccd,
"verbose": args.verbose}
path = args.work + "/chroot_native/tmp/distcc_sshd_parameters"
with open(path, "w") as handle:
json.dump(parameters, handle)
def is_running_with_same_parameters(args, arch):
"""
Check whether we can use the already running distcc-sshd instance with our
current set of parameters. In case we can use it directly, we save some
time, otherwise we need to stop it, configure it again, and start it once
more.
"""
if not get_running_pid(args):
return False
parameters = get_running_parameters(args)
return (parameters["arch"] == arch and
parameters["port"] == args.port_distccd and
parameters["verbose"] == args.verbose)
def stop(args):
"""
Kill the sshd process (by using its pid).
"""
pid = get_running_pid(args)
if not pid:
return
parameters = get_running_parameters(args)
logging.info("(native) stop distcc-sshd (" + parameters["arch"] + ")")
pmb.chroot.user(args, ["kill", str(pid)])
def start(args, arch):
"""
Set up a new distcc-sshd instance or use an already running one.
"""
if is_running_with_same_parameters(args, arch):
return
stop(args)
# Initialize server and client
suffix = "buildroot_" + arch
init_server(args)
init_client(args, suffix)
logging.info("(native) start distcc-sshd (" + arch + ") on 127.0.0.1:" +
args.port_distccd)
# Configure server parameters (arch, port, verbose)
configure_authorized_keys(args, suffix)
configure_distccd_wrapper(args)
configure_cmdlist(args, arch)
configure_sshd(args)
# Run
dir = "/home/pmos/.distcc-sshd"
pmb.chroot.user(args, ["/usr/sbin/sshd", "-f", dir + "/sshd_config",
"-E", dir + "/log.txt"])
set_running_parameters(args, arch)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -22,7 +22,7 @@ def copy_resolv_conf(args, suffix="native"):
If the file doesn't exist, create an empty file with 'touch'.
"""
host = "/etc/resolv.conf"
chroot = f"{args.work}/chroot_{suffix}{host}"
chroot = args.work + "/chroot_" + suffix + host
if os.path.exists(host):
if not os.path.exists(chroot) or not filecmp.cmp(host, chroot):
pmb.helpers.run.root(args, ["cp", host, chroot])
@ -30,59 +30,29 @@ def copy_resolv_conf(args, suffix="native"):
pmb.helpers.run.root(args, ["touch", chroot])
def mark_in_chroot(args, suffix="native"):
"""
Touch a flag so we can know when we're running in chroot (and
don't accidentally flash partitions on our host). This marker
gets removed in pmb.chroot.shutdown (pmbootstrap shutdown).
"""
in_chroot_file = f"{args.work}/chroot_{suffix}/in-pmbootstrap"
if not os.path.exists(in_chroot_file):
pmb.helpers.run.root(args, ["touch", in_chroot_file])
def setup_qemu_emulation(args, suffix):
arch = pmb.parse.arch.from_chroot_suffix(args, suffix)
if not pmb.parse.arch.cpu_emulation_required(arch):
if not pmb.parse.arch.cpu_emulation_required(args, arch):
return
chroot = f"{args.work}/chroot_{suffix}"
chroot = args.work + "/chroot_" + suffix
arch_qemu = pmb.parse.arch.alpine_to_qemu(arch)
# mount --bind the qemu-user binary
pmb.chroot.binfmt.register(args, arch)
pmb.helpers.mount.bind_file(args, f"{args.work}/chroot_native"
f"/usr/bin/qemu-{arch_qemu}",
f"{chroot}/usr/bin/qemu-{arch_qemu}-static",
pmb.helpers.mount.bind_file(args, args.work + "/chroot_native/usr/bin/qemu-" + arch_qemu,
chroot + "/usr/bin/qemu-" + arch_qemu + "-static",
create_folders=True)
def init_keys(args):
"""
All Alpine and postmarketOS repository keys are shipped with pmbootstrap.
Copy them into $WORK/config_apk_keys, which gets mounted inside the various
chroots as /etc/apk/keys.
This is done before installing any package, so apk can verify APKINDEX
files of binary repositories even though alpine-keys/postmarketos-keys are
not installed yet.
"""
for key in glob.glob(f"{pmb.config.apk_keys_path}/*.pub"):
target = f"{args.work}/config_apk_keys/{os.path.basename(key)}"
if not os.path.exists(target):
# Copy as root, so the resulting files in chroots are owned by root
pmb.helpers.run.root(args, ["cp", key, target])
def init(args, suffix="native"):
# When already initialized: just prepare the chroot
chroot = f"{args.work}/chroot_{suffix}"
chroot = args.work + "/chroot_" + suffix
arch = pmb.parse.arch.from_chroot_suffix(args, suffix)
pmb.chroot.mount(args, suffix)
setup_qemu_emulation(args, suffix)
mark_in_chroot(args, suffix)
if os.path.islink(f"{chroot}/bin/sh"):
if os.path.islink(chroot + "/bin/sh"):
pmb.config.workdir.chroot_check_channel(args, suffix)
copy_resolv_conf(args, suffix)
pmb.chroot.apk.update_repository_list(args, suffix)
@ -91,15 +61,17 @@ def init(args, suffix="native"):
# Require apk-tools-static
pmb.chroot.apk_static.init(args)
logging.info(f"({suffix}) install alpine-base")
logging.info("(" + suffix + ") install alpine-base")
# Initialize cache
apk_cache = f"{args.work}/cache_apk_{arch}"
pmb.helpers.run.root(args, ["ln", "-s", "-f", "/var/cache/apk",
f"{chroot}/etc/apk/cache"])
apk_cache = args.work + "/cache_apk_" + arch
pmb.helpers.run.root(args, ["ln", "-s", "-f", "/var/cache/apk", chroot +
"/etc/apk/cache"])
# Initialize /etc/apk/keys/, resolv.conf, repositories
init_keys(args)
for key in glob.glob(pmb.config.apk_keys_path + "/*.pub"):
pmb.helpers.run.root(args, ["cp", key, args.work +
"/config_apk_keys/"])
copy_resolv_conf(args, suffix)
pmb.chroot.apk.update_repository_list(args, suffix)
@ -107,23 +79,22 @@ def init(args, suffix="native"):
# Install alpine-base
pmb.helpers.repo.update(args, arch)
pmb.chroot.apk_static.run(args, ["--root", chroot,
"--cache-dir", apk_cache,
"--initdb", "--arch", arch,
pmb.chroot.apk_static.run(args, ["--no-progress", "--root", chroot,
"--cache-dir", apk_cache, "--initdb", "--arch", arch,
"add", "alpine-base"])
# Building chroots: create "pmos" user, add symlinks to /home/pmos
if not suffix.startswith("rootfs_"):
pmb.chroot.root(args, ["adduser", "-D", "pmos", "-u",
pmb.config.chroot_uid_user],
suffix, auto_init=False)
pmb.config.chroot_uid_user], suffix, auto_init=False)
# Create the links (with subfolders if necessary)
for target, link_name in pmb.config.chroot_home_symlinks.items():
link_dir = os.path.dirname(link_name)
if not os.path.exists(f"{chroot}{link_dir}"):
if not os.path.exists(chroot + link_dir):
pmb.chroot.user(args, ["mkdir", "-p", link_dir], suffix)
if not os.path.exists(f"{chroot}{target}"):
if not os.path.exists(chroot + target):
pmb.chroot.root(args, ["mkdir", "-p", target], suffix)
pmb.chroot.user(args, ["ln", "-s", target, link_name], suffix)
pmb.chroot.root(args, ["chown", "pmos:pmos", target], suffix)
pmb.chroot.root(args, ["chown", "pmos:pmos", target],
suffix)

View File

@ -1,11 +1,10 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
import pmb.chroot.initfs_hooks
import pmb.chroot.other
import pmb.chroot.apk
import pmb.config.pmaports
import pmb.helpers.cli
@ -13,20 +12,15 @@ def build(args, flavor, suffix):
# Update mkinitfs and hooks
pmb.chroot.apk.install(args, ["postmarketos-mkinitfs"], suffix)
pmb.chroot.initfs_hooks.update(args, suffix)
pmaports_cfg = pmb.config.pmaports.read_config(args)
# Call mkinitfs
logging.info(f"({suffix}) mkinitfs {flavor}")
if pmaports_cfg.get("supported_mkinitfs_without_flavors", False):
pmb.chroot.root(args, ["mkinitfs"], suffix)
else:
release_file = (f"{args.work}/chroot_{suffix}/usr/share/kernel/"
f"{flavor}/kernel.release")
with open(release_file, "r") as handle:
release = handle.read().rstrip()
pmb.chroot.root(args, ["mkinitfs", "-o",
f"/boot/initramfs-{flavor}", release],
suffix)
logging.info("(" + suffix + ") mkinitfs " + flavor)
release_file = (args.work + "/chroot_" + suffix + "/usr/share/kernel/" +
flavor + "/kernel.release")
with open(release_file, "r") as handle:
release = handle.read().rstrip()
pmb.chroot.root(args, ["mkinitfs", "-o", "/boot/initramfs-" + flavor, release],
suffix)
def extract(args, flavor, suffix, extra=False):
@ -36,38 +30,30 @@ def extract(args, flavor, suffix, extra=False):
"""
# Extraction folder
inside = "/tmp/initfs-extracted"
pmaports_cfg = pmb.config.pmaports.read_config(args)
if pmaports_cfg.get("supported_mkinitfs_without_flavors", False):
initfs_file = "/boot/initramfs"
else:
initfs_file = f"/boot/initramfs-${flavor}"
if extra:
inside = "/tmp/initfs-extra-extracted"
initfs_file += "-extra"
outside = f"{args.work}/chroot_{suffix}{inside}"
flavor += "-extra"
outside = args.work + "/chroot_" + suffix + inside
if os.path.exists(outside):
if not pmb.helpers.cli.confirm(args, f"Extraction folder {outside}"
" already exists."
" Do you want to overwrite it?"):
if not pmb.helpers.cli.confirm(args, "Extraction folder " + outside +
" already exists. Do you want to overwrite it?"):
raise RuntimeError("Aborted!")
pmb.chroot.root(args, ["rm", "-r", inside], suffix)
# Extraction script (because passing a file to stdin is not allowed
# in pmbootstrap's chroot/shell functions for security reasons)
with open(f"{args.work}/chroot_{suffix}/tmp/_extract.sh", "w") as handle:
with open(args.work + "/chroot_" + suffix + "/tmp/_extract.sh", "w") as handle:
handle.write(
"#!/bin/sh\n"
f"cd {inside} && cpio -i < _initfs\n")
"cd " + inside + " && cpio -i < _initfs\n")
# Extract
commands = [["mkdir", "-p", inside],
["cp", initfs_file, f"{inside}/_initfs.gz"],
["gzip", "-d", f"{inside}/_initfs.gz"],
["cp", "/boot/initramfs-" + flavor, inside + "/_initfs.gz"],
["gzip", "-d", inside + "/_initfs.gz"],
["cat", "/tmp/_extract.sh"], # for the log
["sh", "/tmp/_extract.sh"],
["rm", "/tmp/_extract.sh", f"{inside}/_initfs"]
["rm", "/tmp/_extract.sh", inside + "/_initfs"]
]
for command in commands:
pmb.chroot.root(args, command, suffix)
@ -87,8 +73,11 @@ def ls(args, flavor, suffix, extra=False):
def frontend(args):
# Find the appropriate kernel flavor
suffix = f"rootfs_{args.device}"
flavor = pmb.chroot.other.kernel_flavor_installed(args, suffix)
suffix = "rootfs_" + args.device
flavors = pmb.chroot.other.kernel_flavors_installed(args, suffix)
flavor = flavors[0]
if hasattr(args, "flavor") and args.flavor:
flavor = args.flavor
# Handle initfs actions
action = args.action_initfs
@ -96,9 +85,9 @@ def frontend(args):
build(args, flavor, suffix)
elif action == "extract":
dir = extract(args, flavor, suffix)
logging.info(f"Successfully extracted initramfs to: {dir}")
logging.info("Successfully extracted initramfs to: " + dir)
dir_extra = extract(args, flavor, suffix, True)
logging.info(f"Successfully extracted initramfs-extra to: {dir_extra}")
logging.info("Successfully extracted initramfs-extra to: " + dir_extra)
elif action == "ls":
logging.info("*** initramfs ***")
ls(args, flavor, suffix)
@ -114,9 +103,10 @@ def frontend(args):
elif action == "hook_del":
pmb.chroot.initfs_hooks.delete(args, args.hook, suffix)
# Rebuild the initfs after adding/removing a hook
build(args, flavor, suffix)
# Rebuild the initfs for all kernels after adding/removing a hook
for flavor in flavors:
build(args, flavor, suffix)
if action in ["ls", "extract"]:
link = "https://wiki.postmarketos.org/wiki/Initramfs_development"
logging.info(f"See also: <{link}>")
logging.info("See also: <" + link + ">")

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import glob
@ -23,7 +23,7 @@ def list_chroot(args, suffix, remove_prefix=True):
def list_aports(args):
ret = []
prefix = pmb.config.initfs_hook_prefix
for path in glob.glob(f"{args.aports}/*/{prefix}*"):
for path in glob.glob(args.aports + "/*/" + prefix + "*"):
ret.append(os.path.basename(path)[len(prefix):])
return ret
@ -33,28 +33,31 @@ def ls(args, suffix):
hooks_aports = list_aports(args)
for hook in hooks_aports:
line = f"* {hook} ({'' if hook in hooks_chroot else 'not '}installed)"
line = "* " + hook
if hook in hooks_chroot:
line += " (installed)"
else:
line += " (not installed)"
logging.info(line)
def add(args, hook, suffix):
if hook not in list_aports(args):
raise RuntimeError("Invalid hook name!"
" Run 'pmbootstrap initfs hook_ls'"
raise RuntimeError("Invalid hook name! Run 'pmbootstrap initfs hook_ls'"
" to get a list of all hooks.")
prefix = pmb.config.initfs_hook_prefix
pmb.chroot.apk.install(args, [f"{prefix}{hook}"], suffix)
pmb.chroot.apk.install(args, [prefix + hook], suffix)
def delete(args, hook, suffix):
if hook not in list_chroot(args, suffix):
raise RuntimeError("There is no such hook installed!")
prefix = pmb.config.initfs_hook_prefix
pmb.chroot.root(args, ["apk", "del", f"{prefix}{hook}"], suffix)
pmb.chroot.root(args, ["apk", "del", prefix + hook], suffix)
def update(args, suffix):
"""
Rebuild and update all hooks that are out of date
Rebuild and update all hooks, that are out of date
"""
pmb.chroot.apk.install(args, list_chroot(args, suffix, False), suffix)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import logging
@ -72,9 +72,6 @@ def mount_dev_tmpfs(args, suffix="native"):
"tmpfs", dev + "/shm"])
create_device_nodes(args, suffix)
# Setup /dev/fd as a symlink
pmb.helpers.run.root(args, ["ln", "-sf", "/proc/self/fd", f"{dev}/"])
def mount(args, suffix="native"):
# Mount tmpfs as the chroot's /dev
@ -106,18 +103,3 @@ def mount_native_into_foreign(args, suffix):
if not os.path.lexists(musl_link):
pmb.helpers.run.root(args, ["ln", "-s", "/native/lib/" + musl,
musl_link])
def remove_mnt_pmbootstrap(args, suffix):
""" Safely remove /mnt/pmbootstrap directories from the chroot, without
running rm -r as root and potentially removing data inside the
mountpoint in case it was still mounted (bug in pmbootstrap, or user
ran pmbootstrap 2x in parallel). This is similar to running 'rm -r -d',
but we don't assume that the host's rm has the -d flag (busybox does
not). """
mnt_dir = f"{args.work}/chroot_{suffix}/mnt/pmbootstrap"
if not os.path.exists(mnt_dir):
return
for path in glob.glob(f"{mnt_dir}/*") + [mnt_dir]:
pmb.helpers.run.root(args, ["rmdir", path])

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import glob
@ -7,33 +7,38 @@ import pmb.chroot.apk
import pmb.install
def kernel_flavor_installed(args, suffix, autoinstall=True):
def kernel_flavors_installed(args, suffix, autoinstall=True):
"""
Get installed kernel flavor. Optionally install the device's kernel
beforehand.
Get all installed kernel flavors and make sure that there's at least one
:param suffix: the chroot suffix, e.g. "native" or "rootfs_qemu-amd64"
:param autoinstall: install the device's kernel if it is not installed
:returns: * string with the installed kernel flavor,
e.g. ["postmarketos-qcom-sdm845"]
* None if no kernel is installed
:param autoinstall: make sure that at least one kernel flavor is installed
:returns: list of installed kernel flavors, e.g. ["postmarketos-mainline"]
"""
# Automatically install the selected kernel
if autoinstall:
packages = ([f"device-{args.device}"] +
packages = (["device-" + args.device] +
pmb.install.get_kernel_package(args, args.device))
pmb.chroot.apk.install(args, packages, suffix)
pattern = f"{args.work}/chroot_{suffix}/usr/share/kernel/*"
glob_result = glob.glob(pattern)
# Find all kernels in /boot
prefix = "vmlinuz-"
prefix_len = len(prefix)
pattern = args.work + "/chroot_" + suffix + "/boot/" + prefix + "*"
ret = []
for file in glob.glob(pattern):
flavor = os.path.basename(file)[prefix_len:]
if flavor[-4:] == "-dtb" or flavor[-4:] == "-mtk":
flavor = flavor[:-4]
ret.append(flavor)
# There should be only one directory here
return os.path.basename(glob_result[0]) if glob_result else None
# Return without duplicates
return list(set(ret))
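A usage sketch matching the call sites elsewhere in this changeset (the guard against an empty list is added here for illustration):
# Pick the first installed kernel flavor for the device rootfs chroot.
flavors = kernel_flavors_installed(args, "rootfs_" + args.device)
flavor = flavors[0] if flavors else None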
def tempfolder(args, path, suffix="native"):
"""
Create a temporary folder inside the chroot that belongs to "user".
Create a temporary folder inside the chroot, that belongs to "user".
The folder gets deleted, if it already exists.
:param path: of the temporary folder inside the chroot

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import shutil
@ -18,16 +18,14 @@ def executables_absolute_path():
for binary in ["sh", "chroot"]:
path = shutil.which(binary, path=pmb.config.chroot_host_path)
if not path:
raise RuntimeError(f"Could not find the '{binary}'"
" executable. Make sure that it is in"
" your current user's PATH.")
raise RuntimeError("Could not find the '" + binary +
"' executable. Make sure, that it is in" " your current user's PATH.")
ret[binary] = path
return ret
def root(args, cmd, suffix="native", working_dir="/", output="log",
output_return=False, check=None, env={}, auto_init=True,
disable_timeout=False):
output_return=False, check=None, env={}, auto_init=True, disable_timeout=False):
"""
Run a command inside a chroot as root.
@ -39,27 +37,25 @@ def root(args, cmd, suffix="native", working_dir="/", output="log",
arguments and the return value.
"""
# Initialize chroot
chroot = f"{args.work}/chroot_{suffix}"
if not auto_init and not os.path.islink(f"{chroot}/bin/sh"):
raise RuntimeError(f"Chroot does not exist: {chroot}")
chroot = args.work + "/chroot_" + suffix
if not auto_init and not os.path.islink(chroot + "/bin/sh"):
raise RuntimeError("Chroot does not exist: " + chroot)
if auto_init:
pmb.chroot.init(args, suffix)
# Readable log message (without all the escaping)
msg = f"({suffix}) % "
msg = "(" + suffix + ") % "
for key, value in env.items():
msg += f"{key}={value} "
msg += key + "=" + value + " "
if working_dir != "/":
msg += f"cd {working_dir}; "
msg += "cd " + working_dir + "; "
msg += " ".join(cmd)
# Merge env with defaults into env_all
env_all = {"CHARSET": "UTF-8",
"HISTFILE": "~/.ash_history",
"HOME": "/root",
"LANG": "UTF-8",
"PATH": pmb.config.chroot_path,
"PYTHONUNBUFFERED": "1",
"SHELL": "/bin/ash",
"TERM": "xterm"}
for key, value in env.items():
@ -71,11 +67,9 @@ def root(args, cmd, suffix="native", working_dir="/", output="log",
# cmd_sudo: ["sudo", "env", "-i", "sh", "-c", "PATH=... /sbin/chroot ..."]
executables = executables_absolute_path()
cmd_chroot = [executables["chroot"], chroot, "/bin/sh", "-c",
pmb.helpers.run_core.flat_cmd(cmd, working_dir)]
cmd_sudo = pmb.config.sudo([
"env", "-i", executables["sh"], "-c",
pmb.helpers.run_core.flat_cmd(cmd_chroot, env=env_all)]
)
pmb.helpers.run.flat_cmd(cmd, working_dir)]
cmd_sudo = ["sudo", "env", "-i", executables["sh"], "-c",
pmb.helpers.run.flat_cmd(cmd_chroot, env=env_all)]
kill_as_root = output in ["log", "stdout"]
return pmb.helpers.run_core.core(args, msg, cmd_sudo, None, output,
output_return, check, True,
disable_timeout)
output_return, check, kill_as_root, disable_timeout)
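A minimal usage sketch for this helper, assuming an initialized args object; output_return=True is expected to make the captured output the return value:
# Run a command as root inside the native chroot and capture its output.
release = pmb.chroot.root(args, ["cat", "/etc/alpine-release"],
                          suffix="native", output_return=True)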

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import glob
@ -7,6 +7,7 @@ import socket
from contextlib import closing
import pmb.chroot
import pmb.chroot.distccd
import pmb.helpers.mount
import pmb.install.losetup
import pmb.parse.arch
@ -22,17 +23,6 @@ def kill_adb(args):
pmb.chroot.root(args, ["adb", "-P", str(port), "kill-server"])
def kill_sccache(args):
"""
Kill sccache daemon if it's running. Unlike ccache it automatically spawns
a daemon when you call it and exits after some time of inactivity.
"""
port = 4226
with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as sock:
if sock.connect_ex(("127.0.0.1", port)) == 0:
pmb.chroot.root(args, ["sccache", "--stop-server"])
def shutdown_cryptsetup_device(args, name):
"""
:param name: cryptsetup device name, usually "pm_crypt" in pmbootstrap
@ -59,9 +49,10 @@ def shutdown_cryptsetup_device(args, name):
def shutdown(args, only_install_related=False):
# Stop daemons
pmb.chroot.distccd.stop(args)
# Stop adb server
kill_adb(args)
kill_sccache(args)
# Umount installation-related paths (order is important!)
pmb.helpers.mount.umount_all(args, args.work +
@ -82,14 +73,6 @@ def shutdown(args, only_install_related=False):
if os.path.exists(path):
pmb.helpers.mount.umount_all(args, path)
# Remove "in-pmbootstrap" marker from all chroots. This marker indicates
# that pmbootstrap has set up all mount points etc. to run programs inside
# the chroots, but we want it gone afterwards (e.g. when the chroot
# contents get copied to a rootfs / installer image, or if creating an
# android recovery zip from its contents).
for marker in glob.glob(f"{args.work}/chroot_*/in-pmbootstrap"):
pmb.helpers.run.root(args, ["rm", marker])
if not only_install_related:
# Umount all folders inside args.work
# The folders are explicitly iterated over, so folders symlinked inside
@ -99,6 +82,6 @@ def shutdown(args, only_install_related=False):
# Clean up the rest
for arch in pmb.config.build_device_architectures:
if pmb.parse.arch.cpu_emulation_required(arch):
if pmb.parse.arch.cpu_emulation_required(args, arch):
pmb.chroot.binfmt.unregister(args, arch)
logging.debug("Shutdown complete")

View File

@ -1,8 +1,7 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.chroot.root
import pmb.helpers.run
import pmb.helpers.run_core
def user(args, cmd, suffix="native", working_dir="/", output="log",
@ -22,7 +21,7 @@ def user(args, cmd, suffix="native", working_dir="/", output="log",
if "HOME" not in env:
env["HOME"] = "/home/pmos"
flat_cmd = pmb.helpers.run_core.flat_cmd(cmd, env=env)
flat_cmd = pmb.helpers.run.flat_cmd(cmd, env=env)
cmd = ["busybox", "su", "pmos", "-c", flat_cmd]
return pmb.chroot.root(args, cmd, suffix, working_dir, output,
output_return, check, {}, auto_init)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import logging
@ -15,21 +15,20 @@ import pmb.parse.apkindex
def zap(args, confirm=True, dry=False, pkgs_local=False, http=False,
pkgs_local_mismatch=False, pkgs_online_mismatch=False, distfiles=False,
rust=False, netboot=False):
rust=False):
"""
Shutdown everything inside the chroots (e.g. adb), umount
Shutdown everything inside the chroots (e.g. distccd, adb), umount
everything and then safely remove folders from the work-directory.
:param dry: Only show what would be deleted, do not delete for real
:param pkgs_local: Remove *all* self-compiled packages (!)
:param http: Clear the http cache (used e.g. for the initial apk download)
:param pkgs_local_mismatch: Remove the packages that have
a different version compared to what is in the aports folder.
:param pkgs_online_mismatch: Clean out outdated binary packages
downloaded from mirrors (e.g. from Alpine)
:param pkgs_local_mismatch: Remove the packages, that have a different version
compared to what is in the aports folder.
:param pkgs_online_mismatch: Clean out outdated binary packages downloaded from
mirrors (e.g. from Alpine)
:param distfiles: Clear the downloaded files cache
:param rust: Remove rust related caches
:param netboot: Remove images for netboot
NOTE: This function gets called in pmb/config/init.py, with only args.work
and args.device set!
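A minimal usage sketch, assuming an initialized args object and using only parameters from the signature above:
# Dry run: only show what would be deleted (mismatching local packages
# plus the distfiles cache), without removing anything.
zap(args, dry=True, pkgs_local_mismatch=True, distfiles=True)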
@ -40,8 +39,7 @@ def zap(args, confirm=True, dry=False, pkgs_local=False, http=False,
logging.debug("Calculate work folder size")
size_old = pmb.helpers.other.folder_size(args, args.work)
# Delete packages with a different version compared to aports,
# then re-index
# Delete packages with a different version compared to aports, then re-index
if pkgs_local_mismatch:
zap_pkgs_local_mismatch(args, confirm, dry)
@ -66,17 +64,14 @@ def zap(args, confirm=True, dry=False, pkgs_local=False, http=False,
patterns += ["cache_distfiles"]
if rust:
patterns += ["cache_rust"]
if netboot:
patterns += ["images_netboot"]
# Delete everything matching the patterns
for pattern in patterns:
pattern = os.path.realpath(f"{args.work}/{pattern}")
pattern = os.path.realpath(args.work + "/" + pattern)
matches = glob.glob(pattern)
for match in matches:
if (not confirm or
pmb.helpers.cli.confirm(args, f"Remove {match}?")):
logging.info(f"% rm -rf {match}")
if not confirm or pmb.helpers.cli.confirm(args, "Remove " + match + "?"):
logging.info("% rm -rf " + match)
if not dry:
pmb.helpers.run.root(args, ["rm", "-rf", match])
@ -84,15 +79,15 @@ def zap(args, confirm=True, dry=False, pkgs_local=False, http=False,
pmb.config.workdir.clean(args)
# Chroots were zapped, so no repo lists exist anymore
pmb.helpers.other.cache["apk_repository_list_updated"].clear()
args.cache["apk_repository_list_updated"].clear()
# Print amount of cleaned up space
if dry:
logging.info("Dry run: nothing has been deleted")
else:
size_new = pmb.helpers.other.folder_size(args, args.work)
mb = (size_old - size_new) / 1024
logging.info(f"Cleared up ~{math.ceil(mb)} MB of space")
mb = (size_old - size_new) / 1024 / 1024
logging.info("Cleared up ~" + str(math.ceil(mb)) + " MB of space")
def zap_pkgs_local_mismatch(args, confirm=True, dry=False):
@ -109,7 +104,7 @@ def zap_pkgs_local_mismatch(args, confirm=True, dry=False):
pattern = f"{args.work}/packages/{channel}/*/APKINDEX.tar.gz"
for apkindex_path in glob.glob(pattern):
# Delete packages without same version in aports
blocks = pmb.parse.apkindex.parse_blocks(apkindex_path)
blocks = pmb.parse.apkindex.parse_blocks(args, apkindex_path)
for block in blocks:
pkgname = block["pkgname"]
origin = block["origin"]
@ -117,29 +112,29 @@ def zap_pkgs_local_mismatch(args, confirm=True, dry=False):
arch = block["arch"]
# Apk path
apk_path_short = f"{arch}/{pkgname}-{version}.apk"
apk_path_short = arch + "/" + pkgname + "-" + version + ".apk"
apk_path = f"{args.work}/packages/{channel}/{apk_path_short}"
if not os.path.exists(apk_path):
logging.info("WARNING: Package mentioned in index not"
f" found: {apk_path_short}")
" found: " + apk_path_short)
continue
# Aport path
aport_path = pmb.helpers.pmaports.find(args, origin, False)
if not aport_path:
logging.info(f"% rm {apk_path_short}"
f" ({origin} aport not found)")
logging.info("% rm " + apk_path_short + " (" + origin +
" aport not found)")
if not dry:
pmb.helpers.run.root(args, ["rm", apk_path])
reindex = True
continue
# Clear out any binary apks that do not match what is in aports
apkbuild = pmb.parse.apkbuild(f"{aport_path}/APKBUILD")
version_aport = f"{apkbuild['pkgver']}-r{apkbuild['pkgrel']}"
apkbuild = pmb.parse.apkbuild(args, aport_path + "/APKBUILD")
version_aport = apkbuild["pkgver"] + "-r" + apkbuild["pkgrel"]
if version != version_aport:
logging.info(f"% rm {apk_path_short}"
f" ({origin} aport: {version_aport})")
logging.info("% rm " + apk_path_short + " (" + origin +
" aport: " + version_aport + ")")
if not dry:
pmb.helpers.run.root(args, ["rm", apk_path])
reindex = True
@ -150,22 +145,18 @@ def zap_pkgs_local_mismatch(args, confirm=True, dry=False):
def zap_pkgs_online_mismatch(args, confirm=True, dry=False):
# Check whether we need to do anything
paths = glob.glob(f"{args.work}/cache_apk_*")
paths = glob.glob(args.work + "/cache_apk_*")
if not len(paths):
return
if (confirm and not pmb.helpers.cli.confirm(args,
"Remove outdated"
" binary packages?")):
if confirm and not pmb.helpers.cli.confirm(args, "Remove outdated binary packages?"):
return
# Iterate over existing apk caches
for path in paths:
arch = os.path.basename(path).split("_", 2)[2]
suffix = f"buildroot_{arch}"
if arch == pmb.config.arch_native:
suffix = "native"
suffix = "native" if arch == args.arch_native else "buildroot_" + arch
# Clean the cache with apk
logging.info(f"({suffix}) apk -v cache clean")
logging.info("(" + suffix + ") apk -v cache clean")
if not dry:
pmb.chroot.root(args, ["apk", "-v", "cache", "clean"], suffix)

View File

@ -1,169 +0,0 @@
# Copyright 2023 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import collections
import glob
import logging
import os
import shlex
import pmb.chroot
import pmb.helpers.cli
def get_ci_scripts(topdir):
""" Find 'pmbootstrap ci'-compatible scripts inside a git repository, and
parse their metadata (description, options). The reference is at:
https://postmarketos.org/pmb-ci
:param topdir: top directory of the git repository, get it with:
pmb.helpers.git.get_topdir()
:returns: a dict of CI scripts found in the git repository, e.g.
{"ruff": {"description": "lint all python scripts",
"options": []},
...} """
ret = {}
for script in glob.glob(f"{topdir}/.ci/*.sh"):
is_pmb_ci_script = False
description = ""
options = []
with open(script) as handle:
for line in handle:
if line.startswith("# https://postmarketos.org/pmb-ci"):
is_pmb_ci_script = True
elif line.startswith("# Description: "):
description = line.split(": ", 1)[1].rstrip()
elif line.startswith("# Options: "):
options = line.split(": ", 1)[1].rstrip().split(" ")
elif not line.startswith("#"):
# Stop parsing after the block of comments on top
break
if not is_pmb_ci_script:
continue
if not description:
logging.error(f"ERROR: {script}: missing '# Description: …' line")
exit(1)
for option in options:
if option not in pmb.config.ci_valid_options:
raise RuntimeError(f"{script}: unsupported option '{option}'."
" Typo in script or pmbootstrap too old?")
short_name = os.path.basename(script).split(".", -1)[0]
ret[short_name] = {"description": description,
"options": options}
return ret
def sort_scripts_by_speed(scripts):
""" Order the scripts, so fast scripts run before slow scripts. Whether a
script is fast or not is determined by the '# Options: slow' comment in
the file.
:param scripts: return of get_ci_scripts()
:returns: same format as get_ci_scripts(), but as ordered dict with
fast scripts before slow scripts """
ret = collections.OrderedDict()
# Fast scripts first
for script_name, script in scripts.items():
if "slow" in script["options"]:
continue
ret[script_name] = script
# Then slow scripts
for script_name, script in scripts.items():
if "slow" not in script["options"]:
continue
ret[script_name] = script
return ret
def ask_which_scripts_to_run(scripts_available):
""" Display an interactive prompt about which of the scripts the user
wishes to run, or all of them.
:param scripts_available: same format as get_ci_scripts()
:returns: either full scripts_available (all selected), or a subset """
count = len(scripts_available.items())
choices = ["all"]
logging.info(f"Available CI scripts ({count}):")
for script_name, script in scripts_available.items():
extra = ""
if "slow" in script["options"]:
extra += " (slow)"
logging.info(f"* {script_name}: {script['description']}{extra}")
choices += [script_name]
selection = pmb.helpers.cli.ask("Which script?", None, "all",
complete=choices)
if selection == "all":
return scripts_available
ret = {}
ret[selection] = scripts_available[selection]
return ret
def copy_git_repo_to_chroot(args, topdir):
""" Create a tarball of the git repo (including unstaged changes and new
files) and extract it in chroot_native.
:param topdir: top directory of the git repository, get it with:
pmb.helpers.git.get_topdir() """
pmb.chroot.init(args)
tarball_path = f"{args.work}/chroot_native/tmp/git.tar.gz"
files = pmb.helpers.git.get_files(args, topdir)
with open(f"{tarball_path}.files", "w") as handle:
for file in files:
handle.write(file)
handle.write("\n")
pmb.helpers.run.user(args, ["tar", "-cf", tarball_path, "-T",
f"{tarball_path}.files"], topdir)
ci_dir = "/home/pmos/ci"
pmb.chroot.user(args, ["rm", "-rf", ci_dir])
pmb.chroot.user(args, ["mkdir", ci_dir])
pmb.chroot.user(args, ["tar", "-xf", "/tmp/git.tar.gz"],
working_dir=ci_dir)
def run_scripts(args, topdir, scripts):
""" Run one of the given scripts after another, either natively or in a
chroot. Display a progress message and stop on error (without printing
a python stack trace).
:param topdir: top directory of the git repository, get it with:
pmb.helpers.git.get_topdir()
:param scripts: return of get_ci_scripts() """
steps = len(scripts)
step = 0
repo_copied = False
for script_name, script in scripts.items():
step += 1
where = "pmbootstrap chroot"
if "native" in script["options"]:
where = "native"
script_path = f".ci/{script_name}.sh"
logging.info(f"*** ({step}/{steps}) RUNNING CI SCRIPT: {script_path}"
f" [{where}] ***")
if "native" in script["options"]:
rc = pmb.helpers.run.user(args, [script_path], topdir,
output="tui")
continue
else:
# Run inside pmbootstrap chroot
if not repo_copied:
copy_git_repo_to_chroot(args, topdir)
repo_copied = True
env = {"TESTUSER": "pmos"}
rc = pmb.chroot.root(args, [script_path], check=False, env=env,
working_dir="/home/pmos/ci",
output="tui")
if rc:
logging.error(f"ERROR: CI script failed: {script_name}")
exit(1)

File diff suppressed because it is too large

View File

@ -1,8 +1,7 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import glob
import json
import os
import shutil
@ -11,11 +10,8 @@ import pmb.config
import pmb.config.pmaports
import pmb.helpers.cli
import pmb.helpers.devices
import pmb.helpers.git
import pmb.helpers.http
import pmb.helpers.logging
import pmb.helpers.other
import pmb.helpers.pmaports
import pmb.helpers.run
import pmb.helpers.ui
import pmb.chroot.zap
@ -30,25 +26,8 @@ def require_programs():
missing.append(program)
if missing:
raise RuntimeError("Can't find all programs required to run"
" pmbootstrap. Please install first:"
f" {', '.join(missing)}")
def ask_for_username(args):
"""
Ask for a reasonable username for the non-root user.
:returns: the username
"""
while True:
ret = pmb.helpers.cli.ask("Username", None, args.user, False,
"[a-z_][a-z0-9_-]*")
if ret == "root":
logging.fatal("ERROR: don't put \"root\" here. This is about"
" creating an additional non-root user. Don't worry,"
" the root user will also be created ;)")
continue
return ret
" pmbootstrap. Please install first: " +
", ".join(missing))
def ask_for_work_path(args):
@ -66,13 +45,13 @@ def ask_for_work_path(args):
while True:
try:
work = os.path.expanduser(pmb.helpers.cli.ask(
"Work path", None, args.work, False))
args, "Work path", None, args.work, False))
work = os.path.realpath(work)
exists = os.path.exists(work)
# Work must not be inside the pmbootstrap path
if (work == pmb.config.pmb_src or
work.startswith(f"{pmb.config.pmb_src}/")):
work.startswith(pmb.config.pmb_src + "/")):
logging.fatal("ERROR: The work path must not be inside the"
" pmbootstrap path. Please specify another"
" location.")
@ -82,17 +61,15 @@ def ask_for_work_path(args):
if not exists:
os.makedirs(work, 0o700, True)
# If the version file doesn't exists yet because we either just
# created the work directory or the user has deleted it for
# whatever reason then we need to write initialize it.
work_version_file = f"{work}/version"
if not os.path.isfile(work_version_file):
with open(work_version_file, "w") as handle:
handle.write(f"{pmb.config.work_version}\n")
if not os.listdir(work):
# Directory is empty, either because we just created it or
# because user created it before running pmbootstrap init
with open(work + "/version", "w") as handle:
handle.write(str(pmb.config.work_version) + "\n")
# Create cache_git dir, so it is owned by the host system's user
# (otherwise pmb.helpers.mount.bind would create it as root)
os.makedirs(f"{work}/cache_git", 0o700, True)
os.makedirs(work + "/cache_git", 0o700, True)
return (work, exists)
except OSError:
logging.fatal("ERROR: Could not create this folder, or write"
@ -103,17 +80,14 @@ def ask_for_channel(args):
""" Ask for the postmarketOS release channel. The channel dictates, which
pmaports branch pmbootstrap will check out, and which repository URLs
will be used when initializing chroots.
:returns: channel name (e.g. "edge", "v21.03") """
:returns: channel name (e.g. "edge", "stable") """
channels_cfg = pmb.helpers.git.parse_channels_cfg(args)
count = len(channels_cfg["channels"])
# List channels
logging.info("Choose the postmarketOS release channel.")
logging.info(f"Available ({count}):")
# Only show the first 3 releases. This includes edge, the latest supported
# release plus one. Should be a good solution until new needs arrive when
# we might want to have a custom channels.cfg attribute.
for channel, channel_data in list(channels_cfg["channels"].items())[:3]:
for channel, channel_data in channels_cfg["channels"].items():
logging.info(f"* {channel}: {channel_data['description']}")
# Default for first run: "recommended" from channels.cfg
@ -127,7 +101,7 @@ def ask_for_channel(args):
# Ask until user gives valid channel
while True:
ret = pmb.helpers.cli.ask("Channel", None, default,
ret = pmb.helpers.cli.ask(args, "Channel", None, default,
complete=choices)
if ret in choices:
return ret
@ -135,36 +109,17 @@ def ask_for_channel(args):
" from the list above.")
def ask_for_ui(args, info):
def ask_for_ui(args, device):
info = pmb.parse.deviceinfo(args, device)
ui_list = pmb.helpers.ui.list(args, info["arch"])
hidden_ui_count = 0
device_is_accelerated = info.get("gpu_accelerated") == "true"
if not device_is_accelerated:
for i in reversed(range(len(ui_list))):
pkgname = f"postmarketos-ui-{ui_list[i][0]}"
apkbuild = pmb.helpers.pmaports.get(args, pkgname,
subpackages=False,
must_exist=False)
if apkbuild and "pmb:gpu-accel" in apkbuild["options"]:
ui_list.pop(i)
hidden_ui_count += 1
# Get default
default = args.ui
if default not in dict(ui_list).keys():
default = pmb.config.defaults["ui"]
logging.info(f"Available user interfaces ({len(ui_list) - 1}): ")
logging.info("Available user interfaces (" +
str(len(ui_list) - 1) + "): ")
ui_completion_list = []
for ui in ui_list:
logging.info(f"* {ui[0]}: {ui[1]}")
logging.info("* " + ui[0] + ": " + ui[1])
ui_completion_list.append(ui[0])
if hidden_ui_count > 0:
logging.info(f"NOTE: {hidden_ui_count} UIs are hidden because"
" \"deviceinfo_gpu_accelerated\" is not set (see"
" https://postmarketos.org/deviceinfo).")
while True:
ret = pmb.helpers.cli.ask("User interface", None, default, True,
ret = pmb.helpers.cli.ask(args, "User interface", None, args.ui, True,
complete=ui_completion_list)
if ret in dict(ui_list).keys():
return ret
@ -173,33 +128,33 @@ def ask_for_ui(args, info):
def ask_for_ui_extras(args, ui):
apkbuild = pmb.helpers.pmaports.get(args, f"postmarketos-ui-{ui}",
apkbuild = pmb.helpers.pmaports.get(args, "postmarketos-ui-" + ui,
subpackages=False, must_exist=False)
if not apkbuild:
return False
extra = apkbuild["subpackages"].get(f"postmarketos-ui-{ui}-extras")
extra = apkbuild["subpackages"].get("postmarketos-ui-" + ui + "-extras")
if extra is None:
return False
logging.info("This user interface has an extra package:"
f" {extra['pkgdesc']}")
logging.info("This user interface has an extra package: " + extra["pkgdesc"])
return pmb.helpers.cli.confirm(args, "Enable this package?",
default=args.ui_extras)
def ask_for_keymaps(args, info):
def ask_for_keymaps(args, device):
info = pmb.parse.deviceinfo(args, device)
if "keymaps" not in info or info["keymaps"].strip() == "":
return ""
options = info["keymaps"].split(' ')
logging.info(f"Available keymaps for device ({len(options)}): "
f"{', '.join(options)}")
logging.info("Available keymaps for device (" + str(len(options)) +
"): " + ", ".join(options))
if args.keymap == "":
args.keymap = options[0]
while True:
ret = pmb.helpers.cli.ask("Keymap", None, args.keymap,
ret = pmb.helpers.cli.ask(args, "Keymap", None, args.keymap,
True, complete=options)
if ret in options:
return ret
@ -223,9 +178,8 @@ def ask_for_timezone(args):
except:
pass
if tz:
logging.info(f"Your host timezone: {tz}")
if pmb.helpers.cli.confirm(args,
"Use this timezone instead of GMT?",
logging.info("Your host timezone: " + tz)
if pmb.helpers.cli.confirm(args, "Use this timezone instead of GMT?",
default="y"):
return tz
logging.info("WARNING: Unable to determine timezone configuration on host,"
@ -233,80 +187,6 @@ def ask_for_timezone(args):
return "GMT"
def ask_for_provider_select(args, apkbuild, providers_cfg):
"""
Ask for selectable providers that are specified using "_pmb_select"
in a APKBUILD.
:param apkbuild: the APKBUILD with the _pmb_select
:param providers_cfg: the configuration section with previously selected
providers. Updated with new providers after selection
"""
for select in apkbuild["_pmb_select"]:
providers = pmb.helpers.pmaports.find_providers(args, select)
logging.info(f"Available providers for {select} ({len(providers)}):")
has_default = False
providers_short = {}
last_selected = providers_cfg.get(select, 'default')
for pkgname, pkg in providers:
# Strip provider prefix if possible
short = pkgname
if short.startswith(f'{select}-'):
short = short[len(f"{select}-"):]
# Allow selecting the package using both short and long name
providers_short[pkgname] = pkgname
providers_short[short] = pkgname
if pkgname == last_selected:
last_selected = short
if not has_default and pkg.get('provider_priority', 0) != 0:
# Display as default provider
styles = pmb.config.styles
logging.info(f"* {short}: {pkg['pkgdesc']} "
f"{styles['BOLD']}(default){styles['END']}")
has_default = True
else:
logging.info(f"* {short}: {pkg['pkgdesc']}")
while True:
ret = pmb.helpers.cli.ask("Provider", None, last_selected, True,
complete=providers_short.keys())
if has_default and ret == 'default':
# Selecting default means to not select any provider explicitly
# In other words, apk chooses it automatically based on
# "provider_priority"
if select in providers_cfg:
del providers_cfg[select]
break
if ret in providers_short:
providers_cfg[select] = providers_short[ret]
break
logging.fatal("ERROR: Invalid provider specified, please type in"
" one from the list above.")
def ask_for_provider_select_pkg(args, pkgname, providers_cfg):
"""
Look up the APKBUILD for the specified pkgname and ask for selectable
providers that are specified using "_pmb_select".
:param pkgname: name of the package to search APKBUILD for
:param providers_cfg: the configuration section with previously selected
providers. Updated with new providers after selection
"""
apkbuild = pmb.helpers.pmaports.get(args, pkgname,
subpackages=False, must_exist=False)
if not apkbuild:
return
ask_for_provider_select(args, apkbuild, providers_cfg)
def ask_for_device_kernel(args, device):
"""
Ask for the kernel that should be used with the device.
@ -336,11 +216,11 @@ def ask_for_device_kernel(args, device):
" downstream kernels.")
# List kernels
logging.info(f"Available kernels ({len(kernels)}):")
logging.info("Available kernels (" + str(len(kernels)) + "):")
for type in sorted(kernels.keys()):
logging.info(f"* {type}: {kernels[type]}")
logging.info("* " + type + ": " + kernels[type])
while True:
ret = pmb.helpers.cli.ask("Kernel", None, default, True,
ret = pmb.helpers.cli.ask(args, "Kernel", None, default, True,
complete=kernels)
if ret in kernels.keys():
return ret
@ -365,12 +245,12 @@ def ask_for_device_nonfree(args, device):
"userland": args.nonfree_userland}
if not apkbuild_path:
return ret
apkbuild = pmb.parse.apkbuild(apkbuild_path)
apkbuild = pmb.parse.apkbuild(args, apkbuild_path)
# Only run when there is a "nonfree" subpackage
nonfree_found = False
for subpackage in apkbuild["subpackages"].keys():
if subpackage.startswith(f"device-{device}-nonfree"):
if subpackage.startswith("device-" + device + "-nonfree"):
nonfree_found = True
if not nonfree_found:
return ret
@ -384,13 +264,12 @@ def ask_for_device_nonfree(args, device):
# Ask for firmware and userland individually
for type in ["firmware", "userland"]:
subpkgname = f"device-{device}-nonfree-{type}"
subpkgname = "device-" + device + "-nonfree-" + type
subpkg = apkbuild["subpackages"].get(subpkgname, {})
if subpkg is None:
raise RuntimeError("Cannot find subpackage function for "
f"{subpkgname}")
raise RuntimeError("Cannot find subpackage function for " + subpkgname)
if subpkg:
logging.info(f"{subpkgname}: {subpkg['pkgdesc']}")
logging.info(subpkgname + ": " + subpkg["pkgdesc"])
ret[type] = pmb.helpers.cli.confirm(args, "Enable this package?",
default=ret[type])
return ret
@ -400,7 +279,8 @@ def ask_for_device(args):
vendors = sorted(pmb.helpers.devices.list_vendors(args))
logging.info("Choose your target device vendor (either an "
"existing one, or a new one for porting).")
logging.info(f"Available vendors ({len(vendors)}): {', '.join(vendors)}")
logging.info("Available vendors (" + str(len(vendors)) + "): " +
", ".join(vendors))
current_vendor = None
current_codename = None
@ -409,7 +289,7 @@ def ask_for_device(args):
current_codename = args.device.split("-", 1)[1]
while True:
vendor = pmb.helpers.cli.ask("Vendor", None, current_vendor,
vendor = pmb.helpers.cli.ask(args, "Vendor", None, current_vendor,
False, r"[a-z0-9]+", vendors)
new_vendor = vendor not in vendors
@ -421,45 +301,36 @@ def ask_for_device(args):
if not pmb.helpers.cli.confirm(args, default=True):
continue
else:
# Unmaintained devices can be selected, but are not displayed
devices = sorted(pmb.helpers.devices.list_codenames(
args, vendor, unmaintained=False))
devices = sorted(pmb.helpers.devices.list_codenames(args, vendor))
# Remove "vendor-" prefixes from device list
codenames = [x.split('-', 1)[1] for x in devices]
logging.info(f"Available codenames ({len(codenames)}): " +
logging.info("Available codenames (" + str(len(codenames)) + "): " +
", ".join(codenames))
if current_vendor != vendor:
current_codename = ''
codename = pmb.helpers.cli.ask("Device codename", None,
codename = pmb.helpers.cli.ask(args, "Device codename", None,
current_codename, False, r"[a-z0-9]+",
codenames)
device = f"{vendor}-{codename}"
device_path = pmb.helpers.devices.find_path(args, device, 'deviceinfo')
device_exists = device_path is not None
device = vendor + '-' + codename
device_exists = pmb.helpers.devices.find_path(args, device, 'deviceinfo') is not None
if not device_exists:
if device == args.device:
raise RuntimeError(
"This device does not exist anymore, check"
" <https://postmarketos.org/renamed>"
" to see if it was renamed")
logging.info("You are about to do"
f" a new device port for '{device}'.")
logging.info("You are about to do a new device port for '" +
device + "'.")
if not pmb.helpers.cli.confirm(args, default=True):
current_vendor = vendor
continue
# New port creation confirmed
logging.info("Generating new aports for: {}...".format(device))
pmb.aportgen.generate(args, f"device-{device}")
pmb.aportgen.generate(args, f"linux-{device}")
elif "/unmaintained/" in device_path:
apkbuild = f"{device_path[:-len('deviceinfo')]}APKBUILD"
unmaintained = pmb.parse._apkbuild.unmaintained(apkbuild)
logging.info(f"WARNING: {device} is unmaintained: {unmaintained}")
if not pmb.helpers.cli.confirm(args):
continue
pmb.aportgen.generate(args, "device-" + device)
pmb.aportgen.generate(args, "linux-" + device)
break
kernel = ask_for_device_kernel(args, device)
@ -470,135 +341,41 @@ def ask_for_device(args):
def ask_for_additional_options(args, cfg):
# Allow to skip additional options
logging.info("Additional options:"
f" extra free space: {args.extra_space} MB,"
f" boot partition size: {args.boot_size} MB,"
f" parallel jobs: {args.jobs},"
f" ccache per arch: {args.ccache_size},"
f" sudo timer: {args.sudo_timer},"
f" mirror: {','.join(args.mirrors_postmarketos)}")
f" ccache per arch: {args.ccache_size}")
if not pmb.helpers.cli.confirm(args, "Change them?",
default=False):
return
# Extra space
logging.info("Set extra free space to 0, unless you ran into a 'No space"
" left on device' error. In that case, the size of the"
" rootfs could not be calculated properly on your machine,"
" and we need to add extra free space to make the image big"
" enough to fit the rootfs (pmbootstrap#1904)."
" How much extra free space do you want to add to the image"
" (in MB)?")
answer = pmb.helpers.cli.ask("Extra space size", None,
args.extra_space, validation_regex="^[0-9]+$")
cfg["pmbootstrap"]["extra_space"] = answer
# Boot size
logging.info("What should be the boot partition size (in MB)?")
answer = pmb.helpers.cli.ask("Boot size", None, args.boot_size,
validation_regex="^[1-9][0-9]*$")
answer = pmb.helpers.cli.ask(args, "Boot size", None, args.boot_size,
validation_regex="[1-9][0-9]*")
cfg["pmbootstrap"]["boot_size"] = answer
# Parallel job count
logging.info("How many jobs should run parallel on this machine, when"
" compiling?")
answer = pmb.helpers.cli.ask("Jobs", None, args.jobs,
validation_regex="^[1-9][0-9]*$")
answer = pmb.helpers.cli.ask(args, "Jobs", None, args.jobs,
validation_regex="[1-9][0-9]*")
cfg["pmbootstrap"]["jobs"] = answer
# Ccache size
logging.info("We use ccache to speed up building the same code multiple"
" times. How much space should the ccache folder take up per"
" architecture? After init is through, you can check the"
" current usage with 'pmbootstrap stats'. Answer with 0 for"
" infinite.")
" architecture? After init is through, you can check the current"
" usage with 'pmbootstrap stats'. Answer with 0 for infinite.")
regex = "0|[0-9]+(k|M|G|T|Ki|Mi|Gi|Ti)"
answer = pmb.helpers.cli.ask("Ccache size", None, args.ccache_size,
lowercase_answer=False,
validation_regex=regex)
answer = pmb.helpers.cli.ask(args, "Ccache size", None, args.ccache_size,
lowercase_answer=False, validation_regex=regex)
cfg["pmbootstrap"]["ccache_size"] = answer
# Sudo timer
logging.info("pmbootstrap does everything in Alpine Linux chroots, so"
" your host system does not get modified. In order to"
" work with these chroots, pmbootstrap calls 'sudo'"
" internally. For long running operations, it is possible"
" that you'll have to authorize sudo more than once.")
answer = pmb.helpers.cli.confirm(args, "Enable background timer to prevent"
" repeated sudo authorization?",
default=args.sudo_timer)
cfg["pmbootstrap"]["sudo_timer"] = str(answer)
# Mirrors
# prompt for mirror change
logging.info("Selected mirror:"
f" {','.join(args.mirrors_postmarketos)}")
if pmb.helpers.cli.confirm(args, "Change mirror?", default=False):
mirrors = ask_for_mirror(args)
cfg["pmbootstrap"]["mirrors_postmarketos"] = ",".join(mirrors)
def ask_for_mirror(args):
regex = "^[1-9][0-9]*$" # single non-zero number only
json_path = pmb.helpers.http.download(
args, "https://postmarketos.org/mirrors.json", "pmos_mirrors",
cache=False)
with open(json_path, "rt") as handle:
s = handle.read()
logging.info("List of available mirrors:")
mirrors = json.loads(s)
keys = mirrors.keys()
i = 1
for key in keys:
logging.info(f"[{i}]\t{key} ({mirrors[key]['location']})")
i += 1
urls = []
for key in keys:
# accept only http:// or https:// urls
http_count = 0 # remember if we saw any http:// only URLs
link_list = []
for k in mirrors[key]["urls"]:
if k.startswith("http"):
link_list.append(k)
if k.startswith("http://"):
http_count += 1
# remove all https urls if there is more that one URL and one of
# them was http://
if http_count > 0 and len(link_list) > 1:
link_list = [k for k in link_list if not k.startswith("https")]
if len(link_list) > 0:
urls.append(link_list[0])
mirror_indexes = []
for mirror in args.mirrors_postmarketos:
for i in range(len(urls)):
if urls[i] == mirror:
mirror_indexes.append(str(i + 1))
break
mirrors_list = []
# require one valid mirror index selected by user
while len(mirrors_list) != 1:
answer = pmb.helpers.cli.ask("Select a mirror", None,
",".join(mirror_indexes),
validation_regex=regex)
mirrors_list = []
for i in answer.split(","):
idx = int(i) - 1
if 0 <= idx < len(urls):
mirrors_list.append(urls[idx])
if len(mirrors_list) != 1:
logging.info("You must select one valid mirror!")
return mirrors_list
def ask_for_hostname(args, device):
while True:
ret = pmb.helpers.cli.ask("Device hostname (short form, e.g. 'foo')",
ret = pmb.helpers.cli.ask(args, "Device hostname (short form, e.g. 'foo')",
None, (args.hostname or device), True)
if not pmb.helpers.other.validate_hostname(ret):
continue
@ -612,8 +389,7 @@ def ask_for_ssh_keys(args):
if not len(glob.glob(os.path.expanduser("~/.ssh/id_*.pub"))):
return False
return pmb.helpers.cli.confirm(args,
"Would you like to copy your SSH public"
" keys to the device?",
"Would you like to copy your SSH public keys to the device?",
default=args.ssh_keys)
@ -626,38 +402,6 @@ def ask_build_pkgs_on_install(args):
default=args.build_pkgs_on_install)
def get_locales():
ret = []
list_path = f"{pmb.config.pmb_src}/pmb/data/locales"
with open(list_path, "r") as handle:
for line in handle:
ret += [line.rstrip()]
return ret
def ask_for_locale(args):
locales = get_locales()
logging.info("Choose your preferred locale, like e.g. en_US. Only UTF-8"
" is supported, it gets appended automatically. Use"
" tab-completion if needed.")
while True:
ret = pmb.helpers.cli.ask("Locale",
choices=None,
default=args.locale.replace(".UTF-8", ""),
lowercase_answer=False,
complete=locales)
ret = ret.replace(".UTF-8", "")
if ret not in locales:
logging.info("WARNING: this locale is not in the list of known"
" valid locales.")
if pmb.helpers.cli.ask() != "y":
# Ask again
continue
return f"{ret}.UTF-8"
def frontend(args):
require_programs()
@ -681,15 +425,6 @@ def frontend(args):
pmb.config.pmaports.switch_to_channel_branch(args, channel)
cfg["pmbootstrap"]["is_default_channel"] = "False"
# Copy the git hooks if master was checked out. (Don't symlink them and
# only do it on master, so the git hooks don't change unexpectedly when
# having a random branch checked out.)
branch_current = pmb.helpers.git.rev_parse(args, args.aports,
extra_args=["--abbrev-ref"])
if branch_current == "master":
logging.info("NOTE: pmaports is on master branch, copying git hooks.")
pmb.config.pmaports.install_githooks(args)
# Device
device, device_exists, kernel, nonfree = ask_for_device(args)
cfg["pmbootstrap"]["device"] = device
@ -697,32 +432,25 @@ def frontend(args):
cfg["pmbootstrap"]["nonfree_firmware"] = str(nonfree["firmware"])
cfg["pmbootstrap"]["nonfree_userland"] = str(nonfree["userland"])
info = pmb.parse.deviceinfo(args, device)
apkbuild_path = pmb.helpers.devices.find_path(args, device, 'APKBUILD')
if apkbuild_path:
apkbuild = pmb.parse.apkbuild(apkbuild_path)
ask_for_provider_select(args, apkbuild, cfg["providers"])
# Device keymap
if device_exists:
cfg["pmbootstrap"]["keymap"] = ask_for_keymaps(args, info)
cfg["pmbootstrap"]["user"] = ask_for_username(args)
ask_for_provider_select_pkg(args, "postmarketos-base", cfg["providers"])
cfg["pmbootstrap"]["keymap"] = ask_for_keymaps(args, device)
# Username
cfg["pmbootstrap"]["user"] = pmb.helpers.cli.ask(args, "Username", None,
args.user, False,
"[a-z_][a-z0-9_-]*")
# UI and various build options
ui = ask_for_ui(args, info)
ui = ask_for_ui(args, device)
cfg["pmbootstrap"]["ui"] = ui
cfg["pmbootstrap"]["ui_extras"] = str(ask_for_ui_extras(args, ui))
ask_for_provider_select_pkg(args, f"postmarketos-ui-{ui}",
cfg["providers"])
ask_for_additional_options(args, cfg)
# Extra packages to be installed to rootfs
logging.info("Additional packages that will be installed to rootfs."
" Specify them in a comma separated list (e.g.: vim,file)"
" or \"none\"")
extra = pmb.helpers.cli.ask("Extra packages", None,
extra = pmb.helpers.cli.ask(args, "Extra packages", None,
args.extra_packages,
validation_regex=r"^([-.+\w]+)(,[-.+\w]+)*$")
cfg["pmbootstrap"]["extra_packages"] = extra
@ -730,9 +458,6 @@ def frontend(args):
# Configure timezone info
cfg["pmbootstrap"]["timezone"] = ask_for_timezone(args)
# Locale
cfg["pmbootstrap"]["locale"] = ask_for_locale(args)
# Hostname
cfg["pmbootstrap"]["hostname"] = ask_for_hostname(args, device)
@ -743,8 +468,7 @@ def frontend(args):
cfg["pmbootstrap"]["aports"] = args.aports
# Build outdated packages in pmbootstrap install
cfg["pmbootstrap"]["build_pkgs_on_install"] = str(
ask_build_pkgs_on_install(args))
cfg["pmbootstrap"]["build_pkgs_on_install"] = str(ask_build_pkgs_on_install(args))
# Save config
pmb.config.save(args, cfg)
@ -752,10 +476,8 @@ def frontend(args):
# Zap existing chroots
if (work_exists and device_exists and
len(glob.glob(args.work + "/chroot_*")) and
pmb.helpers.cli.confirm(
args, "Zap existing chroots to apply configuration?",
default=True)):
setattr(args, "deviceinfo", info)
pmb.helpers.cli.confirm(args, "Zap existing chroots to apply configuration?", default=True)):
setattr(args, "deviceinfo", pmb.parse.deviceinfo(args, device=device))
# Do not zap any existing packages or cache_http directories
pmb.chroot.zap(args, confirm=False)
@ -764,4 +486,4 @@ def frontend(args):
" not get updated automatically.")
logging.info("Run 'pmbootstrap status' once a day before working with"
" pmbootstrap to make sure that everything is up-to-date.")
logging.info("DONE!")
logging.info("Done!")

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import configparser
@ -13,8 +13,6 @@ def load(args):
if "pmbootstrap" not in cfg:
cfg["pmbootstrap"] = {}
if "providers" not in cfg:
cfg["providers"] = {}
for key in pmb.config.defaults:
if key in pmb.config.config_keys and key not in cfg["pmbootstrap"]:
@ -22,13 +20,12 @@ def load(args):
# We used to save default values in the config, which can *not* be
# configured in "pmbootstrap init". That doesn't make sense, we always
# want to use the defaults from pmb/config/__init__.py in that case,
# not some outdated version we saved some time back (eg. aports folder,
# want to use the defaults from pmb/config/__init__.py in that case, not
# some outdated version we saved some time back (eg. aports folder,
# postmarketOS binary packages mirror).
if key not in pmb.config.config_keys and key in cfg["pmbootstrap"]:
logging.debug("Ignored unconfigurable and possibly outdated"
" default value from config:"
f" {cfg['pmbootstrap'][key]}")
logging.debug("Ignored unconfigurable and possibly outdated default"
" value from config: " + str(cfg["pmbootstrap"][key]))
del cfg["pmbootstrap"][key]
return cfg

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.config
@ -6,8 +6,7 @@ import pmb.config
def merge_with_args(args):
"""
We have the internal config (pmb/config/__init__.py) and the user config
(usually ~/.config/pmbootstrap.cfg, can be changed with the '-c'
parameter).
(usually ~/.config/pmbootstrap.cfg, can be changed with the '-c' parameter).
Args holds the variables parsed from the commandline (e.g. -j fills out
args.jobs), and values specified on the commandline count the most.
@ -29,7 +28,6 @@ def merge_with_args(args):
if isinstance(default, bool):
value = (value.lower() == "true")
setattr(args, key, value)
setattr(args, 'selected_providers', cfg['providers'])
# Use defaults from pmb.config.defaults
for key, value in pmb.config.defaults.items():

View File

@ -1,13 +1,11 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import configparser
import logging
import os
import sys
import pmb.config
import pmb.helpers.git
import pmb.helpers.pmaports
def check_legacy_folder():
@ -25,6 +23,12 @@ def check_legacy_folder():
def clone(args):
# Explain sudo-usage before using it the first time
logging.info("pmbootstrap does everything in Alpine Linux chroots, so your"
" host system does not get modified. In order to work with"
" these chroots, pmbootstrap calls 'sudo' internally. To see"
" the commands it runs, you can run 'pmbootstrap log' in a"
" second terminal.")
logging.info("Setting up the native chroot and cloning the package build"
" recipes (pmaports)...")
@ -59,7 +63,7 @@ def check_version_pmaports(real):
def check_version_pmbootstrap(min):
# Compare versions
real = pmb.__version__
real = pmb.config.version
if pmb.parse.version.compare(real, min) >= 0:
return
@ -85,14 +89,14 @@ def read_config(args):
""" Read and verify pmaports.cfg. """
# Try cache first
cache_key = "pmb.config.pmaports.read_config"
if pmb.helpers.other.cache[cache_key]:
return pmb.helpers.other.cache[cache_key]
if args.cache[cache_key]:
return args.cache[cache_key]
# Migration message
if not os.path.exists(args.aports):
logging.error(f"ERROR: pmaports dir not found: {args.aports}")
logging.error("Did you run 'pmbootstrap init'?")
sys.exit(1)
raise RuntimeError("We have split the aports repository from the"
" pmbootstrap repository (#383). Please run"
" 'pmbootstrap init' again to clone it.")
# Require the config
path_cfg = args.aports + "/pmaports.cfg"
@ -109,11 +113,8 @@ def read_config(args):
check_version_pmaports(ret["version"])
check_version_pmbootstrap(ret["pmbootstrap_min_version"])
# Translate legacy channel names
ret["channel"] = pmb.helpers.pmaports.get_channel_new(ret["channel"])
# Cache and return
pmb.helpers.other.cache[cache_key] = ret
args.cache[cache_key] = ret
return ret
@ -158,7 +159,7 @@ def init(args):
def switch_to_channel_branch(args, channel_new):
""" Checkout the channel's branch in pmaports.git.
:channel_new: channel name (e.g. "edge", "v21.03")
:channel_new: channel name (e.g. "edge", "stable")
:returns: True if another branch was checked out, False otherwise """
# Check current pmaports branch channel
channel_current = read_config(args)["channel"]
@ -187,22 +188,8 @@ def switch_to_channel_branch(args, channel_new):
f"{args.aports}")
# Invalidate all caches
pmb.helpers.other.init_cache()
pmb.helpers.args.add_cache(args)
# Verify pmaports.cfg on new branch
read_config(args)
return True
def install_githooks(args):
hooks_dir = os.path.join(args.aports, ".githooks")
if not os.path.exists(hooks_dir):
logging.info("No .githooks dir found")
return
for h in os.listdir(hooks_dir):
src = os.path.join(hooks_dir, h)
# Use git default hooks dir so users can ignore our hooks
# if they dislike them by setting "core.hooksPath" git config
dst = os.path.join(args.aports, ".git", "hooks", h)
if pmb.helpers.run.user(args, ["cp", src, dst], check=False):
logging.warning(f"WARNING: Copying git hook failed: {dst}")

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging

View File

@ -1,36 +0,0 @@
# Copyright 2023 Anjandev Momi
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import shutil
from functools import lru_cache
from typing import Optional
@lru_cache()
def which_sudo() -> Optional[str]:
"""Returns a command required to run commands as root, if any.
Find whether sudo or doas is installed for commands that require root.
Allows user to override preferred sudo with PMB_SUDO env variable.
"""
if os.getuid() == 0:
return None
supported_sudos = ['doas', 'sudo']
user_set_sudo = os.getenv("PMB_SUDO")
if user_set_sudo is not None:
if shutil.which(user_set_sudo) is None:
raise RuntimeError("PMB_SUDO environmental variable is set to"
f" {user_set_sudo} but pmbootstrap cannot find"
" this command on your system.")
return user_set_sudo
for sudo in supported_sudos:
if shutil.which(sudo) is not None:
return sudo
raise RuntimeError("Can't find sudo or doas required to run pmbootstrap."
" Please install sudo, doas, or specify your own sudo"
" with the PMB_SUDO environmental variable.")

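A short usage sketch for the helper above; the run_as_root wrapper is hypothetical and only meant to show how the returned command gets prepended:

    import subprocess

    def run_as_root(cmd):
        sudo = which_sudo()          # "doas", "sudo", or None when already root
        full_cmd = ([sudo] + cmd) if sudo else cmd
        subprocess.run(full_cmd, check=True)

    run_as_root(["apk", "add", "hello-world"])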
View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
""" Save, read, verify workdir state related information in $WORK/workdir.cfg,
for example the init dates of the chroots. This is not saved in

View File

@ -1,9 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwXEJ8uVwJPODshTkf2BH
pH5fVVDppOa974+IQJsZDmGd3Ny0dcd+WwYUhNFUW3bAfc3/egaMWCaprfaHn+oS
4ddbOFgbX8JCHdru/QMAAU0aEWSMybfJGA569c38fNUF/puX6XK/y0lD2SS3YQ/a
oJ5jb5eNrQGR1HHMAd0G9WC4JeZ6WkVTkrcOw55F00aUPGEjejreXBerhTyFdabo
dSfc1TILWIYD742Lkm82UBOPsOSdSfOdsMOOkSXxhdCJuCQQ70DHkw7Epy9r+X33
ybI4r1cARcV75OviyhD8CFhAlapLKaYnRFqFxlA515e6h8i8ih/v3MSEW17cCK0b
QwIDAQAB
-----END PUBLIC KEY-----

View File

@ -1,9 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwR4uJVtJOnOFGchnMW5Y
j5/waBdG1u5BTMlH+iQMcV5+VgWhmpZHJCBz3ocD+0IGk2I68S5TDOHec/GSC0lv
6R9o6F7h429GmgPgVKQsc8mPTPtbjJMuLLs4xKc+viCplXc0Nc0ZoHmCH4da6fCV
tdpHQjVe6F9zjdquZ4RjV6R6JTiN9v924dGMAkbW/xXmamtz51FzondKC52Gh8Mo
/oA0/T0KsCMCi7tb4QNQUYrf+Xcha9uus4ww1kWNZyfXJB87a2kORLiWMfs2IBBJ
TmZ2Fnk0JnHDb8Oknxd9PvJPT0mvyT8DA+KIAPqNvOjUXP4bnjEHJcoCP9S5HkGC
IQIDAQAB
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAutQkua2CAig4VFSJ7v54
ALyu/J1WB3oni7qwCZD3veURw7HxpNAj9hR+S5N/pNeZgubQvJWyaPuQDm7PTs1+
tFGiYNfAsiibX6Rv0wci3M+z2XEVAeR9Vzg6v4qoofDyoTbovn2LztaNEjTkB+oK
tlvpNhg1zhou0jDVYFniEXvzjckxswHVb8cT0OMTKHALyLPrPOJzVtM9C1ew2Nnc
3848xLiApMu3NBk0JqfcS3Bo5Y2b1FRVBvdt+2gFoKZix1MnZdAEZ8xQzL/a0YS5
Hd0wj5+EEKHfOd3A75uPa/WQmA+o0cBFfrzm69QDcSJSwGpzWrD1ScH3AK8nWvoj
v7e9gukK/9yl1b4fQQ00vttwJPSgm9EnfPHLAtgXkRloI27H6/PuLoNvSAMQwuCD
hQRlyGLPBETKkHeodfLoULjhDi1K2gKJTMhtbnUcAA7nEphkMhPWkBpgFdrH+5z4
Lxy+3ek0cqcI7K68EtrffU8jtUj9LFTUC8dERaIBs7NgQ/LfDbDfGh9g6qVj1hZl
k9aaIPTm/xsi8v3u+0qaq7KzIBc9s59JOoA8TlpOaYdVgSQhHHLBaahOuAigH+VI
isbC9vmqsThF2QdDtQt37keuqoda2E6sL7PUvIyVXDRfwX7uMDjlzTxHTymvq2Ck
htBqojBnThmjJQFgZXocHG8CAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAlEyxkHggKCXC2Wf5Mzx4
nZLFZvU2bgcA3exfNPO/g1YunKfQY+Jg4fr6tJUUTZ3XZUrhmLNWvpvSwDS19ZmC
IXOu0+V94aNgnhMsk9rr59I8qcbsQGIBoHzuAl8NzZCgdbEXkiY90w1skUw8J57z
qCsMBydAueMXuWqF5nGtYbi5vHwK42PffpiZ7G5Kjwn8nYMW5IZdL6ZnMEVJUWC9
I4waeKg0yskczYDmZUEAtrn3laX9677ToCpiKrvmZYjlGl0BaGp3cxggP2xaDbUq
qfFxWNgvUAb3pXD09JM6Mt6HSIJaFc9vQbrKB9KT515y763j5CC2KUsilszKi3mB
HYe5PoebdjS7D1Oh+tRqfegU2IImzSwW3iwA7PJvefFuc/kNIijfS/gH/cAqAK6z
bhdOtE/zc7TtqW2Wn5Y03jIZdtm12CxSxwgtCF1NPyEWyIxAQUX9ACb3M0FAZ61n
fpPrvwTaIIxxZ01L3IzPLpbc44x/DhJIEU+iDt6IMTrHOphD9MCG4631eIdB0H1b
6zbNX1CXTsafqHRFV9XmYYIeOMggmd90s3xIbEujA6HKNP/gwzO6CDJ+nHFDEqoF
SkxRdTkEqjTjVKieURW7Swv7zpfu5PrsrrkyGnsRrBJJzXlm2FOOxnbI2iSL1B5F
rO5kbUxFeZUIDq+7Yv4kLWcCAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAnC+bR4bHf/L6QdU4puhQ
gl1MHePszRC38bzvVFDUJsmCaMCL2suCs2A2yxAgGb9pu9AJYLAmxQC4mM3jNqhg
/E7yuaBbek3O02zN/ctvflJ250wZCy+z0ZGIp1ak6pu1j14IwHokl9j36zNfGtfv
ADVOcdpWITFFlPqwq1qt/H3UsKVmtiF3BNWWTeUEQwKvlU8ymxgS99yn0+4OPyNT
L3EUeS+NQJtDS01unau0t7LnjUXn+XIneWny8bIYOQCuVR6s/gpIGuhBaUqwaJOw
7jkJZYF2Ij7uPb4b5/R3vX2FfxxqEHqssFSg8FFUNTZz3qNZs0CRVyfA972g9WkJ
hPfn31pQYil4QGRibCMIeU27YAEjXoqfJKEPh4UWMQsQLrEfdGfb8VgwrPbniGfU
L3jKJR3VAafL9330iawzVQDlIlwGl6u77gEXMl9K0pfazunYhAp+BMP+9ot5ckK+
osmrqj11qMESsAj083GeFdfV3pXEIwUytaB0AKEht9DbqUfiE/oeZ/LAXgySMtVC
sbC4ESmgVeY2xSBIJdDyUap7FR49GGrw0W49NUv9gRgQtGGaNVQQO9oGL2PBC41P
iWF9GLoX30HIz1P8PF/cZvicSSPkQf2Z6TV+t0ebdGNS5DjapdnCrq8m9Z0pyKsQ
uxAL2a7zX8l5i1CZh1ycUGsCAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA0MfCDrhODRCIxR9Dep1s
eXafh5CE5BrF4WbCgCsevyPIdvTeyIaW4vmO3bbG4VzhogDZju+R3IQYFuhoXP5v
Y+zYJGnwrgz3r5wYAvPnLEs1+dtDKYOgJXQj+wLJBW1mzRDL8FoRXOe5iRmn1EFS
wZ1DoUvyu7/J5r0itKicZp3QKED6YoilXed+1vnS4Sk0mzN4smuMR9eO1mMCqNp9
9KTfRDHTbakIHwasECCXCp50uXdoW6ig/xUAFanpm9LtK6jctNDbXDhQmgvAaLXZ
LvFqoaYJ/CvWkyYCgL6qxvMvVmPoRv7OPcyni4xR/WgWa0MSaEWjgPx3+yj9fiMA
1S02pFWFDOr5OUF/O4YhFJvUCOtVsUPPfA/Lj6faL0h5QI9mQhy5Zb9TTaS9jB6p
Lw7u0dJlrjFedk8KTJdFCcaGYHP6kNPnOxMylcB/5WcztXZVQD5WpCicGNBxCGMm
W64SgrV7M07gQfL/32QLsdqPUf0i8hoVD8wfQ3EpbQzv6Fk1Cn90bZqZafg8XWGY
wddhkXk7egrr23Djv37V2okjzdqoyLBYBxMz63qQzFoAVv5VoY2NDTbXYUYytOvG
GJ1afYDRVWrExCech1mX5ZVUB1br6WM+psFLJFoBFl6mDmiYt0vMYBddKISsvwLl
IJQkzDwtXzT2cSjoj3T5QekCAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAvaaoSLab+IluixwKV5Od
0gib2YurjPatGIbn5Ov2DLUFYiebj2oJINXJSwUOO+4WcuHFEqiL/1rya+k5hLZt
hnPL1tn6QD4rESznvGSasRCQNT2vS/oyZbTYJRyAtFkEYLlq0t3S3xBxxHWuvIf0
qVxVNYpQWyM3N9RIeYBR/euXKJXileSHk/uq1I5wTC0XBIHWcthczGN0m9wBEiWS
0m3cnPk4q0Ea8mUJ91Rqob19qETz6VbSPYYpZk3qOycjKosuwcuzoMpwU8KRiMFd
5LHtX0Hx85ghGsWDVtS0c0+aJa4lOMGvJCAOvDfqvODv7gKlCXUpgumGpLdTmaZ8
1RwqspAe3IqBcdKTqRD4m2mSg23nVx2FAY3cjFvZQtfooT7q1ItRV5RgH6FhQSl7
+6YIMJ1Bf8AAlLdRLpg+doOUGcEn+pkDiHFgI8ylH1LKyFKw+eXaAml/7DaWZk1d
dqggwhXOhc/UUZFQuQQ8A8zpA13PcbC05XxN2hyP93tCEtyynMLVPtrRwDnHxFKa
qKzs3rMDXPSXRn3ZZTdKH3069ApkEjQdpcwUh+EmJ1Ve/5cdtzT6kKWCjKBFZP/s
91MlRrX2BTRdHaU5QJkUheUtakwxuHrdah2F94lRmsnQlpPr2YseJu6sIE+Dnx4M
CfhdVbQL2w54R645nlnohu8CAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAq0BFD1D4lIxQcsqEpQzU
pNCYM3aP1V/fxxVdT4DWvSI53JHTwHQamKdMWtEXetWVbP5zSROniYKFXd/xrD9X
0jiGHey3lEtylXRIPxe5s+wXoCmNLcJVnvTcDtwx/ne2NLHxp76lyc25At+6RgE6
ADjLVuoD7M4IFDkAsd8UQ8zM0Dww9SylIk/wgV3ZkifecvgUQRagrNUdUjR56EBZ
raQrev4hhzOgwelT0kXCu3snbUuNY/lU53CoTzfBJ5UfEJ5pMw1ij6X0r5S9IVsy
KLWH1hiO0NzU2c8ViUYCly4Fe9xMTFc6u2dy/dxf6FwERfGzETQxqZvSfrRX+GLj
/QZAXiPg5178hT/m0Y3z5IGenIC/80Z9NCi+byF1WuJlzKjDcF/TU72zk0+PNM/H
Kuppf3JT4DyjiVzNC5YoWJT2QRMS9KLP5iKCSThwVceEEg5HfhQBRT9M6KIcFLSs
mFjx9kNEEmc1E8hl5IR3+3Ry8G5/bTIIruz14jgeY9u5jhL8Vyyvo41jgt9sLHR1
/J1TxKfkgksYev7PoX6/ZzJ1ksWKZY5NFoDXTNYUgzFUTOoEaOg3BAQKadb3Qbbq
XIrxmPBdgrn9QI7NCgfnAY3Tb4EEjs3ON/BNyEhUENcXOH6I1NbcuBQ7g9P73kE4
VORdoc8MdJ5eoKBpO8Ww8HECAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyduVzi1mWm+lYo2Tqt/0
XkCIWrDNP1QBMVPrE0/ZlU2bCGSoo2Z9FHQKz/mTyMRlhNqTfhJ5qU3U9XlyGOPJ
piM+b91g26pnpXJ2Q2kOypSgOMOPA4cQ42PkHBEqhuzssfj9t7x47ppS94bboh46
xLSDRff/NAbtwTpvhStV3URYkxFG++cKGGa5MPXBrxIp+iZf9GnuxVdST5PGiVGP
ODL/b69sPJQNbJHVquqUTOh5Ry8uuD2WZuXfKf7/C0jC/ie9m2+0CttNu9tMciGM
EyKG1/Xhk5iIWO43m4SrrT2WkFlcZ1z2JSf9Pjm4C2+HovYpihwwdM/OdP8Xmsnr
DzVB4YvQiW+IHBjStHVuyiZWc+JsgEPJzisNY0Wyc/kNyNtqVKpX6dRhMLanLmy+
f53cCSI05KPQAcGj6tdL+D60uKDkt+FsDa0BTAobZ31OsFVid0vCXtsbplNhW1IF
HwsGXBTVcfXg44RLyL8Lk/2dQxDHNHzAUslJXzPxaHBLmt++2COa2EI1iWlvtznk
Ok9WP8SOAIj+xdqoiHcC4j72BOVVgiITIJNHrbppZCq6qPR+fgXmXa+sDcGh30m6
9Wpbr28kLMSHiENCWTdsFij+NQTd5S47H7XTROHnalYDuF1RpS+DpQidT5tUimaT
JZDr++FjKrnnijbyNF8b98UCAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,14 +0,0 @@
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAnpUpyWDWjlUk3smlWeA0
lIMW+oJ38t92CRLHH3IqRhyECBRW0d0aRGtq7TY8PmxjjvBZrxTNDpJT6KUk4LRm
a6A6IuAI7QnNK8SJqM0DLzlpygd7GJf8ZL9SoHSH+gFsYF67Cpooz/YDqWrlN7Vw
tO00s0B+eXy+PCXYU7VSfuWFGK8TGEv6HfGMALLjhqMManyvfp8hz3ubN1rK3c8C
US/ilRh1qckdbtPvoDPhSbTDmfU1g/EfRSIEXBrIMLg9ka/XB9PvWRrekrppnQzP
hP9YE3x/wbFc5QqQWiRCYyQl/rgIMOXvIxhkfe8H5n1Et4VAorkpEAXdsfN8KSVv
LSMazVlLp9GYq5SUpqYX3KnxdWBgN7BJoZ4sltsTpHQ/34SXWfu3UmyUveWj7wp0
x9hwsPirVI00EEea9AbP7NM2rAyu6ukcm4m6ATd2DZJIViq2es6m60AE6SMCmrQF
wmk4H/kdQgeAELVfGOm2VyJ3z69fQuywz7xu27S6zTKi05Qlnohxol4wVb6OB7qG
LPRtK9ObgzRo/OPumyXqlzAi/Yvyd1ZQk8labZps3e16bQp8+pVPiumWioMFJDWV
GZjCmyMSU8V6MB6njbgLHoyg2LCukCAeSjbPGGGYhnKLm1AKSoJh3IpZuqcKCk5C
8CM1S15HxV78s9dFntEqIokCAwEAAQ==
-----END PUBLIC KEY-----

View File

@ -1,304 +0,0 @@
C
a_DJ
aa_ER
aa_ET
af_ZA
agr_PE
ak_GH
am_ET
an_ES
anp_IN
ar_AE
ar_BH
ar_DZ
ar_EG
ar_IN
ar_IQ
ar_JO
ar_KW
ar_LB
ar_LY
ar_MA
ar_OM
ar_QA
ar_SA
ar_SD
ar_SS
ar_SY
ar_TN
ar_YE
as_IN
ast_ES
ayc_PE
az_AZ
az_IR
be_BY
bem_ZM
ber_DZ
ber_MA
bg_BG
bhb_IN
bho_IN
bho_NP
bi_VU
bn_BD
bn_IN
bo_CN
bo_IN
br_FR
brx_IN
bs_BA
byn_ER
ca_AD
ca_ES
ca_FR
ca_IT
ce_RU
ch_DE
chr_US
cmn_TW
crh_UA
cs_CZ
csb_PL
cv_RU
cy_GB
da_DK
de_AT
de_BE
de_CH
de_DE
de_IT
de_LI
de_LU
doi_IN
dsb_DE
dv_MV
dz_BT
el_CY
el_GR
en_AG
en_AU
en_BW
en_CA
en_DK
en_GB
en_HK
en_IE
en_IL
en_IN
en_NG
en_NZ
en_PH
en_SC
en_SG
en_US
en_ZA
en_ZM
en_ZW
eo
es_AR
es_BO
es_CL
es_CO
es_CR
es_CU
es_DO
es_EC
es_ES
es_GT
es_HN
es_MX
es_NI
es_PA
es_PE
es_PR
es_PY
es_SV
es_US
es_UY
es_VE
et_EE
eu_ES
fa_IR
ff_SN
fi_FI
fil_PH
fo_FO
fr_BE
fr_CA
fr_CH
fr_FR
fr_LU
fur_IT
fy_DE
fy_NL
ga_IE
gd_GB
gez_ER
gez_ET
gl_ES
gu_IN
gv_GB
ha_NG
hak_TW
he_IL
hi_IN
hif_FJ
hne_IN
hr_HR
hsb_DE
ht_HT
hu_HU
hy_AM
ia_FR
id_ID
ig_NG
ik_CA
is_IS
it_CH
it_IT
iu_CA
ja_JP
ka_GE
kab_DZ
kk_KZ
kl_GL
km_KH
kn_IN
ko_KR
kok_IN
ks_IN
ku_TR
kw_GB
ky_KG
lb_LU
lg_UG
li_BE
li_NL
lij_IT
ln_CD
lo_LA
lt_LT
lv_LV
lzh_TW
mag_IN
mai_IN
mai_NP
mfe_MU
mg_MG
mhr_RU
mi_NZ
miq_NI
mjw_IN
mk_MK
ml_IN
mn_MN
mni_IN
mnw_MM
mr_IN
ms_MY
mt_MT
my_MM
nan_TW
nb_NO
nds_DE
nds_NL
ne_NP
nhn_MX
niu_NU
niu_NZ
nl_AW
nl_BE
nl_NL
nn_NO
nr_ZA
nso_ZA
oc_FR
om_ET
om_KE
or_IN
os_RU
pa_IN
pa_PK
pap_AW
pap_CW
pl_PL
ps_AF
pt_BR
pt_PT
quz_PE
raj_IN
ro_RO
ru_RU
ru_UA
rw_RW
sa_IN
sah_RU
sat_IN
sc_IT
sd_IN
se_NO
sgs_LT
shn_MM
shs_CA
si_LK
sid_ET
sk_SK
sl_SI
sm_WS
so_DJ
so_ET
so_KE
so_SO
sq_AL
sq_MK
sr_ME
sr_RS
ss_ZA
st_ZA
sv_FI
sv_SE
sw_KE
sw_TZ
szl_PL
ta_IN
ta_LK
tcy_IN
te_IN
tg_TJ
th_TH
the_NP
ti_ER
ti_ET
tig_ER
tk_TM
tl_PH
tn_ZA
to_TO
tpi_PG
tr_CY
tr_TR
ts_ZA
tt_RU
ug_CN
uk_UA
unm_US
ur_IN
ur_PK
uz_UZ
ve_ZA
vi_VN
wa_BE
wae_CH
wal_ET
wo_SN
xh_ZA
yi_US
yo_NG
yue_HK
yuw_PG
zh_CN
zh_HK
zh_SG
zh_TW
zu_ZA

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
from pmb.export.frontend import frontend
from pmb.export.odin import odin

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -12,71 +12,58 @@ import pmb.helpers.file
def odin(args, flavor, folder):
"""
Create Odin flashable tar file with kernel and initramfs
for devices configured with the flasher method 'heimdall-isorec'
and with boot.img for devices with 'heimdall-bootimg'
Create Odin flashable tar file with kernel and initramfs for devices configured with
the flasher method 'heimdall-isorec' and with boot.img for devices with 'heimdall-bootimg'
"""
pmb.flasher.init(args)
suffix = "rootfs_" + args.device
# Backwards compatibility with old mkinitfs (pma#660)
suffix_flavor = f"-{flavor}"
pmaports_cfg = pmb.config.pmaports.read_config(args)
if pmaports_cfg.get("supported_mkinitfs_without_flavors", False):
suffix_flavor = ""
# Validate method
method = args.deviceinfo["flash_method"]
if not method.startswith("heimdall-"):
raise RuntimeError("An odin flashable tar is not supported"
f" for the flash method '{method}' specified"
" in the current configuration."
raise RuntimeError("An odin flashable tar is not supported for the flash"
" method '" + method + "' specified in the current configuration."
" Only 'heimdall' methods are supported.")
# Partitions
partition_kernel = \
args.deviceinfo["flash_heimdall_partition_kernel"] or "KERNEL"
partition_initfs = \
args.deviceinfo["flash_heimdall_partition_initfs"] or "RECOVERY"
partition_kernel = args.deviceinfo["flash_heimdall_partition_kernel"] or "KERNEL"
partition_initfs = args.deviceinfo["flash_heimdall_partition_initfs"] or "RECOVERY"
# Temporary folder
temp_folder = "/tmp/odin-flashable-tar"
if os.path.exists(f"{args.work}/chroot_native{temp_folder}"):
if os.path.exists(args.work + "/chroot_native" + temp_folder):
pmb.chroot.root(args, ["rm", "-rf", temp_folder])
# Odin flashable tar generation script
# (because redirecting stdin/stdout is not allowed
# Odin flashable tar generation script (because redirecting stdin/stdout is not allowed
# in pmbootstrap's chroot/shell functions for security reasons)
odin_script = f"{args.work}/chroot_rootfs_{args.device}/tmp/_odin.sh"
with open(odin_script, "w") as handle:
odin_kernel_md5 = f"{partition_kernel}.bin.md5"
odin_initfs_md5 = f"{partition_initfs}.bin.md5"
odin_device_tar = f"{args.device}.tar"
odin_device_tar_md5 = f"{args.device}.tar.md5"
with open(args.work + "/chroot_rootfs_" + args.device + "/tmp/_odin.sh", "w") as handle:
odin_kernel_md5 = partition_kernel + ".bin.md5"
odin_initfs_md5 = partition_initfs + ".bin.md5"
odin_device_tar = args.device + ".tar"
odin_device_tar_md5 = args.device + ".tar.md5"
handle.write(
"#!/bin/sh\n"
f"cd {temp_folder}\n")
"cd " + temp_folder + "\n")
if method == "heimdall-isorec":
handle.write(
# Kernel: copy and append md5
f"cp /boot/vmlinuz{suffix_flavor} {odin_kernel_md5}\n"
f"md5sum -t {odin_kernel_md5} >> {odin_kernel_md5}\n"
"cp /boot/vmlinuz-" + flavor + " " + odin_kernel_md5 + "\n"
"md5sum -t " + odin_kernel_md5 + " >> " + odin_kernel_md5 + "\n"
# Initramfs: recompress with lzop, append md5
f"gunzip -c /boot/initramfs{suffix_flavor}"
f" | lzop > {odin_initfs_md5}\n"
f"md5sum -t {odin_initfs_md5} >> {odin_initfs_md5}\n")
"gunzip -c /boot/initramfs-" + flavor + " | lzop > " + odin_initfs_md5 + "\n"
"md5sum -t " + odin_initfs_md5 + " >> " + odin_initfs_md5 + "\n")
elif method == "heimdall-bootimg":
handle.write(
# boot.img: copy and append md5
f"cp /boot/boot.img{suffix_flavor} {odin_kernel_md5}\n"
f"md5sum -t {odin_kernel_md5} >> {odin_kernel_md5}\n")
"cp /boot/boot.img-" + flavor + " " + odin_kernel_md5 + "\n"
"md5sum -t " + odin_kernel_md5 + " >> " + odin_kernel_md5 + "\n")
handle.write(
# Create tar, remove included files and append md5
f"tar -c -f {odin_device_tar} *.bin.md5\n"
"tar -c -f " + odin_device_tar + " *.bin.md5\n"
"rm *.bin.md5\n"
f"md5sum -t {odin_device_tar} >> {odin_device_tar}\n"
f"mv {odin_device_tar} {odin_device_tar_md5}\n")
"md5sum -t " + odin_device_tar + " >> " + odin_device_tar + "\n"
"mv " + odin_device_tar + " " + odin_device_tar_md5 + "\n")
commands = [["mkdir", "-p", temp_folder],
["cat", "/tmp/_odin.sh"], # for the log
@ -88,19 +75,19 @@ def odin(args, flavor, folder):
# Move Odin flashable tar to native chroot and cleanup temp folder
pmb.chroot.user(args, ["mkdir", "-p", "/home/pmos/rootfs"])
pmb.chroot.root(args, ["mv", f"/mnt/rootfs_{args.device}{temp_folder}"
f"/{odin_device_tar_md5}", "/home/pmos/rootfs/"]),
pmb.chroot.root(args, ["mv", "/mnt/rootfs_" + args.device + temp_folder +
"/" + odin_device_tar_md5, "/home/pmos/rootfs/"]),
pmb.chroot.root(args, ["chown", "pmos:pmos",
f"/home/pmos/rootfs/{odin_device_tar_md5}"])
"/home/pmos/rootfs/" + odin_device_tar_md5])
pmb.chroot.root(args, ["rmdir", temp_folder], suffix)
# Create the symlink
file = f"{args.work}/chroot_native/home/pmos/rootfs/{odin_device_tar_md5}"
link = f"{folder}/{odin_device_tar_md5}"
file = args.work + "/chroot_native/home/pmos/rootfs/" + odin_device_tar_md5
link = folder + "/" + odin_device_tar_md5
pmb.helpers.file.symlink(args, file, link)
# Display a readable message
msg = f" * {odin_device_tar_md5}"
msg = " * " + odin_device_tar_md5
if method == "heimdall-isorec":
msg += " (Odin flashable file, contains initramfs and kernel)"
elif method == "heimdall-bootimg":

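For readers unfamiliar with the *.md5 files built by the generated script: `md5sum -t FILE >> FILE` appends the checksum of the original content to the file itself, which is the convention Odin expects. A rough Python equivalent of that single step (the exact md5sum output format is an assumption):

    import hashlib

    def append_md5(path):
        # Hash the current content first, then append "<hash>  <name>" like md5sum -t
        with open(path, "rb") as handle:
            digest = hashlib.md5(handle.read()).hexdigest()
        with open(path, "a") as handle:
            handle.write(f"{digest}  {path}\n")

    append_md5("KERNEL.bin.md5")   # hypothetical file name matching the script above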
View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -7,7 +7,6 @@ import glob
import pmb.build
import pmb.chroot.apk
import pmb.config
import pmb.config.pmaports
import pmb.flasher
import pmb.helpers.file
@ -17,45 +16,34 @@ def symlinks(args, flavor, folder):
Create convenience symlinks to the rootfs and boot files.
"""
# Backwards compatibility with old mkinitfs (pma#660)
suffix = f"-{flavor}"
pmaports_cfg = pmb.config.pmaports.read_config(args)
if pmaports_cfg.get("supported_mkinitfs_without_flavors", False):
suffix = ""
# File descriptions
info = {
f"boot.img{suffix}": ("Fastboot compatible boot.img file,"
" contains initramfs and kernel"),
"dtbo.img": "Fastboot compatible dtbo image",
f"initramfs{suffix}": "Initramfs",
f"initramfs{suffix}-extra": "Extra initramfs files in /boot",
f"uInitrd{suffix}": "Initramfs, legacy u-boot image format",
f"uImage{suffix}": "Kernel, legacy u-boot image format",
f"vmlinuz{suffix}": "Linux kernel",
f"{args.device}.img": "Rootfs with partitions for /boot and /",
f"{args.device}-boot.img": "Boot partition image",
f"{args.device}-root.img": "Root partition image",
f"pmos-{args.device}.zip": "Android recovery flashable zip",
"lk2nd.img": "Secondary Android bootloader",
"boot.img-" + flavor: "Fastboot compatible boot.img file,"
" contains initramfs and kernel",
"blob-" + flavor: "Asus boot blob for TF101",
"initramfs-" + flavor: "Initramfs",
"initramfs-" + flavor + "-extra": "Extra initramfs files in /boot",
"uInitrd-" + flavor: "Initramfs, legacy u-boot image format",
"uImage-" + flavor: "Kernel, legacy u-boot image format",
"vmlinuz-" + flavor: "Linux kernel",
args.device + ".img": "Rootfs with partitions for /boot and /",
args.device + "-boot.img": "Boot partition image",
args.device + "-root.img": "Root partition image",
"pmos-" + args.device + ".zip": "Android recovery flashable zip",
}
# Generate a list of patterns
path_native = args.work + "/chroot_native"
path_boot = args.work + "/chroot_rootfs_" + args.device + "/boot"
path_buildroot = args.work + "/chroot_buildroot_" + args.deviceinfo["arch"]
patterns = [f"{path_boot}/boot.img{suffix}",
f"{path_boot}/initramfs{suffix}*",
f"{path_boot}/uInitrd{suffix}",
f"{path_boot}/uImage{suffix}",
f"{path_boot}/vmlinuz{suffix}",
f"{path_boot}/dtbo.img",
f"{path_native}/home/pmos/rootfs/{args.device}.img",
f"{path_native}/home/pmos/rootfs/{args.device}-boot.img",
f"{path_native}/home/pmos/rootfs/{args.device}-root.img",
f"{path_buildroot}/var/lib/postmarketos-android-recovery-" +
f"installer/pmos-{args.device}.zip",
f"{path_boot}/lk2nd.img"]
patterns = [path_boot + "/*-" + flavor,
path_boot + "/*-" + flavor + "-extra",
path_native + "/home/pmos/rootfs/" + args.device + ".img",
path_native + "/home/pmos/rootfs/" + args.device + "-boot.img",
path_native + "/home/pmos/rootfs/" + args.device + "-root.img",
path_buildroot +
"/var/lib/postmarketos-android-recovery-installer/pmos-" +
args.device + ".zip"]
# Generate a list of files from the patterns
files = []

View File

@ -1,7 +1,6 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
from pmb.flasher.init import init
from pmb.flasher.init import install_depends
from pmb.flasher.run import run
from pmb.flasher.run import check_partition_blacklist
from pmb.flasher.variables import variables

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
@ -20,7 +20,7 @@ def kernel(args):
pmb.chroot.initfs.build(args, flavor, "rootfs_" + args.device)
# Check kernel config
pmb.parse.kconfig.check(args, flavor, must_exist=False)
pmb.parse.kconfig.check(args, flavor)
# Generate the paths and run the flasher
if args.action_flasher == "boot":
@ -31,18 +31,17 @@ def kernel(args):
pmb.flasher.run(args, "flash_kernel", flavor)
logging.info("You will get an IP automatically assigned to your "
"USB interface shortly.")
logging.info("Then you can connect to your device using ssh after pmOS has"
" booted:")
logging.info("Then you can connect to your device using ssh after pmOS has booted:")
logging.info("ssh {}@{}".format(args.user, pmb.config.default_ip))
logging.info("NOTE: If you enabled full disk encryption, you should make"
" sure that osk-sdl has been properly configured for your"
" device")
logging.info("NOTE: If you enabled full disk encryption, you should make sure that"
" osk-sdl has been properly configured for your device")
def list_flavors(args):
suffix = "rootfs_" + args.device
logging.info("(" + suffix + ") installed kernel flavors:")
logging.info("* " + pmb.chroot.other.kernel_flavor_installed(args, suffix))
for flavor in pmb.chroot.other.kernel_flavors_installed(args, suffix):
logging.info("* " + flavor)
def rootfs(args):
@ -53,15 +52,13 @@ def rootfs(args):
if pmb.config.flashers.get(method, {}).get("split", False):
suffix = "-root.img"
img_path = f"{args.work}/chroot_native/home/pmos/rootfs/{args.device}"\
f"{suffix}"
img_path = args.work + "/chroot_native/home/pmos/rootfs/" + args.device + suffix
if not os.path.exists(img_path):
raise RuntimeError("The rootfs has not been generated yet, please run"
" 'pmbootstrap install' first.")
# Do not flash if using fastboot & image is too large
if method.startswith("fastboot") \
and args.deviceinfo["flash_fastboot_max_size"]:
if method.startswith("fastboot") and args.deviceinfo["flash_fastboot_max_size"]:
img_size = os.path.getsize(img_path) / 1024**2
max_size = int(args.deviceinfo["flash_fastboot_max_size"])
if img_size > max_size:
@ -78,18 +75,16 @@ def flash_vbmeta(args):
pmb.flasher.run(args, "flash_vbmeta")
def flash_dtbo(args):
logging.info("(native) flash dtbo image")
pmb.flasher.run(args, "flash_dtbo")
def list_devices(args):
pmb.flasher.run(args, "list_devices")
def sideload(args):
method = args.flash_method or args.deviceinfo["flash_method"]
cfg = pmb.config.flashers[method]
# Install depends
pmb.flasher.install_depends(args)
pmb.chroot.apk.install(args, cfg["depends"])
# Mount the buildroot
suffix = "buildroot_" + args.deviceinfo["arch"]
@ -109,63 +104,27 @@ def sideload(args):
pmb.flasher.run(args, "sideload")
def flash_lk2nd(args):
method = args.flash_method or args.deviceinfo["flash_method"]
if method == "fastboot":
# In the future this could be expanded to use "fastboot flash lk2nd $img"
# which reflashes/updates lk2nd from itself. For now let the user handle this
# manually since supporting the codepath with heimdall requires more effort.
pmb.flasher.init(args)
logging.info("(native) checking current fastboot product")
output = pmb.chroot.root(args, ["fastboot", "getvar", "product"],
output="interactive", output_return=True)
# Variable "product" is e.g. "LK2ND_MSM8974" or "lk2nd-msm8226" depending
# on the lk2nd version.
if "lk2nd" in output.lower():
raise RuntimeError("You are currently running lk2nd. Please reboot into the regular"
" bootloader mode to re-flash lk2nd.")
# Get the lk2nd package (which is a dependency of the device package)
device_pkg = f"device-{args.device}"
apkbuild = pmb.helpers.pmaports.get(args, device_pkg)
lk2nd_pkg = None
for dep in apkbuild["depends"]:
if dep.startswith("lk2nd"):
lk2nd_pkg = dep
break
if not lk2nd_pkg:
raise RuntimeError(f"{device_pkg} does not depend on any lk2nd package")
suffix = "rootfs_" + args.device
pmb.chroot.apk.install(args, [lk2nd_pkg], suffix)
logging.info("(native) flash lk2nd image")
pmb.flasher.run(args, "flash_lk2nd")
def frontend(args):
action = args.action_flasher
method = args.flash_method or args.deviceinfo["flash_method"]
if method == "none" and action in ["boot", "flash_kernel", "flash_rootfs",
"flash_lk2nd"]:
# Legacy alias
if action == "flash_system":
action = "flash_rootfs"
if method == "none" and action in ["boot", "flash_kernel", "flash_rootfs"]:
logging.info("This device doesn't support any flash method.")
return
if action in ["boot", "flash_kernel"]:
kernel(args)
elif action == "flash_rootfs":
if action == "flash_rootfs":
rootfs(args)
elif action == "flash_vbmeta":
if action == "flash_vbmeta":
flash_vbmeta(args)
elif action == "flash_dtbo":
flash_dtbo(args)
elif action == "flash_lk2nd":
flash_lk2nd(args)
elif action == "list_flavors":
if action == "list_flavors":
list_flavors(args)
elif action == "list_devices":
if action == "list_devices":
list_devices(args)
elif action == "sideload":
if action == "sideload":
sideload(args)

View File

@ -1,47 +1,27 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.chroot.apk
import pmb.config
import pmb.config.pmaports
import pmb.chroot.apk
import pmb.helpers.mount
def install_depends(args):
def init(args):
# Validate method
if hasattr(args, 'flash_method'):
method = args.flash_method or args.deviceinfo["flash_method"]
else:
method = args.deviceinfo["flash_method"]
if method not in pmb.config.flashers:
raise RuntimeError(f"Flash method {method} is not supported by the"
" current configuration. However, adding a new"
" flash method is not that hard, when the flashing"
" application already exists.\n"
"Make sure, it is packaged for Alpine Linux, or"
" package it yourself, and then add it to"
" pmb/config/__init__.py.")
depends = pmb.config.flashers[method]["depends"]
raise RuntimeError("Flash method " + method + " is not supported by the"
" current configuration. However, adding a new flash method is "
" not that hard, when the flashing application already exists.\n"
"Make sure, it is packaged for Alpine Linux, or package it "
" yourself, and then add it to pmb/config/__init__.py.")
cfg = pmb.config.flashers[method]
# Depends for some flash methods may be different for various pmaports
# branches, so read them from pmaports.cfg.
if method == "fastboot":
pmaports_cfg = pmb.config.pmaports.read_config(args)
depends = pmaports_cfg.get("supported_fastboot_depends",
"android-tools,avbtool").split(",")
elif method == "heimdall-bootimg":
pmaports_cfg = pmb.config.pmaports.read_config(args)
depends = pmaports_cfg.get("supported_heimdall_depends",
"heimdall,avbtool").split(",")
elif method == "mtkclient":
pmaports_cfg = pmb.config.pmaports.read_config(args)
depends = pmaports_cfg.get("supported_mtkclient_depends",
"mtkclient,android-tools").split(",")
pmb.chroot.apk.install(args, depends)
def init(args):
install_depends(args)
# Install depends
pmb.chroot.apk.install(args, cfg["depends"])
# Mount folders from host system
for folder in pmb.config.flash_mount_bind:

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.flasher
import pmb.chroot.initfs
@ -42,15 +42,6 @@ def run(args, action, flavor=None):
" <https://wiki.postmarketos.org/wiki/"
"Deviceinfo_reference>")
# dtbo flasher requires dtbo partition to be explicitly specified
if action == "flash_dtbo" and not vars["$PARTITION_DTBO"]:
raise RuntimeError("Your device does not have 'dtbo' partition"
" specified; set"
" 'deviceinfo_flash_fastboot_partition_dtbo'"
" in deviceinfo file. See also:"
" <https://wiki.postmarketos.org/wiki/"
"Deviceinfo_reference>")
# Run the commands of each action
for command in cfg["actions"][action]:
# Variable replacement
@ -58,11 +49,10 @@ def run(args, action, flavor=None):
for i in range(len(command)):
if key in command[i]:
if value is None:
raise RuntimeError(f"Variable {key} found in action"
f" {action} for method {method},"
" but the value for this variable"
" is None! Is that missing in your"
" deviceinfo?")
raise RuntimeError("Variable " + key + " found in"
" action " + action + " for method " + method + ","
" but the value for this variable is None! Is that"
" missing in your deviceinfo?")
check_partition_blacklist(args, key, value)
command[i] = command[i].replace(key, value)

View File

@ -1,6 +1,5 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import pmb.config.pmaports
def variables(args, flavor, method):
@ -10,53 +9,20 @@ def variables(args, flavor, method):
flash_pagesize = args.deviceinfo['flash_pagesize']
# TODO Remove _partition_system deviceinfo support once pmaports has been
# updated and minimum pmbootstrap version bumped.
# See also https://gitlab.com/postmarketOS/pmbootstrap/-/issues/2243
if method.startswith("fastboot"):
_partition_kernel = args.deviceinfo["flash_fastboot_partition_kernel"]\
or "boot"
_partition_rootfs = args.deviceinfo["flash_fastboot_partition_rootfs"]\
or args.deviceinfo["flash_fastboot_partition_system"] or "userdata"
_partition_vbmeta = args.deviceinfo["flash_fastboot_partition_vbmeta"]\
or None
_partition_dtbo = args.deviceinfo["flash_fastboot_partition_dtbo"]\
or None
# Require that the partitions are specified in deviceinfo for now
elif method.startswith("rkdeveloptool"):
_partition_kernel = args.deviceinfo["flash_rk_partition_kernel"]\
or None
_partition_rootfs = args.deviceinfo["flash_rk_partition_rootfs"]\
or args.deviceinfo["flash_rk_partition_system"] or None
_partition_vbmeta = None
_partition_dtbo = None
elif method.startswith("mtkclient"):
_partition_kernel = args.deviceinfo["flash_mtkclient_partition_kernel"]\
or "boot"
_partition_rootfs = args.deviceinfo["flash_mtkclient_partition_rootfs"]\
or "userdata"
_partition_vbmeta = args.deviceinfo["flash_mtkclient_partition_vbmeta"]\
or None
_partition_dtbo = args.deviceinfo["flash_mtkclient_partition_dtbo"]\
or None
_partition_kernel = args.deviceinfo["flash_fastboot_partition_kernel"] or "boot"
_partition_system = args.deviceinfo["flash_fastboot_partition_system"] or "system"
_partition_vbmeta = args.deviceinfo["flash_fastboot_partition_vbmeta"] or None
else:
_partition_kernel = args.deviceinfo["flash_heimdall_partition_kernel"]\
or "KERNEL"
_partition_rootfs = args.deviceinfo["flash_heimdall_partition_rootfs"]\
or args.deviceinfo["flash_heimdall_partition_system"] or "SYSTEM"
_partition_vbmeta = args.deviceinfo["flash_heimdall_partition_vbmeta"]\
or None
_partition_dtbo = args.deviceinfo["flash_heimdall_partition_dtbo"]\
or None
_partition_kernel = args.deviceinfo["flash_heimdall_partition_kernel"] or "KERNEL"
_partition_system = args.deviceinfo["flash_heimdall_partition_system"] or "SYSTEM"
_partition_vbmeta = args.deviceinfo["flash_heimdall_partition_vbmeta"] or None
if "partition" in args and args.partition:
# Only one operation is done at same time so it doesn't matter
# sharing the arg
# Only one of operations is done at same time so it doesn't matter sharing the arg
_partition_kernel = args.partition
_partition_rootfs = args.partition
_partition_system = args.partition
_partition_vbmeta = args.partition
_partition_dtbo = args.partition
_dtb = ""
if args.deviceinfo["append_dtb"] == "true":
@ -65,16 +31,15 @@ def variables(args, flavor, method):
vars = {
"$BOOT": "/mnt/rootfs_" + args.device + "/boot",
"$DTB": _dtb,
"$FLAVOR": flavor if flavor is not None else "",
"$IMAGE_SPLIT_BOOT": "/home/pmos/rootfs/" + args.device + "-boot.img",
"$IMAGE_SPLIT_ROOT": "/home/pmos/rootfs/" + args.device + "-root.img",
"$IMAGE": "/home/pmos/rootfs/" + args.device + ".img",
"$KERNEL_CMDLINE": _cmdline,
"$PARTITION_KERNEL": _partition_kernel,
"$PARTITION_INITFS": args.deviceinfo[
"flash_heimdall_partition_initfs"] or "RECOVERY",
"$PARTITION_ROOTFS": _partition_rootfs,
"$PARTITION_INITFS": args.deviceinfo["flash_heimdall_partition_initfs"] or "RECOVERY",
"$PARTITION_SYSTEM": _partition_system,
"$PARTITION_VBMETA": _partition_vbmeta,
"$PARTITION_DTBO": _partition_dtbo,
"$FLASH_PAGESIZE": flash_pagesize,
"$RECOVERY_ZIP": "/mnt/buildroot_" + args.deviceinfo["arch"] +
"/var/lib/postmarketos-android-recovery-installer"
@ -83,11 +48,4 @@ def variables(args, flavor, method):
"/usr/share/uuu/flash_script.lst"
}
# Backwards compatibility with old mkinitfs (pma#660)
pmaports_cfg = pmb.config.pmaports.read_config(args)
if pmaports_cfg.get("supported_mkinitfs_without_flavors", False):
vars["$FLAVOR"] = ""
else:
vars["$FLAVOR"] = f"-{flavor}" if flavor is not None else "-"
return vars
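The partition defaults above lean on Python's `or` chaining: an empty or missing deviceinfo value falls through to the next candidate and finally to the hardcoded default. A tiny illustration with made-up deviceinfo values:

    deviceinfo = {
        "flash_fastboot_partition_kernel": "",          # unset in deviceinfo
        "flash_fastboot_partition_system": "system_a",  # hypothetical value
    }
    partition_kernel = deviceinfo["flash_fastboot_partition_kernel"] or "boot"
    partition_system = deviceinfo["flash_fastboot_partition_system"] or "system"
    print(partition_kernel, partition_system)   # -> boot system_a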

View File

@ -1,2 +1,2 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later

View File

@ -1,134 +0,0 @@
# Copyright 2023 Johannes Marbach, Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import pmb.chroot.root
import pmb.config.pmaports
import pmb.helpers.cli
import pmb.helpers.run
import pmb.helpers.run_core
import pmb.parse.version
def _run(args, command, chroot=False, suffix="native", output="log"):
"""
Run a command.
:param command: command in list form
:param chroot: whether to run the command inside the chroot or on the host
:param suffix: chroot suffix. Only applies if the "chroot" parameter is
set to True.
See pmb.helpers.run_core.core() for a detailed description of all other
arguments and the return value.
"""
if chroot:
return pmb.chroot.root(args, command, output=output, suffix=suffix,
disable_timeout=True)
return pmb.helpers.run.root(args, command, output=output)
def _prepare_fifo(args, chroot=False, suffix="native"):
"""
Prepare the progress fifo for reading / writing.
:param chroot: whether to run the command inside the chroot or on the host
:param suffix: chroot suffix. Only applies if the "chroot" parameter is
set to True.
:returns: A tuple consisting of the path to the fifo as needed by apk to
write into it (relative to the chroot, if applicable) and the
path of the fifo as needed by cat to read from it (always
relative to the host)
"""
if chroot:
fifo = "/tmp/apk_progress_fifo"
fifo_outside = f"{args.work}/chroot_{suffix}{fifo}"
else:
_run(args, ["mkdir", "-p", f"{args.work}/tmp"])
fifo = fifo_outside = f"{args.work}/tmp/apk_progress_fifo"
if os.path.exists(fifo_outside):
_run(args, ["rm", "-f", fifo_outside])
_run(args, ["mkfifo", fifo_outside])
return (fifo, fifo_outside)
def _create_command_with_progress(command, fifo):
"""
Build a full apk command from a subcommand, set up to redirect progress
into a fifo.
:param command: apk subcommand in list form
:param fifo: path of the fifo
:returns: full command in list form
"""
flags = ["--no-progress", "--progress-fd", "3"]
command_full = [command[0]] + flags + command[1:]
command_flat = pmb.helpers.run_core.flat_cmd(command_full)
command_flat = f"exec 3>{fifo}; {command_flat}"
return ["sh", "-c", command_flat]
def _compute_progress(line):
"""
Compute the progress as a number between 0 and 1.
:param line: line as read from the progress fifo
:returns: progress as a number between 0 and 1
"""
if not line:
return 1
cur_tot = line.rstrip().split('/')
if len(cur_tot) != 2:
return 0
cur = float(cur_tot[0])
tot = float(cur_tot[1])
return cur / tot if tot > 0 else 0
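The lines apk writes into the fifo look like "current/total", so the helper above maps them straight onto a 0..1 range; a few examples of what it returns:

    print(_compute_progress("3/10\n"))   # 0.3
    print(_compute_progress("0/0\n"))    # 0 (avoids division by zero)
    print(_compute_progress("garbage"))  # 0 (unparseable line)
    print(_compute_progress(""))         # 1 (empty read, treated as finished)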
def apk_with_progress(args, command, chroot=False, suffix="native"):
"""
Run an apk subcommand while printing a progress bar to STDOUT.
:param command: apk subcommand in list form
:param chroot: whether to run commands inside the chroot or on the host
:param suffix: chroot suffix. Only applies if the "chroot" parameter is
set to True.
:raises RuntimeError: when the apk command fails
"""
fifo, fifo_outside = _prepare_fifo(args, chroot, suffix)
command_with_progress = _create_command_with_progress(command, fifo)
log_msg = " ".join(command)
with _run(args, ['cat', fifo], chroot=chroot, suffix=suffix,
output="pipe") as p_cat:
with _run(args, command_with_progress, chroot=chroot, suffix=suffix,
output="background") as p_apk:
while p_apk.poll() is None:
line = p_cat.stdout.readline().decode('utf-8')
progress = _compute_progress(line)
pmb.helpers.cli.progress_print(args, progress)
pmb.helpers.cli.progress_flush(args)
pmb.helpers.run_core.check_return_code(args, p_apk.returncode,
log_msg)
def check_outdated(args, version_installed, action_msg):
"""
Check if the provided alpine version is outdated, depending on the alpine
mirrordir (edge, v3.12, ...) related to the currently checked out pmaports
branch.
:param version_installed: currently installed apk version, e.g. "2.12.1-r0"
:param action_msg: string explaining what the user should do to resolve
this
:raises: RuntimeError if the version is outdated
"""
channel_cfg = pmb.config.pmaports.read_config_channel(args)
mirrordir_alpine = channel_cfg["mirrordir_alpine"]
version_min = pmb.config.apk_tools_min_version[mirrordir_alpine]
if pmb.parse.version.compare(version_installed, version_min) >= 0:
return
raise RuntimeError("Found an outdated version of the 'apk' package"
f" manager ({version_installed}, expected at least:"
f" {version_min}). {action_msg}")

View File

@ -1,4 +1,4 @@
# Copyright 2023 Luca Weiss
# Copyright 2020 Luca Weiss
# SPDX-License-Identifier: GPL-3.0-or-later
import datetime
import fnmatch
@ -19,10 +19,9 @@ ANITYA_API_BASE = "https://release-monitoring.org/api/v2"
GITHUB_API_BASE = "https://api.github.com"
GITLAB_HOSTS = [
"https://gitlab.com",
"https://gitlab.freedesktop.org",
"https://gitlab.gnome.org",
"https://invent.kde.org",
"https://source.puri.sm",
"https://gitlab.freedesktop.org",
]
@ -33,17 +32,14 @@ def init_req_headers() -> None:
if req_headers is not None and req_headers_github is not None:
return
# Generic request headers
req_headers = {
'User-Agent': f'pmbootstrap/{pmb.__version__} aportupgrade'}
req_headers = {'User-Agent': 'pmbootstrap/{} aportupgrade'.format(pmb.config.version)}
# Request headers specific to GitHub
req_headers_github = dict(req_headers)
if os.getenv("GITHUB_TOKEN") is not None:
token = os.getenv("GITHUB_TOKEN")
req_headers_github['Authorization'] = f'token {token}'
req_headers_github['Authorization'] = f'token {os.getenv("GITHUB_TOKEN")}'
else:
logging.info("NOTE: Consider using a GITHUB_TOKEN environment variable"
" to increase your rate limit")
logging.info("NOTE: Consider using a GITHUB_TOKEN environment variable to increase your rate limit")
def get_package_version_info_github(repo_name: str, ref: Optional[str]):
@ -81,8 +77,7 @@ def get_package_version_info_gitlab(gitlab_host: str, repo_name: str,
# Get the commits for the repository
commits = pmb.helpers.http.retrieve_json(
f"{gitlab_host}/api/v4/projects/{repo_name_safe}/repository"
f"/commits{ref_arg}",
f"{gitlab_host}/api/v4/projects/{repo_name_safe}/repository/commits{ref_arg}",
headers=req_headers)
latest_commit = commits[0]
commit_date = latest_commit["committed_date"]
@ -97,8 +92,7 @@ def get_package_version_info_gitlab(gitlab_host: str, repo_name: str,
def upgrade_git_package(args, pkgname: str, package) -> bool:
"""
Update _commit/pkgver/pkgrel in a git-APKBUILD (or pretend to do it if
args.dry is set).
Update _commit/pkgver/pkgrel in a git-APKBUILD (or pretend to do it if args.dry is set).
:param pkgname: the package name
:param package: a dict containing package information
:returns: if something (would have) been changed
@ -109,21 +103,16 @@ def upgrade_git_package(args, pkgname: str, package) -> bool:
if 1 <= len(source) <= 2:
source = source[-1]
else:
raise RuntimeError("Unhandled number of source elements. Please open"
f" a bug report: {source}")
raise RuntimeError("Unhandled number of source elements. Please open a bug report: {}".format(source))
verinfo = None
github_match = re.match(
r"https://github\.com/(.+)/(?:archive|releases)", source)
gitlab_match = re.match(
fr"({'|'.join(GITLAB_HOSTS)})/(.+)/-/archive/", source)
github_match = re.match(r"https://github\.com/(.+)/(?:archive|releases)", source)
gitlab_match = re.match(fr"({'|'.join(GITLAB_HOSTS)})/(.+)/-/archive/", source)
if github_match:
verinfo = get_package_version_info_github(
github_match.group(1), args.ref)
verinfo = get_package_version_info_github(github_match.group(1), args.ref)
elif gitlab_match:
verinfo = get_package_version_info_gitlab(
gitlab_match.group(1), gitlab_match.group(2), args.ref)
verinfo = get_package_version_info_gitlab(gitlab_match.group(1), gitlab_match.group(2), args.ref)
if verinfo is None:
# ignore for now
@ -135,11 +124,7 @@ def upgrade_git_package(args, pkgname: str, package) -> bool:
sha_new = verinfo["sha"]
# Format the new pkgver, keep the value before _git the same
if package["pkgver"] == "9999":
pkgver = package["_pkgver"]
else:
pkgver = package["pkgver"]
pkgver = package["pkgver"]
pkgver_match = re.match(r"([\d.]+)_git", pkgver)
date_pkgver = verinfo["date"].strftime("%Y%m%d")
pkgver_new = f"{pkgver_match.group(1)}_git{date_pkgver}"
@ -154,15 +139,12 @@ def upgrade_git_package(args, pkgname: str, package) -> bool:
logging.info("{}: upgrading pmaport".format(pkgname))
if args.dry:
logging.info(f" Would change _commit from {sha} to {sha_new}")
logging.info(f" Would change pkgver from {pkgver} to {pkgver_new}")
logging.info(f" Would change pkgrel from {pkgrel} to {pkgrel_new}")
logging.info(" Would change _commit from {} to {}".format(sha, sha_new))
logging.info(" Would change pkgver from {} to {}".format(pkgver, pkgver_new))
logging.info(" Would change pkgrel from {} to {}".format(pkgrel, pkgrel_new))
return True
if package["pkgver"] == "9999":
pmb.helpers.file.replace_apkbuild(args, pkgname, "_pkgver", pkgver_new)
else:
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgver", pkgver_new)
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgver", pkgver_new)
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgrel", pkgrel_new)
pmb.helpers.file.replace_apkbuild(args, pkgname, "_commit", sha_new, True)
return True
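To make the pkgver rewrite above concrete, here is the same regex applied to a made-up _git package version with a made-up commit date:

    import datetime
    import re

    pkgver = "5.10.0_git20200304"                  # hypothetical current pkgver
    commit_date = datetime.date(2020, 11, 17)      # hypothetical upstream commit date
    match = re.match(r"([\d.]+)_git", pkgver)
    pkgver_new = f"{match.group(1)}_git{commit_date.strftime('%Y%m%d')}"
    print(pkgver_new)                              # -> 5.10.0_git20201117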
@ -170,83 +152,55 @@ def upgrade_git_package(args, pkgname: str, package) -> bool:
def upgrade_stable_package(args, pkgname: str, package) -> bool:
"""
Update _commit/pkgver/pkgrel in an APKBUILD (or pretend to do it if
args.dry is set).
Update _commit/pkgver/pkgrel in an APKBUILD (or pretend to do it if args.dry is set).
:param pkgname: the package name
:param package: a dict containing package information
:returns: if something (would have) been changed
"""
# Looking up if there's a custom mapping from postmarketOS package name
# to Anitya project name.
mappings = pmb.helpers.http.retrieve_json(
f"{ANITYA_API_BASE}/packages/?distribution=postmarketOS"
f"&name={pkgname}", headers=req_headers)
if mappings["total_items"] < 1:
projects = pmb.helpers.http.retrieve_json(
f"{ANITYA_API_BASE}/projects/?name={pkgname}", headers=req_headers)
if projects["total_items"] < 1:
logging.warning(f"{pkgname}: failed to get Anitya project")
return False
else:
project_name = mappings["items"][0]["project"]
ecosystem = mappings["items"][0]["ecosystem"]
projects = pmb.helpers.http.retrieve_json(
f"{ANITYA_API_BASE}/projects/?name={project_name}&"
f"ecosystem={ecosystem}",
headers=req_headers)
projects = pmb.helpers.http.retrieve_json(f"{ANITYA_API_BASE}/projects/?name={pkgname}", headers=req_headers)
if projects["total_items"] < 1:
logging.warning(f"{pkgname}: didn't find any projects, can't upgrade!")
return False
if projects["total_items"] > 1:
logging.warning(f"{pkgname}: found more than one project, can't "
f"upgrade! Please create an explicit mapping of "
f"\"project\" to the package name.")
return False
# There is no Anitya project with the package name.
# Looking up if there's a custom mapping from postmarketOS package name to Anitya project name.
mappings = pmb.helpers.http.retrieve_json(
f"{ANITYA_API_BASE}/packages/?distribution=postmarketOS&name={pkgname}", headers=req_headers)
if mappings["total_items"] < 1:
logging.warning("{}: failed to get Anitya project".format(pkgname))
return False
project_name = mappings["items"][0]["project"]
projects = pmb.helpers.http.retrieve_json(
f"{ANITYA_API_BASE}/projects/?name={project_name}", headers=req_headers)
# Get the first, best-matching item
project = projects["items"][0]
# Check that we got a version number
if len(project["stable_versions"]) < 1:
if project["version"] is None:
logging.warning("{}: got no version number, ignoring".format(pkgname))
return False
version = project["stable_versions"][0]
# Compare the pmaports version with the project version
if package["pkgver"] == version:
if package["pkgver"] == project["version"]:
logging.info("{}: up-to-date".format(pkgname))
return False
if package["pkgver"] == "9999":
pkgver = package["_pkgver"]
else:
pkgver = package["pkgver"]
pkgver_new = version
pkgver = package["pkgver"]
pkgver_new = project["version"]
pkgrel = package["pkgrel"]
pkgrel_new = 0
if not pmb.parse.version.validate(pkgver_new):
logging.warning(f"{pkgname}: would upgrade to invalid pkgver:"
f" {pkgver_new}, ignoring")
logging.warning("{}: would upgrade to invalid pkgver: {}, ignoring".format(pkgname, pkgver_new))
return False
logging.info("{}: upgrading pmaport".format(pkgname))
if args.dry:
logging.info(f" Would change pkgver from {pkgver} to {pkgver_new}")
logging.info(f" Would change pkgrel from {pkgrel} to {pkgrel_new}")
logging.info(" Would change pkgver from {} to {}".format(pkgver, pkgver_new))
logging.info(" Would change pkgrel from {} to {}".format(pkgrel, pkgrel_new))
return True
if package["pkgver"] == "9999":
pmb.helpers.file.replace_apkbuild(args, pkgname, "_pkgver", pkgver_new)
else:
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgver", pkgver_new)
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgver", pkgver_new)
pmb.helpers.file.replace_apkbuild(args, pkgname, "pkgrel", pkgrel_new)
return True
@ -278,8 +232,7 @@ def upgrade_all(args) -> None:
Upgrade all packages, based on args.all, args.all_git and args.all_stable.
"""
for pkgname in pmb.helpers.pmaports.get_list(args):
# Always ignore postmarketOS-specific packages that have no upstream
# source
# Always ignore postmarketOS-specific packages that have no upstream source
skip = False
for pattern in pmb.config.upgrade_ignore:
if fnmatch.fnmatch(pkgname, pattern):
@ -287,5 +240,4 @@ def upgrade_all(args) -> None:
if skip:
continue
upgrade(args, pkgname, args.all or args.all_git,
args.all or args.all_stable)
upgrade(args, pkgname, args.all or args.all_git, args.all or args.all_stable)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import copy
import os
@ -20,7 +20,7 @@ import pmb.helpers.git
...
2. Argparse merged with others
Variables from the user's config file (~/.config/pmbootstrap.cfg) that
Variables from the user's config file (~/.config/pmbootstrap.cfg), that
can be overridden from the command line (pmb/parse/arguments.py) and
fall back to the defaults defined in pmb/config/__init__.py (see
"defaults = {..."). The user's config file gets generated interactively
@ -31,7 +31,30 @@ import pmb.helpers.git
args.device ("samsung-i9100", "qemu-amd64" etc.)
args.work ("/home/user/.local/var/pmbootstrap", override with --work)
3. Parsed configs
3. Shortcuts
Long variables or function calls that always return the same information
may have a shortcut defined, to make the code more readable (see
add_shortcuts() below).
Example:
args.arch_native ("x86_64" etc.)
4. Cache
pmbootstrap uses this dictionary to save the result of expensive
operations, so they work a lot faster the next time they are needed in the
same session. Usually the cache is written to and read from in the same
Python file, with code similar to the following:
def lookup(args, key):
if key in args.cache["mycache"]:
return args.cache["mycache"][key]
ret = expensive_operation(args, key)
args.cache["mycache"][key] = ret
return ret
See add_cache() below for details.
5. Parsed configs
Similar to the cache above, specific config files get parsed and added
to args, so they can get accessed quickly (without parsing the configs
over and over). These configs are not only used in one specific
@ -56,9 +79,7 @@ def fix_mirrors_postmarketos(args):
subparsers: <https://bugs.python.org/issue9338> """
# -mp not specified: use default mirrors
if not args.mirrors_postmarketos:
cfg = pmb.config.load(args)
args.mirrors_postmarketos = \
cfg["pmbootstrap"]["mirrors_postmarketos"].split(",")
args.mirrors_postmarketos = pmb.config.defaults["mirrors_postmarketos"]
# -mp="": use no postmarketOS mirrors (build everything locally)
if args.mirrors_postmarketos == [""]:
@ -93,11 +114,33 @@ def replace_placeholders(args):
setattr(args, key, os.path.expanduser(getattr(args, key)))
def add_shortcuts(args):
""" Add convenience shortcuts """
setattr(args, "arch_native", pmb.parse.arch.alpine_native())
def add_cache(args):
""" Add a caching dict (caches parsing of files etc. for the current
session) """
repo_update = {"404": [], "offline_msg_shown": False}
setattr(args, "cache", {"apkindex": {},
"apkbuild": {},
"apk_min_version_checked": [],
"apk_repository_list_updated": [],
"built": {},
"find_aport": {},
"pmb.helpers.package.depends_recurse": {},
"pmb.helpers.package.get": {},
"pmb.helpers.repo.update": repo_update,
"pmb.helpers.git.parse_channels_cfg": {},
"pmb.config.pmaports.read_config": None})
def add_deviceinfo(args):
""" Add and verify the deviceinfo (only after initialization) """
setattr(args, "deviceinfo", pmb.parse.deviceinfo(args))
arch = args.deviceinfo["arch"]
if (arch != pmb.config.arch_native and
if (arch != args.arch_native and
arch not in pmb.config.build_device_architectures):
raise ValueError("Arch '" + arch + "' is not available in"
" postmarketOS. If you would like to add it, see:"
@ -109,7 +152,8 @@ def init(args):
fix_mirrors_postmarketos(args)
pmb.config.merge_with_args(args)
replace_placeholders(args)
pmb.helpers.other.init_cache()
add_shortcuts(args)
add_cache(args)
# Initialize logs (we could raise errors below)
pmb.helpers.logging.init(args)
@ -131,7 +175,9 @@ def update_work(args, work):
args_new = copy.deepcopy(args.from_argparse)
# Keep from the modified args:
# * the old log file descriptor (so we can close it)
# * the unmodified args from argparse (to check if --aports was specified)
args_new.logfd = args.logfd
args_new.from_argparse = args.from_argparse
# Generate modified args again, replacing $WORK with the new work folder

View File

@ -1,13 +1,9 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import datetime
import logging
import os
import re
import readline
import sys
import pmb.config
class ReadlineTabCompleter:
@ -27,8 +23,7 @@ class ReadlineTabCompleter:
# First time: build match list
if iteration == 0:
if input_text:
self.matches = [s for s in self.options
if s and s.startswith(input_text)]
self.matches = [s for s in self.options if s and s.startswith(input_text)]
else:
self.matches = self.options[:]
@ -38,29 +33,24 @@ class ReadlineTabCompleter:
return None
def ask(question="Continue?", choices=["y", "n"], default="n",
def ask(args, question="Continue?", choices=["y", "n"], default="n",
lowercase_answer=True, validation_regex=None, complete=None):
"""
Ask a question on the terminal.
:param question: display prompt
:param choices: short list of possible answers,
displayed after prompt if set
:param choices: short list of possible answers, displayed after prompt if set
:param default: default value to return if user doesn't input anything
:param lowercase_answer: if True, convert return value to lower case
:param validation_regex: if set, keep asking until regex matches
:param complete: set to a list to enable tab completion
"""
styles = pmb.config.styles
while True:
date = datetime.datetime.now().strftime("%H:%M:%S")
line = question
question_full = "[" + date + "] " + question
if choices:
line += f" ({str.join('/', choices)})"
question_full += " (" + str.join("/", choices) + ")"
if default:
line += f" [{default}]"
line_color = f"[{date}] {styles['BOLD']}{line}{styles['END']}"
line = f"[{date}] {line}"
question_full += " [" + str(default) + "]"
if complete:
readline.parse_and_bind('tab: complete')
@ -68,23 +58,21 @@ def ask(question="Continue?", choices=["y", "n"], default="n",
if '-' in delims:
delims = delims.replace('-', '')
readline.set_completer_delims(delims)
readline.set_completer(
ReadlineTabCompleter(complete).completer_func)
readline.set_completer(ReadlineTabCompleter(complete).completer_func)
ret = input(f"{line_color}: ")
ret = input(question_full + ": ")
# Stop completing (question is answered)
if complete:
# set_completer(None) would use the default file system completer
readline.set_completer(lambda text, state: None)
readline.set_completer(None)
if lowercase_answer:
ret = ret.lower()
if ret == "":
ret = str(default)
pmb.helpers.logging.logfd.write(f"{line}: {ret}\n")
pmb.helpers.logging.logfd.flush()
args.logfd.write(question_full + " " + ret + "\n")
args.logfd.flush()
# Validate with regex
if not validation_regex:
@ -102,45 +90,12 @@ def confirm(args, question="Continue?", default=False, no_assumptions=False):
"""
Convenience wrapper around ask for simple yes-no questions with validation.
:param no_assumptions: ask for confirmation, even if "pmbootstrap -y'
is set
:param no_assumptions: ask for confirmation, even if "pmbootstrap -y' is set
:returns: True for "y", False for "n"
"""
default_str = "y" if default else "n"
if args.assume_yes and not no_assumptions:
logging.info(question + " (y/n) [" + default_str + "]: y")
return True
answer = ask(question, ["y", "n"], default_str, True, "(y|n)")
answer = ask(args, question, ["y", "n"], default_str, True, "(y|n)")
return answer == "y"
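Usage sketch (not part of the diff): how a caller might drive ask() and confirm(), assuming the newer ask() signature without the args parameter; the channel names and regex are made-up example values.
# Hypothetical prompt with tab completion and input validation:
channel = ask("Channel", choices=["edge", "v21.03"], default="edge",
              validation_regex=r"^(edge|v21\.03)$", complete=["edge", "v21.03"])
if not confirm(args, f"Switch to channel '{channel}'?", default=True):
    raise RuntimeError("Aborted.")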
def progress_print(args, progress):
"""
Print a snapshot of a progress bar to STDOUT. Call progress_flush to end
printing progress and clear the line. No output is printed in
non-interactive mode.
:param progress: completion percentage as a number between 0 and 1
"""
width = 79
try:
width = os.get_terminal_size().columns - 6
except OSError:
pass
chars = int(width * progress)
filled = "\u2588" * chars
empty = " " * (width - chars)
percent = int(progress * 100)
if pmb.config.is_interactive and not args.details_to_stdout:
sys.stdout.write(f"\u001b7{percent:>3}% {filled}{empty}")
sys.stdout.flush()
sys.stdout.write("\u001b8\u001b[0K")
def progress_flush(args):
"""
Finish printing a progress bar. This will erase the line. Does nothing in
non-interactive mode.
"""
if pmb.config.is_interactive and not args.details_to_stdout:
sys.stdout.flush()
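Usage sketch for the progress helpers removed above (not part of the diff); it mirrors the pattern used by the APKINDEX update code further down in this compare, with a hypothetical work() function.
# Print one snapshot per item, then clear the progress line:
items = ["a", "b", "c"]
for i, item in enumerate(items):
    pmb.helpers.cli.progress_print(args, i / len(items))
    work(item)  # hypothetical per-item worker
pmb.helpers.cli.progress_flush(args)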

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import glob
@ -23,17 +23,14 @@ def find_path(args, codename, file=''):
return g[0]
def list_codenames(args, vendor=None, unmaintained=True):
def list_codenames(args, vendor=None):
"""
Get all devices, for which aports are available
:param vendor: vendor name to choose devices from, or None for all vendors
:param unmaintained: include unmaintained devices
:returns: ["first-device", "second-device", ...]
"""
ret = []
for path in glob.glob(args.aports + "/device/*/device-*"):
if not unmaintained and '/unmaintained/' in path:
continue
device = os.path.basename(path).split("-", 1)[1]
if (vendor is None) or device.startswith(vendor + '-'):
ret.append(device)
@ -58,8 +55,8 @@ def list_apkbuilds(args):
"""
ret = {}
for device in list_codenames(args):
apkbuild_path = f"{args.aports}/device/*/device-{device}/APKBUILD"
ret[device] = pmb.parse.apkbuild(apkbuild_path)
apkbuild_path = args.aports + "/device/*/device-" + device + "/APKBUILD"
ret[device] = pmb.parse.apkbuild(args, apkbuild_path)
return ret

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
@ -26,7 +26,7 @@ def replace_apkbuild(args, pkgname, key, new, in_quotes=False):
:param in_quotes: expect the value to be in quotation marks ("") """
# Read old value
path = pmb.helpers.pmaports.find(args, pkgname) + "/APKBUILD"
apkbuild = pmb.parse.apkbuild(path)
apkbuild = pmb.parse.apkbuild(args, path)
old = apkbuild[key]
# Prepare old/new strings
@ -41,8 +41,8 @@ def replace_apkbuild(args, pkgname, key, new, in_quotes=False):
replace(path, "\n" + line_old + "\n", "\n" + line_new + "\n")
# Verify
del (pmb.helpers.other.cache["apkbuild"][path])
apkbuild = pmb.parse.apkbuild(path)
del (args.cache["apkbuild"][path])
apkbuild = pmb.parse.apkbuild(args, path)
if apkbuild[key] != str(new):
raise RuntimeError("Failed to set '{}' for pmaport '{}'. Make sure"
" that there's a line with exactly the string '{}'"

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import json
@ -12,11 +12,9 @@ import pmb.build.autodetect
import pmb.chroot
import pmb.chroot.initfs
import pmb.chroot.other
import pmb.ci
import pmb.config
import pmb.export
import pmb.flasher
import pmb.helpers.aportupgrade
import pmb.helpers.devices
import pmb.helpers.git
import pmb.helpers.lint
@ -26,13 +24,12 @@ import pmb.helpers.pmaports
import pmb.helpers.repo
import pmb.helpers.repo_missing
import pmb.helpers.run
import pmb.helpers.aportupgrade
import pmb.helpers.status
import pmb.install
import pmb.install.blockdevice
import pmb.netboot
import pmb.parse
import pmb.qemu
import pmb.sideload
def _parse_flavor(args, autoinstall=True):
@ -40,19 +37,22 @@ def _parse_flavor(args, autoinstall=True):
Verify the flavor argument if specified, or return a default value.
:param autoinstall: make sure that at least one kernel flavor is installed
"""
# Install a kernel and get its "flavor", where flavor is a pmOS-specific
# identifier that is typically in the form
# "postmarketos-<manufacturer>-<device/chip>", e.g.
# "postmarketos-qcom-sdm845"
# Install at least one kernel and get installed flavors
suffix = "rootfs_" + args.device
flavor = pmb.chroot.other.kernel_flavor_installed(
args, suffix, autoinstall)
flavors = pmb.chroot.other.kernel_flavors_installed(args, suffix, autoinstall)
if not flavor:
# Parse and verify the flavor argument
flavor = args.flavor
if flavor:
if flavor not in flavors:
raise RuntimeError("No kernel installed with flavor " + flavor + "!" +
" Run 'pmbootstrap flasher list_flavors' to get a list.")
return flavor
if not len(flavors):
raise RuntimeError(
"No kernel flavors installed in chroot " + suffix + "! Please let"
" your device package depend on a package starting with 'linux-'.")
return flavor
return flavors[0]
def _parse_suffix(args):
@ -69,24 +69,6 @@ def _parse_suffix(args):
return "native"
def _install_ondev_verify_no_rootfs(args):
chroot_dest = "/var/lib/rootfs.img"
dest = f"{args.work}/chroot_installer_{args.device}{chroot_dest}"
if os.path.exists(dest):
return
if args.ondev_cp:
for _, chroot_dest_cp in args.ondev_cp:
if chroot_dest_cp == chroot_dest:
return
raise ValueError(f"--no-rootfs set, but rootfs.img not found in install"
" chroot. Either run 'pmbootstrap install' without"
" --no-rootfs first to let it generate the postmarketOS"
" rootfs once, or supply a rootfs file with:"
f" --cp os.img:{chroot_dest}")
def aportgen(args):
for package in args.packages:
logging.info("Generate aport: " + package)
@ -103,8 +85,7 @@ def build(args):
return
# Set src and force
src = os.path.realpath(os.path.expanduser(args.src[0])) \
if args.src else None
src = os.path.realpath(os.path.expanduser(args.src[0])) if args.src else None
force = True if src else args.force
if src and not os.path.exists(src):
raise RuntimeError("Invalid path specified for --src: " + src)
@ -132,21 +113,6 @@ def checksum(args):
pmb.build.checksum.update(args, package)
def sideload(args):
arch = args.deviceinfo["arch"]
if args.arch:
arch = args.arch
user = args.user
host = args.host
pmb.sideload.sideload(args, user, host, args.port, arch, args.install_key,
args.packages)
def netboot(args):
if args.action_netboot == "serve":
pmb.netboot.start_nbd_server(args)
def chroot(args):
# Suffix
suffix = _parse_suffix(args)
@ -201,7 +167,7 @@ def config(args):
raise RuntimeError("config --reset requires a name to be given.")
value = pmb.config.defaults[args.name]
cfg["pmbootstrap"][args.name] = value
logging.info(f"Config changed to default: {args.name}='{value}'")
logging.info("Config changed to default: " + args.name + "='" + value + "'")
pmb.config.save(args, cfg)
elif args.value is not None:
cfg["pmbootstrap"][args.name] = args.value
@ -233,8 +199,7 @@ def initfs(args):
def install(args):
if args.no_fde:
logging.warning("WARNING: --no-fde is deprecated,"
" as it is now the default.")
logging.warning("WARNING: --no-fde is deprecated, as it is now the default.")
if args.rsync and args.full_disk_encryption:
raise ValueError("Installation using rsync is not compatible with full"
" disk encryption.")
@ -258,21 +223,12 @@ def install(args):
if args.rsync:
raise ValueError("--on-device-installer cannot be combined with"
" --rsync")
if args.filesystem:
raise ValueError("--on-device-installer cannot be combined with"
" --filesystem")
if args.deviceinfo["cgpt_kpart"]:
raise ValueError("--on-device-installer cannot be used with"
" ChromeOS devices")
else:
if args.ondev_cp:
raise ValueError("--cp can only be combined with --ondev")
if args.ondev_no_rootfs:
if not args.ondev_rootfs:
raise ValueError("--no-rootfs can only be combined with --ondev."
" Do you mean --no-image?")
if args.ondev_no_rootfs:
_install_ondev_verify_no_rootfs(args)
if args.ondev_cp:
raise ValueError("--cp can only be combined with --ondev")
# On-device installer overrides
if args.on_device_installer:
@ -293,10 +249,7 @@ def install(args):
if flasher.get("split", False):
args.split = True
# Android recovery zip related
if args.android_recovery_zip and args.filesystem:
raise ValueError("--android-recovery-zip cannot be combined with"
" --filesystem (patches welcome)")
# Warning for android recovery zip with FDE
if args.android_recovery_zip and args.full_disk_encryption:
logging.info("WARNING: --fde is rarely used in combination with"
" --android-recovery-zip. If this does not work, consider"
@ -319,9 +272,6 @@ def install(args):
" packages found. Consider 'pmbootstrap zap -p'"
" to delete them.")
# Verify that the root filesystem is supported by current pmaports branch
pmb.install.get_root_filesystem(args)
pmb.install.install(args)
@ -377,17 +327,11 @@ def newapkbuild(args):
def kconfig(args):
if args.action_kconfig == "check":
details = args.kconfig_check_details
# Build the components list from cli arguments (--waydroid etc.)
components_list = []
for name in pmb.parse.kconfig.get_all_component_names():
if getattr(args, f"kconfig_check_{name}"):
components_list += [name]
# Handle passing a file directly
if args.file:
if pmb.parse.kconfig.check_file(args.package, components_list,
details=details):
if pmb.parse.kconfig.check_file(args, args.package,
anbox=args.anbox,
details=True):
logging.info("kconfig check succeeded!")
return
raise RuntimeError("kconfig check failed!")
@ -407,15 +351,15 @@ def kconfig(args):
packages.sort()
for package in packages:
if not args.force:
pkgname = package if package.startswith("linux-") \
else "linux-" + package
pkgname = package if package.startswith("linux-") else "linux-" + package
aport = pmb.helpers.pmaports.find(args, pkgname)
apkbuild = pmb.parse.apkbuild(f"{aport}/APKBUILD")
apkbuild = pmb.parse.apkbuild(args, aport + "/APKBUILD")
if "!pmb:kconfigcheck" in apkbuild["options"]:
skipped += 1
continue
if not pmb.parse.kconfig.check(args, package, components_list,
details=details):
if not pmb.parse.kconfig.check(args, package,
force_anbox_check=args.anbox,
details=True):
error = True
# At least one failure
@ -426,13 +370,8 @@ def kconfig(args):
logging.info("NOTE: " + str(skipped) + " kernel(s) was skipped"
" (consider 'pmbootstrap kconfig check -f')")
logging.info("kconfig check succeeded!")
elif args.action_kconfig in ["edit", "migrate"]:
if args.package:
pkgname = args.package
else:
pkgname = args.deviceinfo["codename"]
use_oldconfig = args.action_kconfig == "migrate"
pmb.build.menuconfig(args, pkgname, use_oldconfig)
elif args.action_kconfig == "edit":
pmb.build.menuconfig(args, args.package)
def deviceinfo_parse(args):
@ -460,12 +399,12 @@ def apkbuild_parse(args):
print(package + ":")
aport = pmb.helpers.pmaports.find(args, package)
path = aport + "/APKBUILD"
print(json.dumps(pmb.parse.apkbuild(path), indent=4,
print(json.dumps(pmb.parse.apkbuild(args, path), indent=4,
sort_keys=True))
def apkindex_parse(args):
result = pmb.parse.apkindex.parse(args.apkindex_path)
result = pmb.parse.apkindex.parse(args, args.apkindex_path)
if args.package:
if args.package not in result:
raise RuntimeError("Package not found in the APKINDEX: " +
@ -516,7 +455,7 @@ def shutdown(args):
def stats(args):
# Chroot suffix
suffix = "native"
if args.arch != pmb.config.arch_native:
if args.arch != args.arch_native:
suffix = "buildroot_" + args.arch
# Install ccache and display stats
@ -531,25 +470,18 @@ def work_migrate(args):
def log(args):
log_testsuite = f"{args.work}/log_testsuite.txt"
if args.clear_log:
pmb.helpers.run.user(args, ["truncate", "-s", "0", args.log])
pmb.helpers.run.user(args, ["truncate", "-s", "0", log_testsuite])
pmb.helpers.run.user(args, ["tail", "-n", args.lines, "-F", args.log],
output="tui")
cmd = ["tail", "-n", args.lines, "-F"]
# Follow the testsuite's log file too if it exists. It will be created when
# starting a test case that writes to it (git -C test grep log_testsuite).
if os.path.exists(log_testsuite):
cmd += [log_testsuite]
# tail writes the last lines of the files to the terminal. Put the regular
# log at the end, so that output is visible at the bottom (where the user
# looks for an error / what's currently going on).
cmd += [args.log]
pmb.helpers.run.user(args, cmd, output="tui")
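For illustration (not part of the diff), with both log files present the newer log() ends up running something along these lines; the line count and paths are made-up example values.
cmd = ["tail", "-n", "60", "-F",
       "/home/user/.local/var/pmbootstrap/log_testsuite.txt",
       "/home/user/.local/var/pmbootstrap/log.txt"]  # regular log last, so it stays visible
pmb.helpers.run.user(args, cmd, output="tui")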
def log_distccd(args):
logpath = "/home/pmos/distccd.log"
if args.clear_log:
pmb.chroot.user(args, ["truncate", "-s", "0", logpath])
pmb.chroot.user(args, ["tail", "-n", args.lines, "-f", logpath],
output="tui")
def zap(args):
@ -557,7 +489,7 @@ def zap(args):
distfiles=args.distfiles, pkgs_local=args.pkgs_local,
pkgs_local_mismatch=args.pkgs_local_mismatch,
pkgs_online_mismatch=args.pkgs_online_mismatch,
rust=args.rust, netboot=args.netboot)
rust=args.rust)
# Don't write the "Done" message
pmb.helpers.logging.disable()
@ -566,8 +498,7 @@ def zap(args):
def bootimg_analyze(args):
bootimg = pmb.parse.bootimg(args, args.path)
tmp_output = "Put these variables in the deviceinfo file of your device:\n"
for line in pmb.aportgen.device.\
generate_deviceinfo_fastboot_content(bootimg).split("\n"):
for line in pmb.aportgen.device.generate_deviceinfo_fastboot_content(args, bootimg).split("\n"):
tmp_output += "\n" + line.lstrip()
logging.info(tmp_output)
@ -603,53 +534,10 @@ def lint(args):
if not packages:
packages = pmb.helpers.pmaports.get_list(args)
pmb.helpers.lint.check(args, packages)
for package in packages:
pmb.helpers.lint.check(args, package)
def status(args):
if not pmb.helpers.status.print_status(args, args.details):
sys.exit(1)
def ci(args):
topdir = pmb.helpers.git.get_topdir(args, os.getcwd())
if not os.path.exists(topdir):
logging.error("ERROR: change your current directory to a git"
" repository (e.g. pmbootstrap, pmaports) before running"
" 'pmbootstrap ci'.")
exit(1)
scripts_available = pmb.ci.get_ci_scripts(topdir)
scripts_available = pmb.ci.sort_scripts_by_speed(scripts_available)
if not scripts_available:
logging.error("ERROR: no supported CI scripts found in current git"
" repository, see https://postmarketos.org/pmb-ci")
exit(1)
scripts_selected = {}
if args.scripts:
if args.all:
raise RuntimeError("Combining --all with script names doesn't"
" make sense")
for script in args.scripts:
if script not in scripts_available:
logging.error(f"ERROR: script '{script}' not found in git"
" repository, found these:"
f" {', '.join(scripts_available.keys())}")
exit(1)
scripts_selected[script] = scripts_available[script]
elif args.all:
scripts_selected = scripts_available
if args.fast:
for script, script_data in scripts_available.items():
if "slow" not in script_data["options"]:
scripts_selected[script] = script_data
if not pmb.helpers.git.clean_worktree(args, topdir):
logging.warning("WARNING: this git repository has uncommitted changes")
if not scripts_selected:
scripts_selected = pmb.ci.ask_which_scripts_to_run(scripts_available)
pmb.ci.run_scripts(args, topdir, scripts_selected)

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import configparser
import logging
@ -8,7 +8,6 @@ import time
import pmb.build
import pmb.chroot.apk
import pmb.config
import pmb.helpers.pmaports
import pmb.helpers.run
@ -109,8 +108,8 @@ def parse_channels_cfg(args):
...}} """
# Cache during one pmbootstrap run
cache_key = "pmb.helpers.git.parse_channels_cfg"
if pmb.helpers.other.cache[cache_key]:
return pmb.helpers.other.cache[cache_key]
if args.cache[cache_key]:
return args.cache[cache_key]
# Read with configparser
cfg = configparser.ConfigParser()
@ -139,15 +138,13 @@ def parse_channels_cfg(args):
if channel == "channels.cfg":
continue # meta section
channel_new = pmb.helpers.pmaports.get_channel_new(channel)
ret["channels"][channel_new] = {}
ret["channels"][channel] = {}
for key in ["description", "branch_pmaports", "branch_aports",
"mirrordir_alpine"]:
value = cfg.get(channel, key)
ret["channels"][channel_new][key] = value
ret["channels"][channel][key] = value
pmb.helpers.other.cache[cache_key] = ret
args.cache[cache_key] = ret
return ret
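Sketch of the cached parse_channels_cfg() result (not part of the diff); only the keys iterated in the loop above are certain, the concrete values are illustrative.
# Hypothetical shape of the returned dict for the edge channel:
ret = {
    "channels": {
        "edge": {
            "description": "Rolling release channel",
            "branch_pmaports": "master",
            "branch_aports": "master",
            "mirrordir_alpine": "edge",
        },
    },
}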
@ -229,12 +226,12 @@ def pull(args, name_repo):
return 0
def is_outdated(path):
def is_outdated(args, path):
# FETCH_HEAD always exists in repositories cloned by pmbootstrap.
# Usually it does not (before first git fetch/pull), but there is no good
# fallback. For example, getting the _creation_ date of .git/HEAD is non-
# trivial with python on linux (https://stackoverflow.com/a/39501288).
# Note that we have to assume here that the user had fetched the "origin"
# Note that we have to assume here, that the user had fetched the "origin"
# repository. If the user fetched another repository, FETCH_HEAD would also
# get updated, even though "origin" may be outdated. For pmbootstrap status
# it is good enough, because it should help the users that are not doing
@ -246,29 +243,3 @@ def is_outdated(path):
date_outdated = time.time() - pmb.config.git_repo_outdated
return date_head <= date_outdated
def get_topdir(args, path):
""" :returns: a string with the top dir of the git repository, or an
empty string if it's not a git repository. """
return pmb.helpers.run.user(args, ["git", "rev-parse", "--show-toplevel"],
path, output_return=True, check=False).rstrip()
def get_files(args, path):
""" Get all files inside a git repository, that are either already in the
git tree or are not in gitignore. Do not list deleted files. To be used
for creating a tarball of the git repository.
:param path: top dir of the git repository
:returns: all files in a git repository as list, relative to path """
ret = []
files = pmb.helpers.run.user(args, ["git", "ls-files"], path,
output_return=True).split("\n")
files += pmb.helpers.run.user(args, ["git", "ls-files",
"--exclude-standard", "--other"], path,
output_return=True).split("\n")
for file in files:
if os.path.exists(f"{path}/{file}"):
ret += [file]
return ret

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import hashlib
import json
@ -17,7 +17,6 @@ def download(args, url, prefix, cache=True, loglevel=logging.INFO,
:param url: the http(s) address of to the file to download
:param prefix: for the cache, to make it easier to find (cache files
get a hash of the URL after the prefix)
:param cache: if True, and url is cached, do not download it again
:param loglevel: change to logging.DEBUG to only display the download
message in 'pmbootstrap log', not in stdout. We use
this when downloading many APKINDEX files at once, no
@ -41,8 +40,7 @@ def download(args, url, prefix, cache=True, loglevel=logging.INFO,
# Offline and not cached
if args.offline:
raise RuntimeError("File not found in cache and offline flag is"
f" enabled: {url}")
raise RuntimeError(f"File not found in cache and offline flag is enabled: {url}")
# Download the file
logging.log(loglevel, "Download " + url)
@ -89,6 +87,6 @@ def retrieve(url, headers=None, allow_404=False):
def retrieve_json(*args, **kwargs):
""" Fetch the contents of a URL, parse it as JSON and return it. See
retrieve() for the list of all parameters. """
""" Fetch the contents of a URL, parse it as JSON and return it. See retrieve() for the
list of all parameters. """
return json.loads(retrieve(*args, **kwargs))

View File

@ -1,7 +1,6 @@
# Copyright 2023 Danct12 <danct12@disroot.org>
# Copyright 2020 Danct12 <danct12@disroot.org>
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
import pmb.chroot
import pmb.chroot.apk
@ -10,38 +9,16 @@ import pmb.helpers.run
import pmb.helpers.pmaports
def check(args, pkgnames):
"""
Run apkbuild-lint on the supplied packages
:param pkgnames: Names of the packages to lint
"""
def check(args, pkgname):
pmb.chroot.apk.install(args, ["atools"])
# Mount pmaports.git inside the chroot so that we don't have to copy the
# package folders
pmaports = "/mnt/pmaports"
pmb.build.mount_pmaports(args, pmaports)
# Locate all APKBUILDs and make the paths be relative to the pmaports
# root
apkbuilds = []
for pkgname in pkgnames:
aport = pmb.helpers.pmaports.find(args, pkgname)
if not os.path.exists(aport + "/APKBUILD"):
raise ValueError("Path does not contain an APKBUILD file:" +
aport)
relpath = os.path.relpath(aport, args.aports)
apkbuilds.append(f"{relpath}/APKBUILD")
# Run apkbuild-lint in chroot from the pmaports mount point. This will
# print a nice source identifier à la "./cross/grub-x86/APKBUILD" for
# each violation.
pkgstr = ", ".join(pkgnames)
logging.info(f"(native) linting {pkgstr} with apkbuild-lint")
# Run apkbuild-lint on copy of pmaport in chroot
pmb.build.init(args)
pmb.build.copy_to_buildpath(args, pkgname)
logging.info("(native) linting " + pkgname + " with apkbuild-lint")
options = pmb.config.apkbuild_custom_valid_options
return pmb.chroot.root(args, ["apkbuild-lint"] + apkbuilds,
return pmb.chroot.user(args, ["apkbuild-lint", "APKBUILD"],
check=False, output="stdout",
output_return=True,
working_dir=pmaports,
working_dir="/home/pmos/build",
env={"CUSTOM_VALID_OPTIONS": " ".join(options)})

View File

@ -1,11 +1,8 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
import os
import sys
import pmb.config
logfd = None
class log_handler(logging.StreamHandler):
@ -23,40 +20,14 @@ class log_handler(logging.StreamHandler):
not self._args.quiet and
record.levelno >= logging.INFO):
stream = self.stream
styles = pmb.config.styles
msg_col = (
msg.replace(
"NOTE:",
f"{styles['BLUE']}NOTE:{styles['END']}",
1,
)
.replace(
"WARNING:",
f"{styles['YELLOW']}WARNING:{styles['END']}",
1,
)
.replace(
"ERROR:",
f"{styles['RED']}ERROR:{styles['END']}",
1,
)
.replace(
"DONE!",
f"{styles['GREEN']}DONE!{styles['END']}",
1,
)
)
stream.write(msg_col)
stream.write(msg)
stream.write(self.terminator)
self.flush()
# Everything: Write to logfd
msg = "(" + str(os.getpid()).zfill(6) + ") " + msg
logfd.write(msg + "\n")
logfd.flush()
self._args.logfd.write(msg + "\n")
self._args.logfd.flush()
except (KeyboardInterrupt, SystemExit):
raise
@ -78,31 +49,29 @@ def add_verbose_log_level():
logging.addLevelName(logging.VERBOSE, "VERBOSE")
logging.Logger.verbose = lambda inst, msg, * \
args, **kwargs: inst.log(logging.VERBOSE, msg, *args, **kwargs)
logging.verbose = lambda msg, *args, **kwargs: logging.log(logging.VERBOSE,
msg, *args,
**kwargs)
logging.verbose = lambda msg, *args, **kwargs: logging.log(logging.VERBOSE, msg,
*args, **kwargs)
def init(args):
"""
Set log format and add the log file descriptor to logfd, add the
Set log format and add the log file descriptor to args.logfd, add the
verbose log level.
"""
global logfd
# Set log file descriptor (logfd)
if args.details_to_stdout:
logfd = sys.stdout
setattr(args, "logfd", sys.stdout)
else:
# Require containing directory to exist (so we don't create the work
# folder and break the folder migration logic, which needs to set the
# version upon creation)
dir = os.path.dirname(args.log)
if os.path.exists(dir):
logfd = open(args.log, "a+")
setattr(args, "logfd", open(args.log, "a+"))
else:
logfd = open(os.devnull, "a+")
setattr(args, "logfd", open(os.devnull, "a+"))
if args.action != "init":
print(f"WARNING: Can't create log file in '{dir}', path"
print("WARNING: Can't create log file in '" + dir + "', path"
" does not exist!")
# Set log format

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import pmb.helpers.run
@ -6,7 +6,7 @@ import pmb.helpers.run
def ismount(folder):
"""
Ismount() implementation that works for mount --bind.
Ismount() implementation, that works for mount --bind.
Workaround for: https://bugs.python.org/issue29707
"""
folder = os.path.realpath(os.path.realpath(folder))
@ -45,7 +45,7 @@ def bind(args, source, destination, create_folders=True, umount=False):
# Actually mount the folder
pmb.helpers.run.root(args, ["mount", "--bind", source, destination])
# Verify that it has worked
# Verify, that it has worked
if not ismount(destination):
raise RuntimeError("Mount failed: " + source + " -> " + destination)
@ -77,7 +77,7 @@ def umount_all_list(prefix, source="/proc/mounts"):
"""
Parses `/proc/mounts` for all folders beginning with a prefix.
:source: can be changed for testcases
:returns: a list of folders that need to be umounted
:returns: a list of folders, that need to be umounted
"""
ret = []
prefix = os.path.realpath(prefix)
@ -100,7 +100,7 @@ def umount_all_list(prefix, source="/proc/mounts"):
def umount_all(args, folder):
"""
Umount all folders that are mounted inside a given folder.
Umount all folders, that are mounted inside a given folder.
"""
for mountpoint in umount_all_list(folder):
pmb.helpers.run.root(args, ["umount", mountpoint])

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import glob
import logging
@ -17,9 +17,10 @@ def folder_size(args, path):
faster than doing the same task in pure Python). This result is only
approximately right, but good enough for pmbootstrap's use case (#760).
:returns: folder size in kilobytes
:returns: folder size in bytes
"""
output = pmb.helpers.run.root(args, ["du", "-ks",
output = pmb.helpers.run.root(args, ["du", "--summarize",
"--block-size=1",
path], output_return=True)
# Only look at last line to filter out sudo garbage (#1766)
@ -29,7 +30,7 @@ def folder_size(args, path):
return ret
def check_grsec():
def check_grsec(args):
"""
Check if the current kernel is based on the grsec patchset, and if
the chroot_deny_chmod option is enabled. Raise an exception in that
@ -47,23 +48,17 @@ def check_binfmt_misc(args):
"""
Check if the 'binfmt_misc' module is loaded by checking, if
/proc/sys/fs/binfmt_misc/ exists. If it exists, then do nothing.
Otherwise, load the module and mount binfmt_misc.
If that fails as well, raise an exception pointing the user to the wiki.
Otherwise, raise an exception pointing the user to the Wiki.
"""
path = "/proc/sys/fs/binfmt_misc/status"
if os.path.exists(path):
return
# check=False: this might be built-in instead of being a module
pmb.helpers.run.root(args, ["modprobe", "binfmt_misc"], check=False)
# check=False: we check it below and print a more helpful message on error
pmb.helpers.run.root(args, ["mount", "-t", "binfmt_misc", "none",
"/proc/sys/fs/binfmt_misc"], check=False)
if not os.path.exists(path):
link = "https://postmarketos.org/binfmt_misc"
raise RuntimeError(f"Failed to set up binfmt_misc, see: {link}")
link = "https://postmarketos.org/binfmt_misc"
raise RuntimeError("It appears that your system has not loaded the"
" module 'binfmt_misc'. This is required to run"
" foreign architecture programs with QEMU (eg."
" armhf on x86_64):\n See: <" + link + ">")
def migrate_success(args, version):
@ -202,34 +197,6 @@ def migrate_work_folder(args):
migrate_success(args, 5)
current = 5
if current == 5:
# Ask for confirmation
logging.info("Changelog:")
logging.info("* besides edge, pmaports channels have the same name")
logging.info(" as the branch now (pmbootstrap#2015)")
logging.info("Migration will do the following:")
logging.info("* Zap your chroots")
logging.info("* Adjust subdirs of your locally built packages dir:")
logging.info(f" {args.work}/packages")
logging.info(" stable => v20.05")
logging.info(" stable-next => v21.03")
if not pmb.helpers.cli.confirm(args):
raise RuntimeError("Aborted.")
# Zap chroots to avoid potential "ERROR: Chroot 'native' was created
# for the 'stable' channel, but you are on the 'v20.05' channel now."
pmb.chroot.zap(args, False)
# Migrate
packages_dir = f"{args.work}/packages"
for old, new in pmb.config.pmaports_channels_legacy.items():
if os.path.exists(f"{packages_dir}/{old}"):
pmb.helpers.run.root(args, ["mv", old, new], packages_dir)
# Update version file
migrate_success(args, 6)
current = 6
# Can't migrate, user must delete it
if current != required:
raise RuntimeError("Sorry, we can't migrate that automatically. Please"
@ -267,49 +234,13 @@ def validate_hostname(hostname):
return False
# Check that it only contains valid chars
if not re.match(r"^[0-9a-z-\.]*$", hostname):
if not re.match("^[0-9a-z-]*$", hostname):
logging.fatal("ERROR: Hostname must only contain letters (a-z),"
" digits (0-9), minus signs (-), or periods (.)")
" digits (0-9) or minus signs (-)")
return False
# Check that doesn't begin or end with a minus sign or period
if re.search(r"^-|^\.|-$|\.$", hostname):
logging.fatal("ERROR: Hostname must not begin or end with a minus"
" sign or period")
# Check that doesn't begin or end with a minus sign
if hostname[:1] == "-" or hostname[-1:] == "-":
logging.fatal("ERROR: Hostname must not begin or end with a minus sign")
return False
return True
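Illustration (not part of the diff), checked against the newer variant above (the 2020 code rejects periods entirely); the hostnames are made-up examples and the earlier length check is assumed to pass.
assert validate_hostname("postmarketos-device")
assert validate_hostname("postmarketos.example.org")   # periods allowed by the newer regex
assert not validate_hostname("-starts-with-minus")
assert not validate_hostname("ends-with-period.")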
"""
pmbootstrap uses this dictionary to save the result of expensive
operations, so they work a lot faster the next time they are needed in the
same session. Usually the cache is written to and read from in the same
Python file, with code similar to the following:
def lookup(key):
if key in pmb.helpers.other.cache["mycache"]:
return pmb.helpers.other.cache["mycache"][key]
ret = expensive_operation(args, key)
pmb.helpers.other.cache["mycache"][key] = ret
return ret
"""
cache = None
def init_cache():
global cache
""" Add a caching dict (caches parsing of files etc. for the current
session) """
repo_update = {"404": [], "offline_msg_shown": False}
cache = {"apkindex": {},
"apkbuild": {},
"apk_min_version_checked": [],
"apk_repository_list_updated": [],
"built": {},
"find_aport": {},
"pmb.helpers.package.depends_recurse": {},
"pmb.helpers.package.get": {},
"pmb.helpers.repo.update": repo_update,
"pmb.helpers.git.parse_channels_cfg": {},
"pmb.config.pmaports.read_config": None}

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Functions that work with both pmaports and binary package repos. See also:
@ -33,16 +33,10 @@ def get(args, pkgname, arch, replace_subpkgnames=False, must_exist=True):
* None if the package was not found """
# Cached result
cache_key = "pmb.helpers.package.get"
if (
arch in pmb.helpers.other.cache[cache_key] and
pkgname in pmb.helpers.other.cache[cache_key][arch] and
replace_subpkgnames in pmb.helpers.other.cache[cache_key][arch][
pkgname
]
):
return pmb.helpers.other.cache[cache_key][arch][pkgname][
replace_subpkgnames
]
if (arch in args.cache[cache_key] and
pkgname in args.cache[cache_key][arch] and
replace_subpkgnames in args.cache[cache_key][arch][pkgname]):
return args.cache[cache_key][arch][pkgname][replace_subpkgnames]
# Find in pmaports
ret = None
@ -102,13 +96,11 @@ def get(args, pkgname, arch, replace_subpkgnames=False, must_exist=True):
# Save to cache and return
if ret:
if arch not in pmb.helpers.other.cache[cache_key]:
pmb.helpers.other.cache[cache_key][arch] = {}
if pkgname not in pmb.helpers.other.cache[cache_key][arch]:
pmb.helpers.other.cache[cache_key][arch][pkgname] = {}
pmb.helpers.other.cache[cache_key][arch][pkgname][
replace_subpkgnames
] = ret
if arch not in args.cache[cache_key]:
args.cache[cache_key][arch] = {}
if pkgname not in args.cache[cache_key][arch]:
args.cache[cache_key][arch][pkgname] = {}
args.cache[cache_key][arch][pkgname][replace_subpkgnames] = ret
return ret
# Could not find the package
@ -127,9 +119,9 @@ def depends_recurse(args, pkgname, arch):
"linux-samsung-i9100", ...] """
# Cached result
cache_key = "pmb.helpers.package.depends_recurse"
if (arch in pmb.helpers.other.cache[cache_key] and
pkgname in pmb.helpers.other.cache[cache_key][arch]):
return pmb.helpers.other.cache[cache_key][arch][pkgname]
if (arch in args.cache[cache_key] and
pkgname in args.cache[cache_key][arch]):
return args.cache[cache_key][arch][pkgname]
# Build ret (by iterating over the queue)
queue = [pkgname]
@ -149,9 +141,9 @@ def depends_recurse(args, pkgname, arch):
ret.sort()
# Save to cache and return
if arch not in pmb.helpers.other.cache[cache_key]:
pmb.helpers.other.cache[cache_key][arch] = {}
pmb.helpers.other.cache[cache_key][arch][pkgname] = ret
if arch not in args.cache[cache_key]:
args.cache[cache_key][arch] = {}
args.cache[cache_key][arch][pkgname] = ret
return ret

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging
@ -18,7 +18,7 @@ def package(args, pkgname, reason="", dry=False):
"""
# Current and new pkgrel
path = pmb.helpers.pmaports.find(args, pkgname) + "/APKBUILD"
apkbuild = pmb.parse.apkbuild(path)
apkbuild = pmb.parse.apkbuild(args, path)
pkgrel = int(apkbuild["pkgrel"])
pkgrel_new = pkgrel + 1
@ -34,8 +34,8 @@ def package(args, pkgname, reason="", dry=False):
pmb.helpers.file.replace(path, old, new)
# Verify
del pmb.helpers.other.cache["apkbuild"][path]
apkbuild = pmb.parse.apkbuild(path)
del(args.cache["apkbuild"][path])
apkbuild = pmb.parse.apkbuild(args, path)
if int(apkbuild["pkgrel"]) != pkgrel_new:
raise RuntimeError("Failed to bump pkgrel for package '" + pkgname +
"'. Make sure that there's a line with exactly the"
@ -81,10 +81,6 @@ def auto_apkindex_package(args, arch, aport, apk, dry=False):
", ".join(depends)))
missing = []
for depend in depends:
if depend.startswith("!"):
# Ignore conflict-dependencies
continue
providers = pmb.parse.apkindex.providers(args, depend, arch,
must_exist=False)
if providers == {}:
@ -111,20 +107,20 @@ def auto(args, dry=False):
paths = pmb.helpers.repo.apkindex_files(args, arch, alpine=False)
for path in paths:
logging.info("scan " + path)
index = pmb.parse.apkindex.parse(path, False)
index = pmb.parse.apkindex.parse(args, path, False)
for pkgname, apk in index.items():
origin = apk["origin"]
# Only increase once!
if origin in ret:
logging.verbose(
f"{pkgname}: origin '{origin}' found again")
logging.verbose("{}: origin '{}' found again".format(pkgname,
origin))
continue
aport_path = pmb.helpers.pmaports.find(args, origin, False)
if not aport_path:
logging.warning("{}: origin '{}' aport not found".format(
pkgname, origin))
continue
aport = pmb.parse.apkbuild(f"{aport_path}/APKBUILD")
aport = pmb.parse.apkbuild(args, aport_path + "/APKBUILD")
if auto_apkindex_package(args, arch, aport, apk, dry):
ret.append(pkgname)
return ret

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Functions that work with pmaports. See also:
@ -13,9 +13,9 @@ import pmb.parse
def _find_apkbuilds(args):
# Try to get a cached result first (we assume that the aports don't change
# Try to get a cached result first (we assume, that the aports don't change
# in one pmbootstrap call)
apkbuilds = pmb.helpers.other.cache.get("pmb.helpers.pmaports.apkbuilds")
apkbuilds = args.cache.get("pmb.helpers.pmaports.apkbuilds")
if apkbuilds is not None:
return apkbuilds
@ -27,12 +27,11 @@ def _find_apkbuilds(args):
"subfolders. Please put it only in one folder.")
apkbuilds[package] = apkbuild
# Sort dictionary so we don't need to do it over and over again in
# get_list()
# Sort dictionary so we don't need to do it over and over again in get_list()
apkbuilds = dict(sorted(apkbuilds.items()))
# Save result in cache
pmb.helpers.other.cache["pmb.helpers.pmaports.apkbuilds"] = apkbuilds
args.cache["pmb.helpers.pmaports.apkbuilds"] = apkbuilds
return apkbuilds
@ -100,54 +99,19 @@ def guess_main(args, subpkgname):
return os.path.dirname(path)
def _find_package_in_apkbuild(package, path):
"""
Look through subpackages and all provides to see if the APKBUILD at the
specified path contains (or provides) the specified package.
:param package: The package to search for
:param path: The path to the apkbuild
:return: True if the APKBUILD contains or provides the package
"""
apkbuild = pmb.parse.apkbuild(path)
# Subpackages
if package in apkbuild["subpackages"]:
return True
# Search for provides in both package and subpackages
apkbuild_pkgs = [apkbuild, *apkbuild["subpackages"].values()]
for apkbuild_pkg in apkbuild_pkgs:
if not apkbuild_pkg:
continue
# Provides (cut off before equals sign for entries like
# "mkbootimg=0.0.1")
for provides_i in apkbuild_pkg["provides"]:
# Ignore provides without version, they shall never be
# automatically selected
if "=" not in provides_i:
continue
if package == provides_i.split("=", 1)[0]:
return True
return False
def find(args, package, must_exist=True):
"""
Find the aport path that provides a certain subpackage.
Find the aport path, that provides a certain subpackage.
If you want the parsed APKBUILD instead, use pmb.helpers.pmaports.get().
:param must_exist: Raise an exception, when not found
:returns: the full path to the aport folder
"""
# Try to get a cached result first (we assume that the aports don't change
# Try to get a cached result first (we assume, that the aports don't change
# in one pmbootstrap call)
ret = None
if package in pmb.helpers.other.cache["find_aport"]:
ret = pmb.helpers.other.cache["find_aport"][package]
if package in args.cache["find_aport"]:
ret = args.cache["find_aport"][package]
else:
# Sanity check
if "*" in package:
@ -158,23 +122,36 @@ def find(args, package, must_exist=True):
if path:
ret = os.path.dirname(path)
# Try to guess based on the subpackage name
guess = guess_main(args, package)
if guess:
# ... but see if we were right
if _find_package_in_apkbuild(package, f'{guess}/APKBUILD'):
ret = guess
# Search in subpackages and provides
if not ret:
for path_current in _find_apkbuilds(args).values():
if _find_package_in_apkbuild(package, path_current):
apkbuild = pmb.parse.apkbuild(args, path_current)
found = False
# Subpackages
if package in apkbuild["subpackages"]:
found = True
# Provides (cut off before equals sign for entries like
# "mkbootimg=0.0.1")
if not found:
for provides_i in apkbuild["provides"]:
# Ignore provides without version, they shall never be
# automatically selected
if "=" not in provides_i:
continue
if package == provides_i.split("=", 1)[0]:
found = True
break
if found:
ret = os.path.dirname(path_current)
break
# Use the guess otherwise
# Guess a main package
if not ret:
ret = guess
ret = guess_main(args, package)
# Crash when necessary
if ret is None and must_exist:
@ -182,7 +159,7 @@ def find(args, package, must_exist=True):
package)
# Save result in cache
pmb.helpers.other.cache["find_aport"][package] = ret
args.cache["find_aport"][package] = ret
return ret
@ -193,9 +170,8 @@ def get(args, pkgname, must_exist=True, subpackages=True):
:param pkgname: the package name to find
:param must_exist: raise an exception when it can't be found
:param subpackages: also search for subpackages with the specified
names (slow! might need to parse all APKBUILDs to
find it)
:param subpackages: also search for subpackages with the specified names
(slow! might need to parse all APKBUILDs to find it)
:returns: relevant variables from the APKBUILD as dictionary, e.g.:
{ "pkgname": "hello-world",
"arch": ["all"],
@ -207,42 +183,17 @@ def get(args, pkgname, must_exist=True, subpackages=True):
if subpackages:
aport = find(args, pkgname, must_exist)
if aport:
return pmb.parse.apkbuild(f"{aport}/APKBUILD")
return pmb.parse.apkbuild(args, aport + "/APKBUILD")
else:
path = _find_apkbuilds(args).get(pkgname)
if path:
return pmb.parse.apkbuild(path)
return pmb.parse.apkbuild(args, path)
if must_exist:
raise RuntimeError("Could not find APKBUILD for package:"
f" {pkgname}")
raise RuntimeError(f"Could not find APKBUILD for package: {pkgname}")
return None
def find_providers(args, provide):
"""
Search for providers of the specified (virtual) package in pmaports.
Note: Currently only providers from a single APKBUILD are returned.
:param provide: the (virtual) package to search providers for
:returns: tuple list (pkgname, apkbuild_pkg) with providers, sorted by
provider_priority. The provider with the highest priority
(which would be selected by default) comes first.
"""
providers = {}
apkbuild = get(args, provide)
for subpkgname, subpkg in apkbuild["subpackages"].items():
for provides in subpkg["provides"]:
# Strip provides version (=$pkgver-r$pkgrel)
if provides.split("=", 1)[0] == provide:
providers[subpkgname] = subpkg
return sorted(providers.items(), reverse=True,
key=lambda p: p[1].get('provider_priority', 0))
def get_repo(args, pkgname, must_exist=True):
""" Get the repository folder of an aport.
@ -271,19 +222,3 @@ def check_arches(arches, arch):
if value in arches:
return True
return False
def get_channel_new(channel):
""" Translate legacy channel names to the new ones. Legacy names are still
supported for compatibility with old branches (pmb#2015).
:param channel: name as read from pmaports.cfg or channels.cfg, like
"edge", "v21.03" etc., or potentially a legacy name
like "stable".
:returns: name in the new format, e.g. "edge" or "v21.03"
"""
legacy_cfg = pmb.config.pmaports_channels_legacy
if channel in legacy_cfg:
ret = legacy_cfg[channel]
logging.verbose(f"Legacy channel '{channel}' translated to '{ret}'")
return ret
return channel

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
"""
Functions that work with binary package repos. See also:
@ -15,7 +15,7 @@ import pmb.helpers.run
def hash(url, length=8):
"""
Generate the hash that APK adds to the APKINDEX and apk packages
Generate the hash, that APK adds to the APKINDEX and apk packages
in its apk cache folder. It is the "12345678" part in this example:
"APKINDEX.12345678.tar.gz".
@ -43,10 +43,10 @@ def hash(url, length=8):
def urls(args, user_repository=True, postmarketos_mirror=True, alpine=True):
"""
Get a list of repository URLs, as they are in /etc/apk/repositories.
:param user_repository: add /mnt/pmbootstrap/packages
:param user_repository: add /mnt/pmbootstrap-packages
:param postmarketos_mirror: add postmarketos mirror URLs
:param alpine: add alpine mirror URLs
:returns: list of mirror strings, like ["/mnt/pmbootstrap/packages",
:returns: list of mirror strings, like ["/mnt/pmbootstrap-packages",
"http://...", ...]
"""
ret = []
@ -59,7 +59,7 @@ def urls(args, user_repository=True, postmarketos_mirror=True, alpine=True):
# Local user repository (for packages compiled with pmbootstrap)
if user_repository:
ret.append("/mnt/pmbootstrap/packages")
ret.append("/mnt/pmbootstrap-packages")
# Upstream postmarketOS binary repository
if postmarketos_mirror:
@ -96,7 +96,7 @@ def apkindex_files(args, arch=None, user_repository=True, pmos=True,
:returns: list of absolute APKINDEX.tar.gz file paths
"""
if not arch:
arch = pmb.config.arch_native
arch = args.arch_native
ret = []
# Local user repository (for packages compiled with pmbootstrap)
@ -127,9 +127,9 @@ def update(args, arch=None, force=False, existing_only=False):
# Skip in offline mode, only show once
cache_key = "pmb.helpers.repo.update"
if args.offline:
if not pmb.helpers.other.cache[cache_key]["offline_msg_shown"]:
if not args.cache[cache_key]["offline_msg_shown"]:
logging.info("NOTE: skipping package index update (offline mode)")
pmb.helpers.other.cache[cache_key]["offline_msg_shown"] = True
args.cache[cache_key]["offline_msg_shown"] = True
return False
# Architectures and retention time
@ -151,7 +151,7 @@ def update(args, arch=None, force=False, existing_only=False):
# Find update reason, possibly skip non-existing or known 404 files
reason = None
if url_full in pmb.helpers.other.cache[cache_key]["404"]:
if url_full in args.cache[cache_key]["404"]:
# We already attempted to download this file once in this
# session
continue
@ -179,18 +179,16 @@ def update(args, arch=None, force=False, existing_only=False):
" (" + str(len(outdated)) + " file(s))")
# Download and move to right location
for (i, (url, target)) in enumerate(outdated.items()):
pmb.helpers.cli.progress_print(args, i / len(outdated))
for url, target in outdated.items():
temp = pmb.helpers.http.download(args, url, "APKINDEX", False,
logging.DEBUG, True)
if not temp:
pmb.helpers.other.cache[cache_key]["404"].append(url)
args.cache[cache_key]["404"].append(url)
continue
target_folder = os.path.dirname(target)
if not os.path.exists(target_folder):
pmb.helpers.run.root(args, ["mkdir", "-p", target_folder])
pmb.helpers.run.root(args, ["cp", temp, target])
pmb.helpers.cli.progress_flush(args)
return True
@ -209,7 +207,7 @@ def alpine_apkindex_path(args, repo="main", arch=None):
raise RuntimeError("Invalid Alpine repository: " + repo)
# Download the file
arch = arch or pmb.config.arch_native
arch = arch or args.arch_native
update(args, arch)
# Find it on disk

View File

@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import logging

View File

@ -1,10 +1,40 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import shlex
import pmb.helpers.run_core
def flat_cmd(cmd, working_dir=None, env={}):
"""
Convert a shell command passed as list into a flat shell string with
proper escaping.
:param cmd: command as list, e.g. ["echo", "string with spaces"]
:param working_dir: when set, prepend "cd ...;" to execute the command
in the given working directory
:param env: dict of environment variables to be passed to the command, e.g.
{"JOBS": "5"}
:returns: the flat string, e.g.
echo 'string with spaces'
cd /home/pmos;echo 'string with spaces'
"""
# Merge env and cmd into escaped list
escaped = []
for key, value in env.items():
escaped.append(key + "=" + shlex.quote(value))
for i in range(len(cmd)):
escaped.append(shlex.quote(cmd[i]))
# Prepend working dir
ret = " ".join(escaped)
if working_dir:
ret = "cd " + shlex.quote(working_dir) + ";" + ret
return ret
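Quick sketch (not part of the diff) of what flat_cmd() returns for the inputs from its docstring:
flat_cmd(["echo", "string with spaces"])
# -> "echo 'string with spaces'"
flat_cmd(["echo", "string with spaces"], working_dir="/home/pmos", env={"JOBS": "5"})
# -> "cd /home/pmos;JOBS=5 echo 'string with spaces'"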
def user(args, cmd, working_dir=None, output="log", output_return=False,
check=None, env={}, sudo=False):
check=None, env={}, kill_as_root=False):
"""
Run a command on the host system as user.
@ -24,15 +54,15 @@ def user(args, cmd, working_dir=None, output="log", output_return=False,
# Add environment variables and run
if env:
cmd = ["sh", "-c", pmb.helpers.run_core.flat_cmd(cmd, env=env)]
cmd = ["sh", "-c", flat_cmd(cmd, env=env)]
return pmb.helpers.run_core.core(args, msg, cmd, working_dir, output,
output_return, check, sudo)
output_return, check, kill_as_root)
def root(args, cmd, working_dir=None, output="log", output_return=False,
check=None, env={}):
"""
Run a command on the host system as root, with sudo or doas.
Run a command on the host system as root, with sudo.
:param env: dict of environment variables to be passed to the command, e.g.
{"JOBS": "5"}
@ -41,8 +71,7 @@ def root(args, cmd, working_dir=None, output="log", output_return=False,
arguments and the return value.
"""
if env:
cmd = ["sh", "-c", pmb.helpers.run_core.flat_cmd(cmd, env=env)]
cmd = pmb.config.sudo(cmd)
cmd = ["sh", "-c", flat_cmd(cmd, env=env)]
cmd = ["sudo"] + cmd
return user(args, cmd, working_dir, output, output_return, check, env,
True)

View File

@ -1,14 +1,12 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import fcntl
import logging
import os
import selectors
import shlex
import subprocess
import sys
import threading
import time
import os
import pmb.helpers.run
""" For a detailed description of all output modes, read the description of
@ -16,42 +14,13 @@ import pmb.helpers.run
called by core(). """
def flat_cmd(cmd, working_dir=None, env={}):
"""
Convert a shell command passed as list into a flat shell string with
proper escaping.
:param cmd: command as list, e.g. ["echo", "string with spaces"]
:param working_dir: when set, prepend "cd ...;" to execute the command
in the given working directory
:param env: dict of environment variables to be passed to the command, e.g.
{"JOBS": "5"}
:returns: the flat string, e.g.
echo 'string with spaces'
cd /home/pmos;echo 'string with spaces'
"""
# Merge env and cmd into escaped list
escaped = []
for key, value in env.items():
escaped.append(key + "=" + shlex.quote(value))
for i in range(len(cmd)):
escaped.append(shlex.quote(cmd[i]))
# Prepend working dir
ret = " ".join(escaped)
if working_dir:
ret = "cd " + shlex.quote(working_dir) + ";" + ret
return ret
def sanity_checks(output="log", output_return=False, check=None):
def sanity_checks(output="log", output_return=False, check=None,
kill_as_root=False):
"""
Raise an exception if the parameters passed to core() don't make sense
(all parameters are described in core() below).
"""
vals = ["log", "stdout", "interactive", "tui", "background", "pipe"]
if output not in vals:
if output not in ["log", "stdout", "interactive", "tui", "background"]:
raise RuntimeError("Invalid output value: " + str(output))
# Prevent setting the check parameter with output="background".
@ -64,25 +33,19 @@ def sanity_checks(output="log", output_return=False, check=None):
if output_return and output in ["tui", "background"]:
raise RuntimeError("Can't use output_return with output: " + output)
if kill_as_root and output in ["interactive", "tui", "background"]:
raise RuntimeError("Can't use kill_as_root with output: " + output)
def background(cmd, working_dir=None):
def background(args, cmd, working_dir=None):
""" Run a subprocess in background and redirect its output to the log. """
ret = subprocess.Popen(cmd, stdout=pmb.helpers.logging.logfd,
stderr=pmb.helpers.logging.logfd, cwd=working_dir)
logging.debug(f"New background process: pid={ret.pid}, output=background")
ret = subprocess.Popen(cmd, stdout=args.logfd, stderr=args.logfd,
cwd=working_dir)
logging.debug("Started process in background with PID " + str(ret.pid))
return ret
def pipe(cmd, working_dir=None):
""" Run a subprocess in background and redirect its output to a pipe. """
ret = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stdin=subprocess.DEVNULL,
stderr=pmb.helpers.logging.logfd, cwd=working_dir)
logging.verbose(f"New background process: pid={ret.pid}, output=pipe")
return ret
def pipe_read(process, output_to_stdout=False, output_return=False,
def pipe_read(args, process, output_to_stdout=False, output_return=False,
output_return_buffer=False):
"""
Read all available output from a subprocess and copy it to the log and
@ -100,7 +63,7 @@ def pipe_read(process, output_to_stdout=False, output_return=False,
# Copy available output
out = process.stdout.readline()
if len(out):
pmb.helpers.logging.logfd.buffer.write(out)
args.logfd.buffer.write(out)
if output_to_stdout:
sys.stdout.buffer.write(out)
if output_return:
@ -108,21 +71,21 @@ def pipe_read(process, output_to_stdout=False, output_return=False,
continue
# No more output (flush buffers)
pmb.helpers.logging.logfd.flush()
args.logfd.flush()
if output_to_stdout:
sys.stdout.flush()
return
def kill_process_tree(args, pid, ppids, sudo):
def kill_process_tree(args, pid, ppids, kill_as_root):
"""
Recursively kill a pid and its child processes
:param pid: process id that will be killed
:param ppids: list of process id and parent process id tuples (pid, ppid)
:param sudo: use sudo to kill the process
:param kill_as_root: use sudo to kill the process
"""
if sudo:
if kill_as_root:
pmb.helpers.run.root(args, ["kill", "-9", str(pid)],
check=False)
else:
@@ -131,32 +94,32 @@ def kill_process_tree(args, pid, ppids, sudo):
for (child_pid, child_ppid) in ppids:
if child_ppid == str(pid):
kill_process_tree(args, child_pid, ppids, sudo)
kill_process_tree(args, child_pid, ppids, kill_as_root)
def kill_command(args, pid, sudo):
def kill_command(args, pid, kill_as_root):
"""
Kill a command process and recursively kill its child processes
:param pid: process id that will be killed
:param sudo: use sudo to kill the process
:param kill_as_root: use sudo to kill the process
"""
cmd = ["ps", "-e", "-o", "pid,ppid"]
cmd = ["ps", "-e", "-o", "pid=,ppid=", "--noheaders"]
ret = subprocess.run(cmd, check=True, stdout=subprocess.PIPE)
ppids = []
proc_entries = ret.stdout.decode("utf-8").rstrip().split('\n')[1:]
proc_entries = ret.stdout.decode("utf-8").rstrip().split('\n')
for row in proc_entries:
items = row.split()
if len(items) != 2:
raise RuntimeError("Unexpected ps output: " + row)
ppids.append(items)
kill_process_tree(args, pid, ppids, sudo)
kill_process_tree(args, pid, ppids, kill_as_root)
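The (pid, ppid) bookkeeping above can be exercised on its own; a rough sketch without any sudo handling (ps output formatting can vary between implementations):

import subprocess

# One (pid, ppid) pair per process; the "=" in -o suppresses the header line.
ret = subprocess.run(["ps", "-e", "-o", "pid=,ppid="],
                     check=True, stdout=subprocess.PIPE)
ppids = [row.split() for row in ret.stdout.decode("utf-8").rstrip().split("\n")]

def descendants(pid, ppids):
    # Same recursion as kill_process_tree(), but only collecting the pids
    # instead of killing them.
    found = []
    for child_pid, child_ppid in ppids:
        if child_ppid == str(pid):
            found += [child_pid] + descendants(child_pid, ppids)
    return found

print(descendants(1, ppids))  # all processes below PID 1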
def foreground_pipe(args, cmd, working_dir=None, output_to_stdout=False,
output_return=False, output_timeout=True,
sudo=False, stdin=None):
kill_as_root=False):
"""
Run a subprocess in foreground with redirected output and optionally kill
it after being silent for too long.
@@ -168,7 +131,7 @@ def foreground_pipe(args, cmd, working_dir=None, output_to_stdout=False,
:param output_timeout: kill the process when it doesn't print any output
after a certain time (configured with --timeout)
and raise a RuntimeError exception
:param sudo: use sudo to kill the process when it hits the timeout
:param kill_as_root: use sudo to kill the process when it hits the timeout
:returns: (code, output)
* code: return code of the program
* output: ""
@@ -176,8 +139,7 @@ def foreground_pipe(args, cmd, working_dir=None, output_to_stdout=False,
"""
# Start process in background (stdout and stderr combined)
process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
stderr=subprocess.STDOUT, cwd=working_dir,
stdin=stdin)
stderr=subprocess.STDOUT, cwd=working_dir)
# Make process.stdout non-blocking
handle = process.stdout.fileno()
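The context above ends right after obtaining the file descriptor; a hedged sketch of one common way to make that pipe non-blocking for the read loop (fcntl-based, assumed rather than quoted from the file, with a placeholder command):

import fcntl
import os
import subprocess
import time

process = subprocess.Popen(["sh", "-c", "sleep 1; echo hello"],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
handle = process.stdout.fileno()
flags = fcntl.fcntl(handle, fcntl.F_GETFL)
fcntl.fcntl(handle, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# With O_NONBLOCK, readline() returns immediately when nothing is buffered
# instead of waiting for the process to print something.
while process.poll() is None:
    out = process.stdout.readline()
    if out:
        print(out.decode("utf-8"), end="")
    else:
        time.sleep(0.1)  # no output yet, check again shortly
rest = process.stdout.read()  # drain anything printed right before exit
if rest:
    print(rest.decode("utf-8"), end="")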
@@ -203,15 +165,15 @@ def foreground_pipe(args, cmd, working_dir=None, output_to_stdout=False,
str(args.timeout) + " seconds. Killing it.")
logging.info("NOTE: The timeout can be increased with"
" 'pmbootstrap -t'.")
kill_command(args, process.pid, sudo)
kill_command(args, process.pid, kill_as_root)
continue
# Read all currently available output
pipe_read(process, output_to_stdout, output_return,
pipe_read(args, process, output_to_stdout, output_return,
output_buffer)
# There may still be output after the process quit
pipe_read(process, output_to_stdout, output_return, output_buffer)
pipe_read(args, process, output_to_stdout, output_return, output_buffer)
# Return the return code and output (the output gets built as list of
# output chunks and combined at the end, this is faster than extending the
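The comment above refers to the usual join-once idiom; a toy illustration with made-up values:

# Appending chunks and joining once avoids re-copying the whole string on
# every iteration, which is what repeated "output += chunk" would do.
chunks = []
for i in range(5):
    chunks.append("line {}\n".format(i))
output = "".join(chunks)
print(output, end="")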
@@ -233,55 +195,8 @@ def foreground_tui(cmd, working_dir=None):
return process.wait()
def check_return_code(args, code, log_message):
"""
Check the return code of a command.
:param code: exit code to check
:param log_message: simplified and more readable form of the command, e.g.
"(native) % echo test" instead of the full command with
entering the chroot and more escaping
:raises RuntimeError: when the code indicates that the command failed
"""
if code:
logging.debug("^" * 70)
logging.info("NOTE: The failed command's output is above the ^^^ line"
" in the log file: " + args.log)
raise RuntimeError(f"Command failed (exit code {str(code)}): " +
log_message)
def sudo_timer_iterate():
"""
Run sudo -v and schedule a new timer to repeat the same.
"""
if pmb.config.which_sudo() == "sudo":
subprocess.Popen(["sudo", "-v"]).wait()
else:
subprocess.Popen(pmb.config.sudo(["true"])).wait()
timer = threading.Timer(interval=60, function=sudo_timer_iterate)
timer.daemon = True
timer.start()
def sudo_timer_start():
"""
Start a timer to call sudo -v periodically, so that the password is only
needed once.
"""
if "sudo_timer_active" in pmb.helpers.other.cache:
return
pmb.helpers.other.cache["sudo_timer_active"] = True
sudo_timer_iterate()
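The timer above re-arms itself after every run; the same pattern in isolation, with a placeholder action instead of "sudo -v" and a shorter interval for illustration:

import threading
import time

def keep_alive():
    print("refreshing credentials...")  # stand-in for running "sudo -v"
    timer = threading.Timer(interval=5, function=keep_alive)
    timer.daemon = True  # a daemon timer dies together with the main thread
    timer.start()

keep_alive()
time.sleep(12)  # the main program keeps running; the timer fires every 5 s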
def core(args, log_message, cmd, working_dir=None, output="log",
output_return=False, check=None, sudo=False, disable_timeout=False):
output_return=False, check=None, kill_as_root=False, disable_timeout=False):
"""
Run a command and create a log entry.
@@ -300,57 +215,41 @@ def core(args, log_message, cmd, working_dir=None, output="log",
"stdout", "interactive", "background"), so it's easy to
trace what pmbootstrap does.
The exceptions are "tui" (text-based user interface), where
The exception is "tui" (text-based user interface), where
it does not make sense to write to the log file (think of
ncurses UIs, such as "menuconfig") and "pipe" where the
output is written to a pipe for manual asynchronous
consumption by the caller.
ncurses UIs, such as "menuconfig").
When the output is not set to "interactive", "tui",
"background" or "pipe", we kill the process if it does not
output anything for 5 minutes (time can be set with
"pmbootstrap --timeout").
When the output is not set to "interactive", "tui" or
"background", we kill the process if it does not output
anything for 5 minutes (time can be set with "pmbootstrap
--timeout").
The table below shows all possible values along with
their properties. "wait" indicates that we wait for the
process to complete.
output value | timeout | out to log | out to stdout | wait | pass stdin
------------------------------------------------------------------------
"log" | x | x | | x |
"stdout" | x | x | x | x |
"interactive" | | x | x | x | x
"tui" | | | x | x | x
"background" | | x | | |
"pipe" | | | | |
output value | timeout | out to log | out to stdout | wait
-----------------------------------------------------------
"log" | x | x | | x
"stdout" | x | x | x | x
"interactive" | | x | x | x
"tui" | | | x | x
"background" | | x | |
:param output_return: in addition to writing the program's output to the
destinations above in real time, write to a buffer
and return it as string when the command has
completed. This is not possible when output is
"background", "pipe" or "tui".
"background" or "tui".
:param check: an exception will be raised when the command's return code
is not 0. Set this to False to disable the check. This
parameter can not be used when the output is "background" or
"pipe".
:param sudo: use sudo to kill the process when it hits the timeout.
parameter can not be used when the output is "background".
:param kill_as_root: use sudo to kill the process when it hits the timeout.
:returns: * program's return code (default)
* subprocess.Popen instance (output is "background" or "pipe")
* subprocess.Popen instance (output is "background")
* the program's entire output (output_return is True)
"""
sanity_checks(output, output_return, check)
# Preserve proxy environment variables
env = {}
for var in ["FTP_PROXY", "ftp_proxy", "HTTP_PROXY", "http_proxy",
"HTTPS_PROXY", "https_proxy", "HTTP_PROXY_AUTH"]:
if var in os.environ:
env[var] = os.environ[var]
if env:
cmd = ["sh", "-c", flat_cmd(cmd, env=env)]
if args.sudo_timer and sudo:
sudo_timer_start()
sanity_checks(output, output_return, check, kill_as_root)
# Log simplified and full command (pmbootstrap -v)
logging.debug(log_message)
@@ -358,11 +257,7 @@ def core(args, log_message, cmd, working_dir=None, output="log",
# Background
if output == "background":
return background(cmd, working_dir)
# Pipe
if output == "pipe":
return pipe(cmd, working_dir)
return background(args, cmd, working_dir)
# Foreground
output_after_run = ""
@@ -376,18 +271,18 @@ def core(args, log_message, cmd, working_dir=None, output="log",
output_to_stdout = True
output_timeout = output in ["log", "stdout"] and not disable_timeout
stdin = subprocess.DEVNULL if output in ["log", "stdout"] else None
(code, output_after_run) = foreground_pipe(args, cmd, working_dir,
output_to_stdout,
output_return,
output_timeout,
sudo, stdin)
kill_as_root)
# Check the return code
if check is not False:
check_return_code(args, code, log_message)
if code and check is not False:
logging.debug("^" * 70)
logging.info("NOTE: The failed command's output is above the ^^^ line"
" in the log file: " + args.log)
raise RuntimeError("Command failed: " + log_message)
# Return (code or output string)
return output_after_run if output_return else code
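A condensed sketch of the dispatch the table above describes, standalone and heavily simplified (no logging, timeout, proxy or sudo handling; the commands are hypothetical):

import subprocess

def run_sketch(cmd, output="log", output_return=False):
    if output == "background":
        # don't wait, hand the Popen object back to the caller
        return subprocess.Popen(cmd, stdout=subprocess.DEVNULL)
    if output == "tui":
        # give the command the whole terminal and just wait for it
        return subprocess.run(cmd).returncode
    # "log" / "stdout" / "interactive": wait, optionally capture the output
    ret = subprocess.run(cmd, stdout=subprocess.PIPE if output_return else None)
    return ret.stdout.decode("utf-8") if output_return else ret.returncode

print(run_sketch(["echo", "hello"], output_return=True), end="")  # hello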


@@ -1,4 +1,4 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import logging
@@ -92,7 +92,7 @@ def print_checks_git_repo(args, repo, details=True):
log_ok("up to date with remote branch")
# Outdated remote information
if pmb.helpers.git.is_outdated(path):
if pmb.helpers.git.is_outdated(args, path):
return log_nok_ret(-5, "outdated remote information",
"update with 'pmbootstrap pull'")
log_ok("remote information updated recently (via git fetch/pull)")
@@ -121,7 +121,7 @@ def print_checks_chroots_outdated(args, details):
:returns: list of unresolved checklist items """
if pmb.config.workdir.chroots_outdated(args):
logging.info("[NOK] Chroots not zapped recently")
return ["Run 'pmbootstrap zap' to delete possibly outdated chroots"]
return ["Run 'pmbootststrap zap' to delete possibly outdated chroots"]
elif details:
logging.info("[OK ] Chroots zapped recently (or non-existing)")
return []


@@ -1,4 +1,4 @@
# Copyright 2023 Clayton Craft
# Copyright 2020 Clayton Craft
# SPDX-License-Identifier: GPL-3.0-or-later
import os
import glob
@@ -12,11 +12,9 @@ def list(args, arch):
:param arch: device architecture, for which the UIs must be available
:returns: [("none", "No graphical..."), ("weston", "Wayland reference...")]
"""
ret = [("none", "Bare minimum OS image for testing and manual"
" customization. The \"console\" UI should be selected if"
" a graphical UI is not desired.")]
ret = [("none", "No graphical environment")]
for path in sorted(glob.glob(args.aports + "/main/postmarketos-ui-*")):
apkbuild = pmb.parse.apkbuild(f"{path}/APKBUILD")
apkbuild = pmb.parse.apkbuild(args, path + "/APKBUILD")
ui = os.path.basename(path).split("-", 2)[2]
if pmb.helpers.package.check_arch(args, apkbuild["pkgname"], arch):
ret.append((ui, apkbuild["pkgdesc"]))
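For reference, the directory scan in list() can be reproduced on its own; a sketch with an assumed pmaports checkout path and without the APKBUILD parsing or architecture check (both are pmbootstrap internals):

import glob
import os

# Assumed default location of the pmaports checkout; adjust to the real work dir.
aports = os.path.expanduser("~/.local/var/pmbootstrap/cache_git/pmaports")
for path in sorted(glob.glob(aports + "/main/postmarketos-ui-*")):
    ui = os.path.basename(path).split("-", 2)[2]  # "postmarketos-ui-phosh" -> "phosh"
    print(ui)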


@@ -1,9 +1,7 @@
# Copyright 2023 Oliver Smith
# Copyright 2020 Oliver Smith
# SPDX-License-Identifier: GPL-3.0-or-later
from pmb.install._install import install
from pmb.install._install import get_kernel_package
from pmb.install.partition import partition
from pmb.install.partition import partition_cgpt
from pmb.install.format import format
from pmb.install.format import get_root_filesystem
from pmb.install.partition import partitions_mount

Some files were not shown because too many files have changed in this diff.