Deleting old Rundeck logs / executions

Rundeck is an open source automation service with a web console, command line tools and a Web API. It lets you easily run automation tasks across a set of nodes.

It’s quite convenient to automate most of the tedious, repetitive maintenance and housekeeping tasks with Rundeck. Unfortunately, if you use Rundeck for a long time, you might end up with a huge amount of log files. As it turns out, Rundeck does not automatically clean up old log files. In my case, I had a task running roughly every 30 seconds, which led to more than 150,000 executions and a few gigabytes of log files within a few days.

This behaviour is well known to the maintainers of Rundeck, and there is / was an active discussion about the topic. The issue was concluded with the statement:

the workaround is to use the rd executions deletebulk command.

In other words: build it yourself and use the fancy Rundeck CLI to clean up the old logs. Wait a sec… I could automate this and use Rundeck to clean Rundeck. This would be some kind of Rundeckception.
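Assuming the RD_* environment variables are set, a single invocation of that command looks like this (the project name and thresholds are placeholders; the flags are the same ones used in the script below):

# delete up to 20 executions older than 7 days from the given project
rd executions deletebulk -y -m 20 --older 7d -p MyProject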

Fortunately Alex Simenduev (alias shamil) had already dealt with this issue and published a suitable Gist:

#!/bin/bash -e

# export RD_URL=https://rundeck.example.com RD_USER=admin RD_PASSWORD=admin RD_HTTP_TIMEOUT=300

# make sure rd & jq commands are in the PATH
which -- rd jq >/dev/null

del_executions() {
    local project="$1"

    # Delete executions in batches until deletebulk fails,
    # i.e. until there is nothing left to delete
    while true; do
        rd executions deletebulk -y -m "${RD_OPTION_BATCH:-20}" --older "${RD_OPTION_OLDER_THAN:-7d}" -p "$project" || break
        sleep 1s
    done
}

del_executions "$1"

exit 0
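If the Rundeck CLI and jq are already installed locally, the script (saved e.g. as clean.sh, the name the Dockerfile below expects) can be used directly; the URL and credentials are placeholders taken from the comment at the top of the script:

export RD_URL=https://rundeck.example.com RD_USER=admin RD_PASSWORD=admin RD_HTTP_TIMEOUT=300
./clean.sh MyProject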

I made some minor adaptations to his original version, as the original loop had to be cancelled manually. The script requires the Rundeck CLI and jq to be available on the system. You can use the following Dockerfile as a quick start for cleaning up your old Rundeck logs:

FROM thofer/rundeck-cli

ENV RD_URL=https://rundeck.example.com \
    RD_USER=admin \
    RD_PASSWORD=admin \
    RD_HTTP_TIMEOUT=300

COPY clean.sh /tmp

RUN apt-get update -y && apt-get install -y jq
RUN chmod +x /tmp/clean.sh

ENTRYPOINT ["/tmp/clean.sh"]

Build it by putting both files (clean.sh and the Dockerfile) into the same directory:

docker build -t rdc .
# rdc = rundeck-cleaner

And execute it afterwards by passing the respective ENVs with the targeted project name as the first argument:

docker run -it -e RD_URL=https://rundeck.unicorn.com -e RD_PASSWORD=Unicorn rdc Housekeeping
# where "Housekeeping" would be the project name (refer to the Rundeck admin interface)
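And to complete the Rundeckception, the clean-up can itself be scheduled as a Rundeck job. The following is a minimal sketch of a job definition in Rundeck’s YAML import format; the job name, script path and schedule are hypothetical, and field names may need adjusting for your Rundeck version (the RD_OPTION_* variables in the script map to job options of the same name):

# cleanup.yaml (hypothetical) - import via the web UI or rd jobs load
- name: rundeck-cleanup
  description: Delete executions and logs older than 7 days
  schedule:
    time:
      hour: '03'
      minute: '00'
  sequence:
    keepgoing: false
    strategy: node-first
    commands:
      - exec: /opt/scripts/clean.sh Housekeeping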

OpenShift 3.9 missing Webconsole

Getting started with OpenShift 3.9 differs from prior versions. The OpenShift web console, which was provided by default in the past, seems to no longer be shipped and enabled by default. Trying to access it via https://oc.example.com:8443 reveals the following (unhelpful) message:

missing service (service “webconsole” not found)
missing route (service “webconsole” not found)

Digging through GitHub issues reveals that this behavior is intentional and that users have to deploy the web console manually from now on (at least when not relying on the official Ansible automation). The required deployment / route / service configurations are provided in the official OpenShift Origin repository. Running the following commands gets you up and running with a fresh web console under OpenShift Origin 3.9:

$ git clone https://github.com/openshift/origin
$ cd origin
$ echo "Optionally check out the specific version you are trying to get running"
$ oc login -u system:admin
$ oc create namespace openshift-web-console
$ # Customize install/origin-web-console/console-config.yaml before running the following command
$ # Replace 127.0.0.1 with your own IP / domain if you are not running locally
$ oc process -f install/origin-web-console/console-template.yaml -p "API_SERVER_CONFIG=$(cat install/origin-web-console/console-config.yaml)" | oc apply -n openshift-web-console -f -
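Afterwards you can check that the console pod and route actually came up in the new namespace (both are standard oc subcommands):

$ oc get pods -n openshift-web-console
$ oc get route -n openshift-web-console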

Fortunately the ugly error message will most probably disappear in subsequent releases. There is an open PR dealing with it: Bug 1538006 – Improve error page when console not installed

How to build your own Open GApps package

The Open GApps project provides a convenient way to get up-to-date Google App packages (most often used in combination with custom ROMs). Unfortunately they do not always offer the most recent versions and it takes some time until new Android releases are reflected on the official Open GApps project website. As of this writing, Android 8.1 is the most recent Android version and is not yet in the portfolio of the Open GApps project.

Fortunately they publish their automation and all necessary assets on GitHub, so building the package from their sources yourself is pretty feasible. The following guide targets macOS, but the steps should be quite similar on most Linux distributions (hint: you can use the beevelop/android Docker image to save some time):

# Install lzip via brew (Open GApps depends on it)
brew install lzip
# Clone the main repository and switch into it
git clone git@github.com:opengapps/opengapps.git
cd opengapps
# Download the sources for your targeted architecture (arm64 in my case)
# Downloading and unpacking the repositories takes quite some time
# Get a coffee or two in the meantime
./download_sources.sh --shallow arm64

# The final step is building the package itself
# This also does take quite some time
# especially depending on your CPU power (due to compression stuff, etc.)
make arm64-27-stock
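The make target follows the pattern <architecture>-<API level>-<variant>: arm64-27-stock builds the stock variant for arm64 devices on API level 27 (i.e. Android 8.1). Other combinations follow the same scheme, for example (variant names as listed in the Open GApps documentation):

make arm-25-pico     # Android 7.1 for arm, the smallest variant
make x86_64-27-nano  # Android 8.1 for x86_64, slightly bigger than pico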

The following script might help you getting started by using the Docker image mentioned above. Just run the following commands inside the Docker container (e.g. docker run -it --rm beevelop/android):

# Refresh the package index first, then install the build dependencies
apt update && apt install -y build-essential lzip git zip
# An SSH key on the machine is required and has to be added to your GitHub account
git clone git@github.com:opengapps/opengapps.git
cd opengapps
./download_sources.sh --shallow arm64
make arm64-27-stock
# The command should greet you with:
# SUCCESS: Built Open GApps variation stock with API 27 level for arm64 as [...path...]

Afterwards you can transfer the resulting zip file from the Docker container to your host machine using docker cp:

docker cp practical_wilson:/root/opengapps/out/open_gapps-arm64-8.1-stock-20180203-UNOFFICIAL.zip .
# from there on scp it to your local computer and put it on your gorgeous mobile phone

Installing Python with PIP on Boot2Docker

Boot2Docker is quite awesome for getting started with Docker on Windows or OS X / macOS. Tools like "minishift" still use Boot2Docker to simplify their setup. Unfortunately, Boot2Docker does not provide a solid package manager by default (like most Linux distributions do nowadays with apt, yum, etc.). But as Boot2Docker is based on Tiny Core Linux, it offers tce-load for some basic package management.

tce-load even provides a full-blown python package, which enables us to install Python with PIP on Boot2Docker. Running the following snippet gets you up and running:

tce-load -wi python
curl https://bootstrap.pypa.io/get-pip.py | sudo python -
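If everything worked, both python and pip should now be available on the PATH:

python --version
pip --version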

Further references

  • List of installable packages for tce-load: http://distro.ibiblio.org/tinycorelinux/tcz_2x.html
  • Documentation of the tce-load command: http://wiki.tinycorelinux.net/wiki:tce-load

OpenShift: “systemd” is different from docker cgroup driver: “cgroupfs”

While messing around with OpenShift Origin 3.6.0 and their Docker Quickstart guide, I stumbled upon a configuration incompatibility with Kubernetes:

F0819 08:47:34.208186    9065 node.go:282] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The issue is caused by Docker using cgroupfs as its Cgroup Driver. You can verify your Docker config by running docker info | grep Cgroup. You can change your Cgroup Driver to systemd using the native.cgroupdriver parameter for the Docker daemon. Add the following argument to your /etc/systemd/system/docker.service.d/docker-thinpool.conf (ExecStart) file and /etc/default/docker (DOCKER_OPTS):

--exec-opt native.cgroupdriver=systemd
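For the systemd drop-in this means overriding ExecStart. A minimal sketch, assuming the stock dockerd binary path; keep any existing flags (e.g. your thinpool options) in the second ExecStart line:

# /etc/systemd/system/docker.service.d/docker-thinpool.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

The empty ExecStart= line resets the original command before redefining it.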

Relaunch dockerd afterwards and verify your changes:

systemctl daemon-reload
systemctl restart docker.service
docker info | grep Cgroup

Further references

  • docker change cgroup driver to systemd
  • openshift/origin docker container fails to start: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs" · Issue #14766 · openshift/origin

Affected system configuration

$ docker version

Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:51:12 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:50:04 2017
 OS/Arch:      linux/amd64
 Experimental: false

$ lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:   xenial

Docker: Error opening terminal: unknown.

Executing bash in a running Docker container via docker exec -it [container] bash and then trying to edit a file with nano (or any other of those spiffy editors) might result in the editor refusing to do its job:

Error opening terminal: unknown.

Thankfully there are some smart folks out there who traced the TERM variable down as the underlying troublemaker. Assigning xterm as the variable’s value resolves the complications:

export TERM=xterm
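Alternatively, you can skip the manual export and set the variable directly when entering the container, as reasonably recent Docker versions support the -e flag for docker exec:

docker exec -it -e TERM=xterm [container] bash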