Deleting old Rundeck logs / executions

Rundeck is an open source automation service with a web console, command line tools and a WebAPI. It lets you easily run automation tasks across a set of nodes.

It’s quite convenient to automate most of the tedious, repetitive maintenance and housekeeping tasks with Rundeck. Unfortunately, if you use Rundeck for a long time, you might end up with a huge amount of log files. As it turns out, Rundeck does not automatically clean up old log files. In my case, I had a task running roughly every 30 seconds, which led to more than 150,000 executions and a few gigabytes of log files within a few days.
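To get an idea of how much disk space the execution logs already eat up, a quick check on the Rundeck host helps. The paths below are the defaults of a deb/rpm package installation and might differ in your setup:

# rough overview of the space consumed by execution and service logs (default package install paths)
du -sh /var/lib/rundeck/logs /var/log/rundeck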

This behaviour is well known to the maintainers of Rundeck, and there has been an active discussion about the topic. The issue was closed with the statement:

the workaround is to use the rd executions deletebulk command.

In other words: build it yourself and use the fancy Rundeck CLI to clean up the old logs. Wait a sec… I could automate this and use Rundeck to clean Rundeck. This would be some kind of Rundeckception.
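For a one-off cleanup the quoted command can of course be invoked by hand, roughly like this (project name and retention period are placeholders):

# delete all executions older than 7 days for the project "Housekeeping"
rd executions deletebulk -y --older 7d -p Housekeeping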

Fortunately, Alex Simenduev (aka shamil) already dealt with this issue and published a suitable Gist:

#!/bin/bash -e

# export RD_URL=https://rundeck.example.com RD_USER=admin RD_PASSWORD=admin RD_HTTP_TIMEOUT=300

# make sure rd & jq commands are in the PATH
which -- rd jq >/dev/null

del_executions() {
    local project=$1

    # delete executions in batches until deletebulk fails,
    # i.e. nothing is left to delete for the given project
    while true; do
        rd executions deletebulk -y -m "${RD_OPTION_BATCH:-20}" --older "${RD_OPTION_OLDER_THAN:-7d}" -p "$project" || break
        sleep 1s
    done
}

del_executions "$1"

exit 0
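Run directly, the script takes the project name as its only argument; a quick sketch with hypothetical values:

export RD_URL=https://rundeck.example.com RD_USER=admin RD_PASSWORD=admin RD_HTTP_TIMEOUT=300
./clean.sh Housekeeping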

I made some minor adaptations to his original version, as the original loop had to be cancelled manually. The script requires the Rundeck CLI and jq to be available on the system. You can use the following Dockerfile as a quickstart for cleaning up your old Rundeck logs:

FROM thofer/rundeck-cli

ENV RD_URL=https://rundeck.example.com \
    RD_USER=admin \
    RD_PASSWORD=admin \
    RD_HTTP_TIMEOUT=300

# jq is required by clean.sh
RUN apt-get update && apt-get install -y jq && rm -rf /var/lib/apt/lists/*

COPY clean.sh /tmp/
RUN chmod +x /tmp/clean.sh

ENTRYPOINT ["/tmp/clean.sh"]

Build it by putting both files (clean.sh and the Dockerfile) into the same directory:

docker build -t rdc .
# rdc = rundeck-cleaner

Afterwards, execute it by passing the respective ENVs and the targeted project name as the first argument:

docker run -it -e RD_URL=https://rundeck.unicorn.com -e RD_PASSWORD=Unicorn rdc Housekeeping
# where "Housekeeping" is the project name (refer to the Rundeck admin interface)
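If you’d rather not go full Rundeckception and schedule the cleanup through Rundeck itself, a plain cron entry on the Docker host does the trick as well (hypothetical schedule, values taken from the example above):

# crontab -e on the Docker host: run the cleanup container every night at 03:00
0 3 * * * docker run --rm -e RD_URL=https://rundeck.unicorn.com -e RD_PASSWORD=Unicorn rdc Housekeeping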

CocoaPods can’t reach GitHub

Running CocoaPods on “old” versions of OS X / macOS might lead to some weird behaviour – namely CocoaPods telling you that GitHub might be down:

$ pod repo update --verbose

Updating spec repo `master`
[!] Failed to connect to GitHub to update the CocoaPods/Specs specs repo - Please check if you are offline, or that GitHub is down

/Library/Ruby/Gems/2.0.0/gems/cocoapods-core-1.3.1/lib/cocoapods-core/github.rb:105:in `rescue in modified_since_commit'
[...]

The error message not only sounds insane, it is actually completely misleading. The issue seems to be caused by a failing request to GitHub, which in turn originates from a failed TLS handshake (a problem that occurs quite often with the old Ruby 2.0.0 shipped with the system). More recent Ruby versions have no trouble communicating via SSL, and using one does in fact circumvent the issue:

brew install ruby
sudo gem install cocoapods

Depending on your system, you might need to forcefully “overwrite” the Ruby installation using brew link:

brew link --overwrite ruby
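To verify that the Homebrew Ruby is actually picked up (and which OpenSSL it links against), a quick sanity check like this helps; the exact paths and versions will of course differ on your machine:

which ruby                                          # should point to the Homebrew installation, e.g. /usr/local/bin/ruby
ruby --version                                      # should report something newer than 2.0.0
ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'   # OpenSSL version the Ruby build links against
pod --version                                       # CocoaPods should still be functional after the switch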

OpenShift 3.9 missing Webconsole

Getting started with OpenShift 3.9 differs from prior versions. The OpenShift web console, which was provided out of the box in the past, no longer seems to be shipped and enabled by default. Trying to access it via https://oc.example.com:8443 reveals the following (unhelpful) message:

missing service (service "webconsole" not found)
missing route (service "webconsole" not found)

Digging through GitHub issues reveals that this behavior is intentional and users have to deploy the web console manually from now on (at least when not relying on the official Ansible automation). The required deployment / route / service configurations are provided in the official OpenShift Origin repository. Running the following commands gets you up and running with a fresh web console under OpenShift Origin 3.9:

$ git clone https://github.com/openshift/origin
$ cd origin
$ echo "Optionally check out the specific version you are trying to get running"
$ oc login -u system:admin
$ oc create namespace openshift-web-console
$ # Customize install/origin-web-console/console-config.yaml before running the following command
$ # Replace 127.0.0.1 with your own IP / domain if you are not running locally
$ oc process -f install/origin-web-console/console-template.yaml -p "API_SERVER_CONFIG=$(cat install/origin-web-console/console-config.yaml)" | oc apply -n openshift-web-console -f -
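Afterwards you can check whether the console pod, service and route came up as expected:

$ oc get pods -n openshift-web-console
$ oc get service,route -n openshift-web-console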

Fortunately, the ugly error message will most probably disappear in subsequent releases. There is an open PR dealing with it: Bug 1538006 – Improve error page when console not installed

fastlane supply: Google Api Error (Google Play)

fastlane is an awesome tool to release your iOS and Android apps. It handles all your tedious tasks, like generating screenshots, dealing with code signing, and releasing your application. In my experience, fastlane is pretty reliable and a true blessing when developing mobile applications. supply is the component of fastlane that is responsible for uploading Android app binaries, managing releases (e.g. beta & alpha tracks) and updating the respective metadata (store listing and screenshots) on the Google Play Store.
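For context, a typical supply invocation that pushes a fresh APK to the alpha track looks roughly like this (package name, APK path and key file are placeholders):

fastlane supply --apk ./app/build/outputs/apk/release/app-release.apk \
  --track alpha \
  --package_name com.example.app \
  --json_key ./play-store-credentials.json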

Using supply to automate the alpha releases, I had to deal with the following error message:

[10:04:54]: Updating track 'alpha'...
[10:04:55]: Uploading all changes to Google Play...

[!] Google Api Error: multiApkShadowedActiveApk: Version 2100384 of this app can not be downloaded by any devices as they will all receive APKs with higher version codes.

Not only does this complication keep the builds failing, it also prevents releasing new alpha builds, effectively jamming the release cycle. According to a related GitHub issue, the error gets triggered after promoting a release directly from alpha to production (skipping the beta track). As of this writing there is a pretty fresh (14 hours old) pull request which tries to circumvent the outlined problem. Until the possible fix is released, you can work around the problem by manually uploading a new APK to the alpha track. All subsequent builds should be fixed.


  • The described problem could be reproduced with supply 2.19.0 on Ubuntu 16.04.