~derf / Interblag


A few years back, I bought an RND Lab RND 320-KA3005P bench power supply both for its capability of delivering up to 30V @ 5A, and for its USB serial control channel. The latter can be used to both read out voltage/current data and change all settings which are accessible from the front panel, including voltage and current limits.

This weekend, I finally got around to writing a proper Python tool for controlling and automating it: korad-logger works with most KAxxxxP power supplies, which are sold under brand names such as Korad or RND Lab.

Now, basic characteristics such as I-V curves are trivial to generate. For instance, here's the I-V curve for an unknown RGB power LED.

It's based on three calls of the following command.

bin/korad-logger --voltage-limit 5 --current-range '0 0.2 0.001' --save led-$color.log 210
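
If it helps, the three invocations are just a small shell loop away. The color names and the prompt for re-wiring the LED between runs are my assumption, not part of korad-logger:

for color in red green blue; do
    echo "connect the $color channel, then press return"
    read -r dummy
    bin/korad-logger --voltage-limit 5 --current-range '0 0.2 0.001' \
        --save led-$color.log 210
done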

At a sample rate of about 10 Hz and 1 mA / 10 mV resolution, the bench supply won't perform miracles. Nevertheless, it is quite handy. If you measure only current (e.g. in CV mode), or only voltage (CC mode), you can even get near 20 Hz.

The MSP430FR launchpad series is a pretty nifty tool both for research and teaching. You get an ultra-low-power 16-bit microcontroller, persistent FRAM, and energy measurement capabilities, all for under $20.

Unfortunately, especially when it comes to teaching, there's one major drawback: Out-of-bounds memory accesses that are off by several thousand bytes can permanently brick the CPU. This typically happens either due to a buffer overflow in FRAM or a stack pointer underflow (i.e., stack overflow) in SRAM.

This issue recently bit one of my students and it turns out that it could have been avoided. So I'll give a quick overview of symptoms, cause, and protection against it, both as a reference for myself and for others.

Symptoms

A bricked MSP430FR launchpad is no longer flashable or erasable via JTAG or BSL. Attempts to control it via MSP Flasher fail with error 16: "The Debug Interface to the device has been secured".

* -----/|-------------------------------------------------------------------- *
*     / |__                                                                   *
*    /_   /   MSP Flasher v1.3.20                                             *
*      | /                                                                    *
* -----|/-------------------------------------------------------------------- *
*
* Evaluating triggers...
* Invalid argument for -i trigger. Default used (USB).
* Checking for available FET debuggers:
* Found USB FET @ ttyACM0 <- Selected
* Initializing interface @ ttyACM0...done
* Checking firmware compatibility:
* FET firmware is up to date.
* Reading FW version...done
* Setting VCC to 3000 mV...done
* Accessing device...
# Exit: 16
# ERROR: The Debug Interface to the device has been secured
* Starting target code execution...done
* Disconnecting from device...done
*
* ----------------------------------------------------------------------------
* Driver      : closed (Internal error)
* ----------------------------------------------------------------------------
*/

Unless you know the exact memory pattern written by the buffer overflow (and it specifies a reasonable password length), there is no remedy I'm aware of. The CPU is permanently bricked.

Cause

MSP430FR CPUs use a unified memory architecture: Registers, volatile SRAM, and persistent FRAM are all part of the same address space. This includes fuses (“JTAG signatures”) used to secure the device by either disabling JTAG access altogether or protecting it with a user-defined password.

While write access to several CPU registers requires specific passwords and timing sequences to be observed, this is not the case for the JTAG signatures. Change them, reset the CPU, and it's game over.

The JTAG signatures reside next to the reset vector and interrupt vector at the 16-bit address boundary, within the address range from 0xff80 to 0xffff. On MSP430FR5994 CPUs, the (writable!) text segment ends at 0xff7f and SRAM is located in 0x1c00 to 0x3bff. So, a small buffer overflow in a persistent variable (located in FRAM) or a significant stack pointer underflow (starting in SRAM, growing down, and wrapping from 0x0000 to 0xffff) may overwrite the JTAG signatures with arbitrary data.
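
To illustrate how little it takes, here is a hypothetical snippet (not taken from the affected project; the placement attribute is an assumption and depends on your compiler and linker setup). An FRAM-backed array plus an index that is off by a few thousand elements is all it needs to reach the signature area:

#include <stdint.h>

/* hypothetical FRAM-backed buffer (section name depends on the toolchain) */
uint16_t sample_log[256] __attribute__((section(".persistent"))) = { 0 };

void store_sample(unsigned int i, uint16_t value)
{
    /* with a sufficiently out-of-bounds i, this write can land
     * anywhere in FRAM -- including the JTAG signatures at 0xff80 */
    sample_log[i] = value;
}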

Protection

MSP430FR CPUs contain a bare-bones Memory Protection Unit. It can partition the address space into up to three distinct regions with 1kB granularity and enforce RWX settings for each region. So, if we disallow writes to the 1kB region from 0xfc00 to 0xffff, we no longer have to worry about accidentally overwriting the JTAG signatures. To do so, place the following lines in your startup code:

MPUCTL0 = MPUPW;
MPUSEGB2 = 0x1000; // memory address 0x10000
MPUSEGB1 = 0x0fc0; // memory address 0x0fc00
MPUSAM &= ~MPUSEG2WE; // disallow writes
MPUSAM |= MPUSEG2VS;  // reset CPU on violation
MPUCTL0 = MPUPW | MPUENA;
MPUCTL0_H = 0;

Note that this disallows writes not just to the JTAG signatures, but also to part of the text segment as well as the interrupt vector table. If an application dynamically alters interrupt vector table entries or uses persistent FRAM variables at addresses beyond 0xfbff, this method will break the application. Most practical use cases shouldn't run into this issue.

The Things Indoor Gateway (TTIG) is an affordable LoRaWAN gateway, ideal for getting started with The Things Network or other setups. Here are two ways of monitoring its radio performance and feeding data into e.g. InfluxDB, so you can display the results in a small Grafana dashboard.

TTN Gateway Server API

The Things Stack's Gateway Server API allows requesting uplink and downlink stats of a gateway if you have an appropriate API key.

First, you need to navigate to the gateway page in your TTN console and create a new API key with “View gateway status” rights. Using this key and your gateway ID, you can request connection statistics:

> curl -H "Authorization: Bearer GATEWAY_KEY" \
  https://eu1.cloud.thethings.network/api/v3/gs/gateways/GATEWAY_ID/connection/stats | jq
{
  "last_uplink_received_at": "2021-09-12T11:00:41.490891018Z",
  "uplink_count": "115",
  "last_downlink_received_at": "2021-09-12T00:05:45.008438327Z",
  "downlink_count": "2",
}

With a cronjob running every few minutes, you can pass the data to InfluxDB. I'm using the following Python script for this:

#!/usr/bin/env python3
# vim:tabstop=4 softtabstop=4 shiftwidth=4 textwidth=160 smarttab expandtab colorcolumn=160

import requests

def main(auth_token, gateway_id):
    response = requests.get(
        f"https://eu1.cloud.thethings.network/api/v3/gs/gateways/{gateway_id}/connection/stats",
        headers={
            "Authorization": "Bearer {auth_token}"
        },
    )

    data = response.json()

    uplink_count = data.get("uplink_count", 0)
    downlink_count = data.get("downlink_count", 0)

    requests.post(
        "http://influxdb:8086/write?db=hosts",
        f"ttn_gateway,name={gateway_id} uplink_count={uplink_count},downlink_count={downlink_count}",
    )


if __name__ == "__main__":
    main("GATEWAY_KEY", "GATEWAY_ID")

It's also possible to assign “Read gateway traffic” rights to an API key. I didn't play around with that yet.

USB-UART Logs

By soldering a 1kΩ resistor onto R86 on the TTIG PCB, you can enable its built-in CP2102N USB-UART converter. This allows you to use the USB port not just for power, but also for observing its debug output. See Xose Pérez' Hacking the TTI Indoor Gateway blog post for details.

With this hack, connecting the TTIG to a Linux computer capable of sourcing up to 900mA via USB will cause a /dev/ttyUSB serial interface to appear. You can use tools such as screen or picocom with a baud rate of 115200 to observe the output. Apart from memory usage and time synchronization logs, it includes a line similar to the following one for each received LoRa transmission:

RX 868.3MHz DR5 SF7/BW125 snr=9.0 rssi=-46 xtime=0x43000FB11517C3 - updf mhdr=40 DevAddr=01234567 FCtrl=00 FCnt=502 FOpts=[] 0151B4 mic=-1842874694 (15 bytes)

So you can log statistics about Received Signal Strength, Signal-to-Noise Ratio, Spreading Factor and similar.

The Python script I'm using for this is somewhat more involved:

#!/usr/bin/env python3
# vim:tabstop=4 softtabstop=4 shiftwidth=4 textwidth=160 smarttab expandtab colorcolumn=160

import re
import requests
import serial
import serial.threaded
import sys
import time


class SerialReader(serial.threaded.Protocol):
    def __init__(self, callback):
        self.callback = callback
        self.recv_buf = ""

    def __call__(self):
        return self

    def data_received(self, data):
        try:
            str_data = data.decode("UTF-8")
            self.recv_buf += str_data

            lines = self.recv_buf.split("\n")
            if len(lines) > 1:
                self.recv_buf = lines[-1]
                for line in lines[:-1]:
                    self.callback(str.strip(line))

        except UnicodeDecodeError:
            pass
            # sys.stderr.write('UART output contains garbage: {data}\n'.format(data = data))


class SerialMonitor:
    def __init__(self, port: str, baud: int, callback):
        self.ser = serial.serial_for_url(port, do_not_open=True)
        self.ser.baudrate = baud
        self.ser.parity = "N"
        self.ser.rtscts = False
        self.ser.xonxoff = False

        try:
            self.ser.open()
        except serial.SerialException as e:
            sys.stderr.write(
                "Could not open serial port {}: {}\n".format(self.ser.name, e)
            )
            sys.exit(1)

        self.reader = SerialReader(callback=callback)
        self.worker = serial.threaded.ReaderThread(self.ser, self.reader)
        self.worker.start()

    def close(self):
        self.worker.stop()
        self.ser.close()


if __name__ == "__main__":

    def parse_line(line):

        match = re.search(
            "RX ([0-9.]+)MHz DR([0-9]+) SF([0-9]+)/BW([0-9]+) snr=([0-9.-]+) rssi=([0-9-]+) .* DevAddr=([^ ]*)",
            line,
        )

        if match:
            requests.post(
                "http://influxdb:8086/write?db=hosts",
                data=f"ttn_rx,gateway=GATEWAY_ID,devaddr={match.group(7)} dr={match.group(2)},sf={match.group(3)},bw={match.group(4)},snr={match.group(5)},rssi={match.group(6)}",
            )

    monitor = SerialMonitor(
        "/dev/ttyUSB0",
        115200,
        parse_line,
    )

    try:
        while True:
            time.sleep(60)
    except KeyboardInterrupt:
        monitor.close()
2021-09-03 18:19

Using EFA APIs with JSON

Most German public transit information systems use either EFA ("Elektronische FahrplanAuskunft") or HAFAS ("HAcon Fahrplan-Auskunfts-System"). Most EFA instances, in turn, now come with native JSON support, so they are easy to use from scripts. This makes JSON APIs such as the one at https://vrrf.finalrewind.org largely obsolete.

Here is a Python example for https://efa.vrr.de:

#!/usr/bin/env python3

import aiohttp
import asyncio
from datetime import datetime
import json


class EFA:
    def __init__(self, url, proximity_search=False):
        self.dm_url = url + "/XML_DM_REQUEST"
        self.dm_post_data = {
            "language": "de",
            "mode": "direct",
            "outputFormat": "JSON",
            "type_dm": "stop",
            "useProxFootSearch": "0",
            "useRealtime": "1",
        }

        if proximity_search:
            self.dm_post_data["useProxFootSearch"] = "1"

    async def get_departures(self, place, name, ts):
        self.dm_post_data.update(
            {
                "itdDateDay": ts.day,
                "itdDateMonth": ts.month,
                "itdDateYear": ts.year,
                "itdTimeHour": ts.hour,
                "itdTimeMinute": ts.minute,
                "name_dm": name,
            }
        )
        if place is None:
            self.dm_post_data.pop("place_dm", None)
        else:
            self.dm_post_data.update({"place_dm": place})
        departures = list()
        async with aiohttp.ClientSession() as session:
            async with session.post(self.dm_url, data=self.dm_post_data) as response:
                # EFA may return JSON with a text/html Content-Type, which response.json() does not like.
                departures = json.loads(await response.text())
        return departures


async def main():
    now = datetime.now()
    departures = await EFA("https://efa.vrr.de/standard/").get_departures(
        "Essen", "Hbf", now
    )
    print(json.dumps(departures))


if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(main())

Setting PULSE_SERVER forwards the entire system audio to a remote (TCP) network sink. A more fine-grained solution (with control at the stream level rather than the system level) is almost as easy, thanks to module-tunnel-sink:

pacmd load-module module-tunnel-sink server=192.168.0.195

Now you can select the remote sink for individual streams (or turn it into the default / fallback one) and, for instance, have two different videos play back on two different remote sinks while your messenger's notification sounds remain local.
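
For example, moving a single stream to the tunnel sink might look like this; the sink-input index 42 and the sink name are placeholders, the list commands show the actual values:

pactl list short sinks        # find the name of the tunnel sink
pactl list short sink-inputs  # find the index of the stream to move
pactl move-sink-input 42 TUNNEL_SINK_NAME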

Anyone who opens DBF from a moving train can, as of today, obtain information about that train using nothing but their GPS position – at least in most cases and with a few limitations. Here, I'd like to explain the concept behind it.

Since GTFS currently only provides schedule data, and since the HAFAS train radar merely searches for arbitrary journeys in the vicinity without taking actual routes into account, the DBF implementation does not rely on either of them.

Instead, its only API dependency is the arrival/departure board for stations; everything else is computed locally. It will keep working even if the HAFAS train radar is shut down.

Mapping positions to neighbouring stations

The core of the localization is a database that divides Germany into rectangles of roughly 200m × 300m¹. For each rectangle containing at least one railway line, it lists all stations that a train passing through on this line is scheduled to call at next. The entry for a position on the tunnel through the Teutoburg Forest near Lengerich, for instance, includes among others

  • Lengerich (Westf) and Natrup-Hagen (RB66),
  • Münster (Westf) Hbf and Osnabrück Hbf (IC/ICE lines 30 and 31), and
  • Essen Hbf and Hamburg Hbf (IC service Hamburg – Ruhr area without intermediate stops).

The database is currently based mostly on the SPNV GTFS line network provided by NVBW. Fortunately, it also includes RE and RB lines outside of Baden-Württemberg. It is extended with an (unfortunately non-free and incomplete) set of IC/ICE and S-Bahn services. I'd be grateful for pointers to further open data sources containing line network information.

Determining candidate trains

Starting from a GPS position, the neighbouring stations are first fetched from the database, and then the arrivals of the next two hours are requested for each station. With a large number of stations, this can take a few seconds, since the requests are not performed in parallel. The additional load caused by parallel requests would not even be measurable compared to the remaining (human-generated) HAFAS requests, but too many parallel requests from a single IP would probably still not be appreciated.

For each train journey, the scheduled and actual arrival time at the queried station as well as the names and scheduled departure times of all previous stops are known. Trains that serve several of the queried stations show up multiple times and are merged into a single journey. The next step is to estimate, for each train, whether it might currently be at the queried position or not.

Since the database is fed with pairs of stations, every train that passes only one of the queried stations is discarded first. Such trains are very unlikely to pass the queried position on their route. Next, for each train, the (known) delay at the queried station is used to estimate the (unknown) delay at the previous intermediate stops, and this real-time data is used to determine between which two intermediate stops it currently is. Likewise, for each pair of intermediate stops, the distance between the queried position and the straight line between the stops is computed.

Now, all trains whose current estimated position does not lie between the pair of intermediate stops closest to the queried position are dropped, since they are most likely not on the right section of track. Trains that are still at their origin station and will not depart within the next five minutes are discarded as well. An S-Bahn that won't leave for another hour is hardly travelling along a railway line right now, or even waiting at the platform ready for boarding.

For the remaining trains, the current position is estimated along the straight line between their stops. I assume constant speed, since I know neither acceleration profiles nor line speeds. Finally, the trains are listed sorted by their distance to the queried position.
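
As a sketch (made-up helper, not the actual implementation), this constant-speed estimate is a plain linear interpolation between the coordinates of the two surrounding stops:

from datetime import datetime

def estimate_position(dep_time, arr_time, dep_coord, arr_coord, now=None):
    # linear interpolation between two stops, assuming constant speed
    now = now or datetime.now()
    total = (arr_time - dep_time).total_seconds()
    elapsed = (now - dep_time).total_seconds()
    ratio = min(max(elapsed / total, 0), 1) if total > 0 else 0
    lat = dep_coord[0] + ratio * (arr_coord[0] - dep_coord[0])
    lon = dep_coord[1] + ratio * (arr_coord[1] - dep_coord[1])
    return lat, lon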

More accurate position estimation

Using the actual route of a journey instead of the straight line between intermediate stops, the position could be estimated far more accurately; in particular, it would be possible to determine whether a train's route contains the queried position at all – if it doesn't, the train can be discarded right away, even if it is travelling on another railway line just a few km from the queried position.

This improvement is currently not implemented, since it would further increase the number of required API requests, and I first want to see whether the results obtained with linear (straight-line) interpolation are already sufficiently useful. Moreover, HAFAS itself regularly misjudges the route and, for instance, places an ICE on a non-electrified branch line (instead of the electrified main line running a few km away, which is longer overall).

In the long run, it would also be interesting to use the time until reaching (or since having passed) the position as a quality measure instead of the distance to it. S-Bahn and ICE trains do travel at rather different speeds, after all. That's still on the todo list.

Source code

The implementation is still somewhat hacky and undocumented, but of course available on GitHub: derf/geolocation-to-train.

Footnotes

¹ For simplicity, GPS coordinates rounded to three decimal places are used. At our latitudes, the resulting grid is not square.

The build instructions on the sigrok Wiki only work for Python2, which is past its end of life date. To build libsigrok with Python bindings for Python3, you need to set PYTHON=python3 when running configure.
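
In other words, the build boils down to the usual autotools flow with one extra variable (assuming you are building from a libsigrok source checkout):

./autogen.sh
PYTHON=python3 ./configure
make
sudo make install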

The dependency list is also slightly different:

sudo apt-get install git-core gcc g++ make autoconf autoconf-archive \
  automake libtool pkg-config libglib2.0-dev libglibmm-2.4-dev libzip-dev \
  libusb-1.0-0-dev libftdi1-dev check doxygen python3-numpy \
  python3-dev python-gi-dev python3-setuptools swig default-jdk

"python-gi-dev" is not a typo -- the package covers both Python2 and Python3.

Side note: Installing libserialport-dev instead of building your own version as documented on the Wiki seems to work fine.

A .deb package is an easy solution for distributing Perl modules to Debian-based systems. Unlike manual installation using Module::Build, it does not require re-installation whenever the perl minor version changes. Unlike project-specific cpanm or carton setups, the module is available system-wide and can easily be used in random Perl scripts which are not bound to a project repository.

The Debian package dh-make-perl (also known as cpan2deb) does a good job here. In many cases, creating a personal package for a Perl module is as easy as cpan2deb Acme::Octarine. Delegating the build process to Docker may be useful if you do not have a Debian build host available and would rather avoid having the build process depend on the (probably not well-defined) state of your dev machine.

For CPAN modules, all you need is a Debian container with dh-make-perl. Using this container, run cpan2deb and extract the resulting .deb. You can find a Dockerfile and some scripts for this task in my docker-dh-make-perl repository. The Dockerfile is used to create a dh-make-perl image (so you don't need to install dh-make-perl in a fresh Debian image whenever you build a module). scripts/makedeb-docker-helper builds the package inside the container and copies it to the out/ directory, and scripts/makedeb-docker orchestrates the process.

Note: A package generated this way is suitable for personal use. It is not fit for inclusion in the Debian package repository. As all Debian packages must have an author, you need to set the DEBEMAIL and DEBFULLNAME environment variables to appropriate values. Feel free to extend the Dockerfile and scripts as you see fit – the repository is meant to provide a starting point only.
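
Without the helper scripts, a one-off build can be sketched roughly like this (image tag, module name, and output directory are placeholders; the repository scripts do the same thing in a reusable fashion):

mkdir -p out
docker run --rm \
    -e DEBEMAIL=you@example.org -e DEBFULLNAME="Your Name" \
    -v "${PWD}/out:/out" debian:stable sh -c '
        apt-get update &&
        apt-get install -y dh-make-perl &&
        cpan2deb Acme::Octarine &&
        find /root -name "*.deb" -exec cp {} /out \;'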

For non-CPAN content (e.g. if you are a module author and do not want to wait for your freshly uploaded release to appear on CPAN, or if you need to build a patched version of a CPAN module), the process is slightly more involved. It requires

  • additional bind mounts (docker run -v "${PWD}:/orig:ro") to copy the module content into the container,
  • a manually provided version (in my case via git describe --dirty), and
  • disabling module signing (unless you pass your GPG keyring to the container).

I also manually specify the packages needed for building and testing. I assume that this is not strictly necessary and that dh-make-perl --install-deps --install-build-deps could handle it automatically.

Module content and versioning depends on your setup, so I will not provide a git repository for this case. Please refer to the makedeb-docker and makedeb-docker-helper scripts in Travel::Routing::DE::VRR, Travel::Status::DE::IRIS and Travel::Status::DE::VRR for examples.

“Deep Sleep” allows an ESP8266 microcontroller to enter a very low-power sleep mode with less than 1mA sleep current. It works by connecting GPIO16 (which can be controlled from deep sleep) to the reset pin (RST) and programming the ESP8266 to provide a falling edge on GPIO16 after a specific amount of time, causing a system reset and thus a wakeup.

Here is how to use it on an ESP8266 controller (e.g. NodeMCU board or Wemos D1 mini) running the NodeMCU Lua firmware:

  • Connect pin D0 (ESP8266 GPIO16) to RST (ESP8266 reset). Note that as long as D0 and RST are connected, you need to manually push the reset button when uploading new firmware using esptool – if that's too much of a hassle, consider using a jumper or another kind of reversible connection. Uploading NodeMCU applications is not affected by this, as it relies entirely on in-band signaling via UART.
  • Do not use any GPIO functions operating on pin D0.
  • Call rtctime.dsleep to go to sleep. When the sleep time has elapsed, execution will not continue normally -- instead, the ESP8266 will be reset and start over.
  • You might also be able to use node.dsleep.

To increase flash lifetime and avoid problems with unexpected power cuts, I run all of my embedded Linux systems from a readonly root filesystem. This is a moving target: Depending on the software in use as well as the version and configuration of systemd and userland software, different adjustments may be needed.

I have created a readonly linux reference page containing all tweaks I know of at the moment, which are mostly tmpfs mounts and /etc/tmpfiles.d entries. I'll update it whenever I come across something new.

Preseeding is a handy way of automating Debian installations. With a proper preseed.cfg, a Debian installation can run completely unattended in about 10 minutes, including setup of users, sudo and SSH keys.

For future reference, here are the things I found helpful:

Custom post-installation commands

d-i preseed/late_command executes arbitrary commands after the installation is completed. I use this to set up SSH keys and sudo, like so:

in-target mkdir -p /root/.ssh /home/derf/.ssh; \
in-target wget -O /root/.ssh/authorized_keys https://.../keys-root; \
in-target wget -O /home/derf/.ssh/authorized_keys https://.../keys; \
in-target chmod 700 /root/.ssh /home/derf/.ssh; \
in-target chmod 600 /root/.ssh/authorized_keys /home/derf/.ssh/authorized_keys; \
in-target chown -R derf:derf /home/derf/.ssh; \
apt-install sudo; in-target adduser derf sudo

Adding preseed.cfg to virt-install images

--initrd-inject embeds arbitrary files into the root of the installation image. So, for preseeding, just add --initrd-inject .../preseed.cfg to your virt-install invocation.
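
For illustration, a full invocation might look roughly like this; everything except --initrd-inject and the kernel arguments is a placeholder for whatever your setup needs (depending on your virt-install version, you may also need an --osinfo argument):

virt-install --name preseed-test --memory 2048 --disk size=20 \
  --location http://deb.debian.org/debian/dists/stable/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args 'auto=true priority=critical'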

Adding preseed.cfg to USB images (with UEFI support)

This is a bit more tricky. Basically: Download and unpack ISO, inject preseed.cfg into initrd, refresh md5sums, rebuild ISO and add UEFI support.

The following script should do the job for most amd64 systems. Usage: ./mkpreseediso debian-x.y.z-amd64-netinst.iso

#!/bin/sh

set -e

ISO="$1"
WD="$(mktemp -d)"

7z x -o$WD $ISO

cd $WD

gunzip install.amd/initrd.gz
cp /tmp/preseed.cfg .
echo preseed.cfg | cpio -o -H newc -A -F install.amd/initrd
rm preseed.cfg
gzip install.amd/initrd

find -follow -type f -print0 | xargs --null md5sum > md5sum.txt

cd -

xorriso -as mkisofs -o $ISO -isohybrid-mbr /usr/lib/ISOLINUX/isohdpfx.bin \
-c isolinux/boot.cat -b isolinux/isolinux.bin -no-emul-boot -boot-load-size 4 \
-boot-info-table -eltorito-alt-boot -e boot/grub/efi.img -no-emul-boot \
-isohybrid-gpt-basdat $WD

findmnt --raw --noheadings --output options --target SOME_DIRECTORY | grep -qE '(^|,)ro($|,)'

  • findmnt is a handy alternative to mount when writing scripts
  • SOME_DIRECTORY does not have to correspond to a mountpoint. If it doesn't, findmnt will traverse its parent directories until it finds the corresponding filesystem / mountpoint.
  • a simple grep ro would also match options like errors=remount-ro, so we make sure to only match the single ro option. It must be delimited by commas or the start/end of the option string.

I have a set of maildirs (one for each mailing list / other context) and want to know which of them contains unread mail without firing up my MUA.

Luckily, this is easy to do on the commandline without even looking at mail contents, as there's (mostly?) two kinds of unread mail:

  • new and unprocessed mail. These messages are stored in Maildir/new, so if there's anything in there, it's an unread mail
  • new but no longer "Recent" mail. These messages have not been read yet, but have already been transferred to a MUA using a Read-Write operation, causing them to be marked as no longer new on the server side. They are stored in Maildir/cur alongside read mail, but do not have the "Seen" (S) flag set.

This is easy to check with zsh globbing: new/*(N) expands to a non-empty list if new and unprocessed mail is present, and cur/*~*,*S*(N) expands to a non-empty list if old but unread mail is present. Note that it requires the extended_glob zsh option to be set.
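
Put together, a report over all maildirs could look like this (the ~/Mail path is just an example):

#!/usr/bin/env zsh
setopt extended_glob
for maildir in ~/Mail/*(/N); do
    unread=($maildir/new/*(N) $maildir/cur/*~*,*S*(N))
    (( $#unread )) && echo "$maildir: $#unread unread"
done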

TIL: If esptool can successfully communicate with an ESP8266, but the chip seems stone dead otherwise (i.e., no flashed programs work), it may be due to a wrong flash mode.

Debugging aid:

  • The ESP8266 bootloader sends some debug output at 74880 baud after each reset.
  • This baud rate is not supported by screen – miniterm.py (provided by python-serial) can handle it just fine, though.
  • If it complains about a “csum err”, you probably flashed the wrong file / at the wrong address / used the wrong flash mode (there are differences e.g. between various NodeMCU / D1 mini shipments!)
  • esptool -fm dout seems to be a safe (but slow) fallback

Today I learned: The Banana Pi contains an AXP20 power management unit and the Linux kernel (or at least the Bananian-provided one) has a working driver for it, thus making it very easy to read out the system's current voltage and current consumption:

#!/bin/zsh
printf "%.2fA @ %.1fV (%.1f°C)\n" \
$(( $(cat /sys/class/power_supply/ac/current_now) * 0.000001 )) \
$(( $(cat /sys/class/power_supply/ac/voltage_now) * 0.000001)) \
$(( $(cat /sys/class/hwmon/hwmon0/device/temp1_input) * 0.001))

I wrote two very simple munin plugins for these values: bananapi pm voltage and bananapi pm current

Note that the Ampere reading only reflects the current consumption of the board itself. The SATA connector is not accounted for, the USB ports might or might not be.

The perl module LWP::UserAgent (at version 6.08) does not play well with custom CA certificates and most online resources seem to be outdated. Two notes on that (which may or may not apply to non-Debian systems as well):

  • If a certificate failed verification, you will not get a nice error message. Temporary workaround: sudo mv /usr/share/perl/5.20.1/IO/Socket/IP.pm{,_} (and then later sudo mv /usr/share/perl/5.20.1/IO/Socket/IP.pm{_,})
  • LWP::UserAgent does not support custom certificates installed with update-ca-certificates. You'll need HTTPS_CA_FILE=/etc/ssl/certs/ca-certificates.crt in your environment.
2013-12-18 20:10

Caffeinated Chocolate

Chocolate is good, and so is caffeine. Go figure.

For now, there's just a recipe here; pictures and other details will follow someday™, nicely presented, on chaosdorf.de.

Ingredients

  • 100 to 300 grams of meltable chocolate (I've had good experiences with the dark couverture at 79ct/200g from Rewe)
  • up to 10 grams of powder (or powder mix) per 100 grams of chocolate
  • Equipment: microwave, porcelain bowl, wide plate / plastic container, knife

The powder can be pretty much anything; so far, finely ground coffee and guarana have been tested (see results). Mixtures of the two, or pure caffeine in an appropriate dose, should be fine as well.

Preparation

  • Fill the porcelain bowl with the chocolate
  • Put it in the microwave for 2 to 3 minutes at medium power (600 .. 700W)
  • Take it out and chop up the semi-liquid chocolate with a knife or similar
  • Microwave it again for 1 to 2 minutes; repeat until the bowl contains a homogeneous liquid chocolate mass (small lumps can usually be removed by stirring, individual bubbles on the surface are not a problem). In my experience, three microwave iterations are sufficient
  • Thoroughly stir in the powder
  • Pour the chocolate into a suitable container lined with aluminium foil and spread it evenly by tilting / smoothing — plastic containers or not-too-shallow plates work well
  • Wait 3 to 6 hours (depending on room temperature)
  • Flip the mass using the aluminium foil, peel off the foil, cut the chocolate block into suitable pieces
  • Nom

It's best not to let the chocolate cool in an airtight container (e.g. a closed plastic box); otherwise it takes up air while cooling, becomes unpleasantly crumbly, and loses most of its taste.

Results

With coffee powder (11g/100g): The coffee taste is very prominent and the caffeine kicks in quickly, but it doesn't last very long. Some powder usually sticks to your teeth. Caffeine content: 3 to 5 mg per gram.

With guarana powder (8g/100g): Tastes (almost?) like regular chocolate, and nothing sticks to your teeth. Takes effect with a delay of about 2 hours, but quite noticeably then. Caffeine content: 3 to 7 mg per gram.

With coffee and guarana (unknown dose): Still just tastes of coffee, but has both an immediate and a delayed effect. Quite recommendable.

Comments

To @derfnull or derf@chaosdorf.de. Or in person at Chaosdorf / at 30C3.

2013-03-05 10:25

Backups and Monitoring

(tldr: Beware of pipes with set -e. And write more checks.)

At the Chaosdorf, we have an automated weekly backup of all servers and other hosts. The script uses set -e right at the start and reports its success with send_nsca just before quitting. A freshness threshold is used to produce an alert if a backup run does not report in time.

This sounds like nothing can go wrong without being noticed. However, there is a problem: backup_external uses pipes. And in a pipe, only the return value of the last command is actually evaluated:

descent ~ > ( set -e; false | true; echo foo )
foo

So, if something along the way (e.g. tar or gpg) has a problem, the script will happily run along and report its success at the end. Which will result in something like this:

flux ~ > sudo ls -l /chaosdorf/backups/09 | fgrep feedback
-rw-r--r-- 1 chaosdorf chaosdorf    0 Mar  4 00:03 feedback.chaosdorf.dn42_etc.tar.xz.gpg
-rw-r--r-- 1 chaosdorf chaosdorf  24K Mar  4 00:03 feedback.chaosdorf.dn42_packages
-rw-r--r-- 1 chaosdorf chaosdorf    0 Mar  4 00:03 feedback.chaosdorf.dn42_root.tar.xz.gpg
-rw-r--r-- 1 chaosdorf chaosdorf    0 Mar  4 00:03 feedback.chaosdorf.dn42_usr_local.tar.xz.gpg
-rw-r--r-- 1 chaosdorf chaosdorf    0 Mar  4 00:03 feedback.chaosdorf.dn42_var_local.tar.xz.gpg
-rw-r--r-- 1 chaosdorf chaosdorf    0 Mar  4 00:03 feedback.chaosdorf.dn42_var_log.tar.xz.gpg

In this case, it was likely GPG refusing to work on a readonly filesystem (it's an embedded host running on an SD card, so making it readonly makes sense).

The good thing about this is: The failed backups are all empty files, and finding empty files is as easy as running find -size 0. So now we have a second check on the receiving host to alert me whenever an obviously failed backup is transferred.
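
Also worth noting: bash and current zsh versions support set -o pipefail, which makes a pipeline fail as soon as any of its commands fails, so the set -e in the snippet above would have caught the problem:

( set -e -o pipefail; false | true; echo foo )   # prints nothing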

So:

  • Never, ever trust a single check
  • If you have the disk space, keep more than just the most recent three backups (I actually did this right)
2012-08-19 16:05

Semantic Mediawiki Examples

It's been two months since we installed Semantic Mediawiki and Semantic Forms in the Chaosdorf Wiki, and I've got to say it's pretty nice.

The basic idea of semantic mediawiki is: Pages are not just there, but can have Properties (like a language, hostname, preview image etc.), and based on these properties one can create queries, such as selecting all pages in Category:Projects which have a description and a preview image.

This article is not meant as an Introduction to Semantic Mediawiki (see the link for that), just some real-world examples of what's possible. I tried to sort the examples by ascending complexity.

Eliminating Category: pages

wiki/Foo certainly looks nicer than wiki/Category:Foo. It's already possible to set a #REDIRECT from the former to the latter, but that still leaves the long page title as default. Using the query

{{#ask: [[Category:Foo]]
 | format = ul
}}

you get a listing very similar to a category page. If the page containing the query is in the category itself, it will even be marked bold.

Example: https://wiki.chaosdorf.de/History

Listing pages with a certain property (e.g. items in a certain room)

Assume you have a page for every room or location of your whatever (Hackerspace in our case), and also a page for every item inside. If you have a property describing the item's location (which is very easy with Semantic Forms, it even provides autocompletion of existing locations), there's a simple way to have every location page list its contents.

Each item page specifies its location as [[Has location::Someplace]], the query is

{{#ask: [[Has location::{{PAGENAME}}]]
 | format = ul
 | intro=Beinhaltet:
}}

The intro text is only displayed if there is something to list, so if the location has no items, nothing will be displayed.

Example: https://wiki.chaosdorf.de/Serverraum (the query is hidden inside https://wiki.chaosdorf.de/Template:Location)

Pretty-listing pages with certain properties (e.g. preview images)

With projects or resources in a wiki, it'd be very nice to have more than a simple list of page names, like a table with preview images and short descriptions. It is of course possible to create one manually, but maintaining it quickly becomes a pain or is simply forgotten. Also, duplicate information sucks.

So: Create a Mediawiki Template and a Form for project/resource pages, which contain (with properties) its name, description, preview image (if available), etc. Semantic Mediawiki allows formatting its query results with templates, so it's not hard to get a reasonably nice list of projects with images.

Also, Semantic Forms allows uploading the preview image from inside the Form, so when creating a project/resource page one does not even need to open Special:Upload in another tab and copy-paste filenames.

The query in this case is

{{#ask: [[Category:Projects]] [[Has image::+]] [[Duplicate::false]]
| format = template
| template = PreviewSMW
| outrotemplate = PreviewEnd
| link = none
| ?Has description
| ?Has image
}}

So it formats each result with Template:PreviewSMW, does not link to the pages by itself (the template will do it), and also passes the properties Has description and Has image of each page to the template. Example: https://wiki.chaosdorf.de/Projects

Library system with lend book / return book buttons

This is not what Semantic Mediawiki is meant for, but a fun experiment. Create a page for each book you have (in our case in the custom Book namespace). The most important part here is the lent by property, which corresponds to the lent argument of Template:Book and Form:Book and, if set, is the user who has currently lent it.

A list of all books can be obtained in the standard query fashion:

{{#ask: [[Book:+]]
 | format = table
 | ?Has author = Autor
 | ?Has ISBN = ISBN
 | ?lent by = ausgeliehen?
}}

Thanks to the #autoedit function, it's possible to add lend / return buttons to each book page (via Template:Book, which we already have anyways to fill in the properties and have a book form). First, let's have some markup:

{{#if:{{{lent|}}}|
{{#autoedit:form=book
 | target=Book:{{PAGENAME}}
 | link type=button
 | link text=zurückgeben
 | Book[lent]=
 | Book[lent at]=now
 | namespace=Book
 | reload
}}
|
{{#autoedit:form=book
 | target=Book:{{PAGENAME}}
 | link type=button
 | link text=ausleihen
 | Book[lent]={{#USERNAME:}}
 | Book[lent at]=now
 | namespace=Book
 | reload
}}
}}

The {{#if: condition | yes | no }} part comes from ParserFunctions, which is not part of Semantic Mediawiki / Semantic Forms, but also nice to have. It makes sure that only the correct one of the two mutually exclusive buttons is shown.

target=Book:{{PAGENAME}} tells autoedit to edit the current page, and reload makes sure a wiki user sees the effect of their actions (otherwise the page would be changed, but the user would still see its old version).

The interesting parts are Book[lent] and Book[lent at]. Right in the first line, we tell autoedit to use the "Book" form, and in these two places we access parameters of that form. The first one works like using {{Book | ... | lent = someuser}}.

In Book[lent at]=now, the "now" becomes the YYYY/MM/DD representation of today. This works because in Form:Book, the lent at parameter is declared to be a date. (If lent at corresponded to a property of type date, this would not even be necessary.)

First, there's an excellent parallel port howto on epanorama.net.

What I'd like to add: You can't just control LEDs or relay coils / transistors with the parallel port, it's also pretty easy to talk to microcontrollers (an ATTiny 2313 in my case).

interfacing

Basically, you need one or two parallel port pins as inputs to the AVR, and then come up with some sort of protocol.

one-wire

No clock signal, so we rely on proper timing. The most primitive solution is to toggle the pin n times and have the AVR increment a counter on every toggle, then read the counter value after a fixed time without counter increment has passed. Excruciatingly slow, but dead simple to implement both on the computer and the AVR.
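
On the computer side, the sender for this primitive protocol can be sketched with raw port I/O (the project itself uses parapin, see below; the base address 0x378 and the timing constants are assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/io.h>

#define LPT_BASE 0x378  /* typical address of the first parallel port */

/* toggle data pin 0 "count" times; the AVR counts the rising edges */
static void send_count(unsigned int count)
{
    unsigned int i;
    for (i = 0; i < count; i++) {
        outb(0x01, LPT_BASE);  /* pin high */
        usleep(2000);
        outb(0x00, LPT_BASE);  /* pin low */
        usleep(2000);
    }
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    if (ioperm(LPT_BASE, 1, 1) < 0) {  /* needs root */
        perror("ioperm");
        return 1;
    }
    send_count(atoi(argv[1]));
    return 0;
}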

two-wire

One pin for data, one for the clock (e.g. set data, then toggle clock, set an AVR interrupt on rising edge on clock to read data pin). I didn't try it yet.

more?

So far, the computer talks to the AVR but not vice versa. Two-way communication shouldn't be hard, but I didn't try it yet. If I do, I may write a new entry.

hardware

To be on the safe side, I decided to completely isolate the parallel port from the rest of the circuit.

components used:

  • KB817 opto-coupler. forward voltage ~1.2V, collector-emitter voltage 35V/6V
  • 3.3K resistor between opto-coupler and parallel port, since the parallel port provides 2.4 to 5V
  • BC338-40 transistor to make sure the opto-coupler output is registered by the microcontroller

The rest is usual stuff. The parallel port circuit is located in the bottom left of the schematic.

software

microcontroller

Assuming the transistor (optocoupler) is connected to INT0

  • set an interrupt on INT0 rising edge: increment counter and reset timeout
  • set a timer interrupt: check timeout, if zero handle counter as command

code snippet:

ISR(INT0_vect)
{
    cli();
    command++;
    cmd_wait = 6;
}

ISR(TIMER1_COMPA_vect)
{
    if (cmd_wait)
        cmd_wait--;
    else if (command) {
        run_command();
        command = 0;
    }
}

the whole file is available on github: main.c

computer

I'm using the parapin library, see pgctl.c on github. Note that timing is important here, so I'm running the code at the lowest possible niceness.

That's it. An example project is available as derf/pgctl on github.

The whole point of this post is: interfacing with the parallel port is easy. It doesn't have a future, but if you still have one, you can use it for quite a lot of things.