RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK, which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it as long as it runs a recent version of JACK2. This opens up a plethora of possibilities of course. I’m thinking about building a kind of sidecar device to offload some work to via netJACK; think of synths like ZynAddSubFX or other CPU-greedy plugins like fat1.lv2. But more on that in a later blog post.

So first I needed to set up a sidecar device and I sacrificed one of my RPi’s for that, an RPi 3. I flashed an SD card with Raspbian Jessie Lite and started researching the status of real time kernels on the Raspberry Pi, because I’d like to use a real time kernel to get sub-5 ms system latency. I compiled real time kernels for the RPi before but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that’s not the case: the first real time kernel I compiled froze the RPi as soon as I tried to run apt-get install rt-tests. After applying a patch that fixes how the RPi folks implemented the FIQ system, the kernel ran without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies:

Histogram of the latency on the RPi with a real time kernel during 300000 cyclictest loops

So that’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 µs but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &

Compiling a real time kernel for the RPi is still not a trivial thing to do and it doesn’t help that the few howtos on the interwebs are mostly copy-paste work, incomplete, and contain steps that are unclear or even unnecessary. One thing that struck me is that the howtos about building kernels for RPi’s running Raspbian don’t mention the make deb-pkg routine to build a real time kernel. This creates deb packages that are just so much easier to transfer and install than rsync’ing the kernel image and modules. Let’s break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux without any history so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. Next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable is to indicate which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (not the number of jobs, that is done with the -j option of make).
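A quick sanity check that the cross compiler is found correctly is to ask it for its version; since CROSS_COMPILE is exported this should just work:

${CROSS_COMPILE}gcc --version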

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.33-rt23.patch.xz
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howtos now continue with building the kernel, but that would result in a kernel that freezes your RPi: the way the FIQ system is implemented causes lock-ups when interrupts are threaded, which is the case with real time kernels. That part needs to be patched, so download the patch and do a dry run:

cd ..
wget https://www.osadl.org/monitoring/patches/rbs3s/usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice that one hunk fails; you will have to add that stanza manually, so note which hunk it is, for which file, and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor. With the FIQ patch in place we’re almost set for compiling the kernel, but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. You will need the libncurses5-dev package to run this command, so install that with apt-get install libncurses5-dev. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you’re asked if you want to save your config then confirm. In the Kernel Features menu you could also set the timer frequency to 1000 Hz if you wish; apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though, as almost all audio and MIDI applications use hrtimers which have a way higher resolution.
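If you’d rather not go through menuconfig, the same change can probably be made non-interactively with the kernel’s scripts/config helper. This is an untested shortcut and the exact choice symbols can differ per tree, so verify the result afterwards (in menuconfig or by grepping .config for PREEMPT_RT_FULL):

scripts/config --disable PREEMPT --enable PREEMPT_RT_FULL
make olddefconfig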

With our configuration saved we can start compiling. Clean up first, then disable some debugging options that could cause overhead, and finally compile the kernel and create ready-to-install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa and when the build has finished without errors the deb packages should appear in the directory above the linux one. Copy the deb packages to your RPi and install them with dpkg -i.
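Assuming the version string from above, the transfer and install might look something like this (the exact file names will differ depending on your kernel version and build number, and raspberrypi here stands in for your RPi’s hostname or IP address):

scp ../linux-image-4.9.33-rt23-v7+_*.deb ../linux-headers-4.9.33-rt23-v7+_*.deb pi@raspberrypi:
ssh pi@raspberrypi
sudo dpkg -i linux-image-4.9.33-rt23-v7+_*.deb linux-headers-4.9.33-rt23-v7+_*.deb

Open up /boot/config.txt on the RPi and add the following line to it: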

kernel=vmlinuz-4.9.33-rt23-v7+

Now reboot your RPi and it should boot with the realtime kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

Since Raspbian uses almost the same kernel source as the one we just built it is not necessary to copy any dtb files. Running mkknlimg is not necessary anymore either; the RPi boot process can handle vmlinuz files just fine.

The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

Edit: there’s a thread on LinuxMusicians referring to this article which already contains some very useful additional information.


The Zynthian project

Recently I found out that I was not the only one trying to build a synth module out of a Raspberry Pi with ZynAddSubFX. The Zynthian project is trying to achieve the exact same goal and so far it looks very promising. I contacted the project owner to ask if he would be interested in collaborating. I got a reply promptly and we both agreed it would be a good idea to join forces. The Zynthian project already has in place all the things that I still had to set up, but I think I can still help out. The Zynthian set-up might benefit from some optimizations like a real-time kernel, and things like boot time can be improved. I could also help out with testing, and maybe do some packaging. And if there’s a need for things like a repository, web server or other hosting related stuff I could provide those.

Zynthian prototype

I’m very happy with these developments of our projects converging. Check out the Zynthian blog for more information on the current state of the project.


The hardest part

At least, for me. Now I have to code something that enables me to select banks and instruments on my synth module. I’ve settled on pyliblo to talk to ZynAddSubFX but I’m just no coder. My elbow now rests on “Learning Python” from O’Reilly; I started reading it about a year ago but never got past chapter 3 or so. Time to persist, I’ve been wanting to be able to code a little bit for years now.

As a note to self, and maybe it’s helpful for others too, what follows are some of the relevant OSC messages to change banks and instruments in ZynAddSubFX.

Changing banks can be done with /loadbank. pyliblo comes with an example script send_osc.py and loading a bank with send_osc.py works as follows:

send_osc.py 7777 /loadbank ,i 3

7777 is the port ZynAddSubFX runs on. The /loadbank message wants an integer (,i) and ,i 3 loads the fourth bank (Choir and Voice) as ZynAddSubFX starts counting from 0. Bear in mind that this will only load the bank, it won’t change the instrument that is loaded. To load an instrument from a loaded bank the following send_osc.py incantation does the trick:

send_osc.py 7777 /setprogram ,c $'\x03'

So /setprogram loads an instrument from the active bank. It takes a character (,c) as an argument, and since you can’t type the raw byte for program number 3 directly you have to use an escape sequence like $'\x03' to produce it (and I lost it a bit there, so I could be completely wrong). The above command should load the fourth instrument from the Choir and Voice bank (Voice OOH), as that bank should have been loaded by the previous /loadbank message.
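Eventually these messages will have to be sent from Python rather than through send_osc.py. A minimal, untested sketch with pyliblo, assuming ZynAddSubFX is listening on UDP port 7777 on the same machine, could look like this:

#!/usr/bin/python
import liblo

# Address of the running ZynAddSubFX instance (localhost, port 7777)
zyn = liblo.Address(7777)

# Load the fourth bank (Choir and Voice), ZynAddSubFX counts from 0
liblo.send(zyn, "/loadbank", ('i', 3))

# Load the fourth instrument (Voice OOH) from the now active bank
liblo.send(zyn, "/setprogram", ('c', '\x03'))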

It is also possible to load instrument (.xiz) files. This can be done with the /load_xiz message:

send_osc.py 7777 /load_xiz 0 "/usr/share/zynaddsubfx/banks/Bass/0001-Bass 1.xiz"

This will load the Bass 1 instrument from the Bass bank into part 1 (remember that ZynAddSubFX starts counting from 0). So to load the Bass 2 instrument into part 2 you’d do the following:

send_osc.py 7777 /load_xiz 1 "/usr/share/zynaddsubfx/banks/Bass/0002-Bass 2.xiz"
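Continuing the pyliblo sketch from above, the same thing from Python would be something like this (again untested):

liblo.send(zyn, "/load_xiz", 1, "/usr/share/zynaddsubfx/banks/Bass/0002-Bass 2.xiz")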

So now I have to incorporate this stuff into Python code that gets called when I press buttons on my LCD display. These are the mappings I’d like to accomplish:

  • Up: toggle next bank and display first instrument from that bank
  • Down: toggle previous bank and display first instrument from that bank
  • Left: toggle previous instrument and display instrument
  • Right: toggle next instrument and display instrument
  • Select: select displayed instrument

That shouldn’t be too hard, right? Well, first hurdle: pyliblo can only send or dump OSC messages, it doesn’t seem to be able to handle return messages from the OSC server (ZynAddSubFX in this case). To be continued…

Edit: of course just sending messages with pyliblo won’t handle any return messages; you need a receiving part as well. But maybe I should take a look at using MIDI instead, with mididings for instance. Thanks Georg Mill for the tip!


Two steps further

For my little synth module project I created the following systemd unit file /etc/systemd/system/zynaddsubfx.service that starts up a ZynAddSubFX process at boot time:

[Unit]
Description=ZynAddSubFX

[Service]
Type=forking
User=zynaddsubfx
Group=zynaddsubfx
ExecStart=/usr/local/bin/zynaddsubfx-mpk
ExecStop=/usr/bin/killall zynaddsubfx

[Install]
WantedBy=local-fs.target
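With the unit file in place the service can be enabled and started the usual systemd way:

sudo systemctl daemon-reload
sudo systemctl enable zynaddsubfx.service
sudo systemctl start zynaddsubfx.service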

/usr/local/bin/zynaddsubfx-mpk is a simple script that starts ZynAddSubFX and connects my Akai MPK:

#!/bin/bash

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz &

while ! aconnect 'MPK mini' 'ZynAddSubFX'
do
  sleep 0.1
done

/usr/local/bin/zynpi.py

/usr/local/bin/zynpi.py in its turn is a small Python script that shows a message and a red LED on a 16×2 LCD display so that I know the synth module is ready to use:

#!/usr/bin/python
# Show a startup message on a character LCD plate.
import Adafruit_CharLCD as LCD

# Initialize the LCD using the default pins of the plate
lcd = LCD.Adafruit_CharLCDPlate()

# Set the backlight to red, clear the display and show the startup message
lcd.set_color(1.0, 0.0, 0.0)
lcd.clear()
lcd.message('Raspberry Pi 2\nZynAddSubFX')
The LCD is not an Adafruit one but a cheaper version I found on Dealextreme; it works fine with the Adafruit LCD Python library though. Next step is to figure out if I can use the buttons on the LCD board to change banks and presets.
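As a first experiment for that next step, polling the plate buttons with the same Adafruit library might look something like the untested sketch below, assuming the clone wires up its buttons the same way the Adafruit plate does:

#!/usr/bin/python
import time

import Adafruit_CharLCD as LCD

lcd = LCD.Adafruit_CharLCDPlate()

# The five buttons on the plate
buttons = ((LCD.UP, 'Up'), (LCD.DOWN, 'Down'), (LCD.LEFT, 'Left'),
           (LCD.RIGHT, 'Right'), (LCD.SELECT, 'Select'))

while True:
    for button, name in buttons:
        if lcd.is_pressed(button):
            # For now just show which button was pressed; this is where
            # a bank or preset change would be triggered later
            lcd.clear()
            lcd.message(name)
    time.sleep(0.1)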

Raspberry Pi synth module with 16×2 LCD display
The synth module test environment

Working on a stable setup

Next step for the synth module project was to get the Raspberry Pi 2 to run in a stable manner. It seems like I’m getting close or that I’m already there. First I built a new RT kernel based on the 4.1.7 release of the RPi kernel. For that I had to check out an older git commit because the RPi kernel is already at 4.1.8. The 4.1.7-rt8 patch set applied cleanly and the kernel booted right away:

pi@rpi-jessie:~$ uname -a
 Linux rpi-jessie 4.1.7-rt8-v7 #1 SMP PREEMPT RT Sun Sep 27 19:41:20 CEST 2015 armv7l GNU/Linux

After cleaning up my cmdline.txt it seems to run fine without any hiccups so far. My cmdline.txt now looks like this:

dwc_otg.lpm_enable=0 dwc_otg.speed=1 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootflags=data=writeback elevator=deadline rootwait

Setting USB speed to Full Speed (so USB 1.1) with dwc_otg.speed=1 is necessary, otherwise the audio coming out of my USB DAC sounds distorted.

I’m starting ZynAddSubFX as follows:

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz

With a buffer of 64 frames latency is very low and so far I haven’t run into instruments that cause a lot of xruns with this buffer size. Not even the multi-layered ones from Will Godfrey.
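For reference: at 48000 Hz a buffer of 64 frames works out to 64 / 48000 ≈ 1.3 ms per period, so even assuming two periods of buffering on the ALSA side (an assumption, I haven’t checked how ZynAddSubFX sets this up) the software side of the latency stays under 3 ms.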

So I guess it’s time for the next step: creating a systemd startup unit so that ZynAddSubFX starts at boot. It would also be nice if USB MIDI devices got connected automatically, and if you could somehow see which instrument is loaded; an LCD display would be great for that. I’d also like to have the state of the synth saved, maybe by writing out an .xmz file whenever there’s a state change or at regular intervals. And the synth module will need a housing or casing. Well, let’s get the software stuff down first.


Building a synth module using a Raspberry Pi

Ever since I did an acid set with my brother-in-law at the now closed bar De Vinger I’ve been playing with the idea of creating some kind of synth module out of a Raspberry Pi. The Raspberry Pi 2 should be powerful enough to run a complex synth like ZynAddSubFX. When version 2.5.1 of that synth got released the idea resurfaced, since that version allows you to remote-control a running headless instance of ZynAddSubFX, on a Raspberry Pi for instance, via OSC. I had looked at this functionality a few months earlier but back then the developer was just starting to implement this feature so it wasn’t very usable yet.

zynaddsubfx-ext-gui

But with the release of ZynAddSubFX 2.5.1 the stability of the zynaddsubfx-ext-gui utility has improved to such an extent that it’s a very usable tool. In the screenshot above you can see zynaddsubfx-ext-gui running on my notebook with Ubuntu 14.04, controlling a remote instance of ZynAddSubFX running on a Raspberry Pi.

So basically all the necessary building blocks for a synth module are there. Coupled with my battered Akai MPK Mini and a cheap PCM2704 USB DAC I started setting up a test setup.

For the OS on the Raspberry Pi 2 I chose Debian Jessie, as I feel Raspbian doesn’t get the most out of your Pi. It’s running a 4.1.6 kernel with the 4.1.5-rt5 RT patch set, which applied cleanly and seems to run fine so far:

pi@rpi-jessie:~$ uname -a
Linux rpi-jessie 4.1.6-rt0-v7 #1 SMP PREEMPT RT Sun Sep 13 21:01:19 CEST 2015 armv7l GNU/Linux

This isn’t a very clean solution of course so let’s hope a real 4.1.6 RT patch set will happen or maybe I could give the 4.1.6 PREEMPT kernel that rpi-update installed a try. I packaged a headless ZynAddSubFX for the RPi on my notebook using pbuilder with a Jessie armhf root and installed the package for Ubuntu 14.04 from the KXStudio repos. I slightly overclocked the RPi to 1000MHz and set the CPU scaling governor to performance. The filesystem is Ext4, mounted with noatime,nobarrier,data=writeback.
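For the root filesystem those mount options would translate into an fstab entry roughly like the one below; note that data=writeback for the root filesystem generally also has to be passed via rootflags= on the kernel command line (or set as a default with tune2fs) to actually take effect:

/dev/mmcblk0p2  /  ext4  noatime,nobarrier,data=writeback  0  1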

To get the USB audio interface and the USB MIDI keyboard into line I had to add the following line to my /etc/modprobe.d/alsa.conf file:

options snd-usb-audio index=0,1 vid=0x08bb,0x09e8 pid=0x2704,0x007c

This makes sure the DAC gets loaded as the first audio interface, so with index 0. Before adding this line the Akai would claim index 0 and since I’m using ZynAddSubFX with ALSA it couldn’t find an audio interface. But all is fine now:

pi@rpi-jessie:~$ cat /proc/asound/cards
 0 [DAC            ]: USB-Audio - USB Audio DAC
                      Burr-Brown from TI USB Audio DAC at usb-bcm2708_usb-1.3, full speed
 1 [mini           ]: USB-Audio - MPK mini
                      AKAI PROFESSIONAL,LP MPK mini at usb-bcm2708_usb-1.5, full speed
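As an aside, the vendor and product IDs used in the modprobe line above can be looked up on the RPi itself with lsusb; the DAC and the Akai should show up there as ID 08bb:2704 and ID 09e8:007c respectively:

lsusb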

So no JACK as the audio back-end, the output is going directly to ALSA. I’ve decided to do it this way because I will only be running one single application that uses the audio interface, so basically I don’t need JACK. And JACK tends to add a bit of overhead; you barely notice this on a PC system, but on small systems like the Raspberry Pi JACK can consume a noticeable amount of resources. To make ZynAddSubFX use ALSA as the back-end I’m starting it with the -O alsa option:

zynaddsubfx -r 48000 -b 256 -I alsa -O alsa -P 7777

The -r option sets the sample rate, the -b option sets the buffer size, -I is for the MIDI input and the -P option sets the UDP port on which ZynAddSubFX starts listening for OSC messages. And now for the cool part: if you start zynaddsubfx-ext-gui on another machine on the network and tell it to connect to this port, it starts only the GUI and sends all changes made in the GUI as OSC messages to the headless instance it is connected to:

zynaddsubfx-ext-gui osc.udp://10.42.0.83:7777

Next up is stabilizing this setup and testing with other kernels or kernel configs as the kernel I’ve cooked up now isn’t a viable long-term solution. And I’d like to add a physical MIDI in and maybe a display like described on the Samplerbox site. And the project needs a casing of course.


Raspberry Pi Revisited

When the Raspberry Pi 2 was released I certainly got curious. Would it really be better than its little brother? As soon as it became available in The Netherlands I bought it, and sure enough, this thing flies compared to the Raspberry Pi 1. The four cores and 1GB of memory are certainly an improvement. The biggest improvement though is the shift from ARMv6 to ARMv7. Now you can really run basically anything on it and thus I soon parted from Raspbian and I’m now running plain Debian Jessie armhf on the RPi.

So is everything fine and dandy with the RPi2? Well, no. It still uses the poor USB implementation and audio output. And it was quite a challenge to prepare it for its intended use: a musical instrument. To my great surprise a new version of the Wolfson Audio Card was available for the new Raspberry Pi board layout too, so as soon as people reported they got it to work with the RPi2 I ordered one as well.

 


Cirrus Logic Audio Card for Raspberry Pi

One of the first steps to make the device suitable for use as a musical instrument was to build a real-time kernel for it. Building the kernel itself was quite easy as the RT patch set for the kernel currently used by the Raspberry Pi Foundation (3.18) applied cleanly, and the kernel also booted without issues. But after a few minutes the RPi2 would lock up without logging anything. Fortunately there were people in the same boat as me, and with the help of the info and patches provided by the Emlid community (http://community.emlid.com/t/raspberry-2-realtime-kernel-image/112/12) I managed to get my RPi2 stable with an RT kernel.

Next step was to get the right software running, so I dusted off my RPi repositories and added a Jessie armhf repo. With the help of fundamental the latest version of ZynAddSubFX now runs like a charm with very acceptable latencies; with not all too elaborate instrument patches Zyn is happy with an internal latency of 64/48000 ≈ 1.3 ms. I haven’t measured the total round-trip latency but it probably stays well below 10 ms. LinuxSampler with the Salamander Grand Piano sample pack also performs a lot better than on the RPi1, and when using ALSA directly I barely get any underruns with a slightly higher buffer setting.

I’d love to get Guitarix running on the RPi2 with the Cirrus Logic Audio Card so that will be the next challenge.


Wolfson Audio Card for Raspberry Pi

Just ordered a Wolfson Audio Card for Raspberry Pi via RaspberryStore. I asked them about this audio interface at their stand during the NLLGG meeting where I did a presentation about doing real-time audio with the RPi, and they told me they would ship it as soon as it became available. They kept their word, so I’m hoping to mount this buddy on my RPi this very week. Hopefully it will be an improvement and allow me to achieve low latencies with a more stable RPi, so that I can use it in more critical environments (think live on stage). It has a mic in, so I could probably set up the RPi with the Wolfson card quite easily as a guitar pedal: just a pot after the line output, stick it in a Hammond case, put guitarix on it and rock on.

Wolfson Audio Card for Raspberry Pi


Using a Raspberry Pi as a piano

Recently I posted about my successful attempt to get LinuxSampler running on the Raspberry Pi. I’ve taken this a bit further and produced a script that turns the Raspberry Pi into a fully fledged piano. Don’t expect miracles: the sample library I used is good quality, so the RPi might choke on it every now and then with regard to disk IO. But it’s usable if you don’t play too many notes at once or make extensive use of a sustain pedal. I’ve tested the script with a Class 4 SD card though, so a faster SD card could improve stability.

Edit: I finally got around to buying a better SD card and the difference is huge! I bought a SanDisk Extreme Class 10 SD card and with it I can run LinuxSampler at lower latencies and play more notes at once.

Before you can run the script on your Raspberry Pi you will need to tweak your Raspbian installation so you can do low latency audio. How to achieve this is all described in the Raspberry Pi wiki article I’ve put up on wiki.linuxaudio.org. After you’ve set up your RPi you will need to install JACK and LinuxSampler with sudo apt-get install jackd1 linuxsampler. Next step is to get the Salamander Grand Piano sample pack on your RPi:

cd
mkdir LinuxSampler
cd LinuxSampler
wget -c http://download.linuxaudio.org/lau/SalamanderGrandPianoV2/SalamanderGrandPianoV2_44.1khz16bit.tar.bz2
wget -c http://dl.dropbox.com/u/16547648/sgp44.1khz_V2toV3.tar.bz2
tar jxvf SalamanderGrandPianoV2_44.1khz16bit.tar.bz2
tar jxvf sgp44.1khz_V2toV3.tar.bz2 -C SalamanderGrandPianoV2_44.1khz16bit --strip-components=1

Please note that decompressing the tarballs on the RPi could take some time. Now that you’ve set up the Salamander Grand Piano sample library you can download the script and the LinuxSampler config file:

cd
mkdir bin
wget -c https://raw.github.com/AutoStatic/scripts/rpi/piano -O /home/pi/bin/piano
chmod +x bin/piano
wget -c https://raw.github.com/AutoStatic/configs/rpi/home/pi/LinuxSampler/SalamanderGrandPianoV3.lscp -O /home/pi/LinuxSampler/SalamanderGrandPianoV3.lscp

Almost there. We’ve installed the necessary software and downloaded the sample library, LinuxSampler config and piano script. Now we need to dot the i’s and cross the t’s because the script assumes some defaults that might be different in your setup. Let’s dissect the script:

#!/bin/bash

if ! pidof jackd &> /dev/null
then
  sudo killall ifplugd &> /dev/null
  sudo killall dhclient-bin &> /dev/null
  sudo service ntp stop &> /dev/null
  sudo service triggerhappy stop &> /dev/null
  sudo service ifplugd stop &> /dev/null
  sudo service dbus stop &> /dev/null
  sudo killall console-kit-daemon &> /dev/null
  sudo killall polkitd &> /dev/null
  killall gvfsd &> /dev/null
  killall dbus-daemon &> /dev/null
  killall dbus-launch &> /dev/null
  sudo mount -o remount,size=128M /dev/shm &> /dev/null
  echo -n performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor &> /dev/null
  if ip addr | grep wlan &> /dev/null
  then
    echo -n "1-1.1:1.0" | sudo tee /sys/bus/usb/drivers/smsc95xx/unbind &> /dev/null
  fi
  jackd -P84 -p128 -t2000 -d alsa -dhw:UA25 -p512 -n2 -r44100 -s -P -Xseq &> /dev/null &
fi

This is the first section of the script: an if clause that checks whether JACK is already running. If it isn’t, the system gets set up for low latency use, a simple check is done to see if there is an active WiFi adapter (and if so, the ethernet interface is disabled), and on the last line JACK is invoked. Notice the ALSA name used, hw:UA25; this could be different on your RPi, you can check with aplay -l.

jack_wait -w &> /dev/null

jack_wait is a simple app that does nothing else but check if JACK is active; the -w option means wait for JACK to become active.

if ! pidof linuxsampler &> /dev/null
then
  linuxsampler --instruments-db-location $HOME/LinuxSampler/instruments.db &> /dev/null &
  sleep 5
  netcat -q 3 localhost 8888 < $HOME/LinuxSampler/SalamanderGrandPianoV3.lscp &> /dev/null &
fi

This stanza checks if LinuxSampler is running; if not, LinuxSampler is started and 5 seconds later the config file is pushed to the LinuxSampler backend with the help of netcat.

while [ "$STATUS" != "100" ]
do
  STATUS=$(echo "GET CHANNEL INFO 0" | netcat -q 3 localhost 8888 | grep INSTRUMENT_STATUS | cut -d " " -f 2 | tr -d '\r\n')
done

A simple while loop that checks the load status of LinuxSampler. When the load status has reached 100% the script will move on.

jack_connect LinuxSampler:0 system:playback_1 &> /dev/null
jack_connect LinuxSampler:1 system:playback_2 &> /dev/null
#jack_connect alsa_pcm:MPK-mini/midi_capture_1 LinuxSampler:midi_in_0 &> /dev/null
jack_connect alsa_pcm:USB-Keystation-61es/midi_capture_1 LinuxSampler:midi_in_0 &> /dev/null

This part sets up the necessary JACK connections. The port names of the MIDI devices can be different on your system; you can look them up with jack_lsp, which lists all available JACK ports.

jack_midiseq Sequencer 176400 0 69 20000 22050 57 20000 44100 64 20000 66150 67 20000 &
sleep 4
jack_connect Sequencer:out LinuxSampler:midi_in_0
sleep 3.5
jack_disconnect Sequencer:out LinuxSampler:midi_in_0
killall jack_midiseq

This is the notification part of the script that will play four notes. It’s based on jack_midiseq, another JACK example tool, which does nothing more than loop a sequence of notes. It’s an undocumented utility so I’ll explain how it is invoked:

jack_midiseq <JACK port name> <loop length> <start sample> <MIDI note value> <note length> [<start sample> <MIDI note value> <note length> ...]

Example:
jack_midiseq Sequencer 176400 0 69 20000 22050 57 20000 44100 64 20000 66150 67 20000

JACK port name: Sequencer
Loop length: 4 seconds at 44.1 kHz (176400/44100)
Start value of first note: 0
MIDI note value of first note: 69 (A4)
Length value: 20000 samples, so that's almost half a second
Start value of second note: 22050 (so half a second after the first note)
MIDI note value of second note: 57 (A3)
Length value: 20000 samples
Start value of third note: 44100 (so a second after the first note)
MIDI note value of third note: 64 (E4)
Length value: 20000 samples
Start value of fourth note: 66150 (so one and a half seconds after the first note)
MIDI note value of fourth note: 67 (G4)
Length value: 20000 samples

Now the script is finished; the last line calls exit with a status value of 0, which means the script ran successfully.

exit 0

After making the script executable with chmod +x ~/bin/piano and running it, you can start playing piano with your Raspberry Pi! Again, bear in mind that the RPi is not made for this specific purpose, so audio may start to stutter every now and then, especially when you play busy parts or more than 4 notes at once.


Using a Raspberry Pi as a piano: quick demo


Raspberry Jam Review

Last Thursday the first Dutch Raspberry Jam took place at the Ordina HQ in Nieuwegein. I offered to do a presentation slash demonstration about realtime audio and the Raspberry Pi, so I promised myself to be there at least an hour before the scheduled starting time of my demo. That way I could also join Gert van Loo’s presentation. When I arrived at 19:15 there was no Gert van Loo though, which should’ve triggered some alarms. Also, I didn’t look out for the organizers as soon as I came in. Instead I chose to dot the i’s and cross the t’s with regard to my demo.

Wrong decision.

About half an hour later the event was closed.

WTF?

I approached the person who closed the event and introduced myself. He replied that they thought I wasn’t coming anymore. Apparently they misinterpreted the e-mail I sent earlier that day saying that I hadn’t managed to produce something workable for the laser show guy; they took it for a cancellation. But the event immediately got kind of reopened and I set up my stuff. We had some audio issues but in the end everything went quite well actually.

I showed off what is possible with a Raspberry Pi and realtime audio with the use of some of my favorite software. Guitarix featured of course. I grabbed my guitar, fired up guitarix on the RPi and played some stuff. I hooked up my MIDI foot controller and showed how to select different presets. I also demonstrated the use of the RPi as a piano with the help of LinuxSampler and the awesome Salamander Grand Piano sample pack, and did some drumming by using drumkv1. Before the realtime audio demo I presented an overview of the Linux audio ecosystem and talked about the alternatives for getting sound in and out of your Raspberry Pi. These alternatives are not limited to the onboard sound and USB, since recently it has also become possible to hook up an external audio codec to the I2S bus of the Raspberry Pi. I got one in myself this week, a MikroElektronika Audio Codec PROTO board based on the WM8731 codec, so more on that soon. It’d be awesome if I could get that codec to work reliably at lower latencies.

So it all turned out well. I had a great time doing my presentation, and judging by the interest shown by some attendees who came up to me afterwards I hope I got some more people enthusiastic about doing realtime audio with the Raspberry Pi and Linux. So thanks Ordina for offering this opportunity, and thanks to everyone who stuck around!
