No Man’s Sky – musings and wallpapers

It’s unusual for me to take much interest in anything with so much hype, but I thought I must make an exception for No Man’s Sky, a game which brings together two things I’ve treasured during my life: firstly, the concept of space exploration in a computer game, as pioneered by the glorious Elite, which I spent a great deal of time playing as a teenager; and secondly, graphics inspired by wonderful 70s sci-fi artwork such as that produced by Chris Foss, which I’ve enjoyed in books since my early childhood. I was so keen on this game that I went so far as to buy a new PS4 just to play it.

The controversy surrounding the game is curious, though it’s sad that so much of it has been so negative. The amount of shocking abuse that’s been hurled at Sean Murray, the founder of Hello Games, is simply appalling. As a game it’s not exactly adrenaline-fuelled and it certainly has its flaws, so I can see why some are disappointed and frustrated with it. It’s easy to see the places in the game where development was cut short, and I hope that Hello Games will continue to work on the game to realise the full extent of their original vision. Even if that doesn’t happen, I won’t regret for a second the time I’ve spent immersed in this endlessly gorgeous universe.

For me, indeed, this game is a beautiful, almost Zen-like work of art which – to an intriguing extent – explores notions of, and raises questions about, the purpose of our existence and the nature of the cosmos. The visuals in the game are remarkably reminiscent of the beauty we find within nature in the real world, and the need to find our own reasons for playing the game – without being handed a story on a plate like most other games – echoes the existential dilemma we all face when it comes to trying to find meaning within our lives. The mind-bogglingly huge, procedurally generated universe in the game reflects our increasing suspicion that our own universe could actually be a virtual simulation, generated by machines located in some other reality, as explored in films such as The Matrix and World on a Wire.

Oh, one other thing: last but certainly not least, the epic and atmospheric soundtrack by 65daysofstatic is an excellent accompaniment to the graphics and gameplay in No Man’s Sky.

This is a video I took within the game just after I’d first started playing it, quite a while ago now. I still think it’s a great example of the visuals and atmosphere within the game, and of course this is just some of the stuff that happens in space. For planetary artwork, check out the desktop wallpapers I created below as a result of adopting the role of “photographer” within the game, just as I do in real life.

Desktop wallpapers

These are all taken directly from the PS4 version of the game. They’re not modified or edited in any way, apart from the HUD details which I’ve photoshopped out.

Monitoring HP ProLiant DL360 hardware in CentOS, with Nagios (optional)

My original post for monitoring HP storage hardware in CentOS is now out of date, so I decided to write an updated post for monitoring all hardware, not just storage hardware, and for optionally including this hardware monitoring in Nagios.

This is written primarily for CentOS 6. It should be largely fine for CentOS 5 and CentOS 7 too, although one or two modifications may be needed. It should also work with some other HP ProLiant servers such as the DL380.

smartd for (supposedly) predicting drive failure

Before we get onto the HP software, it’s worth taking a minute to install smartd, which you can obtain by installing the smartmontools package in CentOS. This software uses the SMART system to attempt to predict when drives are going to fail. It’s easy to configure so that smartd supposedly emails you as soon as problems are detected with drives.

Here’s an older example of an /etc/smartd.conf file on a server which has two SAS disks arranged into a single RAID partition:

/dev/cciss/c0d0 -d cciss,0 -a -m
/dev/cciss/c0d0 -d cciss,1 -a -m

Here’s a more recent example of an /etc/smartd.conf file on a server which has two SSDs configured as RAID 1:

/dev/sda -a -m
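The -m directive takes the mail address to notify (omitted in the examples above). A fuller line, with a placeholder address, might look like this – the -M test directive makes smartd send a test mail at startup, which is handy for confirming that delivery actually works:

```
/dev/sda -a -m admin@example.com -M test
```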

However, I’ve never found smartd to be very useful. It starts up fine and indicates via syslog that it’s monitoring the disks, but I’ve never had smartd give a warning before a drive failure even though I’m quite sure it’s configured correctly.

HP software for hardware monitoring

So, onto the really useful stuff. If you try to do this using the official methods as advised by HP, you’ll probably end up installing a whole bunch of awful bloated software that you don’t need, taking up resources on your servers. In fact there are only two or three fairly small components which you actually need.

Previously it was necessary to get the first two of these from the HP Service Pack for ProLiant, but HP have recently changed everything once again. Now you need to get the Management Component Pack for CentOS 6 (also known as hp-mcp) from CentOS 6 Downloads on the Support section of the HP website; this provides the hp-health (previously known as hpasm) and hpssacli (previously known as hpacucli) components that you’ll need.

If you have SSDs installed, you’ll also want to get the HP Smart Storage Administrator Diagnostic Utility (also known as HP SSADU or hpssaducli, previously known as hpadu) from the Software – System Management section in Red Hat Enterprise Linux 6 Server (x86-64) Downloads on the Support section of the HP website.

Sorry if that all seems a bit longwinded, but HP do have a way of making things complicated.

When you extract the hp-mcp tarball after downloading the Management Component Pack for CentOS 6, you’ll find a subdirectory called something like mcp/CentOS/6/x86_64/10.10 in which there are a bunch of RPM files. Upload the hp-health and hpssacli RPMs to your servers, along with the hpssaducli RPM you got from the HP Smart Storage Administrator Diagnostic Utility if you have SSDs. Then install them the usual way, with rpm -i ... etc.

Checking server hardware with hpasmcli

Once these are installed, you can check the server hardware by running hpasmcli. At the prompt, type show to see what you can check. For example, show powersupply gives you up-to-date information on – unsurprisingly – the power supplies:

Power supply #1
        Present  : Yes
        Redundant: Yes
        Condition: Ok
        Hotplug  : Supported
        Power    : 40 Watts
Power supply #2
        Present  : Yes
        Redundant: Yes
        Condition: Ok
        Hotplug  : Supported
        Power    : 30 Watts

Type help to get more information.

Checking storage hardware with hpssacli

Next, to check the RAID controller and installed drives, use a command like the following:

hpssacli ctrl all show status ; hpssacli ctrl slot=0 ld all show status ; 
hpssacli ctrl slot=0 pd all show status

That command should show something like this:

Smart Array P440ar in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: Not Configured
   Battery/Capacitor Status: OK

   logicaldrive 1 (111.8 GB, RAID 1): OK

   physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 120 GB): OK
   physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 120 GB): OK

Type hpssacli help to get more information on how to use it.
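If you’d rather script this check than eyeball the output, you can simply count any status lines which don’t end in “: OK”. Here’s a minimal sketch using sample text in place of live output – on a real server you’d pipe the hpssacli commands themselves:

```shell
# Sample lines standing in for "hpssacli ctrl slot=0 pd all show status"
sample='physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 120 GB): OK
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 120 GB): Failed'

# Count status lines which don't end in ": OK"
bad=$(printf '%s\n' "$sample" | grep -vc ': OK$')
echo "$bad drive(s) not reporting OK"
```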

Checking SSDs with hpssaducli

If you have SSDs and you installed hpssaducli, you can also check SSD status with this command:

hpssaducli -ssd -txt -f /tmp/ssd.txt ; cat /tmp/ssd.txt

That should show you a bunch of information about wear on the SSDs, e.g.:

Smart Array P440ar in Embedded Slot : Internal Drive Cage at Port 1I : Box 1 : Physical Drive (120 GB SATA SSD) 1I:1:1 : SmartSSD Wear Gauge

   Status                               OK
   Supported                            TRUE
   Log Full                             FALSE
   Utilization                          0.000000
   Power On Hours                       47
   Has Smart Trip SSD Wearout           FALSE

Integrating HP hardware monitoring with Nagios

If you’re not using Nagios then obviously you can stop reading now!

Server hardware

I’ve always used the check_hpasm plugin for checking server hardware, and it’s worked well for me. Just follow their instructions to install it, then you can integrate it into your Nagios configuration as needed.

Note that you’ll need to add the following line to your /etc/sudoers so that it has permission to run:

nrpe              ALL=NOPASSWD: /sbin/hpasmcli
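For reference, the corresponding NRPE command definition in nrpe.cfg might look something like this – the plugin path is an assumption, so adjust it to wherever you installed check_hpasm:

```
command[check_hpasm]=/usr/lib64/nagios/plugins/check_hpasm
```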

Storage hardware

I’ve always used the check_hparray plugin for checking storage hardware, and it’s always worked perfectly for me, notifying me every time there’s been a drive failure. However, I see that it apparently hasn’t worked for some people, and it’s not clear why not, so use at your own risk.

Note that it does need to be modified now that HP have changed the name of their software, so just replace all instances of “hpacucli” in the script with “hpssacli” then it should work fine. Put the script in your Nagios plugins folder, then you can integrate it into your Nagios configuration as needed.
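The renaming is a one-liner with sed. Here’s the idea demonstrated on a scratch copy rather than the real plugin (the path and file contents here are purely illustrative):

```shell
# Make a scratch file standing in for the check_hparray script
plugin=/tmp/check_hparray_demo
printf 'CMD="/usr/sbin/hpacucli ctrl all show status"\n' > "$plugin"

# Replace every occurrence of the old name with the new one
sed -i 's/hpacucli/hpssacli/g' "$plugin"
cat "$plugin"
```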

Note that you’ll need to add the following line to your /etc/sudoers so that it has permission to run:

nrpe              ALL=NOPASSWD: /sbin/hpssacli


To check the wear status of SSDs, I wrote a simple Nagios plugin which you can obtain from my GitHub repository. You’ll need to install the dos2unix command if it’s not already installed (with yum -y install dos2unix). Just install the plugin in your Nagios plugins directory, then you can integrate it into your Nagios configuration as needed.
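In essence such a plugin just parses the wear gauge fields shown earlier and maps them to a Nagios state. The core idea can be sketched like this, with sample text standing in for live hpssaducli output:

```shell
# Sample lines standing in for part of the SmartSSD Wear Gauge report
report='   Status                               OK
   Utilization                          0.000000
   Has Smart Trip SSD Wearout           FALSE'

# Pull out the Status field and map it to a Nagios-style result
status=$(printf '%s\n' "$report" | awk '/^ *Status /{print $2}')
if [ "$status" = "OK" ]; then
  echo "SSD WEAR OK"
else
  echo "SSD WEAR CRITICAL - status is $status"
fi
```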

Reclaiming storage space on two-node MongoDB replica sets

The MongoDB documentation seems to assume we can create replica sets using unlimited numbers of instances which have infinite amounts of storage. In practice, however, we often need to use replica sets with only two nodes (plus arbiter) which have limited storage. The problem then is that MongoDB tends to use vast amounts of disk space without reclaiming the space from dropped data, so it consumes ever-increasing amounts of storage. This is then hard to deal with given the limited options available in a two-node replica set.

A solution to this is clearing all the data from each node in turn, which forces MongoDB to rebuild its data using only the disk space it needs. When performed on a regular basis, this stops the amount of storage which MongoDB is using from constantly increasing at an unacceptable rate.

To achieve this, I wrote the following script which can be run on the primary node via cron as the mongod user on a regular basis (e.g. once a week, or even once a day, depending on the seriousness of the problem). The script firstly clears then rebuilds data on the secondary, then temporarily promotes the secondary to primary whilst clearing and rebuilding data on the primary, then puts everything back to normal again.

N.B. Whilst I’ve built a lot of safety checks and backups into this script, be aware that it deletes all data on your MongoDB nodes so there is high potential for serious problems such as complete data loss if you’re not careful. So, read through the following points very carefully, and deal with these issues before you even think about running the script:

  • Only run this on a properly functioning, problem-free two-node system where you have an arbiter configured on a third machine.
  • Follow the instructions in the comments at the top and ensure that you have the mongod user, SSH and sudo set up properly before commencing.
  • For the latest version of the script you’ll need the timeout Unix command installed, so make sure that’s available on your systems before you start.
  • Get this working properly and safely in test environments before considering deployment in any production environments.
  • Before adding this to cron, run it manually so you can see what it’s doing and stop it if necessary to fix issues.
  • Always make sure you have recent data backups before running it, so that you can restore all your data in the event of a disaster.
  • I’ve run this in various environments with CentOS 5 and CentOS 6, but I haven’t tested it on Debian or Ubuntu, so you may need to make some changes to run it on those distributions.
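Incidentally, the timeout behaviour the script depends on is easy to verify: GNU coreutils timeout kills the command when the limit expires and exits with status 124, which is what lets the script detect a hung SSH connection:

```shell
# A command that finishes within the limit exits normally
timeout 5 sleep 1 ; ok_status=$?

# A command that overruns is killed; timeout then exits with status 124
timeout 1 sleep 10 ; killed_status=$?

echo "within limit: $ok_status, over limit: $killed_status"
```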

If you choose to use this then you do so at your own risk, and after all those warnings I’m not going to take any responsibility if you lose data as a result!

2016-01-11: I’ve modified the script to use the timeout command in various places. This adds a level of safety to the script to stop it from unexpectedly doing dangerous things if it doesn’t run properly for some reason.

Change your environments and hostnames in the script as needed. You can get the script from GitHub or copy and paste it below:


# Force MongoDB to only use as much storage as it needs
# instead of taking up more and more space without reclaiming it

# Make sure of the following:
# 1. The mongod user has its shell set to /bin/bash on both machines
# 2. The mongod user has SSH keys set up such that it can SSH from 
#    the primary to the secondary without prompt
# 3. The mongod user has the following permissions in /etc/sudoers:
#    mongod ALL=NOPASSWD: /sbin/service mongod status, /sbin/service mongod stop, /sbin/service mongod start
#    (modify accordingly if not using Red Hat/CentOS)
# 4. Make sure the requiretty option is off in /etc/sudoers

# Only run as mongod user
if [ "$(whoami)" != "mongod" ] ; then echo "Not mongod user" ; exit 1 ; fi

# Determine environment - the hostnames below are placeholders; change as needed
case "$(hostname)" in
  db1.example.com ) primary=db1.example.com ; secondary=db2.example.com ;;
  db2.example.com ) primary=db2.example.com ; secondary=db1.example.com ;;
  * ) echo "Unknown environment" ; exit 1 ;;
esac

# Check sudo and SSH
if ! sudo -n /sbin/service mongod status > /dev/null ; then
  echo "Problem with sudo on $primary" ; exit 1
elif ! ssh -q $secondary "sudo -n /sbin/service mongod status > /dev/null" ; then
  echo "Problem with SSH and/or sudo on $secondary" ; exit 1
fi

# Take backup on primary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Taking backup /tmp/dump on $primary..."
cd /tmp ; rm -rf dump ; mongodump > /dev/null
if [ "$?" != "0" ] ; then echo " Problem taking backup on $primary" ; exit 1 ; fi
echo " done"

# Clear data on secondary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Clearing data on $secondary..."
timeout 300 ssh -q $secondary "sudo -n /sbin/service mongod stop > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem stopping mongod on $secondary" ; exit 1 ; fi
timeout 300 ssh -q $secondary "rm -rf /var/lib/mongo/*"
if [ "$?" != "0" ] ; then echo " Problem clearing /var/lib/mongo on $secondary" ; exit 1 ; fi
timeout 300 ssh -q $secondary "sudo -n /sbin/service mongod start > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem starting mongod on $secondary" ; exit 1 ; fi
echo " done"

# Wait for secondary to come back up
issecondary=$(timeout 300 ssh -q $secondary "echo 'db.isMaster()' | mongo" | grep secondary | awk -F '[ ,]' '{print $3}')
if [ "$?" != "0" ] ; then echo " Problem getting isMaster status on $secondary" ; exit 1 ; fi
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $secondary to come up..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(timeout 300 ssh -q $secondary "echo 'db.isMaster()' | mongo" | grep secondary | awk -F '[ ,]' '{print $3}')
  if [ "$?" != "0" ] ; then echo " Problem getting isMaster status on $secondary" ; exit 1 ; fi
echo " done"

# Demote primary so secondary is master
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Demoting $primary..."
echo 'rs.stepDown()' | mongo --quiet > /dev/null
if [ "$?" != "0" ] ; then echo " Problem demoting $primary" ; exit 1 ; fi
echo " done"

# Wait for secondary to take over as master
issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $secondary to become master..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo " done"

# Clear data on primary
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Clearing data on $primary..."
sudo -n /sbin/service mongod stop > /dev/null
if [ "$?" != "0" ] ; then echo " Problem stopping mongod on $primary" ; exit 1 ; fi
rm -rf /var/lib/mongo/*
if [ "$?" != "0" ] ; then echo " Problem clearing /var/lib/mongo on $primary" ; exit 1 ; fi
sudo -n /sbin/service mongod start > /dev/null
if [ "$?" != "0" ] ; then echo " Problem starting mongod on $primary" ; exit 1 ; fi
echo " done"

# Wait for primary to come up
issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $primary to come up..."
until [ "$issecondary" == "true" ] ; do
  sleep 5
  echo -n "."
  issecondary=$(echo 'db.isMaster()' | mongo | grep secondary | awk -F '[ ,]' '{print $3}')
echo " done"

# Demote secondary so primary is master
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Demoting $secondary..."
timeout 300 ssh -q $secondary "echo 'rs.stepDown()' | mongo --quiet > /dev/null"
if [ "$?" != "0" ] ; then echo " Problem demoting $secondary" ; exit 1 ; fi
echo " done"

# Wait for primary to take over as master
isprimary=$(echo 'db.isMaster()' | mongo | grep ismaster | awk -F '[ ,]' '{print $3}')
echo -n "$(date +'%Y-%m-%d %H-%M-%S') Waiting for $primary to become master..."
until [ "$isprimary" == "true" ] ; do
  sleep 5
  echo -n "."
  isprimary=$(echo 'db.isMaster()' | mongo | grep ismaster | awk -F '[ ,]' '{print $3}')
echo " done"

Setting up an IPsec VPN on pfSense 2.1 for mobile OS X and iOS clients

I recently had to configure the open-source firewall pfSense to allow VPN access for mobile clients, particularly those using OS X on Macs and iOS on iPhones and iPads.

I haven’t found too many examples out there from people who have set this up successfully, so I thought it might be helpful to share this information for others who are trying to set up a similar VPN configuration.

N.B. This works for pfSense 2.1. In pfSense 2.2 they completely changed the IPsec backend, so things are a little different at the frontend.

pfSense configuration

In System -> User Manager set up a suitable user as needed, and under Effective Privileges add User – VPN – IPsec xauth Dialin for that user.

Then go to VPN -> IPsec and set up the mobile IPsec client configuration as follows.

VPN: IPsec

Tunnels: Phase 1 (Mobile Client)

General information

  • Disabled off
  • Internet Protocol IPv4
  • Interface WAN
  • Description Remote access VPN [modify as needed]

Phase 1 proposal (Authentication)

  • Authentication method Mutual PSK + Xauth
  • Negotiation mode aggressive
  • My identifier My IP address
  • Peer identifier Distinguished name MyIdentifier [modify as needed]
  • Pre-Shared Key MyPresharedKey [modify as needed]
  • Policy Generation Default
  • Proposal Checking Default
  • Encryption algorithm 3DES
  • Hash algorithm SHA1
  • DH key group 2 (1024 bit)
  • Lifetime 28800

Advanced Options

  • NAT Traversal Force
  • Dead Peer Detection on 10 seconds 5 retries

Tunnels: Phase 2 (Mobile Client)

  • Disabled off
  • Mode Tunnel IPv4
  • Local Network LAN subnet (NAT/BINAT None)
  • Description [empty]

Phase 2 proposal (SA/Key Exchange)

  • Protocol ESP
  • Encryption algorithms AES auto, Blowfish auto, 3DES, CAST128
  • Hash algorithms MD5, SHA1
  • PFS key group off
  • Lifetime 3600

Advanced Options

  • Automatically ping host [empty]

Mobile clients

  • IKE Extensions on

Extended Authentication (Xauth)

  • User Authentication LocalDatabase
  • Group Authentication none

Client Configuration (mode-cfg)

  • Virtual Address Pool on Network: / 24 [modify as needed]
  • Network List off
  • Save Xauth Password off
  • DNS Default Domain on [modify as needed]
  • Split DNS off
  • DNS Servers on Server #1: [modify as needed]
  • WINS Servers off
  • Phase2 PFS Group off
  • Login Banner on Warning: don't be naughty! [modify as needed]

Pre-Shared Keys

  • Identifier MyIdentifier [modify as needed, should match Peer identifier above]
  • Pre-Shared Key MyPresharedKey [modify as needed, should match Pre-Shared Key above]

Firewall: Rules

In Firewall -> Rules, go to the IPsec tab and make sure there’s a rule to allow all IPv4 traffic from anywhere to anywhere.

OS X configuration

In System Preferences -> Network, add a new interface of type VPN, VPN Type Cisco IPSec, and Service Name of your choice.

Server Address is the public IP of your firewall. Account Name is the pfSense user you set up earlier.

In Authentication Settings, Shared Secret is the pre-shared key you created on pfSense earlier, and Group Name is the identifier you created on pfSense earlier.

iOS configuration

In Settings -> VPN, add a new VPN configuration of type IPSec.

Description is up to you. Server is the public IP of your firewall. Account is the pfSense user you set up earlier. Group Name is the identifier you created on pfSense earlier. Secret is the pre-shared key you created on pfSense earlier.

Soundtrack, sound effects and ident audio design for Propsplanet game demo

I was recently asked to start providing audio for Propsplanet, a maker of 3D models for game developers. I started with a video called Fantasy Journey, a game demo which gives examples of how their 3D models could be used.

I wrote the soundtrack music to accompany the video; I created all the in-game sound effects; and I also designed the audio for the Propsplanet ident at the start.

Here’s the end result:

Tilt-shift video of kites on Parliament Hill with moody soundtrack

Some time ago I made a short video of kites being flown at sunset on Parliament Hill at Hampstead Heath, London, then processed the video using a tilt-shift app on my iPhone.

Subsequently I decided to write a moody little soundtrack to go with the video using the Sculpture physical modelling synth in Logic Pro.

I think the result is a nice little “mood piece”:

Time-lapse film of spider plant flower with ethereal soundtrack

My spider plants (Chlorophytum comosum) grow new flowers each day which only last for a few hours. Yesterday morning I happened to notice that a new flower was about to bloom, so I thought this would be a good opportunity to try out the time-lapse mode on my iPhone’s camera. So I attached my olloclip 4-IN-1 to use one of its macro lenses, clamped the iPhone on my microphone stand, and initiated a time-lapse video.

I thought the end result was worthwhile, so I processed it in iSupr8 on my iPhone to get that nice “super 8 mm” effect. I was then inspired to write an accompanying soundtrack which was suitably ethereal, so I fired up Logic Pro X on my Mac and spent quite some time playing around with the Sculpture synth, which is a criminally underrated physical modelling synthesizer that’s wonderful for this sort of work. I also used a variety of delay and reverb plugins, plus some excitation to enhance the sonic “shimmer”.

The finished video is entitled So Special, and here it is:

Making a short sci-fi film and composing the soundtrack for it

Earlier this year, I was invited to join a group who had decided to enter the SCI-FI-LONDON 48 Hour Film Challenge which, as the name suggests, requires you to make a science-fiction film from scratch within 48 hours during a weekend. As you can probably imagine, this is indeed very challenging, but very satisfying and great fun.

I had been under the impression that I was mainly there to do the soundtrack, but in addition to that I ended up acting in the lead role! This was my first real experience of acting, and I have to admit that I enjoyed it and found it very interesting. I’d definitely give it another go if the opportunity presented itself.

I also contributed ideas for the overall concept and story, as did all the members of the team.

I didn’t receive a rough edit of the film until about midnight on the Sunday night, which meant that I only had about twelve hours of working through the night to get the soundtrack written and recorded. I’d already created some patches on my synthesizers and jammed some musical ideas and phrases, so I was as prepared as I could be. I brewed a strong pot of coffee and got on with it.

Our film is called 600 Days After, and here it is:

600 Days After on Vimeo.

This was our first experience of working together on a whole film, and we were quite pleased with the end result. There’s a lot about it that I’m pretty happy with, especially the way the music helps to create the dark atmosphere which gels with the story and my role in it. I’m definitely looking forward to the next one!

Arturia MicroBrute and MicroBrute SE monophonic analogue synthesizers

When Arturia brought out the monophonic analogue synthesizer known as the MiniBrute with its relatively unusual and quite aggressive Steiner-Parker filter, punchy envelope generator, and surprisingly wide range of other exciting noise-sculpting possibilities, I was very tempted to buy one but somehow managed to resist.

Then the MicroBrute came out with a lower price tag, a built-in sequencer and semi-modular capabilities, and my level of temptation increased accordingly.

When a special limited edition version of the MicroBrute came out, I knew I couldn’t resist any longer and ordered the white version. I liked the MicroBrute SE so much that I had also ordered a regular black MicroBrute within 48 hours. The white one became known as Good Brute and the black one as Evil Brute.

Having two conveniently tiny and sonically powerful monosynths with the inputs and outputs of a semi-modular system raises all sorts of interesting possibilities. They can be used entirely separately, with two separate sequences playing on their respective sequencers; they can be used separately but with some connectivity between the two with patch cables (e.g. using the LFO on one machine to control pulse width modulation on both machines); or they can be linked together to function as one doubly powerful two-oscillator monosynth.

I decided to film a video of Good Brute vs. Evil Brute demonstrating all of these possibilities and more. During the course of the video I build sounds and sequences to make the parts of a song, and at the end I play the whole thing whilst jamming a lead part over the top. If you don’t care about the geeky stuff and just want to see/hear the final performance, skip ahead to 13:35 in the video:

00:00 percussion parts; 04:00 bass parts; 07:22 bleeps; 09:00 lead sound; 13:35 final performance with all parts playing.

Audio hardware and software used: Arturia MicroBrute, Arturia MicroBrute SE, Apple Mac, Ableton Live, Audio Hijack Pro.

Video hardware and software used: iPhone 5s, Apple Mac, Final Cut Pro X.

SysAdmin fame at last!

I was interviewed for a careers feature in the esteemed PC Pro magazine, and my article has been printed in the latest edition:

Matt Brock - Linux system administrator

I think they’ve done a great job of editing my original monologue into a compelling description of the excitement, challenges and rewards of administering computer systems and managing infrastructure, and I hope it helps to encourage college graduates and other potentially interested individuals into the field of system administration.

In the meantime, I’ll continue to enjoy my fifteen minutes of fame…