Monitoring PERC RAID controllers and storage arrays on Dell PowerEdge servers with Debian and Ubuntu

If you have a Dell PowerEdge server with a RAID array then you’ll probably want to be notified when disks are misbehaving, so that you can replace the disks in a timely manner. Hopefully this article will help you to achieve this.

These tools generally rely on being able to send you email alerts (otherwise their usefulness is somewhat limited), so you should make sure you have a functioning MTA installed which can successfully send email to you from the root account. Setting up an MTA is beyond the scope of this article, so hopefully you already know how to do that.

Monitoring with SMART

Firstly, it’s probably worth installing smartmontools to perform SMART monitoring on the storage array (though, to be honest, in all my years as a sysadmin I’ve yet to receive a single alert from smartd before a disk failure… but anyway).

apt-get install smartmontools

Comment out everything in /etc/smartd.conf and add a line akin to the following:

/dev/sda -d megaraid,0 -a -m foo@bar.com

Replace /dev/sda if you’re using a different device, and replace foo@bar.com with your email address. Restart smartd:

service smartd restart

If you check /var/log/syslog, you should see that the service has successfully started up and is monitoring your RAID controller. You should now in theory receive emails if smartd suspects impending disk failure (though don’t bet on it).
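
If you want to confirm that the email alerting actually works, smartd’s -M test directive makes it send a single test message to the specified address when the service starts, so you can temporarily extend the smartd.conf line like this:

/dev/sda -d megaraid,0 -a -m foo@bar.com -M test

Once the test message has arrived, remove -M test and restart smartd again.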

If you want to get information about your controller, try this command:

smartctl -d megaraid,0 -a /dev/sda

That should show you hardware details, current operating temperatures, error information, etc.

Monitoring and querying the RAID controller

So, let’s get down to the proper monitoring tools we really need. Dell’s “PERC” RAID controllers are apparently supplied by LSI, and LSI produce a utility called megacli for managing these cards, but megacli seems fairly unfriendly and unintuitive. I haven’t needed to use megacli, however, because I’ve had success with a friendlier tool called megactl. My instructions are therefore for installing and using megactl; if that doesn’t work for you then you’ll probably need to figure out how to install and use megacli instead, in which case I wish you the best of luck.

To install megactl, firstly follow these straightforward instructions to install the required repository on Debian or Ubuntu, then install the tools:

apt-get install megaraid-status

This will automatically start the necessary monitoring daemon and ensure that it gets started on boot. The daemon should send an email to root if it detects any problems with the array.

The following command can be used to show the current state of the array and its disks, in detail:

megaraidsas-status

To get more information and instructions on megactl and related tools, this page is a good starting point.

Monitoring with Nagios

In order to get alerted via Nagios, it seems that the check_megasasctl plugin for Nagios will do the trick, so long as you have megactl installed as described above. I haven’t actually tried this in Nagios myself yet, so I can’t vouch for it.
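
If you do go down that route, wiring it in should presumably look much like any other local Nagios plugin. Here’s a rough, untested sketch assuming the plugin is run on the monitored host via NRPE (the plugin path, command name and host name are placeholders, so check the plugin’s documentation for the actual options):

# In nrpe.cfg on the monitored host:
command[check_raid]=/usr/lib/nagios/plugins/check_megasasctl

# Service definition on the Nagios server:
define service {
    use                   generic-service
    host_name             poweredge01
    service_description   RAID array
    check_command         check_nrpe!check_raid
}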

Dairy- and gluten-free pork pie recipe

This is quite a departure from the usual sort of content I produce on my blog, but I wanted to keep a record of this recipe so I could repeat it in future. Also, I thought it might be of benefit to others wanting to create similar pies, so here it is.

To create this recipe I originally started with the Doves Farm recipe for a gluten-free pork pie, then I made quite a few modifications of my own. Do feel free to modify the recipe further according to your tastes and needs (though please note you attempt this recipe at your own risk, and I’m not to be held responsible if you somehow manage to poison or injure yourself).

I use a cupcake tray for the pies so I get lots of little pies rather than one big one. The pies are nice when warm from the oven, but even nicer when eaten cold later.

Ingredients

  • Plain white gluten-free flour (Doves Farm if possible) – 300 g
  • Dairy-free margarine – 125 g
  • Water – 150 ml
  • Dairy- and gluten-free sausage meat (or sausages) – 450 g
  • Unsmoked bacon (lardons if possible) – 100 g
  • Large egg – 1
  • Dried sage
  • Salt and pepper

Oven temperature and cooking time

  • 180 °C
  • 1 hour

Procedure

  1. Put the sausage meat into a bowl; or, if you have sausages, remove the cases and empty the meat into a bowl.
  2. Chop the lardons or bacon into very small pieces (using scissors is fine with lardons) and mix into the sausage meat.
  3. Add salt, pepper and dried sage to the meat (be fairly generous with the seasoning; some trial and error will probably be required to get this absolutely right).
  4. Put the flour into a separate bowl with the egg and a couple of pinches of salt.
  5. Put the water and margarine into a saucepan and bring to the boil.
  6. Mix the water and margarine into the flour mixture and knead into dough.
  7. Grease the cupcake tray with margarine.
  8. Take a piece of the dough and press into one of the cups in the tray to form the outer layer of crust (make sure it’s not too thick).
  9. Take some of the meat mixture and fill the pastry cup with it:

    [Image: dough pressed into a cup of the tray to form the outer layer of crust]

  10. Take a piece of dough, press flat, and place over the meat mixture to form the top of the pie (ensure that the top piece of dough neatly joins the dough at the sides – add or remove dough as needed):

    [Image: dough placed on top to form the lid of the pie]

  11. Repeat this process until all the cups are filled, or until the dough or meat mixture runs out:

    [Image: the filled cupcake tray]

  12. Bake in a preheated oven at 180 °C for 1 hour.

You should end up with something like this:

[Image: the finished pies]

And here are a couple of the tasty finished pies alongside some delicious salad from my father’s garden (and some coleslaw from Waitrose):

[Image: the finished pies with salad and coleslaw]

Comparison of keyboards for iPad mini with Retina display

At any given time I’m effectively on 24/7 support for a number of clients, but I don’t always want to carry my laptop with me wherever I go. I therefore decided it would be a good idea to buy a keyboard for my iPad mini with Retina display in order to have a light, very portable hardware solution suitable for most support situations without having to carry my much bigger, heavier laptop around.

Logitech Ultrathin

I firstly tried out a Logitech Ultrathin clip-on keyboard cover. The original version of this product (which is the version they’re still selling on Amazon, so be careful) apparently works well for the original iPad mini, but not for the iPad mini with Retina display (sometimes erroneously referred to as the iPad mini 2) because the Retina version is slightly thicker than the original iPad mini, and thus the viewing angle with the original Logitech Ultrathin is too steep.

I therefore tried the updated Ultrathin, which has been redesigned with a flexible multi-angle stand to solve the problem with the viewing angle:

[Image: the updated Logitech Ultrathin keyboard cover]

My initial impressions were very good, although I wasn’t keen on the fact that you can’t just fold the iPad down onto the keyboard when you’ve finished like you can when closing a laptop to put it to sleep – instead, you have to pull the iPad out of the stand then slot it into the flap at the rear, which is an awkward solution and doesn’t feel very robust.

A much bigger problem for me, however, is the lack of a CTRL (Control) key on the keyboard. This is unlikely to bother most regular users, but my support work mainly involves me connecting to Linux servers via SSH for which the CTRL key is heavily used, so this was a fairly major problem. (Apparently the original version of the Ultrathin had a CTRL key but it didn’t actually work much of the time, so the same problem potentially applies.)

ZAGG ZAGGkeys Cover

After finding Avi Freedman’s very helpful post regarding this issue, I decided to return the Logitech Ultrathin and buy a ZAGG ZAGGkeys Cover instead:

[Image: the ZAGG ZAGGkeys Cover]

For me, this is a much better product. The way it clips on at the back makes it feel much more like a small laptop or netbook solution – there’s a much wider range of viewing angles, the keyboard is bigger because it stretches all the way to the rear of the device, and you can close the iPad onto the keyboard properly and without hassle when you’ve finished, making for a much more robust protective solution when you’re transporting it.

Even more importantly for me, it has a CTRL key in the right place, and as an added bonus – also unlike the Logitech – it has a Tab key you can hit without having to mess around with modifiers. Not only is the keyboard larger and fitted with all the keys I require, but it also feels a bit more intuitively laid out than the one on the Logitech. The keys are very solid, responsive and generally nice to use.

There’s one additional bonus: the keyboard is backlit, a feature not present on the Logitech. This wasn’t a deal-breaker for me, but there could be situations in which it’s a bit of a life-saver.

There are only two problems with the ZAGG keyboard. Firstly, when the iPad is tilted all the way back and you have it on your lap, sometimes it tips over due to the keyboard being much lighter than the iPad. Secondly, it can be rather awkward to touch controls right at the bottom of the iPad screen because the keyboard comes right up to the bottom of the screen. However, I don’t regard either of these problems as particularly serious when weighed up against all the benefits of this product.

Conclusion

All in all I’m very pleased with the ZAGG ZAGGkeys Cover. When it’s attached, the iPad mini feels like a proper little mini-laptop and is a really feasible solution for technical support and other situations on the go. This is definitely the better product for system administrators and developers, and for regular users it may well also be preferable to the Logitech Ultrathin.

Security hardening on Ubuntu Server 14.04

Recently I’ve been involved with a project where I needed to perform some security hardening on Amazon Web Services EC2 instances running Ubuntu Server 12.04, so I used this excellent guide as a starting point, then I added, removed and modified things as needed.

I decided to take those procedures and modify them for Ubuntu Server 14.04 now that this new LTS version has been released. Some of the procedures from 12.04 no longer need to be performed, and some needed to be changed. The following guidelines are what I’ve ended up with. You might find these guidelines useful to varying extents on other Linux distributions, but there will be potentially very significant differences depending on which distro you’re using.

Assume that all these operations need to be performed as root, which you can either do with sudo or by logging in as root first. (I’ve noticed that Ubuntu users seem particularly averse to logging in as root, apparently preferring instead to issue an endless series of commands starting with sudo, but I’m afraid that kind of extra hassle is not for me, so I just log in as root first.)

Harden SSH

I generally regard it as a very sensible idea to disable any kind of root login over SSH, so in /etc/ssh/sshd_config change PermitRootLogin to no.

If SSH on your servers is open to the world then I also advise running SSH on a non-standard port in order to avoid incoming SSH hacking attempts. To do that, in /etc/ssh/sshd_config change Port from 22 to another port of your choice, e.g. 1022. Note that you’ll need to update your firewall or EC2 security rules accordingly.
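
The relevant lines in /etc/ssh/sshd_config should then look something like this (using 1022 as the example port):

Port 1022
PermitRootLogin no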

After making changes to SSH, reload the OpenSSH server:

service ssh reload

Limit su access to administrators only

It generally seems like a sensible idea to make sure that only users in the sudo group are able to run the su command in order to act as (or become) root:

dpkg-statoverride --update --add root sudo 4750 /bin/su

Improve IP security

Add the following lines to /etc/sysctl.d/10-network-security.conf to improve IP security:

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

Load the new rules:

service procps start

PHP hardening

If you’re using PHP, these are changes worth making in /etc/php5/apache2/php.ini in order to improve the security of PHP:

  1. Add exec, system, shell_exec, and passthru to disable_functions.
  2. Change expose_php to Off.
  3. Ensure that display_errors, track_errors and html_errors are set to Off.
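
The resulting settings in php.ini should look something like this (append the functions to anything already listed in disable_functions rather than replacing it):

disable_functions = exec,system,shell_exec,passthru
expose_php = Off
display_errors = Off
track_errors = Off
html_errors = Off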

Apache hardening

If you’re using Apache web server, it’s worth making sure you have the following parameters set in /etc/apache2/conf-enabled/security.conf to make sure Apache is suitably hardened:

ServerTokens Prod
ServerSignature Off
TraceEnable Off
Header unset ETag
FileETag None

For these to take effect you’ll need to enable mod_headers:

ln -s /etc/apache2/mods-available/headers.load /etc/apache2/mods-enabled/headers.load

Then restart Apache:

service apache2 restart

Install and configure ModSecurity

If you’re using Apache, the web application firewall ModSecurity is a great way to harden your web server so that it’s much less vulnerable to probes and attacks. Firstly, install the necessary packages:

apt-get install libapache2-mod-security2

Prepare to enable the recommended configuration:

mv /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf

Then edit /etc/modsecurity/modsecurity.conf:

  1. Set SecRuleEngine to On to activate the rules.
  2. Change SecRequestBodyLimit and SecRequestBodyInMemoryLimit to 16384000 (or higher as needed) to increase the file upload size limit to 16 MB.
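
The relevant directives should then read:

SecRuleEngine On
SecRequestBodyLimit 16384000
SecRequestBodyInMemoryLimit 16384000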

Next, install the Open Web Application Security Project Core Rule Set:

cd /tmp
wget https://github.com/SpiderLabs/owasp-modsecurity-crs/archive/master.zip
apt-get install zip
unzip master.zip
cp -r owasp-modsecurity-crs-master/* /etc/modsecurity/
mv /etc/modsecurity/modsecurity_crs_10_setup.conf.example /etc/modsecurity/modsecurity_crs_10_setup.conf
ls /etc/modsecurity/base_rules | xargs -I {} ln -s /etc/modsecurity/base_rules/{} /etc/modsecurity/activated_rules/{}
ls /etc/modsecurity/optional_rules | xargs -I {} ln -s /etc/modsecurity/optional_rules/{} /etc/modsecurity/activated_rules/{}

To add the rules to Apache, edit /etc/apache2/mods-available/security2.conf and add the following line near the end, just before </IfModule>:

Include "/etc/modsecurity/activated_rules/*.conf"

Restart Apache to activate the new security rules:

service apache2 restart

Install and configure mod_evasive

If you’re using Apache then it’s a good idea to install mod_evasive to help protect against denial of service attacks. Firstly install the package:

apt-get install libapache2-mod-evasive

Next, set up the log directory:

mkdir /var/log/mod_evasive
chown www-data:www-data /var/log/mod_evasive

Configure it by editing /etc/apache2/mods-available/evasive.conf:

  1. Uncomment all the lines except DOSSystemCommand.
  2. Change DOSEmailNotify to your email address.
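
After uncommenting, the file should end up looking roughly like this (the default values shipped with the package may differ slightly from those shown here, and that’s fine):

<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        2
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   10
    DOSEmailNotify      foo@bar.com
    DOSLogDir           "/var/log/mod_evasive"
</IfModule>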

Link the configuration to make it active in Apache:

ln -s /etc/apache2/mods-available/evasive.conf /etc/apache2/mods-enabled/evasive.conf

Then activate it by restarting Apache:

service apache2 restart

Install and configure rootkit checkers

It’s highly desirable to get alerted if any rootkits are found on your server, so let’s install a couple of rootkit checkers:

apt-get install rkhunter chkrootkit

Next, let’s make them do something useful:

  1. In /etc/chkrootkit.conf, change RUN_DAILY to "true" so that it runs regularly, and change RUN_DAILY_OPTS from "-q" to "", otherwise the output doesn’t make much sense.
  2. In /etc/default/rkhunter, change CRON_DAILY_RUN and CRON_DB_UPDATE to "true" so it runs regularly.
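
The relevant settings in the two files should then look like this:

# /etc/chkrootkit.conf
RUN_DAILY="true"
RUN_DAILY_OPTS=""

# /etc/default/rkhunter
CRON_DAILY_RUN="true"
CRON_DB_UPDATE="true"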

Finally, let’s run these checkers weekly instead of daily, because daily is too annoying:

mv /etc/cron.weekly/rkhunter /etc/cron.weekly/rkhunter_update
mv /etc/cron.daily/rkhunter /etc/cron.weekly/rkhunter_run
mv /etc/cron.daily/chkrootkit /etc/cron.weekly/

Install Logwatch

Logwatch is a great tool which provides regular reports nicely summarising what’s been going on in the server logs. Install it like this:

apt-get install logwatch

Make it run weekly instead of daily, otherwise it gets too annoying:

mv /etc/cron.daily/00logwatch /etc/cron.weekly/

Make it show output from the last week by editing /etc/cron.weekly/00logwatch and adding --range 'between -7 days and -1 days' to the end of the /usr/sbin/logwatch command.
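
The edited command in /etc/cron.weekly/00logwatch should then look something like this (the existing options in your version of the script may differ slightly):

/usr/sbin/logwatch --output mail --range 'between -7 days and -1 days'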

Enable automatic security updates

N.B. Be warned that enabling automatic updates can be potentially dangerous for a production server in a live environment. Only enable this for a server in such an environment if you really know what you are doing.

Run this command:

dpkg-reconfigure -plow unattended-upgrades

Then choose Yes.
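
If I remember correctly, choosing Yes simply writes the following to /etc/apt/apt.conf.d/20auto-upgrades, so you can verify the result by checking that the file contains something like this:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";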

Enable process accounting

Linux process accounting keeps track of all sorts of details about which commands have been run on the server, who ran them, when, etc. It’s a very sensible thing to enable on a server where security is a priority, so let’s install it:

apt-get install acct
touch /var/log/wtmp

To show users’ connect times, run ac. To show information about commands previously run by users, run sa. To see the last commands run, run lastcomm. Those are a few commands to give you an idea of what’s possible; just read the manpages to get more details if you need to.

Edit: I recently threw together a quick Bash script to send a weekly email with a summary of user activity, login information and commands run. To get the same report yourself, create a file called /etc/cron.weekly/pacct-report containing the following (don’t forget to make this file executable; you can also grab the script from GitHub if you prefer):

#!/bin/bash

echo "USERS' CONNECT TIMES"
echo ""

ac -d -p

echo ""
echo "COMMANDS BY USER"
echo ""

# List every user on the system, then show each user's commands by frequency
users=$(awk -F ':' '{print $1}' /etc/passwd | sort)

for user in $users ; do
  comm=$(lastcomm --user "$user" | awk '{print $1}' | sort | uniq -c | sort -nr)
  if [ "$comm" ] ; then
    echo "$user:"
    echo "$comm"
  fi
done

echo ""
echo "COMMANDS BY FREQUENCY OF EXECUTION"
echo ""

sa | awk '{print $1, $6}' | sort -n | head -n -1 | sort -nr

Things I haven’t covered

There are some additional issues you might want to consider which I haven’t covered here for various reasons:

  1. This guide assumes your Ubuntu server is on a network behind a firewall of some kind, whether that’s a hardware firewall of your own, EC2 security rules on Amazon Web Services, or whatever; and that the firewall is properly configured to only allow through the necessary traffic. However, if that’s not the case then you’ll need to install and configure a firewall on the Ubuntu server itself. The recommended software for this on Ubuntu is ufw (there’s a minimal example after this list).
  2. If you’re running an SSH server then you’re often told that you must install a tool such as fail2ban immediately if you don’t want your server to be hacked to death within seconds. However, I’ve maintained servers with publicly-accessible SSH servers for many years, and I’ve found that simply moving SSH to a different port solves this problem far more elegantly. I monitor logs in order to identify incoming hacking attempts, and I haven’t seen a single one in the many years I’ve been doing this. However, using this “security by obscurity” method doesn’t mean that such an attack can’t happen, and if you don’t watch your logs regularly and respond quickly to them as I do, then you would be well advised to install fail2ban or similar as a precaution, in addition to moving your SSH server to another port as described above.
  3. Once you’ve hardened your server, you’re advised to run some vulnerability scans and penetration tests against it in order to check that it’s actually as invincible as you’re now hoping it is. This is a topic which requires a post all of its own so I won’t be covering it in any detail here, but a good starting point if you’re not already familiar with it is the excellent Nmap security scanner.
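
Regarding the first point: a minimal ufw setup for a server running SSH on a non-standard port (1022, as in the example earlier) might look something like the following sketch, though do work out which other ports your services need before enabling it:

apt-get install ufw
ufw allow 1022/tcp
ufw enable
ufw status verbose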

A successful migration from WordPress to Ghost

Over the years my blog has been dragged kicking and screaming through a variety of different blogging and hosting platforms. Though some tweaking and tinkering has invariably been required to survive these transitions, thankfully my blog posts remain fairly intact and just as (hopefully) informative as they’ve always been.

Originally my blog came into existence when I decided to take the more technical or generally-informative posts from a LiveJournal I had at the time and import them into WordPress.com. After getting frustrated with the limitations of WordPress.com, I installed the WordPress software on my own server and migrated my blog to that.

My own WordPress installation served me well for quite a while, but eventually I got tired of the constant hacking attempts and bot traffic which generally come with a self-hosted WordPress blog. To deal with this I firstly made the decision to migrate all my blog comments to Disqus. This process took a while and required a lot of fiddling to get things right, but it was worth it because I have many excellent comments on my blog, some of them going all the way back to when the posts were still on LiveJournal. Then I decided to move away from WordPress altogether, so I installed the Ghost software on my server and migrated all my posts into that. The Ghost install is perhaps a little less straightforward than WordPress as it requires you to run a Node.js app, but if you haven’t come across that before then it’s a skill worth learning.

After WordPress, Ghost is quite a breath of fresh air. It’s super-fast and it has a clean, simple interface with an excellent Markdown editor which does pretty much everything I need. Playing with themes is not quite so straightforward and does require a little more technical expertise, but I’m sure that will become easier over time.

As I’ve just updated the design on my consultancy website, I decided this week that my Ghost installation needed some improvement to be brought into line with my website, so I installed the Solar Theme by Matt Harzewski and tweaked the CSS until I had the colours and layout more or less how I wanted them.

Hopefully Ghost will continue to serve as a decent blogging option for some time to come. It’s already very good now, and I look forward to seeing how it develops in the future.

Runner-up for Weapon of Choice in the Socially Awesome Sysadmin Awards

I have to admit I was a little gratified and fairly amused to see that I was runner-up for the “Weapon of Choice Award” in the “Socially Awesome Sysadmin Awards” as a result of using the obscure Twitter client YoruFukurou. Unfortunately I stopped using YoruFukurou a while back as it wasn’t being updated, and I switched to Tweetbot instead as it has all the powerful functionality and configurability that I like, plus comprehensive mute filters with a regex option, decent syncing across the platforms I use, and a nice clean interface.

I’m not sure how I feel about being described as a “serial awards bridesmaid” though! Still, I guess lots of nominations are good, and perhaps I’ll win one eventually…

Useful Unix commands

For a long time I’ve maintained a memory aid in the form of a list of useful commands which can be used on the command line for Linux, OS X, BSD, Solaris, etc., so I thought I’d list them in a blog post in case they come in useful for others. Most of these will run on any Unix-type operating system, though I’ve usually indicated where a command is OS-specific.

  • For Debian/Ubuntu distributions, many of these commands are available via the APT package manager.
  • For Red Hat/CentOS/Fedora, many of these commands are available via the yum package manager, though particularly in the case of CentOS I’d recommend adding the EPEL repository to increase the availability of useful tools which are otherwise missing.
  • For OS X, install the excellent Homebrew package manager, then many of these commands will be available for install.

This is a fairly arbitrary list which I add to when I forget to use something old, or when I come across something new which is useful. Some of these commands will probably be familiar to you, and some probably won’t. I’ve added links where applicable. Please feel free to throw amendments or additional suggestions my way.

The list of commands

  • ab – website benchmarking; usually comes with Apache HTTP server
  • afconvert – powerful audio file converter; OS X only
  • cpulimit – provides a simple way of limiting CPU usage for specific processes
  • curl – powerful URL transfer tool for testing web pages and other services
  • dc – CLI-based calculator
  • ditaa – converts ASCII art to PNG
  • dmidecode – reports system hardware as described in the BIOS
  • exiftool – for manipulating Exif data on image files
  • fio – IO benchmarking tool
  • goaccess – simple and powerful web log analyser and interactive viewer
  • html2text.py – converts HTML to Markdown; this is not the html2text which comes with e.g. Homebrew
  • htop – like top but nicer and more informative
  • http-server – start a web server in the current directory on port 8001, using Node.js; package needs to have been installed using: npm install http-server -g
  • httping – ping a host using HTTP instead of ICMP
  • iftop – like top but for network traffic
  • ike-scan – find and probe IPSec VPNs
  • iotop – like top but for disk IO
  • jp2a – converts JPEGs to ASCII art
  • lshw – simple and powerful way of getting hardware info; Linux only
  • lsof – for finding which processes are using which files and network ports, amongst other things
  • mitmproxy/mitmdump – nice HTTP sniffer proxy; usually installed via pip
  • mtr – handy graphical combination of ping and traceroute
  • mountpoint – check whether a directory is a mount point
  • multitail – tail multiple log files in separate panes in the same window
  • ncat – like nc/netcat but newer and with extra options; comes with nmap
  • nethogs – quick real-time display of how much bandwidth individual processes are using; Linux only
  • ngrep – for intelligently sniffing HTTP and other protocols
  • nl – add line numbers to input from file or stdin
  • nmap – comprehensive port/vulnerability scanner
  • nping – advanced ping tool for TCP/HTTP pinging; comes with nmap
  • opensnoop – watch file accesses in real time; OS X only
  • parallel – like xargs but better
  • paste – merge multi-line output into single line with optional delimiters
  • pen – simple but effective command line load balancer
  • pgrep/pkill – easy grepping for/killing of processes using various criteria
  • photorec – recover lost documents, videos, photos, etc. from storage media
  • pidstat – flexible tool for obtaining statistics on processes, very useful for understanding resource usage for particular processes
  • psk-crack – for cracking VPN preshared keys obtained with ike-scan; comes with ike-scan
  • printf – for reformatting; very useful for things like zero padding numbers in bash
  • pstree – shows a tree of running processes
  • pv – provides a progress bar for piped processes
  • python -m SimpleHTTPServer – start a web server in the current directory on port 8000, using Python
  • qlmanage -p – Quick Look from the command line; OS X only
  • s3cmd – CLI tool for Amazon S3 administration
  • scutil – for changing system settings including various forms of hostname; OS X only
  • seq – generates a sequence of numbers
  • siege – website benchmarking; more options than ab
  • sslscan – see which ciphers and SSL/TLS versions are supported on secure sites
  • stress – to artificially add load to a system for testing purposes
  • subnetcalc – IPv4/IPv6 subnet calculator
  • tcptraceroute – like traceroute but TCP
  • tee – for directing output to a file whilst watching it at the same time
  • time – gives info on how long a command took to run
  • timeout – run a command with a time limit, i.e. kill it if it’s still running after a certain time
  • tmutil – get more control over Time Machine from the command line; OS X only
  • tree – file listing in tree format
  • trickle – simple but effective bandwidth throttling
  • trimforce – turns on TRIM support for third-party SSDs; OS X only
  • watch – prepend to a command line to see continuously updating output
  • wget – nice client for web downloads and archiving websites
  • xmlstarlet – powerful tool for manipulating and reformatting XML

How to use Flickr favourites as your screensaver in OS X Yosemite, Mavericks and Mountain Lion

Mountain Lion was an improvement on Lion, which I had mixed feelings about when it was released. Unfortunately, however, Apple apparently decided that RSS is a dead technology (although it seems to be creeping back into Safari), and consequently the handy RSS screensavers were removed, which means there was no simple way of creating a screensaver out of one’s Flickr favourites in Mountain Lion, and this remains the case in Mavericks and Yosemite.

Having come up with an effective solution for how to get Flickr favourites as a screensaver in Mountain Lion which also works in Mavericks and Yosemite, I thought I’d share the method for the benefit of those who are not so used to fiddling with the deeper technological aspects of their Mac. I’ve gone into quite a lot of detail for those who are less technically-minded, but those of a more technical bent can just skip ahead accordingly.

Yahoo! keeps changing the Flickr website and API which keeps breaking some of my instructions, so I’m constantly modifying this post so that it stays up to date for getting your favourites from the new Flickr website to use as a screensaver in Yosemite, Mavericks and Mountain Lion. I last updated these instructions on 28th October 2014 and they tested out fine on that date.

Set up the RSS feed

Thanks to Rudie for emailing me to point out that Flickr has apparently started providing large versions of images in its RSS feeds, so this part is now much simpler than it was before.

Firstly you need to get your Flickr ID. I’ve gone back to using idGettr for this: it stopped working for a while (presumably also due to the constant Flickr API changes) but has since become operational again. So, follow the simple instructions at idGettr, and take note of the Flickr ID it gives you, which looks something like 39653492@N07.

Then simply add that to the end of the Flickr photo RSS feed address so you end up with something like this, but with your own Flickr ID at the end instead of mine:

https://api.flickr.com/services/feeds/photos_faves.gne?id=39653492@N07

Paste this link into a temporary place so you can come back to it shortly – a temporary document in TextEdit is as good a place as any.

Create the folders you need on your Mac

From here on it is necessary to venture into the dark underbelly of OS X via the use of the Terminal. So, launch the Terminal app using your preferred launch method. If you’re not sure how to do this, type ⌘-Space to bring up Spotlight, then type “terminal”, then choose “Terminal” in “Applications”, then hit Enter to launch it. Then we make sure we have the folders we will need, so type (or copy and paste) the following line into the Terminal:

mkdir -p ~/Pictures/Flickr_faves ~/Library/Scripts ~/Library/LaunchAgents

Create the script to download the images

The first thing we need to do next is determine what your short username is. On OS X, every user has a long username and a short username. For example, my long username might be “Matt Brock” and my short username might be “mattbrock”. If you already know for sure what your short username is then that’s great. If not, then just type the following into the Terminal and it will tell you:

echo $USER

Make a note of this somewhere if you’re not sure you can remember it. Next we create the script which will get the latest favourites and clear out old ones, so type:

nano ~/Library/Scripts/get_Flickr_faves.sh

This will bring you into a text editor. Paste in the following text:

#!/bin/bash

# Move into the local favourites folder, bailing out if it doesn't exist
if ! cd ~/Pictures/Flickr_faves ; then
  logger "get_Flickr_faves: failed to cd to ~/Pictures/Flickr_faves; exiting"
  exit 1
fi

# Check that the Flickr API is reachable before doing anything else
if ! curl -s "http://api.flickr.com/" > /dev/null 2>&1 ; then
  logger "get_Flickr_faves: couldn't connect to Yahoo API; exiting"
  exit 1
fi

# Fetch the RSS feed, extract the image enclosure URLs and download each image
curl -s "URL" | grep 'rel="enclosure"' | awk -F '"' '{print $6}' | xargs -L1 curl -s -O

# Remove images older than a day, so photos removed from your favourites disappear
find . -mtime +1 -exec rm -f {} +

# Log how many images are now stored locally
NO_IMAGES=$(ls | wc -l | sed "s/ //g")

logger "get_Flickr_faves: completed; $NO_IMAGES images"

Then use the arrow keys on your keyboard to move around. You need to replace URL with the long RSS feed address which you pasted into a temporary location earlier – you can use regular copy and paste, plus the arrow keys and the delete key to do all this. Make sure you don’t change anything else, and make sure you keep the quotation marks around the RSS feed address. Once that’s done, type Ctrl-O, then Enter to confirm, to save the changes. Then type Ctrl-X to exit the text editor. Then type the following to give your new script permission to run:

chmod 755 ~/Library/Scripts/get_Flickr_faves.sh

You can now run the script once to download your Flickr favourites:

~/Library/Scripts/get_Flickr_faves.sh

Hopefully that won’t produce any errors, and if you go into Finder to look in the “Flickr_faves” folder in your Pictures folder, you should see local copies of your Flickr favourites in there.

Tell your Mac to update the images once per day

Next we need to tell your Mac to run the script we just created once a day in order to update the images from your favourites on Flickr. So, run the following command:

nano ~/Library/LaunchAgents/uk.co.mattbrock.Flickr_faves.plist

This will again bring you into a text editor. This time, paste in the following:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>get_Flickr_faves</string>
  <key>Program</key>
  <string>/Users/USERNAME/Library/Scripts/get_Flickr_faves.sh</string>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Minute</key>
    <integer>15</integer>
    <key>Hour</key>
    <integer>15</integer>
  </dict>
  <key>Debug</key>
  <true/>
  <key>AbandonProcessGroup</key>
  <true/>
</dict>
</plist>

N.B. This assumes that your home directory is in /Users. If you’ve moved your home directory then you’ll need to alter the Program string to point to the alternative location instead of /Users/USERNAME. If you don’t know what this means then ignore it because it almost certainly doesn’t apply to you.

Replace USERNAME with your short username, then type Ctrl-O, Enter, then Ctrl-X to save and exit the editor. This is a set of instructions to tell your Mac to run the script we created earlier once a day in order to make sure your Flickr favourites are kept up to date locally. In order to get your Mac to read and act upon these instructions, we need to run the following:

launchctl load ~/Library/LaunchAgents/uk.co.mattbrock.Flickr_faves.plist

Hopefully that won’t produce any errors. We’re almost there now.
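
If you want to double-check that the agent has been registered, the following should show an entry for it:

launchctl list | grep get_Flickr_faves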

Configure the screensaver on your Mac

So all we need to do now is point the slideshow screensaver at the folder we created earlier to store the images from your Flickr favourites. Go into Desktop & Screen Saver in your System Preferences, click the Screen Saver tab, then choose one of the Slideshows (I prefer “Classic”), then under “Source” choose “Choose Folder…”, then navigate to the Flickr_faves folder in your Pictures folder. It’s up to you whether you choose “Shuffle slide order” – personally I prefer to have that on, as I like a bit of randomness. Then choose a time under “Start after” to determine how soon you’d like the screensaver to start up. Alternatively/additionally, you can click Hot Corners and choose a hot corner to trigger your new screensaver on demand. And that should be it! If all has gone well, your screensaver will now show you all your latest Flickr favourites. Enjoy.

Troubleshooting

You should be able to see the results of this automated script by launching Terminal and then typing:

grep get_Flickr_faves /var/log/system.log

This should show you the times when the task has run and what happened when it tried. When it’s been successful you will see something like the following:

Sep  4 15:17:56 mymac me[27308]: get_Flickr_faves: completed; 37 images

P.S. You can now get the Bash script and launchd XML configuration from GitHub if you so desire.

Creating a two-node CentOS 6 cluster with floating IP using Pacemaker and Corosync

These are my old instructions for creating a Linux cluster with floating IP on versions of CentOS prior to 6.4.

If you’re using CentOS 6.4 or higher, you need my updated post for creating a cluster with CMAN and Pacemaker.

Installation and configuration

Install the required packages and prepare the configuration file:

yum install pacemaker
cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

Change bindnetaddr to your network address (e.g. 192.168.1.0) in /etc/corosync/corosync.conf. Create /etc/corosync/service.d/pcmk and put the Pacemaker startup config into it:

service {
        name: pacemaker
        ver: 1 
}

Start corosync and pacemaker:

service corosync start
service pacemaker start

Make sure they start on boot:

chkconfig corosync on
chkconfig pacemaker on

Repeat all of the above on the second node. Then disable STONITH on the primary node:

crm configure property stonith-enabled=false

Add a floating IP on the primary node (change IP address and netmask as needed):

crm configure primitive ClusterIP ocf:heartbeat:IPaddr2 params ip=192.168.1.110 cidr_netmask=24 op monitor interval=30s

Disable quorum on the primary node (necessary for running only two nodes):

crm configure property no-quorum-policy=ignore

Prevent resources from moving back after a node recovers:

crm configure rsc_defaults resource-stickiness=100

Use the commands in the Administration section below to ensure that the floating IP is on the correct machine – if this is not the case then temporarily shut down Pacemaker on whichever machine has the floating IP so that it moves across to the desired node:

service pacemaker stop

Administration

Display the current configuration:

crm configure show

Status monitor:

crm status

Show which node has the cluster IP:

crm resource status ClusterIP

Reset the config to default:

cibadmin -E --force

Verify that there are no problems with the current config:

crm_verify -L

Creating a two-node CentOS 6 cluster with floating IP using CMAN and Pacemaker

Originally I was using Heartbeat to create two-node Linux clusters with floating IPs, but when Heartbeat stopped being developed I needed to figure out how to use Corosync and Pacemaker for this instead. Somewhat annoyingly, Linux HA stuff has changed yet again in CentOS 6.4, so now it’s necessary to use CMAN and Pacemaker instead.

This is quite a lot more in-depth than the simple configuration that was originally required for Heartbeat. Anyway, based on my recent experiences, here’s a very quick guide in case you find yourself in a similar situation. This works for me on CentOS 6.4 and higher, but it won’t work on earlier versions of CentOS.

If you’re looking for the old instructions for creating a cluster with Pacemaker and Corosync, they’re here.

Installation and initial configuration

Install the required packages on both machines:

yum install pacemaker
cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf

Set up and configure the cluster on the primary machine, changing newcluster, server1.example.com and server2.example.com as needed:

ccs -f /etc/cluster/cluster.conf --createcluster newcluster
ccs -f /etc/cluster/cluster.conf --addnode server1.example.com
ccs -f /etc/cluster/cluster.conf --addnode server2.example.com
ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect server1.example.com
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect server2.example.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk server1.example.com pcmk-redirect port=server1.example.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk server2.example.com pcmk-redirect port=server2.example.com

Copy /etc/cluster/cluster.conf from the primary machine to the secondary machine in the cluster.
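
For example, using scp from the primary machine:

scp /etc/cluster/cluster.conf root@server2.example.com:/etc/cluster/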

It’s necessary to turn off quorum checking, so do this on both machines:

echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

Start the services

Start up the services on both machines:

service cman start
service pacemaker start

Make sure both services start on reboot:

chkconfig cman on
chkconfig pacemaker on

Configure and create floating IP

Configure the cluster on the primary machine:

pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore

Create the floating IP on the primary machine, changing the IP address and server name as needed:

pcs resource create livefrontendIP0 ocf:heartbeat:IPaddr2 ip=192.168.100.100 cidr_netmask=32 op monitor interval=30s
pcs constraint location livefrontendIP0 prefers server1.example.com=INFINITY

Cluster administration

To monitor the status of the cluster:

pcs status

To show the full cluster configuration:

pcs config