Securing OpenVPN against Logjam

I dearly hope that by now readers are well aware of Logjam.

Logjam affects not only HTTPS traffic on the web, as most of the Internet would have us believe. It affects any kind of traffic that relies on TLS, which includes SMTP and OpenVPN traffic, among others, and the underlying weak Diffie-Hellman problem also touches protocols such as SSH and IPsec. There are quick guides available on how to secure several different types of traffic affected by Logjam, including this one provided by the folks who discovered the issue. Cloudflare has a nice write-up about the Logjam issue as well.

I’m here to talk about OpenVPN and how to protect VPN traffic from Logjam, mainly because I had trouble finding information about OpenVPN in relation to it.

If the Diffie-Hellman parameters (the dhparam file) in use are 1024 bits or smaller, new parameters of at least 2048 bits must be generated and used. The command openssl dhparam -out dhparams.pem 2048 generates new 2048-bit parameters.
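
For reference, this is roughly how that looks on a server, assuming the parameters live under /etc/openvpn (adjust the paths to your setup):

# Generate fresh 2048-bit DH parameters; this can take a while.
openssl dhparam -out /etc/openvpn/dhparams.pem 2048

OpenVPN is then pointed at the new file with the “dh” directive in the server configuration, for example dh /etc/openvpn/dhparams.pem, and the service restarted.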

OpenSSL must be updated to the latest version available. At the time of writing, that is 1.0.2a, in which the EXPORT class of cipher suites is disabled by default. Export-grade cipher suites are the Achilles’ heel that Logjam exploits, so with them disabled, OpenSSL will by default refuse connections that attempt to use any of them.
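
A quick way to check both things on a server is shown below; it assumes the openssl binary on the PATH is the one OpenVPN actually links against:

# Confirm the OpenSSL version in use.
openssl version

# List the EXPORT-grade cipher suites this build supports;
# ideally this list is empty or the command errors out.
openssl ciphers 'EXPORT'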

The configuration of the OpenVPN server must be examined, paying particular attention to the “tls-cipher” setting. If this setting is defined and lists any EXPORT-grade cipher suites, those suites must be removed. This section on OpenVPN Hardening also provides a secure list of ciphers. If the setting is left out of the configuration, the output of openvpn --show-tls should show whether weak, EXPORT-grade ciphers are accepted by default.
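
As a rough illustration, a restrictive “tls-cipher” line might look like the following; this is a sketch rather than a canonical list, and the exact suite names available depend on what openvpn --show-tls reports for your build:

# Restrict the TLS control channel to strong DHE suites
# (names as reported by "openvpn --show-tls").
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA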

According to this blog post by OpenSSL about Logjam, OpenSSL plans to release 1.0.2b, which will reject connections that use Diffie-Hellman parameters of 768 bits or smaller. Once it is available, servers and clients should be updated to it quickly.

Disable Newrelic per-directory

This will be the tl;dr version of this post because roughly nobody gives two hoots about the story that led to my discovery of this. Note that this example considers the PHP agent for Newrelic.

If your desire is to disable Newrelic on a particular directory, you can fulfil it by dropping a .htaccess file inside that particular directory with the following contents:

<IfModule php5_module>
    php_value newrelic.enabled false
</IfModule>

Three caveats:

  • The name of the module inside the <IfModule> tag is important. The command apachectl -t -D DUMP_MODULES | grep -i php will help you find the name of the PHP module installed on your server.
  • You must have Newrelic enabled inside the INI file for Newrelic (see the snippet after this list).
  • I may be wrong about this but you cannot be a smart aleck and only selectively enable Newrelic on directories. It only works the other way around.
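
To illustrate the second caveat, the global INI setting looks something like this; the path to newrelic.ini varies by system, so treat it as an assumption:

; In the PHP newrelic.ini (location varies, e.g. /etc/php5/mods-available/newrelic.ini).
; Newrelic must be enabled globally for the per-directory override to have anything to disable.
newrelic.enabled = true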

GHOST vuln and Digital Ocean

Yesterday, right after I had the displeasure of dragging myself into work with a head that hurt oh so much, I found out to my slight dismay about the GHOST vulnerability (and, of course, about the ops teams scrambling to patch servers). I rushed to find out what in the world it was all about. This Qualys Security Advisory has a very scary rundown of the bug.

I have a Debian machine, in the cloud, with DigitalOcean, that I use for running all sorts of crazy things. I patched the box up, though there sure was a long list of updates. The thing you come to hate about Windows, if you use it, is that every update to the OS, whether for security or features, requires a complete reboot, even on servers; conversely, that is the thing you love about Linux. But this bug was different. In a number of discussions about the GHOST vulnerability I read, a full reboot of affected servers was advised for the patch to take complete effect. It sounded ridiculous, but after peeling away the veneer of absurdity, it made all the sense in the world. The GHOST vulnerability affects “glibc”, which is a very important part of any Linux system. An untold number of applications, services, and OS components use “glibc”, and as such, there was no surefire way of knowing that everything depending on it had been restarted or reloaded against the updated version, short of a complete reboot.
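
If you would rather not take that on faith, you can get a rough idea of what is still running against the old library; this is a sketch that assumes lsof is installed and that the update replaced the old shared object on disk:

# Show the glibc version currently installed.
ldd --version | head -n 1

# List processes still mapping a deleted (i.e. replaced) libc; anything
# listed here is still running against the old glibc.
sudo lsof +c0 -n | grep -w DEL | grep libc | awk '{print $1, $2}' | sort -u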

So I rebooted. I forgot to mention that I run OpenVPN on the machine, which uses the system’s TUN/TAP interface drivers. When the system got back on its feet, which was rather quick, I couldn’t get OpenVPN to start. It bailed out after coughing up this:


SIOCSIFADDR: No such device : ERROR while getting interface flags: No such device
SIOCSIFDSTADDR: No such device : ERROR while getting interface flags: No such device
SIOCSIFMTU: No such device failed!

Naturally, the “tun” interface wasn’t up. I couldn’t remember whether, when I first set up OpenVPN, I had set up the “tun” interface myself. But a peek at the syslog told me something:

kernel: [ 228.358361] tun: Unknown symbol ipv6_proxy_select_ident (err 0)
kernel: [ 228.360982] Loading kernel module for a network device with CAP_SYS_MODULE (deprecated). Use CAP_NET_ADMIN and alias netdev- instead

Confused, I looked around and found this bug report. Somebody, probably similarly afflicted, had worked out that DigitalOcean was playing silly: the droplet was not booting the kernel the update had installed, because DigitalOcean manages the boot kernel from its control panel. The solution was simple: select the latest kernel version on the settings page for the droplet in the DigitalOcean panel and boot the server with it. That fixed it.
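
On a Debian droplet, a quick way to spot this mismatch is to compare the running kernel with the installed kernel packages; a sketch:

# Kernel the droplet actually booted.
uname -r

# Kernel images installed by the package manager; if the newest one here
# is not what uname reports, the droplet is still on the old kernel.
dpkg -l 'linux-image-*' | grep '^ii'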

In passing, I’d like to mention this article, which makes a case that WordPress can be used to exploit the vulnerability owing to the way it handles ping-back comments.

My favourite Homebrew apps

I have been a die-hard Linux advocate for the past 11 years. Four years ago I moved to Mac OS X because I felt increasingly frustrated with the state of the Linux desktop. I had preferred Linux on the desktop because it beat having to use Windows, but Mac OS X presented a much better alternative, so I jumped ship, and I haven’t looked back since. The fact that Unix sits at the base of Mac OS X made it an attractive option for me. As I have said for the past four years, it is hard not to use an operating system that offers a great UI on top of one of the best operating systems of the millennium.

I continue to use Linux on servers to this day, though; I find no better operating system for a server than Linux. Having spent a good deal of time with Linux, you grow accustomed to certain favourite applications and tools that sadly aren’t all readily available on Mac OS X. You can always run Linux in a virtual machine on your Mac, but it would be really splendid to have those applications available natively. For such applications, there are two options on Mac OS X: Fink and Homebrew.

On my first ever MacBook, I tried Fink. It worked well for me at the time, but it has been so long since then that I cannot really remember why I let it go. Instead, I welcomed Homebrew.

If you are coming to Mac OS X with a Linux background, Homebrew is the tool you cannot afford to miss out on. At the heart of it, it is a package manager for OS X. For me, it is more a way to get Linux applications to run natively on OS X. You can even create your own Homebrew packages if you’re savvy. I have never needed to try, though.
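
If you have never used it, day-to-day usage boils down to a handful of commands; a quick taste, using colordiff as an example package:

# Refresh the list of available packages (formulae).
brew update

# Search for and install a package.
brew search colordiff
brew install colordiff

# See what is installed and upgrade anything that is outdated.
brew list
brew upgrade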

I thought I’d share the applications I use on Homebrew regularly.

colordiff

Life without diff is unthinkable. I diff a lot, and colordiff really helps make my diff-heavy life colourful. colordiff is available on all my Linux servers, so whenever I reach for diff, colordiff is what comes to mind.
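
A small convenience I rely on, as an aside, is aliasing diff to colordiff in the shell profile:

# In ~/.bashrc or ~/.bash_profile: make every diff a coloured one.
alias diff='colordiff'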

dnscrypt-proxy

I use dnscrypt-proxy for two purposes: to secure my DNS traffic, of course, and to bypass the silly restrictions imposed by my ISP, which does not allow me to use third-party DNS services such as Google DNS or OpenDNS.
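
For what it’s worth, a minimal local setup looks roughly like this; it assumes the 1.x command-line flags and that you are happy pointing the Wi-Fi service at the local resolver, so adjust for your version and network service:

# Run a local DNSCrypt resolver on 127.0.0.1:53 (the resolver name is an example).
sudo dnscrypt-proxy --local-address=127.0.0.1:53 --resolver-name=cisco

# Tell OS X to use it for DNS on the Wi-Fi service.
networksetup -setdnsservers "Wi-Fi" 127.0.0.1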

gnu-typist

I have always loved typing tutors. In fact, finding a typing tutor and learning to touch type on one was the best thing I did as a high-school student. It changed the way I used computers forever. I can’t find for the life of me the particular DOS-based typing tutor I started out with. I wish I could. But for all intents and purposes, GNU Typist is pretty good at what it does.

gnupg

If you don’t know how indispensable GnuPG (or PGP) is in this age, I feel sorry for you.

htop

I call it top that is high on weeds. Once you go htop, top looks very bland and boring.

nmap

I cannot imagine using a Linux or Unix system without nmap on it. Incidentally, I have a long history of using nmap.

openssl

Sadly, the openssl binary that ships with OS X isn’t updated as readily as the frequency of OpenSSL security advisories demands.
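
Homebrew installs its OpenSSL keg-only, so it does not replace the system binary; as a sketch, the exact prefix depending on your Homebrew installation:

# System OpenSSL, typically ancient.
/usr/bin/openssl version

# Homebrew's OpenSSL, kept reasonably current (keg-only, so not on PATH by default).
brew install openssl
/usr/local/opt/openssl/bin/openssl version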

tcpflow

I’m a big fan of tcpdump. Luckily, OS X ships with a native tcpdump implementation. Setting up Wireshark is a bit of a pain, though, as you first, rather ironically, need an X server running on OS X. For protocol and traffic analysis, tcpflow is great.
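
A typical quick look at HTTP traffic with it, assuming en0 is the active interface, might be:

# Print TCP flows on port 80 to the console instead of writing files.
sudo tcpflow -c -i en0 port 80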

wget

If you show me someone who doesn’t need wget in a shell, I will show you a liar. Somehow, OS X comes with curl, but not with wget. Although you can approximate wget with a bash alias and a few switches to curl, it is not really wget until it is wget.
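
The curl approximation I mean is something like this, for the curious:

# Follow redirects and save the file under its remote name, wget-style.
alias wget='curl -L -O'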

slurm

This is a recent discovery, though given the pliable nature of my memory, I have already forgotten where I discovered it. It is a nerdy little application that displays a real-time breakdown of network interface statistics.
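
Usage is as simple as pointing it at an interface; on my Mac that is usually:

# Live traffic view for the Wi-Fi interface.
slurm -i en0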

If you’ve got a favourite application on Homebrew, I’d like to hear from you.

Getting Wireshark to work on OS X Yosemite

Half a month back I upgraded two of my Macs to Yosemite. One was my relatively new MacBook Air at home, which had Mavericks on it. The other was my slightly old MacBook Pro at work, which had Mountain Lion. To my relief, the upgrade went briskly on both systems. (Here’s a little secret: I used this guy’s excellent advice to make sure the upgrade did not take too long to finish.) Not only that, to my surprise, Yosemite greatly improved the OS X experience, both UI- and performance-wise. I was afraid putting Yosemite on my MacBook Pro, in particular, might slow it down, forcing me to clean the system and attempt a fresh install. I took backups of course, as should you, but cleaning up an entire system and setting it up from scratch isn’t a happy thought.

As a rule, I prefer not to do upgrades for major OS X (or iOS) releases. In my experience, a clean install is almost always the better option. Upgrades across major versions of the OS are risky, and even when they succeed, they run the risk of slowing the OS or parts of it down afterwards because of corrupted configurations and whatever mess was created during the upgrade. The more data you have on the system you want to upgrade, the greater the risk of a failed or botched upgrade.

Yosemite proved otherwise. For me, at least. I have friends who, unfortunately, have reported issues after upgrading to Yosemite, but they’ve all had older Macs than I do. I can only speak for myself.

Some things did break, though. Like MacVim, which I love, but which I won’t talk about right now. On my work MacBook Pro, I had Wireshark installed, which I used from time to time to dissect network traffic. It’s a great tool. On Yosemite, it stopped working. I found suggestions from strangers on the Internet about reinstalling Wireshark, and dejected responses from people who did, only to find it made no difference. I then came across a consensus on reinstalling X11/XQuartz instead. People shouted that it worked.

Before I went ahead and reinstalled X11/XQuartz on Yosemite, I fell upon a small gem which explained why X11/XQuartz needed to be reinstalled and how reinstalling it could be avoided. It said that Wireshark expects X11/XQuartz to be inside /usr, when in fact, on Yosemite, it now lives under /opt. A simple solution was to create a symbolic link inside /usr with the following harmless command on Terminal:

sudo ln -s /opt/X11 /usr/X11

Sure enough, that did cause Wireshark to start. But it took an awful lot of time to show up. X11 apps are slow and crappy in terms of responsiveness on Macs, but they don’t take that long to load. When it did show up on the screen, it failed to detect any interfaces. Now that seemed rather odd. I looked at the system logs through Console.app and figured out that it was a rather silly permission issue. The easiest, though not the recommended, solution was to run Wireshark as root:

sudo wireshark

I needed to get real work done and couldn’t afford to spend any more time figuring out a better way to run Wireshark, so I went ahead and ran it as root. But you shouldn’t. As it happens, there is a better solution that involves setting up Wireshark with privilege separation. The Wireshark wiki has an article about it, with a section near the bottom about OS X that does not read very positively, but there is a solution in there. If you’re finicky about running apps with root privileges, and you should be, go through it. I need to play with it myself; when I have it all pieced together, I will write again.
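
For what it’s worth, the silly permission issue is usually the /dev/bpf* capture devices being readable only by root; a stopgap that avoids running the whole GUI as root might look like this (the permissions reset on reboot, so treat it as a sketch rather than the proper privilege-separation setup):

# See who may read the packet capture devices.
ls -l /dev/bpf*

# Temporarily let non-root users capture; reverts after a reboot.
sudo chmod o+rw /dev/bpf*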

Enjoy dissecting packets and analysing network traces!

When pushing a big repo over Git to Assembla fails

At work, Assembla is used extensively, not only to track tasks but also to host distributed version control repositories, such as Git. I don’t like Assembla for several reasons, and I like hosting Git repositories on Assembla even less.

Some months back I had to initialise and push to Assembla a Git repository that was almost 800 MB in size. Like any Git hosting provider, Assembla supports both SSH and HTTPS for interacting with repositories. While I always prefer Git over SSH, there are times when you don’t have your SSH key around or don’t want to use it; in those cases, I simply use HTTPS, which is rather convenient. However, while pushing this particularly big repository over HTTPS to Assembla, I kept hitting the familiar “The remote end hung up unexpectedly” error from Git. My initial thought was to blame a flaky network, but no matter what I tried, and how many times, I kept getting that dreaded error.

I went around Assembla looking for help, but couldn’t find anything really relevant to my problem. Luckily, however, I did find a knowledge base article in Atlassian’s documentation which described the exact same problem. You may read it here:

If you would rather read a quick rundown, then carry on.

When Git has to POST packed objects larger than 1 MB, it uses “Transfer-Encoding: chunked” for the request. Not every web server handles chunked transfer encoding out of the box, nginx in particular, unless it is configured to do so. Assembla, as it happens, uses nginx, and apparently the nginx configuration they use to serve Git over HTTPS isn’t set up to handle chunked transfer encoding properly. That explained why, in my case, where the repository was big and therefore so were the packed objects, Assembla couldn’t handle the push over HTTPS.

What was the workaround?

I could use Git over SSH and avoid the problem altogether.
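
Switching an existing clone over to SSH is a one-liner; the URL below is made up, so substitute your repository’s actual SSH URL from Assembla:

# Point the existing "origin" remote at the SSH URL instead of HTTPS.
git remote set-url origin git@git.assembla.com:example-space/example-repo.git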

Alternatively, as the Atlassian article pointed out, Git’s HTTP post buffer size could be raised to accommodate the push. When I changed the post buffer size to 800 MB with:

git config --global http.postBuffer 800000000

the next push, while it took its sweet time, worked. :)

Why take the Ice Bucket Challenge?

I first heard about the Ice Bucket Challenge when I skimmed a post shared on Facebook about Bill Gates taking it on. I then read about it again when Elon Musk was soliciting the help of his children to undertake the same challenge. To me, it seemed a pointless little thing. What I didn’t know was that the challenge had a much deeper meaning, which, sadly, many of the people taking it neglected to mention. You could say that my ignorance alone is to blame, and I’d partly agree with you. However, I talked to a number of people who knew about the challenge but had no clue what it was for. It’s a real shame that, despite the challenge taking the Internet by storm, very few places mentioning it actually took the time to explain, or even mention, the real purpose behind it.

The Ice Bucket Challenge, in and of itself, has no real significance. It serves only two purposes:

  • To raise awareness about a life-threatening disease called ALS;
  • To encourage people to take on the challenge, motivate others to follow suit, and donate to the ALS Association (ALSA), a non-profit organisation dedicated to fighting ALS.

I thought I would write a little and talk about what ALS really is.

ALS stands for “Amyotrophic Lateral Sclerosis”. It is also known as Lou Gehrig’s disease. It is a disease that affects nerve cells in the brain and the spinal cord. The human body comprises many different kinds of nerves; the nerve cells that provide voluntary movement of muscles, as well as muscle power, are called “motor neurons”. When you wish to move your limbs, signals travel along motor neurons from your brain to your spinal cord, and from the spinal cord to your muscles, so that your muscles move to your whims. ALS affects the nourishment and survival of those motor neurons: under ALS, they progressively degenerate and eventually die. When the motor neurons die, your brain loses the ability to initiate and control any kind of voluntary muscle movement. Eventually, a person affected with ALS runs a high risk of becoming completely paralysed, which leads to death.

ALS is a life-threatening disease. It has no cure, and there is no known way to halt or reverse its progress. There is, however, one drug which has been shown to slow the progression of ALS, if only moderately.

The Ice Bucket Challenge is there to raise awareness about the disease. I will probably never take the challenge, because, unless it succeeds in bringing awareness of ALS to the forefront, it amounts to nothing except a lark to boast about among friends. If you wish to donate, here’s how.

Sorting by IP on nth field with sort on Linux

I found a colleague at work struggling with sorting some data in Excel on Windows (ugh). He wasn’t getting anywhere. The data was in the following format (note that I’ve obfuscated the IP addresses):

user279 Actual_IP   192.000.000.10
user243 Actual_IP   192.000.000.103
user294 Actual_IP   192.000.000.11
user316 Actual_IP   192.000.000.112
user291 Actual_IP   192.000.000.115
user277 Actual_IP   192.000.000.12
user294 Actual_IP   192.000.000.121
user273 Actual_IP   192.000.000.13
user300 Actual_IP   192.000.000.130
user285 Actual_IP   192.000.000.135
user263 Actual_IP   192.000.000.138
user279 Actual_IP   192.000.000.14
user279 Actual_IP   192.000.000.15
user287 Actual_IP   192.000.000.16
user244 Actual_IP   192.000.000.165
user216 Actual_IP   192.000.000.17
user272 Actual_IP   192.000.000.171
user259 Actual_IP   192.000.000.179
user292 Actual_IP   192.000.000.18
user275 Actual_IP   192.000.000.180
user295 Actual_IP   192.000.000.19

He wanted to sort by the IP addresses in the third column. Being a crazy Unix/Linux CLI guy, I instantly thought about how I would do it on Linux. First of all, I wanted a way to sort based on the third column only. The man page for sort(1) revealed the -k switch, which defines the key to use for sorting. The key is the part of each line that sort actually compares; by default, the entire line is used as the key, so sort sorts by the whole line. It works in conjunction with the -t switch, which defines the field separator character. By default, fields are separated by whitespace (any run of consecutive whitespace characters). Because my data was separated by spaces, I had no need for the -t switch. With the following, I told sort to sort on the third field:

$ sort -k3,3 data.csv  

user279 Actual_IP   192.000.000.10
user243 Actual_IP   192.000.000.103
user294 Actual_IP   192.000.000.11
user316 Actual_IP   192.000.000.112
user291 Actual_IP   192.000.000.115
user277 Actual_IP   192.000.000.12
user294 Actual_IP   192.000.000.121
user273 Actual_IP   192.000.000.13
user300 Actual_IP   192.000.000.130
user285 Actual_IP   192.000.000.135
user263 Actual_IP   192.000.000.138
user279 Actual_IP   192.000.000.14
user279 Actual_IP   192.000.000.15
user287 Actual_IP   192.000.000.16
user244 Actual_IP   192.000.000.165
user216 Actual_IP   192.000.000.17
user272 Actual_IP   192.000.000.171
user259 Actual_IP   192.000.000.179
user292 Actual_IP   192.000.000.18
user275 Actual_IP   192.000.000.180
user295 Actual_IP   192.000.000.19

The 3,3 told sort to start the key at the third field and end it at the third field. Since there are no fields after the third in my data set, using -k3 would have achieved the same result. However, after comparing the data set with the output of the sort command, I found no changes: the data was already in plain lexicographic order by that column, which is exactly what the default sort produces, so 192.000.000.103 still came before 192.000.000.11. After analysing the result for a while, I finally understood what was going on and looked closely at the man page for sort again, particularly at the available sorting modes. What caught my attention was the -V switch, or --version-sort in long form, which sorts by version numbers within the key. Looking closely at the IP address column, I realised the addresses look very much like extended version numbers. Running with the -V switch gave me this:

$ sort -k3,3 -V data.csv

user279 Actual_IP   192.000.000.10
user294 Actual_IP   192.000.000.11
user277 Actual_IP   192.000.000.12
user273 Actual_IP   192.000.000.13
user279 Actual_IP   192.000.000.14
user279 Actual_IP   192.000.000.15
user287 Actual_IP   192.000.000.16
user216 Actual_IP   192.000.000.17
user292 Actual_IP   192.000.000.18
user295 Actual_IP   192.000.000.19
user243 Actual_IP   192.000.000.103
user316 Actual_IP   192.000.000.112
user291 Actual_IP   192.000.000.115
user294 Actual_IP   192.000.000.121
user300 Actual_IP   192.000.000.130
user285 Actual_IP   192.000.000.135
user263 Actual_IP   192.000.000.138
user244 Actual_IP   192.000.000.165
user272 Actual_IP   192.000.000.171
user259 Actual_IP   192.000.000.179
user275 Actual_IP   192.000.000.180

Voila.

The Python netifaces module

Three years ago I wrote about a couple of methods for obtaining public interface IPs on a system using Python. Last week I again needed to find the IP addresses of all public interfaces on a given server with Python. By chance, I came across the netifaces Python module. It is succinct and provides a clean interface. Getting the addresses of network interfaces in a portable manner is a surprisingly difficult task, but this library gets most of the job done. All one needs to do is pip install netifaces and start using it.

As a bonus, I have a piece of code to extract a list of IPs from a system’s ethX interfaces:

import netifaces

def get_public_ip_list():
    interfaces = netifaces.interfaces()
    public_ip_list = []
    for interface in interfaces:
        # Only keep ethX interfaces.
        if not interface.startswith("eth"):
            continue
        # Discard interfaces that are up but without any IPs.
        addrs = netifaces.ifaddresses(interface).get(netifaces.AF_INET)
        if not addrs:
            continue

        ips = [addr.get("addr") for addr in addrs]
        try:
            public_ip_list.append(ips[0])
        except IndexError:
            pass

    return public_ip_list
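
Calling it is straightforward; a minimal usage sketch, with made-up addresses for illustration:

if __name__ == "__main__":
    # Example output on a server with eth0 and eth1 up (addresses are
    # illustrative): ['203.0.113.10', '198.51.100.7']
    print get_public_ip_list()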

How to silence nodes on Sensu

Sensu is an open source monitoring framework, written purely in Ruby, that we use heavily at Cloudways to monitor not only our customers’ servers but our own as well. Sensu is very extensible, and combined with Graphite, the pair can cover a wide variety of monitoring and reporting/graphing scenarios.

While Sensu has a helpful API, its documentation is not nearly as helpful. For lack of a better word, it is sparse. While the v0.12 documentation attempts to fill in the gaps, it continues to fall short.

Recently, a need arose to silence Sensu clients, that is, servers running the Sensu agent, whenever those servers were stopped. Looking through the clients API, I could only find a way to delete a client, which could theoretically work, except there was no API to re-register a client afterwards. The documentation vaguely said that clients register themselves the first time they come alive.

The Sensu CLI, which isn’t an official part of Sensu, is a wonderful command-line client for Sensu. Interestingly, it provides a means of silencing and un-silencing a client. Playing with its source, I was finally able to figure out how it does that.

The way Sensu makes silencing and un-silencing of clients possible is through the Stashes API. The important bit is the format of the path parameter: if it takes the form /silence/{{ client_name }}, it silences the client identified by client_name. Here’s a Python snippet to make clear how to use the Stashes API to that effect:

#!/usr/bin/env python

import time
import json
import requests

client = "YOUR_CLIENT_NAME"

url = "http://localhost:4567/stashes/silence/%s" % client

def silence():
    payload = {
        "timestamp": time.time()
    }

    r = requests.post(url, data=json.dumps(payload))
    print r.status_code
    print r.text

def unsilence():
    r = requests.delete(url)
    print r.status_code
    print r.text

The code should be self-explanatory. The silence() function makes a POST request to the Stashes API, with the timestamp value carrying the current Unix epoch time. The unsilence() function, in contrast, makes a DELETE request to the same Stashes API path.

Note that by default the silence operation will silence the given client indefinitely. If you wish to silence it for a defined period, an expires parameter can be provided as part of the payload.
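
As a sketch of that, extending the snippet above (the exact key name and expiry semantics may differ across Sensu versions, so verify against your Sensu’s Stashes API):

def silence_for(seconds):
    # Silence the client for a limited period instead of indefinitely.
    payload = {
        "timestamp": time.time(),
        "expires": seconds,  # assumed key name; check your Sensu version
    }
    r = requests.post(url, data=json.dumps(payload))
    print r.status_code
    print r.text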