The stack behind this blog

I recently talked about moving my blog from WordPress.com to my own custom domain, hosted on my own server. It was a move I had wanted to make for a long time.

Today, I am going to don my technical hat and get down to the nitty-gritty of my current setup. If you are not very technically minded, I’d still encourage you to read through it, on the off chance that you may learn something useful. After all, I have been told that I’m good at explaining overly technical concepts in simple words.

What is a stack

A stack, in everyday life, is a pile of objects arranged neatly on top of one another. In computer programming, a stack is a particular kind of data structure used to store and manipulate data. In systems administration and server parlance, a stack is a pile of different technologies arranged neatly on top of one another. The last is the kind of stack I will be talking about. More to the point, I will be talking about the technologies I use behind blog.ayaz.pk to serve you this article.

Cloud provider

blog.ayaz.pk is hosted on a super-charged Linode cloud server. I say super-charged because very recently Linode did a super cool thing and upgraded their infrastructure to deliver faster and more efficient performance across all their customers’ servers. This included SSDs, faster CPUs, fatter network pipes, more RAM, and an increased monthly bandwidth limit, all without additional cost. I couldn’t be happier.

Operating System

It’s 64-bit Debian GNU/Linux. Having been a Slackware aficionado for years, I have now become a big fan of Debian. If I weren’t using Debian so much, I would probably have gone with Arch Linux. Nevertheless, for now, it’s Debian.

Server Stack

A website’s stack is a very interesting topic to read about. It is also mind-blowingly technical. Allow me, then, to blow your mind. This is the stack that services blog.ayaz.pk:

Varnish

Varnish is a web application cache, also known as a web application accelerator. It sits in front of your web servers and caches the hell out of them. It is mind-numbingly fast. When the Varnish guys say that Varnish “typically speeds up delivery with a factor of 300 – 1000x”, they aren’t kidding. I’ve run benchmarks on blog.ayaz.pk with and without Varnish, and I can safely say that with Varnish, my mind indeed went numb after being exposed to its awesomeness.
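A quick way to see Varnish doing its thing is to look at the response headers it adds. The exact headers depend on how your VCL is written, so treat this as a rough check rather than gospel:

$ curl -sI http://blog.ayaz.pk/ | grep -iE '^(x-varnish|age|via):'

Two request IDs on the X-Varnish header, or an Age greater than zero, tell you the response came from the cache rather than the backend.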

Nginx

Nginx is a light-weight beast of a web server. It’s goddamn fast and efficient! When I was planning my blog’s move, I initially thought of using Apache as the web server. However, I decided to go with Nginx, and I am glad I did. It is light on server resources, its worker model is robust and efficient, and it is simple to configure yet very flexible.
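In a stack like this one, Varnish owns port 80 and passes cache misses to Nginx listening on a high local port. The port below is illustrative, not necessarily what blog.ayaz.pk uses, but the two nginx commands are standard and worth making a habit of:

$ sudo nginx -t                      # sanity-check the configuration first
$ sudo nginx -s reload               # reload workers gracefully, without dropping connections
$ curl -sI http://127.0.0.1:8080/    # talk to Nginx directly, bypassing Varnish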

PHP-FPM

PHP-FPM is an alternative PHP FastCGI implementation designed to serve bigger and busier websites. It gels so seamlessly with Nginx that I simply had no choice but to fall for it.
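Nginx hands PHP requests to PHP-FPM over FastCGI, usually through a Unix socket or a local TCP port. The socket name below is the Debian-era default for PHP 5; treat it as an assumption and check your pool configuration:

$ pgrep -l php-fpm           # are the master and worker processes up?
$ ss -xl | grep php5-fpm     # is the FastCGI Unix socket listening?
$ ss -tln | grep 9000        # or, if your pool listens on TCP instead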

MySQL

MySQL is MySQL, the world’s most popular open source database.

WordPress

This blog is powered by the wonderful WordPress blogging platform. I dabbled with the idea of throwing together a minimal blogging system with Python and Flask, but decided against it, because in this day and age a blogging platform is more than a system for managing blog posts. WordPress works extremely well.

This is it!

Pretty cool, eh? If you need help configuring any of the individual technologies in the stack, be sure to give me a nudge, and I’d be more than happy to help you.

Tell vim to write with sudo

It is not uncommon to find yourself editing a file on a Linux/Unix system as a normal user with sudo privileges, only to realise that the file is read-only, and that in order to write to it you will have to discard your changes, exit the editor, and re-open the file with sudo. I discovered this neat little vim sudo trick that I believe is going to save a lot of people a lot of annoyance.

:w !sudo tee %

If you’re interested in how it works, be sure to read the Stack Overflow answers on the subject.
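The short version: “:w !cmd” pipes the buffer to cmd’s standard input, and “%” expands to the current file’s path, so the buffer is fed to tee, which, running under sudo, has the privileges to write it to the file. The variant below also discards tee’s copy of the output to stdout; when vim notices the file changed on disk, answer [L]oad, or run :e!, to pick up the saved version:

:w !sudo tee % > /dev/null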

If you use Emacs instead, you can still use it:

sudo apt-get remove -y emacs
sudo apt-get install -y vim
sudo ln -s /usr/bin/vim /usr/bin/emacs
emacs file
:w !sudo tee %

Migrating ayaz.wordpress.com to blog.ayaz.pk

I started blogging almost ten years ago (circa 2004). Every time I look at that stat, I feel overwhelmed. I wrote well before 2004, but didn’t publish most of it, and whatever I did publish on a personal website I maintained got lost when I lost access to that website. Sadly, local copies of my writings, which I kept on my first ever laptop, an IBM ThinkPad T20 running Slackware, met an unfortunate fate when the laptop’s hard-disk crashed beyond repair. Blogger launched in August of 1999, while WordPress first appeared in May of 2003; neither was popular then. In mid-2004, I signed up on Blogger and started writing on my blog there. This was my first ever blog.

In March of 2006, while ranting about the government’s decision to block Blogger, I wrote my final post there, saying goodbye to Blogger, and moved to WordPress.

There I stayed for almost the next seven years, blogging regularly on every topic under the sun I cared to write about.

Today, I am writing to tell you that I’ve moved from WordPress’s hosted platform to my very own personal website. My blog will now live at:

blog.ayaz.pk

You won’t need to change a thing in order to continue reading what I write. The ayaz.wordpress.com address for my blog, which you have been using, will continue to exist. However, it will now dutifully forward you to my new address, blog.ayaz.pk. For the most part, you won’t feel it. If you read my blog through feed readers, or if you were kind enough to subscribe to be notified of my blog posts via email, you won’t have to change anything. Whatever you have set up will continue to work as is.

I’m grateful to you for reading my writings. I’m thankful to have you as my reader. I hope to continue to write for a long time to come, and my only wish is that I succeed in writing what brings pleasure and enjoyment to you.

The Heartbleed bug!

This is big. Why is it big? SSL is part of the core infrastructure on which the Internet as we know it today runs. Millions of people, if not more, who browse the Internet and use all manner of services on it rely on SSL to keep their data secure and confidential. With this bug, anybody with the tools, know-how, and motive can cause an affected website or service to leak enough private data to decrypt secure information. It is big because the very reason we use SSL today is defeated.

How does the bug work?

The technical details of how the bug works are, well, rather technical. In a nutshell:

The bug exploits a missing bounds check in an OpenSSL feature known as the heartbeat. When exploited, an attacker can read up to 64 KB of the server’s memory on every heartbeat request. That’s a lot of memory. It could contain any type of information, from private login credentials to, worst of all, the actual SSL private key. If an attacker can cause an affected server to leak its SSL private key, SSL on that server is defeated. They can record any or all SSL traffic and use the private key to decrypt the traffic at leisure. That is rather scary!

Mitigation

Fixing this issue is a multi-step process, which is also why its impact is so huge. The following mitigation procedures must be followed; a sketch of the first two steps, on Debian, follows the list:

  • Immediately patch OpenSSL and related SSL libraries on affected servers and systems.
  • Re-generate and re-issue all SSL certificates and private keys.
  • Revoke all existing SSL certificates.
  • Ask customers/users to change any login credentials they use related to the servers and systems in question.
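Here is roughly what the first two steps look like on a Debian box. The package and service names are the usual Debian ones, and example.com is a placeholder, so adjust for your own stack:

$ sudo apt-get update
$ sudo apt-get install --only-upgrade openssl libssl1.0.0
$ sudo service nginx restart    # restart every service linked against OpenSSL
$ openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

The last command generates a brand new private key and a CSR against which the certificate can be re-issued.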

How to check for the Heartbleed bug

These are some tools you can use to check if your webserver is vulnerable:

CVE-2014-0160 has technical information about the bug and which versions are affected.
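Failing an online checker, the simplest local test is the OpenSSL version and build date. Versions 1.0.1 through 1.0.1f are affected; distributions like Debian backported the fix without bumping the version string, so a build date of April 7, 2014 or later is the tell-tale sign of a patched build:

$ openssl version -a | head -2    # look at the version and the 'built on' date
$ dpkg -l openssl libssl1.0.0     # on Debian, check the package revisions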

Is blog.ayaz.pk affected?

I got lucky. Only about a week ago did I decide to move my blog from ayaz.wordpress.com over to my Linode. Since I wasn’t using any SSL certs on my Linode, I bought one after I patched OpenSSL et al on the server. Phew!

How to write MCollective Agents to run actions in background

The Marionette Collective, or MCollective for short, is cutting-edge tech for running system administration tasks and jobs in parallel against a cluster of servers. When the number of servers you have to manage grows, the task of managing them, including keeping the OS and installed packages updated, without a doubt becomes a nightmare. MCollective helps you drag yourself out of that nightmare and into a jolly dream, where you are the king, with a powerful tool at your disposal that lets you control all of your servers in one go. I’ll probably do a bad job of painting MCollective in a good light, so I’d recommend you read all about it on MCollective’s main website.

Like every good tool worth its while, MCollective gives you the power to extend it by writing what it calls agents to run custom code on your servers to perform any kind of job. You can read about agents here. MCollective is all in Ruby, so if you know Ruby, which is a pretty little programming language by the way, you can take full advantage of MCollective. Incidentally, a part of my day job revolves around writing MCollective agents to automate all sorts of jobs you can think of performing on a server.

For a while I have been perplexed by the lack of support for running agents in the background. Not every job finishes in milliseconds. Most jobs of even average complexity take anywhere from a few seconds to several minutes. And since I write an API which uses MCollective agents to execute jobs, I often run into the problem of the API blocking while an agent takes its sweet time to run. As far as I’ve looked, I haven’t found support within MCollective for running actions asynchronously.

So I got down to thinking and came up with a solution, which you could call a bit of a hack; but in all my testing so far, I’ve yet to face any issues with it.

I’ve an example hosted on GitHub. It’s very straightforward, even if crude, and the code is self-explanatory. At the heart of it, each action forks its work into a child process and returns immediately, without the parent waiting around to reap the child, while one extra action per agent fetches the status of the forked work. The approach has to be applied to every agent you have, but only to those actions you wish to run asynchronously. With the agents in place, all you’ll need to do is call them thus:

$ mco rpc bg run_bg -I node -j
{"status": 0, "result": "2434"}

and repeatedly fetch the status of the async action thus:

$ mco rpc bg status pid=2434 operation=run_bg -I node -j
{"status": 0, "result": ""}
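Tied together, the calling side becomes a simple poll loop. This sketch assumes the single-reply JSON output shown above, and that the status action returns an empty result until the child has finished; jq is my addition for parsing JSON, not something MCollective ships:

#!/bin/bash
# Kick off the background action and remember the child's pid.
pid=$(mco rpc bg run_bg -I node -j | jq -r '.result')
while :; do
    result=$(mco rpc bg status pid="$pid" operation=run_bg -I node -j | jq -r '.result')
    [ -n "$result" ] && break    # a non-empty result means the child has finished
    sleep 5                      # poll politely instead of blocking the caller
done
echo "$result"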

It’s a solution that works. I’ve tested it for over a month and a half now, across many servers, without any issues. I would love to see people play with it. The code is full of comments which explain how everything works, but if you have any questions, I’d love to entertain them.

Fastly’s CDN accessible once again from Pakistan.

Three days ago I wrote about a particular subnet of Fastly.com’s CDN network being null-routed in Pakistan. Since the affected subnet was the one from which Fastly served content to users in Pakistan, websites which off-load their content to Fastly’s CDN, such as GitHub, Foursquare, The Guardian, Apache Maven, The Verge, Gawker, Wikia, even Urban Dictionary, and many others, stopped opening for users inside Pakistan.

However, as of today, I can see that the null-route previously in place has been lifted as mysteriously as it was placed. The subnet in question, 185.31.17.0/24, is once again accessible. I have tested from behind both TransWorld and PTCL. While I don’t know why it was blocked in the first place, or why it has been made accessible again, whether it was due to an ignorant glitch on someone’s part or something intentional, I am glad that the CDN is visible again.

If you are observing evidence otherwise, please feel free to let me know.

185.31.17.0/24 subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day, the former for work and pleasure, and the latter for keeping track of where I go through the course of a day. Since yesterday, though, I have been noticing that pages on GitHub have been taking close to an eternity to open, if not failing completely. Even when a page loads, all of the static content is missing, and many other things aren’t working. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote these off as glitches on Foursquare’s and GitHub’s networks.
 
It was only today that I realized what’s going on. GitHub and Foursquare both rely on Fastly’s CDN services. And, for some reason, Fastly’s CDN services have not been working within Pakistan.
 
The first thing I did was look up Fastly’s website and found that it didn’t open for me. Whoa! GitHub’s not working, Foursquare’s not loading, and now, I can’t get to Fastly.
 
I ran a traceroute to Fastly, and to my utter surprise, the trace ended up with a !X (comms administratively prohibited) response from one of the level3.net routers.
 
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using 216.146.46.10
traceroute to fastly.com (216.146.46.10), 64 hops max, 52 byte packets
[...]
 6 xe-8-1-3.edge4.frankfurt1.level3.net (212.162.25.89) 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net (4.69.154.190) 236.032 ms
 vlan60.csw1.frankfurt1.level3.net (4.69.154.62) 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net (4.69.140.21) 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net (4.69.140.17) 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net (4.69.148.189) 236.159 ms
 ae-24-24.ebr2.london1.level3.net (4.69.148.197) 236.017 ms
 ae-23-23.ebr2.london1.level3.net (4.69.148.193) 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net (4.69.137.70) 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net (4.69.137.66) 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net (4.69.137.74) 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net (4.69.134.78) 235.980 ms
 ae-81-81.csw3.newyork1.level3.net (4.69.134.74) 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net (4.69.155.69) 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net (4.69.155.197) 235.768 ms
13 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net (4.53.90.150) 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now, that, I thought, was most odd. Why was level3 prohibiting the trace?

I went looking for a support contact at Fastly to try to find anything that could explain what was going on. I found their IRC chat room on FreeNode (I spend a lot of time on FreeNode), and didn’t waste time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan: the 185.31.17.0/24 range. I did some network prodding about, and confirmed that that was indeed the subnet I couldn’t get to from within Pakistan.

$ ping -c 1 185.31.18.133
PING 185.31.18.133 (185.31.18.133): 56 data bytes
64 bytes from 185.31.18.133: icmp_seq=0 ttl=55 time=145.194 ms
--- 185.31.18.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1 185.31.16.133
PING 185.31.16.133 (185.31.16.133): 56 data bytes
64 bytes from 185.31.16.133: icmp_seq=0 ttl=51 time=188.872 ms
--- 185.31.16.133 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1 185.31.17.133
PING 185.31.17.133 (185.31.17.133): 56 data bytes
--- 185.31.17.133 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

They also told me they’d had reports from behind both PTCL and TWA of the range in question being null-routed. They said they didn’t know why it had been null-routed, but would appreciate any information the locals could provide.

This is ludicrous. After Wi-Tribe filtering UDP DNS packets to Google’s DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous thing that has pissed me off.

Docker

Docker is all the rage these days. If you don’t know what Docker is, you should head on over to www.docker.io. It’s a container engine, designed to run on virtual machines, on bare-metal physical servers, on OpenStack clusters, on AWS instances, or on pretty much any other form of machine incarnation you can think of. With Docker, you can easily create very lightweight containers that run your applications. What’s really stunning about Docker is that you can create a lightweight container for your application in your development environment once, and have the same container run at scale in a production environment.

The best way to appreciate the power Docker puts at your fingertips is to try it out for yourself. If you wish to do it, I would recommend the browser-based interactive tutorial on Docker’s website.

While it is easy to build Docker containers manually, the real power of Docker comes from what is known as a Dockerfile. A Dockerfile, using a very simple syntax that can be learnt quickly, makes it possible to automate setting up and configuring a container’s environment to run your application.

On a weekend I finally took out time for myself and sat down to embrace Docker, not only through the interactive tutorial on Docker’s website, but also on my server. I was half lucky, in that I didn’t have to set Docker up on my local system or a VM, because Linode just happened to have very recently introduced support for Docker. I started playing around with Docker commands on the shell manually, and slowly transitioned to writing my own Dockerfile. The result: I wrote a Dockerfile to build a small container that runs “irssi” inside it. Go ahead and check it out, please. If you have a system with Docker running on it (and I really think you should have one), you can follow the two or three commands listed in the README file to build and run my container. It is that easy!
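To give a taste of how little it takes, here is a minimal sketch of the same idea. It is not my actual Dockerfile, and the base image is only an assumption; see the README in my repository for the real commands:

$ cat > Dockerfile <<'EOF'
FROM debian:wheezy
RUN apt-get update && apt-get install -y irssi
ENTRYPOINT ["irssi"]
EOF
$ docker build -t irssi .    # bake an image from the Dockerfile
$ docker run -it irssi       # run irssi interactively inside a container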

iPad mini retina image retention

I first read about image retention on the iPad mini retina on Marco Arment’s blog. His iPad mini retina had a slight case of image retention, which he discovered by [creating and] running an image retention test on his iPad. I use the word slight to describe Marco’s case because his was a minor problem, something he would not have noticed during normal use had he not run the test explicitly. Because it didn’t seem like something that would get in the way of enjoying the beautiful screen of the new iPad mini, I didn’t give it much thought.

The very first thing I did on my new iPad mini retina, after hooking it up online, was run Marco’s image retention test. It passed, which elated me and squashed what little fears I had. In hindsight, I should have run the test for the full ten minutes, instead of hastily choosing one minute. I basked in the magnificence of the retina screen and the weightlessness of the device for two whole weeks. It was the perfect tablet: light-weight, just the right size, with a beautifully sharp and crisp screen, a lot of computing power packed inside a small form factor, and a lovely OS to make it all work seamlessly. Then, one unfortunate night after work, when I pulled the iPad out of the drawer I keep it in when away, I was dreadfully shocked at the mess the screen had become. The image retention clearly visible on the screen was horrible. There were crooked lines everywhere, and swiping on the screen caused them to flicker grotesquely. If Marco saw it, he would jump out of his chair.

Home screen on my iPad mini with severe image retention.

I managed to get the iPad returned to Apple. To my surprise, and a little disappointment, Apple outright refunded me for the device.

Marco, in his post, explained why he thought the issue exists: Apple buys retina panels from a couple of manufacturers, and panels from at least one of them exhibit image retention. I think Apple is fully aware of it, and it’s the reason why iPad minis with retina displays are in short supply.

I loved that thing. I cannot emphasise that enough. I will buy it again, when the next batch from manufacturing hits the market.

→ Don’t ask a non-drinker why they don’t drink

storm_cloaks, commenting on a Reddit thread:

As a non-drinker, answering the “why don’t you drink?” question is always annoying. Generally speaking, I think it’s poor etiquette to ask someone why they don’t drink, and it saddens me that most people don’t feel the same. First, its really none of their business. Second, asking someone to justify a personal choice at a party is a total killjoy, and it clearly creates a separation between the non-drinker and the drinker that’s asking. Going out of your way to point out the fact that someone’s different from you, especially in a situation that’s supposed to be festive, is totally ridiculous, if not offensive. I understand the curiosity, but it would be rude and odd if I asked people at a party “why do you drink?”

I love that explanation. I couldn’t have put it any better myself. On several occasions I have been asked by friends and acquaintances why I don’t drink. 90% of those times, I have been made fun of and called a pussy for not drinking. I have never understood why people who drink, or want to drink, feel compelled to ask why the people around them don’t. It’s OK to offer somebody a drink, but not at all OK to demand they justify declining it. Genuine curiosity is one thing, but I feel that people who drink don’t honestly care about the reason somebody doesn’t. I mean, are they looking for reasons to justify quitting? Or for a lack of justifiable reasons to justify their drinking?