Migrating ayaz.wordpress.com to blog.ayaz.pk

I started blogging almost ten years ago (circa 2004). Every time I look at that stat, I feel overwhelmed. I wrote well before 2004, but didn’t publish most of it, and whatever I did publish on a personal website I maintained was lost when I lost access to that website. Sadly, the local copies of my writings, which I kept on my first ever laptop, an IBM ThinkPad T20 running Slackware, met an unfortunate fate when the laptop’s hard disk crashed beyond repair. Blogger launched in August of 1999, while WordPress first appeared in May of 2003; neither was popular then. In mid-2004, I signed up on Blogger and started writing on my blog there. This was my first ever blog.

In March of 2006, while ranting about the government’s decision to block Blogger, I wrote my final post saying goodbye to Blogger and moved to WordPress.

There I stayed for almost the next seven years, blogging regularly on every topic under the sun that I cared to write about.

Today, I am writing to tell you that I’ve moved from WordPress’s hosted platform to my very own personal website. My blog will now live at:

blog.ayaz.pk
You won’t need to change a thing in order to continue reading what I write. The ayaz.wordpress.com address for my blog, which you have been using, will continue to exist. However, it will now quietly forward you to my new address, blog.ayaz.pk. For the most part, you won’t notice it. If you read my blog through a feed reader, or if you were kind enough to subscribe to email notifications of my posts, you won’t have to change anything. Whatever you have set up will continue to work as is.

I’m grateful to you for reading my writings. I’m thankful to have you as my reader. I hope to continue writing for a long time to come, and my only wish is that what I write brings you pleasure and enjoyment.

The Heartbleed bug!

This is big. Why is it big? SSL is part of the core infrastructure on which the Internet as we know it today runs. Millions of people, if not more, who browse the Internet and use any manner of services on it rely on SSL to keep their data secure and confidential. With this bug, anybody with the tools, know-how, and motive can cause an affected website or service to leak enough private data to decrypt secure information. It is big because the very reason we use SSL today is defeated.

How does the bug work?

The technical details of how the bug works are, well, rather technical. You can read about them here:

The bug exploits a memory over-read in a function of OpenSSL known as heartbeat. When exploited, an attacker can read up to 64 KB of memory contents on every request. That’s a lot of memory. This memory could contain any type of information, from private login credentials to, worst of all, the actual SSL private key. If an attacker can cause an affected server to leak its SSL private key, SSL on that server is defeated. They can record any or all SSL traffic and use the private key to decrypt the traffic at leisure. That is rather scary!
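The flaw boils down to the heartbeat handler trusting a payload-length field supplied by the peer. The actual bug lives in OpenSSL’s C code; purely as an illustration of the idea, here is a tiny Ruby simulation of the missing bounds check (all names and data below are made up):

```ruby
# Toy simulation of the Heartbleed over-read. The real bug is in
# OpenSSL's C heartbeat handler; this only illustrates the danger of
# trusting the peer's claimed payload length.

# Pretend server memory: the 4-byte heartbeat payload happens to sit
# next to unrelated secrets.
MEMORY = "PING" + "|secret-session-key-0xDEADBEEF"

def heartbeat_response(claimed_length)
  # Vulnerable: echo back `claimed_length` bytes, trusting the length
  # the client claimed instead of the actual payload size (4 bytes).
  MEMORY[0, claimed_length]
end

puts heartbeat_response(4)   # an honest client gets "PING" back
puts heartbeat_response(64)  # a malicious client leaks adjacent memory
```

The fix, conceptually, is a single check: refuse to respond when the claimed length exceeds the size of the payload actually received.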


How do we fix it?

Fixing this issue is a multi-step process, which is also why its impact is so huge. The following mitigation procedures must be followed:

  • Immediately patch OpenSSL and related SSL libraries on affected servers and systems.
  • Re-generate and re-issue all SSL certificates and private keys.
  • Revoke all existing SSL certificates.
  • Ask customers/users to change any login credentials they use on the servers and systems in question.

How to check for Heartbleed bug

These are some tools you can use to check whether your web server is vulnerable:

CVE-2014-0160 has technical information about the bug and which versions are affected.
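CVE-2014-0160 covers OpenSSL 1.0.1 through 1.0.1f, with 1.0.1g carrying the fix. As a rough first check (an illustrative helper, not one of the official tools), you can match a version banner against that range; note that many distributions backport the fix without bumping the version string, so this alone is not conclusive:

```ruby
# Rough helper: is an OpenSSL version banner in the Heartbleed-affected
# range (1.0.1 through 1.0.1f)? Caveat: distros often backport the fix
# without changing the version string, so treat a "true" as "go verify".
def heartbleed_range?(banner)
  !!(banner =~ /OpenSSL 1\.0\.1[a-f]?(\s|\z)/)
end

puts heartbleed_range?("OpenSSL 1.0.1e 11 Feb 2013")  # true: affected range
puts heartbleed_range?("OpenSSL 1.0.1g 7 Apr 2014")   # false: fixed release
puts heartbleed_range?("OpenSSL 1.0.0l 6 Jan 2014")   # false: branch not affected

# To check which OpenSSL your Ruby itself links against:
require 'openssl'
puts OpenSSL::OPENSSL_VERSION
```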

Is blog.ayaz.pk affected?

I got lucky. Only about a week ago did I decide to move my blog from ayaz.wordpress.com over to my Linode. Since I wasn’t using any SSL certs on my Linode, I bought one after I patched OpenSSL et al on the server. Phew!

How to write MCollective Agents to run actions in background

The Marionette Collective, or MCollective in short, is a cutting-edge tech for running system administration tasks and jobs in parallel against a cluster of servers. When the number of servers you have to manage grows, the task of managing them, including keeping the OS and packages installed on them updated, becomes, without a doubt, a nightmare. MCollective helps you drag yourself out of that nightmare and into a jolly dream, where you are the king, and at your disposal is a powerful tool with which you can control all of your servers in one go. I’ll probably do a bad job of painting MCollective in a good light, so I’d recommend you read all about it on MCollective’s main website.

Like every good tool worth its salt, MCollective gives you the power to extend it by writing what it calls agents, to run custom code on your servers and perform any kind of job. You can read about agents here. MCollective is all in Ruby, so if you know Ruby, which is a pretty little programming language by the way, you can take full advantage of it. Incidentally, a part of my day job revolves around writing MCollective agents to automate all sorts of jobs you can think of performing on a server.

For a while I have been perplexed by the lack of support for running agents in the background. Not every job finishes in milliseconds. Most jobs, in terms of what they have to do, take anywhere from seconds to minutes. And since I write an API which uses MCollective agents to execute jobs, I often run into the problem of having the API block while the agent takes its sweet time to run. As far as I’ve looked, I haven’t found support within MCollective for running actions asynchronously.

So, I got down to thinking, and came up with a solution, which you could call a bit of a hack. But insofar as my experience testing it has been, I’ve yet to face any issues with it.

I’ve an example hosted on GitHub. It’s very straightforward, even if crude, and the code is self-explanatory. At the heart of it, the actions in your agents fork and run as children, the parent does not wait to reap the children, and each agent gets one extra action to fetch the status of the others. Essentially, the approach has to be applied to every agent you have, but only to those actions which you wish to run asynchronously. With the agents in place, all you’ll need to do is call the agents thus:

$ mco rpc bg run_bg -I node -j
{"status": 0, "result": "2434"}

and repeatedly fetch the status of the async action thus:

$ mco rpc bg status pid=2434 operation=run_bg -I node -j
{"status": 0, "result": ""}
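For the curious, the core fork-and-poll trick, stripped of MCollective’s agent DSL, looks roughly like this in plain Ruby (method names here are illustrative, not the actual code from the GitHub example):

```ruby
# Core of the fork-and-poll approach, in plain Ruby. Inside a real
# MCollective agent, run_bg would be the body of the background action
# and status the body of the status action; these names are illustrative.

def run_bg(cmd)
  pid = Process.fork { exec(cmd) }
  Process.detach(pid)  # reap the child when it exits, so no zombie lingers
  pid                  # hand the pid back so the caller can poll later
end

def status(pid)
  Process.kill(0, pid)  # signal 0 checks for existence without killing
  :running
rescue Errno::ESRCH
  :finished
end

pid = run_bg("sleep 2")
puts status(pid)  # :running right after launch
sleep 3
puts status(pid)  # :finished once the child has exited and been reaped
```

One wrinkle with a detached child is that the parent never sees its exit status or output directly; a common workaround is to have the child write those to a file keyed by its pid for the status action to read.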

It’s a solution that works. I’ve tested it for over a month and a half now across many servers without any issues. I would like to see people play with it. The code is full of comments which explain how to do what, but if you have any questions, I’d love to entertain them.

Fastly’s CDN accessible once again from Pakistan.

Three days ago I wrote about a particular subnet of Fastly.com’s CDN network becoming null-routed in Pakistan. Since the affected subnet was the one from which Fastly served content to users in Pakistan, websites which off-load their content to Fastly’s CDN, such as GitHub, Foursquare, The Guardian, Apache Maven, The Verge, Gawker, Wikia, even Urban Dictionary, and many others, stopped opening for users inside Pakistan.

However, as of today, I can see that the null-route previously in place has been lifted as mysteriously as it was put in place. The subnet in question is once again accessible. I have tested from behind both TransWorld and PTCL. While I don’t know why it was blocked in the first place or why it has been made accessible again, whether it was an ignorant glitch on someone’s part or whether it was intentional, I am glad that the CDN is visible again.

If you are observing evidence otherwise, please feel free to let me know.

subnet of Fastly’s CDN null-routed in Pakistan?

I rely heavily on GitHub and Foursquare every day, the former for work and pleasure, and the latter for keeping track of where I go through the course of a day. Since yesterday, though, I have noticed that pages on GitHub take close to an eternity to open, if they don’t fail completely. Even when a page loads, all of the static content is missing, and many other things don’t work. With Foursquare, I haven’t been able to get a list of places to check in to. Yesterday, I wrote these off as glitches on Foursquare’s and GitHub’s networks.
It was only today that I realized what’s going on. GitHub and Foursquare both rely on Fastly’s CDN services. And, for some reason, Fastly’s CDN services have not been working within Pakistan.
The first thing I did was look up Fastly’s website and found that it didn’t open for me. Whoa! GitHub’s not working, Foursquare’s not loading, and now, I can’t get to Fastly.
I ran a traceroute to Fastly, and to my utter surprise, the trace ended up with a !X (comms administratively prohibited) response from one of the level3.net routers.
$ traceroute fastly.com
traceroute: Warning: fastly.com has multiple addresses; using
traceroute to fastly.com (, 64 hops max, 52 byte packets
 6 xe-8-1-3.edge4.frankfurt1.level3.net ( 157.577 ms 158.102 ms 166.088 ms
 7 vlan80.csw3.frankfurt1.level3.net ( 236.032 ms
 vlan60.csw1.frankfurt1.level3.net ( 236.247 ms 236.731 ms
 8 ae-72-72.ebr2.frankfurt1.level3.net ( 236.029 ms 236.606 ms
 ae-62-62.ebr2.frankfurt1.level3.net ( 236.804 ms
 9 ae-22-22.ebr2.london1.level3.net ( 236.159 ms
 ae-24-24.ebr2.london1.level3.net ( 236.017 ms
 ae-23-23.ebr2.london1.level3.net ( 236.115 ms
10 ae-42-42.ebr1.newyork1.level3.net ( 235.838 ms
 ae-41-41.ebr1.newyork1.level3.net ( 236.237 ms
 ae-43-43.ebr1.newyork1.level3.net ( 235.998 ms
11 ae-91-91.csw4.newyork1.level3.net ( 235.980 ms
 ae-81-81.csw3.newyork1.level3.net ( 236.211 ms 235.548 ms
12 ae-23-70.car3.newyork1.level3.net ( 236.151 ms 235.730 ms
 ae-43-90.car3.newyork1.level3.net ( 235.768 ms
13 dynamic-net.car3.newyork1.level3.net ( 236.116 ms 236.453 ms 236.565 ms
14 dynamic-net.car3.newyork1.level3.net ( 237.399 ms !X 236.225 ms !X 235.870 ms !X

Now, that, I thought, was most odd. Why was level3 prohibiting the trace?

I went looking for a support contact at Fastly to try and get anything that could explain what was going on. I found their IRC chat room on FreeNode (I spend a lot of time on FreeNode), and didn’t waste time dropping into it. The kind folks there told me that they’d had reports of one of their IP ranges being null-routed in Pakistan. It was the range. I did some network prodding about, and confirmed that that was indeed the subnet I couldn’t get to from within Pakistan.

$ ping -c 1
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=55 time=145.194 ms
--- ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 145.194/145.194/145.194/0.000 ms

$ ping -c 1
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=51 time=188.872 ms
--- ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 188.872/188.872/188.872/0.000 ms

$ ping -c 1
PING ( 56 data bytes
--- ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

They also told me they’d had reports from both PTCL and TWA users that the range in question was null-routed. They said they didn’t know why it had been null-routed, but would appreciate any info the locals could provide.

This is ludicrous. After Wi-Tribe’s filtering of UDP DNS packets to Google’s DNS and OpenDNS servers (which they still do), this is the second absolutely preposterous thing that has pissed me off.


Embracing Docker

Docker is all the rage these days. If you don’t know what Docker is, you should head on over to www.docker.io. It’s a container engine, designed to run on virtual machines, bare-metal physical servers, OpenStack clusters, AWS instances, or pretty much any other form of machine incarnation you could think of. With Docker, you can easily create very lightweight containers that run your applications. What’s really stunning about Docker is that you can create a lightweight container for your application in your development environment once, and have the same container run at scale in a production environment.

The best way to appreciate the power Docker puts at your fingertips is to try it out for yourself. If you wish to, I would recommend the browser-based interactive tutorial on Docker’s website.

While it is easy to build Docker containers manually, the real power of Docker comes from what is known as a Dockerfile. A Dockerfile, using a very simple syntax that can be learnt quickly, makes it possible to automate setting up and configuring the container’s environment to run your application.

One weekend I finally took out time for myself and sat down to embrace Docker, not only through the interactive tutorial on Docker’s website, but also on my server. I was half lucky in that I didn’t need to set Docker up on my local system or a VM, because Linode happened to have very recently introduced support for Docker. I started playing around with Docker commands on the shell manually, and slowly transitioned to writing my own Dockerfile. The result: a Dockerfile that runs a small container with “irssi” inside it. Go ahead and check it out, please. If you have a system with Docker running on it (and I really think you should), you can follow the two or three commands listed in the README file to build and run my container. It is that easy!
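For a flavour of what such a Dockerfile involves, here is a minimal sketch along the same lines. This is a hypothetical reconstruction, not the actual file from my repo; the base image and package names are assumptions:

```dockerfile
# Hypothetical sketch of a Dockerfile for an irssi container.
# Base image and package choices are assumptions, not the repo's actual file.
FROM ubuntu:14.04

# Install irssi from the distribution's package repositories.
RUN apt-get update && apt-get install -y irssi && apt-get clean

# Run as an unprivileged user rather than root.
RUN useradd -m irc
USER irc

# irssi is the container's main process; attach with an interactive run.
ENTRYPOINT ["irssi"]
```

Building and running would then be along the lines of `docker build -t irssi .` followed by `docker run -it irssi`.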

iPad mini retina image retention

I first read about image retention on the iPad mini retina on Marco Arment’s blog. His iPad mini retina had a slight case of image retention, which he discovered by creating and running an image retention test on his iPad. I use the word slight to describe Marco’s case because his was a minor problem, something he would not have noticed during normal use had he not explicitly run the test. Because it didn’t seem like something that would get in the way of enjoying the beautiful screen of the new iPad mini, I didn’t give it much thought.

The very first thing I did on my new iPad mini retina, after hooking it up online, was run Marco’s image retention test. It passed, which elated me and squashed what little fear I had. In hindsight, I should have run the test for the full ten minutes instead of hastily choosing one minute. I basked in the magnificence of the retina screen and the weightlessness of the device for two whole weeks. It was the perfect tablet: light-weight, just the right size, with a beautifully sharp and crisp screen, a lot of computing power packed inside a small form factor, and a lovely OS to make it all work seamlessly. Then, one unfortunate night after work, when I pulled the iPad out of the drawer I keep it in when away, I was dreadfully shocked to see the mess the screen had become. The image retention clearly visible on the screen was horrible. There were crooked lines everywhere, and swiping on the screen made them flicker grotesquely. If Marco saw it, he would jump out of his chair.

Home screen on my iPad mini with severe image retention.

I managed to get the iPad returned to Apple. To my surprise, and a little disappointment, Apple outright refunded me the device.

Marco, in his post, explained why he thought the issue was there. Apple buys retina panels from a couple of manufacturers, and panels from at least one of them exhibit image retention. I think Apple is fully aware of it, and it’s the reason why iPad minis with retina displays are in short supply.

I loved that thing. I cannot emphasise that enough. I will buy it again, when the next batch from manufacturing hits the market.