Shell One-liners to Generate Random Passwords

I’ve been on the lookout for ways to generate random passwords from the shell. One way to do that, as I’ve learned from people, is to run the slappasswd utility a few times, collect enough random output, and concatenate the pieces into a long enough password. Crude, but it suffices, until you realise that the slappasswd utility comes from OpenLDAP and you’d rather not pollute your computer with that filth. What next?

These passwords aren’t meant to be remembered by humans; I’d probably store them in an encrypted text file and just copy-paste them wherever they’re required. So they can be completely random. This makes our job so much easier.


The first step to generating a random password is to get enough entropy. Fortunately, on Linux and most *NIX systems, you’ll have either /dev/random or /dev/urandom to help you along. Just start by dumping enough bytes from one of them into stdout.

Something to take note of though - historically, /dev/random used to be a source of completely random bytes captured from the environment, electrical noise, system events, etc., and /dev/urandom would be a pseudorandom number generator that could either be cryptographically secure, or not. That difference has blurred somewhat these days, and you should consult the documentation for your operating system to see where those bytes are coming from.

You can use something like Haveged to inject more deterministically generated entropy (yes, I get the irony there) into the system.
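If you’re curious whether you need it, Linux exposes the kernel’s current entropy estimate through procfs; a quick way to peek at it (the path is Linux-specific):

```shell
# Print the kernel's current entropy-pool estimate, in bits
cat /proc/sys/kernel/random/entropy_avail
```

A persistently low figure on a headless box is the classic sign that a daemon like Haveged could help.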

First Shot

Once you’ve decided on a source of random bytes, the password generation is simple - just get enough bytes, and convert them into an ASCII string. The best way to do that is to simply base64-encode raw bytes, like so:

$: dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 -w0

This is as simple as it can get - read the PRNG (/dev/urandom) with dd, get 1 block (count) of 32 bytes (bs), redirect stderr to /dev/null to suppress dd’s status report, and base64-encode the resulting output. The -w0 switch prevents base64 from inserting a newline in the output after every few characters (the default wrap is 76 characters).
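As a quick sanity check on the output length: base64 produces four characters for every three input bytes, so 32 random bytes always become 4 × ceil(32/3) = 44 characters, padding included:

```shell
# 32 raw bytes always base64-encode to 44 characters (the last one is '=' padding)
pw=$(dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 -w0)
echo "${#pw}"   # prints 44
```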

Although having symbols in your password makes it more secure (more possible characters to choose from when brute-forcing the password), it’s possible to strip the non-alphanumeric characters from the password. Just pipe the output to tr, like so:

$: dd if=/dev/urandom bs=32 count=1 2> /dev/null | base64 -w0 | tr -cd "[:alnum:]"

I really recommend you just use the first example, and increase the bs value given to dd to get longer passwords. However, you might want to get fancier, and because it’s shell, there are ways to do that.

Entropy, Redux

You might want to have relatively short (30-60 character) passwords that are generated from massive amounts of entropy, say over 8 kilobytes of random data. While this is overkill, you could actually do it by throwing a hash function into the mix.

Here’s what the pipe would have to do:

  1. Get a massive amount of random data and dump it into stdout, possibly with dd if=/dev/urandom bs=8192 count=1.
  2. Hash that data to reduce the number of bytes in the output. You shouldn’t really think in the direction of MD5 at all - choose between sha{1,224,256,384,512}sum, and strip all extra data from the output.
  3. The standard sha*sum commands output the hex digest of the hash bytes, so you’ll have to convert them back into raw bytes.
  4. Now just run base64 to encode the data, and pull out the symbols if you want to.

Now I’m not sure what the math looks like here, but something tells me this extra effort is pretty much useless: 32 bytes of purely random data will take the same brute-force effort to crack as a 32-byte hash generated from 8 kilobytes of purely random data. In any case, I’ve outlined the pipe below.

Password Generation, Redux

I’ll explain the pipe in bits.

First, the hash. I’m going to use MD5 to demonstrate here because the hashes are nice and short, but you should never use MD5 in practice because the hashes are nice and short.

Here’s what the output of the md5sum command (and all the sha*sum commands) looks like:

$: echo hello | md5sum
b1946ac92492d2347c6235b4d2611184 -

You have the hash, followed by a couple of spaces, and then the filename. Since it’s reading from stdin, the filename is simply -.

To get just the hash, pipe it to cut, like so:

$: echo hello | md5sum | cut -d " " -f 1

You could just base64-encode this string, like so:

$: echo hello | md5sum | cut -d " " -f 1 | base64 -w0

Notice that there are no symbols in this string? That’s because you’re encoding bytes that are already alphanumeric, and base64 encodes alphanumeric ASCII values to alphanumeric ASCII values.

Another way to do this would be to convert the hash into its raw bytes, and then base64-encode that. You’d get a shorter password (half as many characters; hex-encoding a byte takes two bytes), but you’d get symbols. You’ll be trading password length for a bigger alphabet space.
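To put numbers on that trade-off, here’s a quick illustration using od instead of xxd (od ships with coreutils, so it’s always around). A 16-byte value is 32 hex characters, which base64-encode to 44 characters, while the raw 16 bytes encode to just 24:

```shell
# Grab 16 random bytes, keeping a raw copy and a hex representation
dd if=/dev/urandom bs=16 count=1 of=/tmp/raw 2>/dev/null
hex=$(od -An -tx1 < /tmp/raw | tr -d ' \n')   # 32 hex characters
a=$(printf %s "$hex" | base64 -w0)            # encodes the 32 ASCII bytes
b=$(base64 -w0 < /tmp/raw)                    # encodes the 16 raw bytes
echo "${#a} ${#b}"   # prints 44 24
```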

To convert the hash to raw bytes, you can use Perl, Python, shell builtins, sed and what not, but the simplest way is to simply use xxd, a command that comes with vim. Here’s how to base64-encode the raw bytes of the hash:

$: echo hello | md5sum | cut -d " " -f 1 | xxd -r -p | base64 -w0

And that’s basically it. Use a longer hash function and some actual random input, and you’re done:

$: dd if=/dev/urandom bs=16384 count=1 2> /dev/null | sha384sum | cut -d " " -f 1 | xxd -r -p | base64 -w0

Shell Shortcuts

To make this easier, you can just create a small function and put that into your ~/.zshrc or ~/.bashrc. This is what I’ve got:

function genpasswd () {
    local RPASSWD=$(dd if=/dev/urandom bs=$1 count=1 2> /dev/null | base64 -w0)
    if [[ $2 = "-n" ]]; then
        echo $RPASSWD | tr -cd "[:alnum:]"
    else
        echo $RPASSWD
    fi
}

This function takes one mandatory argument, and one optional one. The first argument is the number of random bytes you want, and the second is -n if you want symbols to be stripped. Works like so:

$: genpasswd 32

And if you want no symbols, then:

$: genpasswd 32 -n


Till next time!

KDE Infrastructure on DigitalOcean

KDE’s server inventory is a mixed bag. We have a few physical machines that were donated to us. There’s some sponsored colocation. We also rent a couple of big machines from Hetzner, divvy them up into smaller containers with lxc and host services there. Today, I can announce that we’re adding droplets from DigitalOcean to that bag.

A full-blown cloud infrastructure on something like AWS would not only be prohibitively expensive; our situation doesn’t merit one either. Our more powerful servers are dedicated build slaves for our CI system, and a server with 32GB of RAM and dual-redundant SSDs with ZFS-on-Linux-based storage is currently used for the code repositories, and will soon host Phabricator.

We could, however, do with something in between cloud “compute” resources and a physical server that we manually manage, and DigitalOcean’s droplets fit the bill right there. DO’s droplets are small - we can dedicate a 1GB droplet to hosting websites, which would allow us to isolate web hosting from other services while not wasting resources we’d never use. They’re also standard KVM machines, which allows us the level of manual control we’d like.

There are some additional aspects of using DigitalOcean that I like:

  • All our existing servers are either in continental Europe or the United States. DigitalOcean has datacenters in Asia that I’m particularly looking forward to making use of, to service our contributors from Asia Pacific (particularly India) better.
  • Depending on demand, we could bring up new servers or shut down existing ones at short notice. While we don’t do anything along these lines now, once we have the capability I could see us pre-emptively adding temporary server capacity to handle high-traffic events, like a new Krita release.

But this isn’t the best part of this post.

Once we realised we’d have some use for DigitalOcean’s offerings, we went and asked them if they’d be willing to sponsor us under their programme for supporting open-source software projects. To our utter delight, they were very enthusiastic about supporting us and set us up with an account and a lot of free credits to start us out.

So in the next few months, expect KDE’s existing online services to get more reliable as we add failovers, and new services to spring up as we start putting plans for the additional capacity into action.

Till next time!

Brexit Is Not Happening

David Cameron is an astounding genius. Of course, you’d expect nothing less from an alumnus of Eton and Oxford, but to see his genius in action on such a grand public scale makes it no less incredible.

This meme was doing the rounds on Facebook yesterday. But I beg to differ - this wasn’t a rage-quit. This was a chess move so calculated and so devastating that the full extent of its consequences will take a long time to become apparent.

Biggest Rage Quit of 2016

With his resignation, David Cameron ensured that the person who succeeds him is going to commit political suicide. And someone will have to succeed him, because someone will have to become the next Prime Minister of the United Kingdom. And that person’s political career will have to die. Whether or not that person is Boris Johnson remains to be seen.

Here’s the deal. The referendum was set up to be advisory, not binding. The British Parliament is under no legal obligation to follow through on the results of the referendum, but if the government does not follow through, the entire premise of democracy falls apart. Therefore, Britain has to leave the EU, and to do that, someone will have to inform the European Commission by sending them a notice under Article 50 of the Lisbon Treaty. That someone was going to be David Cameron, until he decided to resign and let his successor do it.

On the face of it, that would be an honour for his pro-Leave successor. But of course, things aren’t so simple:

  • If the successor follows through and invokes Article 50, Scotland and Northern Ireland break away and join the EU. The United Kingdom is no longer united. A mountain of laws and regulations needs to be torn up and new ones written in their place. The English economy collapses. The public quickly loses patience, and the blood is now on the successor’s hands. His career is over, as is the United Kingdom as we know it.

  • If the successor does not follow through and fails to invoke Article 50, the premise of democratic government falls apart. The governance of the UK is no longer democratic, and the entire establishment is a farce. The successor’s career is over, as is probably the entire government’s. It doesn’t end there, however. The next person in the chair finds himself in the same conundrum, as does the next. Until the will of the British people changes and they decide to stay - and make it known in another referendum - this cycle continues.

It appears I’m not alone in this theory. Someone commented along exactly these lines on a Guardian article, which is what led me to think twice - I’d assumed this line of thought was a side effect of too much House of Cards, but it appears I might not be completely crazy after all - and take it out of my head and put it to paper. Well, the Internet.

The next few months are going to be very interesting.

Message Passing - Our Telegram-IRC Relay Service

Yesterday, we launched a service to relay messages between IRC and Telegram, and at this moment we’re syncing 4 channels for KDE and 5 more for Kubuntu. The sync is two-way, so whatever people say on Telegram appears on the IRC channel, and vice versa. Almost everything works - the one exception being that files and stickers shared on Telegram don’t appear on IRC.

So here’s a blog post about how we did it. An exclusive behind the scenes look at how KDE Sysadmin conducts their business, if you will.

Server Set-Up

The server that runs IRC services (a bouncer, bots for Zabbix, Bugzilla etc.) is an LXC container running Ubuntu 14.04.4 LTS. We don’t use docker or other application container services to virtualise apps - since we run such a diverse set of services written in different languages on the server, we just confine apps to their own users, and try to keep the dependencies confined to the user as far as possible.

To run the Telegram-IRC relay, we chose to use TeleIRC. TeleIRC fits our bill perfectly. It has all the features we want. It’s also written in JavaScript, which means it comes from the same community that brought us examples of groundbreaking engineering such as left-pad and is-positive-integer. Isolating this service from the rest of the system is critical, at least from a security perspective.

Step 1: Node.js

Ubuntu 14.04.4 LTS carries Node.js version 0.10 in its repositories. It’s too old and won’t do, as we found out well after actually installing teleirc: npm won’t complain while installing, but teleirc will refuse to start because os has no homedir() method.

So we’ll have to manually obtain a current version of Node.js. Thankfully, we can use the Node Version Manager to obtain a binary build of a current version of Node.js directly from the Node website.

Start with a fresh user - let’s call it teletubby - and log in. Because it’s Node, the “recommended” method for installing nvm is by curl-ing a script and piping it to bash. It’s an idea that’s brilliantly simple and has provably zero security flaws. Of course, because we’re KDE and we like to do things the hard way, I decided to forego this easy install procedure and do things manually:

teletubby (~) $: mkdir TeleIRC
teletubby (~) $: cd TeleIRC
teletubby (~/TeleIRC) $: git clone

At this point it’ll clone the nvm repository to ~/TeleIRC/nvm. If you’re feeling particularly adventurous, you can use this as-is (using the current master), but I like to stay on the stable branch. nvm’s repo makes that easy - all you need to do is check out the latest stable tag:

teletubby (~/TeleIRC/nvm) $: git checkout `git describe --abbrev=0 --tags`

At this point, you’ll need to add the nvm set-up script to your Bash profile, so add the following lines to ~/.profile:

export NVM_DIR="$HOME/TeleIRC/nvm"
[ -s "$NVM_DIR/" ] && . "$NVM_DIR/"

Log out and log back in (or start a new login shell), and you should be able to use the nvm command right away.

You’ll actually have to install a version of Node.js now. The current LTS branch of Node.js is the v4.4 branch (the current release as of writing is v4.4.4). You can do:

teletubby (~) $: nvm ls-remote

to see what versions of Node.js are available for you to install. To install the 4.4 branch, just do:

teletubby (~) $: nvm install v4.4

And when it’s done installing, run node to make sure you get a Node.js prompt.

Step 2: Telegram Bot and IRC Account

To get the Telegram Bot account, you’ll have to talk to @BotFather.

Start by asking for a new bot with the /newbot command.


BotFather is pretty conversational. It’ll first ask you for the full name you want to give to the bot, and then a username. Our service is called KDE IRC Relay Service, and the username is IrcsomeBot. Note that your bot’s username must end with either bot or _bot.

Once you give it a full name and a username, it’ll give you an API key. Keep this key secure. If you lose it, you can generate a new one, but make sure it’s never compromised.

To let your bot see every message that’s said in the group (so that it can read the messages and relay them to IRC), you’ll have to disable privacy for the bot:

<Me>: /setprivacy
<BotFather>: Choose a bot to change group messages settings.
<Me>: @IrcsomeBot
<BotFather>: 'Enable' - your bot will only receive messages that either start with the '/' symbol or mention the bot by username.
<BotFather>: 'Disable' - your bot will receive all messages that people send to groups.
<BotFather>: Current status is: ENABLED
<Me>: Disable
<BotFather>: Success! The new status is: DISABLED. /help

Note that to refer to the bot now, we’re using the notation @username, not just username. We’ll prefix the username with the @ sign wherever we refer to the bot from now on.

All the essential Bot account set-up is complete, so it’s time to move on to the IRC account creation. This differs from IRC network to IRC network, but for Freenode it’s pretty simple. Start by picking a nickname that your bot will use, logging on to Freenode with any IRC client as said nickname, and running the following command in the server window:

/msg NickServ REGISTER password

You’ll get an email from Freenode with another /msg command you’ll need to type to confirm your registration. Do that, and you’re done.

Step 3: TeleIRC

Once node is up and running, and you’ve got your accounts, it’s time to install TeleIRC.

Installing TeleIRC is incredibly easy. Just run:

teletubby (~/TeleIRC) $: npm install teleirc

And check that ~/TeleIRC/node_modules/teleirc/bin/teleirc exists. At this point, you might want to add $HOME/TeleIRC/node_modules/.bin to your PATH environment variable, so that you can run the teleirc command without prefixing it with a path.
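If you do, the PATH tweak is a single line in ~/.profile (the path assumes the install location used above):

```shell
# Let the locally installed teleirc be run without a full path
export PATH="$HOME/TeleIRC/node_modules/.bin:$PATH"
```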

The first thing to do is to generate a configuration file, and edit it. Generate the sample config by running:

teletubby (~/TeleIRC) $: teleirc --genconfig

This will create a sample configuration file and drop it at ~/.teleirc/config.js. Edit it with your favourite text editor. It’s well commented, so you shouldn’t have any problem setting things up. The Telegram API Key goes into a variable defined near the top of the file, and the IRC settings go towards the bottom.

For Freenode, the default IRC config generated doesn’t have spaces for a nick and a password, so refer to the snippet below for what to set:

config.ircOptions = {
    userName: 'FreenodeUserName',
    realName: 'The Real Name',
    nick: 'yournick',
    password: 'yourpassword',
    port: 7000,
    secure: true,
    sasl: true
};

Freenode allows SSL connections over port 7000, and you’ll need to have SASL enabled because otherwise TeleIRC won’t be able to authenticate your nick and password.

Of course, don’t forget to actually set up the channel-group mappings.
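For reference, a mapping entry in config.js looks roughly like the snippet below - the field names follow TeleIRC’s sample config as I remember it, so double-check them against the version you installed, and note that both the channel name and the chat ID here are made-up examples:

```javascript
// Hypothetical mapping: relay one IRC channel to one Telegram group.
// Telegram group chat IDs are numeric, and negative for groups.
config.channels = [
    { name: '#kde-chat', chatId: -123456789 }
];
```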

Once you’re done, start teleirc by running:

teletubby (~/TeleIRC) $: teleirc

Nothing should bomb, except a bunch of warnings about chat_ids not being found. That’s not an error. Now just go to every group where you want the bot to be present, add the bot’s username, and say something. The relay should just start working.

Step 4: Running Forever

It turns out you can actually do this without messing with your init scripts. All you need is another npm package called forever, and Cron. Yes, Cron.

Start by installing forever:

teletubby (~/TeleIRC) $: npm install forever

Check if it runs:

teletubby (~/TeleIRC) $: forever start `which teleirc`
teletubby (~/TeleIRC) $: forever list

You should see a table with an entry for teleirc, along with a path to the log file, which you can cat to read teleirc’s output. The entry should also have a number associated with it (0, if it’s the only service running). You can control it by running:

teletubby (~/TeleIRC) $: forever restart 0   # to restart the service
teletubby (~/TeleIRC) $: forever stop 0 # to stop the service

Once you’ve verified that things run, and you can control the process, it’s time to make sure that the service runs at system start. Start by creating a script with the following content:

forever start `which teleirc`
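One caveat about that script: cron runs @reboot jobs with a minimal environment and never reads ~/.profile, so the which teleirc lookup will come up empty unless the script sets up nvm and PATH itself. Here’s a fuller sketch - the filename and the nvm.sh location are assumptions based on the layout used earlier:

```shell
# Write a hypothetical start script for cron; the paths are assumptions.
mkdir -p "$HOME/TeleIRC"
cat > "$HOME/TeleIRC/start-teleirc.sh" <<'EOF'
#!/bin/bash
# cron won't have sourced nvm for us, so do it here to put node on the PATH
export NVM_DIR="$HOME/TeleIRC/nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
export PATH="$HOME/TeleIRC/node_modules/.bin:$PATH"
forever start "$(command -v teleirc)"
EOF
chmod +x "$HOME/TeleIRC/start-teleirc.sh"
```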

Then, run:

teletubby (~/TeleIRC) $: crontab -e

Your text editor should open with a crontab file. You’ll need to add one line to the bottom:

@reboot /full/path/to/your/start/

And make sure that the cron service is enabled. That’s it!

In Conclusion

So the service is up and running, and any KDE IRC channel that wants to be mirrored with a Telegram group just needs to file a Sysadmin Task on our Phabricator instance.

I hope this helped you out if you’re looking for ways to bridge your IRC and Telegram channels together. Until next time!

Summer Is Coming

It’s been a long time since I blogged - it’s already nearly 5 months into the year and I haven’t put any words to the keyboard yet - and I have a metric tonne of things to share.

I’ll just jump right in.

CKI was a hoot and a half this year. The Lakshmi Narayan Mittal Institute of Information Technology - in Jaipur - hosted India’s very own Akademy in the first week of March. Because I stay just two hours away from Jaipur, I just had to be there, and so I proposed two talks. By February both had been accepted, and it suddenly dawned on me that a year after starting to contribute to KDE, I’d finally meet in person some of the people whom I only knew by their IRC nicks.

CKI turned out to be a mixed bag for me. Both the talks went incredibly well. Or so I hear - I was drugged out of my mind because I fell ill the moment I reached the venue. My memory of the event is just limited to the talks themselves, a hurried photoshoot (because the sun was very very roasty), and a visit to the medical center on campus where they confirmed I had an elevated blood pressure, a temperature of 104.5 degrees Fahrenheit, and a racing pulse. Of the two and a half days I spent giving talks and meeting so many people, I seem to remember only 3 hours worth of events. Which is surprising, since I only slept for a total of 3 hours over the entire weekend. Yes, I had insomnia too, because the fever and the hypertension clearly weren’t enough.

Quick change of plans, an early morning double-decker train ride to Delhi, a last-minute first-class ticket on the Calcutta Rajdhani Express, and post-event I race home to get checked by specialist doctors. It turns out I have a pretty severe upper respiratory tract infection. I live to tell the tale.

Sagar, Shivam and co. put together an incredible event though, worthy of every single bit of praise you can throw at them. There are pictures. Lots of them. Here.

Server Love

I managed to get a ton of work done on the Sysadmin side of KDE.

First, we managed to kill off projects for good. It used to run ChiliProject - which has been discontinued and no longer receives security updates - and was a constant source of headaches for the sysadmins, with the seemingly endless HTTP 500 ISEs it’d generate. After it went down in the middle of CKI - and caused a few embarrassing moments in the middle of talks - we decided it had to go.

One of projects.ko’s more important features was that it generated an XML file with metadata about all of KDE’s projects, which was used by multiple teams: the CI guys for automated build testing, kdesrc-build to build KDE projects from source, the i18n guys to properly map translation branches, and so on. So while I was at home recuperating from my infection, I wrote a set of scripts that generate this data from another source - the sysadmin/repo-metadata.git repository - which we set up just for this purpose. The first weekend after I returned to college, Ben and I killed off projects and replaced it with our homebrew solution. It worked on the first try.

Of course, with my college blocking SSH I had to use scm as a jump host, and that led to another set of funny things - but that’s another story.

Killing projects wasn’t enough though - kde_projects.xml is a gargantuan file that takes nearly 10 seconds to generate, and drains a lot of bandwidth. So I started working on an API service that should replace it, and I hope to get it finalised over the summer.

Finally, we finished the minimum required feature set for Propagator (for KDE’s purposes), but we’re pushing to make it a fully featured product that other folks could use too. One of my CKI talks centered on Propagator and what’s next for it. And this brings me to the next topic.

Google Summer of Code

KDE, as usual, is participating as an organisation in Google Summer of Code this year. Sysadmin got lucky this year - of the 37 slots that we got from Google, we were able to devote 3 to Sysadmin.

One of those slots is going to be for a project which I’ll be mentoring. The project is a mish-mash of stuff to improve things on the Sysadmin side - but most of it is centered around making incremental improvements to Propagator. I’m excited to be able to work with Priya Satbhaya on this. Ubuntu 16.04 LTS is here, I have very few things left to take care of on the Ops bit of DevOps, and Priya will take care of the Dev angle. At any rate, we should be able to start dogfooding Propagator mid-July onwards.

I’m also mentoring a second project, and that project excites me for a different reason. The project is based around adding a staging area for doing file operations on discontinuous selections in Dolphin. I’m excited about this because it’ll be a terrific feature to have, and because I know nothing about Dolphin’s codebase myself. This is going to be a very different mentoring experience for me, because with Arnav Dhamija I’m going to be learning from my student, not the other way around. Of course, I have Pinak to help me out, and the entire 3000+ strong Kommunity to help Arnav out with advice on code. I have my hopes set pretty high on this project too.

My Summer Plans

Finally, I have some summer plans of my own. I’ll be based out of Gurgaon during June and July, working as a DevOps Engineering Intern at 1mg (formerly HealthkartPlus), where I’ll be working on tests and deployment automation, and other challenges that crop up while I’m around.

And because I’ll be in the Delhi area and making some bucks, I hope to be able to take a few holidays. I really want to ride on the Gatimaan Express (India’s only 100mph train) and see Agra afresh. I may also pop over to Landour-Mussourie and try to meet Ruskin Bond. Much of my sanity in my misspent childhood was preserved only by reading his stories, and trying to write in his style. I want his autograph.

So I guess I’ve written enough now. It’s 5 AM IST, and I should get some sleep. Until next time!