Developer Experience

Posted at — Aug 22, 2017

Commandline Productivity and Automation (aka make it easy to repeat)

Developers tend to repeat themselves. A lot. This can be as innocuous as manually re-running a test after updating a test file or as insidious as manually deploying a newly built binary into production. Nobody likes to repeat themselves over and over again. And for good reason: not only does it take time to repeatedly figure out all the parameters of a tricky command-line program, it also adds stress when you are trying to fix an outage as quickly as possible.

The obvious solution to these problems is, of course, automation. But more often than not, I avoid automation, thinking whatever I am doing is a one-off or does not occur frequently enough to warrant scripting. Finally sensing that this was slowing down my productivity, I decided to start automating as much as possible, no matter how small the task.

In this post, I will share 3 broad automation strategies I have adopted. First, automating via simple shell scripts when possible. Second, creating a Swiss army knife tool to make automation easier while keeping boilerplate to a minimum. Finally, using tools such as Docker and Chef to bootstrap cloud VMs with minimal manual configuration.

Simple shell scripts

Creating a shell script is the simplest form of automation and arguably also the most effective, with immediate productivity gains. After some experimentation, I devised and follow a simple rule: if I need to type more than 3 parameters to invoke a complicated command in the shell, I write a shell script. This simple strategy saves me time and also gives me a place to document the various parameters.

As a concrete example, I have been writing build_container.sh and run_container.sh to help streamline my Docker workflow.

# build_container.sh
#!/bin/sh -xe

# build an image tagged with the first argument
docker build -t "$1" .
# remove dangling <none> images left behind by previous builds
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

# run_container.sh
#!/bin/sh -xe

# REPO_PATH and CONTAINER_ROOT are placeholders for the host repo path
# and the working directory inside the container
docker run \
    -p 9000:9000 \
    -v REPO_PATH:/CONTAINER_ROOT/REPO_PATH \
    sample_image \
    bash

As an added bonus, I have been learning more about shell programming (and all its quirks!) as I write more of these scripts.

Swiss army knife

While automating each specialized task with a one-off shell script is effective, the effort tends to be duplicated across repos and boilerplate starts to add up, leading to increased tech debt and an overall increase in complexity.

To overcome this, I stole the idea of s, a Swiss army knife command-line tool, from Smyte. Succinctly, s is a collection of scripts that automates various developer tasks, including deployment, remote server automation, running tests, compilation, and myriad other things. More than a collection of scripts, s provides a common framework that centralizes tool development and minimizes boilerplate.

Some sample s invocations:

# running tests
s test.run ABC

# deploying code
s deploy.roll --commit ABC

Modelling it after s, I created a tool called m. To bootstrap m, I use python-fire from Google. python-fire is an amazing Python library that turns any Python script into a command-line program with minimum fuss and handles command-line arguments in an intuitive manner.
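
To give a rough idea of how python-fire is used, here is a minimal sketch; the Tool and Deploy classes below are hypothetical illustrations, not the actual m source.

# m.py - a hypothetical python-fire entry point
import fire


class Deploy(object):
    """Deployment subcommands."""

    def roll(self, commit):
        # hypothetical: roll out the given commit
        print("deploying {}".format(commit))


class Tool(object):
    """Top-level command groups exposed by the CLI."""

    def __init__(self):
        self.deploy = Deploy()


if __name__ == "__main__":
    # fire.Fire exposes the Tool object as a CLI, e.g.
    #   python m.py deploy roll --commit ABC
    fire.Fire(Tool)

python-fire derives the subcommand hierarchy, argument parsing, and help output directly from the class structure, so there is almost no boilerplate to write by hand.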

One of the major downsides of s as implemented at Smyte is speed: importing modules in Python is not free, and the cost adds up as more and more scripts get added. In m, I address this by loading dependencies only when each subcommand is invoked. Doing this has allowed me to keep the top-level m invocation at ~250ms on my 2013 MacBook Air.
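
The trick is simply to defer imports into the subcommand bodies. Here is a hypothetical sketch of the pattern; the Test class and the pytest invocation are illustrative and not taken from m itself.

class Test(object):
    """Test subcommands."""

    def run(self, target):
        # import dependencies only when this subcommand actually runs;
        # subprocess is cheap, but the same pattern keeps heavy third-party
        # imports off the critical path of every other invocation
        import subprocess

        subprocess.check_call(["pytest", target])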

You can check out m on my GitHub here.

Environment bootstrapping

The last missing automation piece is environment bootstrapping on new machines or VMs. With more demanding workloads, I started using AWS and Google Cloud to alleviate the load on my MacBook Air.

For the longest time, I would manually run apt-get install for every single package I use on each newly provisioned VM. While that was painful, it occurred rarely enough that I put up with it. But it has taken a mental toll on my workflow: I would put off tasks because I did not want to repeat the drudgery.

To remove this mental block, I started building Docker containers for tricky development environments with all dependencies pre-installed, as well as automating VM bootstrapping with Chef. Check out my Chef bootstrap scripts here.

Conclusion

Developer experience has more often than not been an afterthought for me. Realizing this was slowing down my productivity and hurting my enjoyment of development, I started addressing these productivity killers by employing more shell scripts, building m, and using Chef and Docker to avoid manual environment setup.

These simple steps have tremendously improved my productivity and given me the freedom to spend more time building and less time fighting my toolchain.

Am I done here? Of course not. There are many more improvements and new tools I plan to look into to further simplify my development workflow. Another closely related topic is deployment, which I have not yet addressed.

For example, Chef could be replaced with Packer so that VMs boot from a preconfigured machine image instead of going through costly bootstrapping. A friend has also suggested NixOS as an interesting option to explore.

As for deployment, I plan to spin up a personal Kubernetes cluster to ease server administration and streamline deployments.

Much to be done.