Programmers often talk about how easy it is to interact with the shell from Ruby, using the %x[] or backtick syntax to fire off work.

These conveniences are great for most situations, but they start to fall down when presented with unusual inputs or strange pipeline gymnastics.

Spaces in Filenames

For example, a script to echo out the contents of a file might look like this.

puts `cat #{ARGV[0]}`

This seems okay at first glance, if a little nonidiomatic. Upon closer inspection, however, we can see how it would fail when presented with a file whose name includes a space.
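To see the failure concretely, we can use Shellwords from Ruby's standard library to tokenize the interpolated command the way the shell would; the filename here is made up:

```ruby
require 'shellwords'

# With a filename containing a space, the interpolated string...
command = "cat my file.txt"

# ...is split by the shell on whitespace, so cat receives two
# arguments, neither of which names the actual file:
Shellwords.split(command)  # => ["cat", "my", "file.txt"]
```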

The simplest solution to this problem is to use a command-execution method that allows us to pass explicit arguments.

system("cat", ARGV[0])

By not conflating the command to be executed with its arguments, this code avoids the issue we saw with unusual filenames. (Note that exec also accepts explicit arguments, but it replaces the current process and never returns, so system is the better fit here.) Unfortunately, it has lost the ability to effectively harness the power of the Unix pipeline.

Supporting Pipes

Let’s reformulate that first example and add a pipe to the mix.

puts `cat #{ARGV[0]} | sort`

This code works fine as long as the file we’re working with has no spaces in its filename. As of Ruby 1.9, it’s much simpler to deal with this scenario in a robust way: we can use Open3.pipeline_r to explicitly chain several processes together.

require 'open3'
Open3.pipeline_r(["cat", ARGV[0]], ["sort"]) do |output, wait_threads|
  puts output.read
end


The Open3 module has a number of different methods to help you chain together different commands along the Unix pipeline, depending on your explicit needs. In this example I used pipeline_r to demonstrate the common scenario of passing the output of the sequence of shell commands to a Ruby method. Give Open3 a perusal and see if it can help you make your tools more versatile.
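For instance, Open3.capture2 runs a single command with explicit arguments and hands back its output along with the exit status; a minimal sketch:

```ruby
require 'open3'

# Run sort with data fed to its stdin; no shell is involved,
# so arguments with spaces pass through untouched.
output, status = Open3.capture2("sort", stdin_data: "b\na\n")
output          # => "a\nb\n"
status.success? # => true
```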

A Sea of Views

Ruby on Rails projects tend to generate a large number of view files. In a recent, fairly young project I found we had nearly 300 different files under app/views. On an older project we had over 1,400. These files are often intertwined, sometimes in non-obvious ways. When it comes time to make a change to such an application’s front-end, it can be difficult to locate exactly which file generated the markup seen in the browser.


To alleviate this pain, I wrote a simple wrapper around ActionView::PartialRenderer#render that bracketed the method’s output with HTML comments indicating which file had been rendered.
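The gem’s real code hooks into Rails internals, but the wrapping technique itself can be sketched without the framework; the Renderer class below is a stand-in for illustration, not the actual ActionView API:

```ruby
# Stand-in for a view renderer; the real wrapper targets
# ActionView::PartialRenderer#render.
class Renderer
  def render(path)
    "<div class='bio'>Ed's Bio</div>"
  end
end

# Prepended module wraps render and brackets its output with
# comments naming the rendered file.
module ViewAnnotator
  def render(path)
    "<!-- begin: #{path} -->\n#{super}\n<!-- end: #{path} -->"
  end
end

Renderer.prepend(ViewAnnotator)

puts Renderer.new.render("app/views/user/_bio.html.haml")
# <!-- begin: app/views/user/_bio.html.haml -->
# <div class='bio'>Ed's Bio</div>
# <!-- end: app/views/user/_bio.html.haml -->
```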

<!-- begin: app/views/user/_bio.html.haml -->
<div class='bio'>Ed's Bio</div>
<!-- end: app/views/user/_bio.html.haml -->

Adding Context

After throwing the code up on GitHub, hinrik pointed out that knowing the file containing the markup in question was only half of the equation. Often the difficulty in working with nested view files is knowing where a view was called from. At his request, I extended my wrapper to indicate the view’s inclusion point.

<!-- begin: app/views/user/_bio.html.haml  (from app/views/user/show.html.haml:4) -->
<div class='bio'>Ed's Bio</div>
<!-- end: app/views/user/_bio.html.haml  (from app/views/user/show.html.haml:4) -->

Who This Helps

This was originally written to help me get a quick overview of the views structure of an existing project. I created the code as a gem and added it to the :development group. It turned out to be useful for the other developers on the team, but the real winners were designers. The designers who worked on this project also worked on several other projects simultaneously, not all Ruby on Rails. This tool enabled them to zero-in on the spots they needed to adjust without having to have a complete mental map of the project’s view structure.

The Code

rails_view_annotator has been published as a gem, and its source is hosted on GitHub. Installation is as simple as adding gem 'rails_view_annotator' to a project’s Gemfile.

It has also been listed under Rails Instrumentation.

Ever have to map all the values of a Hash, or Hash-like Ruby object? For example, you have a Hash mapping keys to the binary contents of several files, and you want to present those file contents as Base64.

Imagine we have a Hash that looks something like this:

pictures_hash = {
  'goblins' => PICTURE_OF_GOBLINS,
  'kittens' => PICTURE_OF_KITTENS
}

If you were just doing this mapping inline, it might look like this:

require 'base64'

pictures_hash.inject({}) do |accumulate, (key, binary)|
  accumulate[key] = Base64.encode64(binary)
  accumulate
end

Instead of typing out almost-identical code over and over again, here’s an implementation as a simple block-receiving method on Hash itself.

class Hash
  def map_values
    inject(dup) do |a, (k, v)|
      a[k] = yield(v)
      a
    end
  end
end

Now the code to Base64 encode the file contents becomes this much simpler snippet.

pictures_hash.map_values do |binary|
  Base64.encode64(binary)
end

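As an aside, Hash#transform_values, added to Ruby itself in 2.4, now covers this exact pattern:

```ruby
require 'base64'

h = { 'a' => 'one', 'b' => 'two' }

# transform_values returns a new Hash with each value replaced
# by the block's result; keys are untouched.
h.transform_values { |v| Base64.encode64(v) }
# => {"a"=>"b25l\n", "b"=>"dHdv\n"}
```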
While reading Yehuda Katz’ blog post concerning mental models about Ruby’s behavior, I was a bit rankled by his reference to implicit locals created by running regular expressions with match clauses.

I was nearly certain that the dollar-sign prefix on the variables ensured the variables were global, but had to verify for myself.

Surprisingly, I found that the matches truly were local: despite the dollar-sign sigil, Ruby scopes its regexp special variables to the current method frame (and thread).

class RegGlobalTester
  def hello
    [ $1, "hello".match(/h/), goodbye, $1 ]
  end

  def goodbye
    [ $1, "goodbye".match(/(g)/), $1 ]
  end
end

RegGlobalTester.new.hello
 => [nil, #<MatchData "h">, [nil, #<MatchData "g" 1:"g">, "g"], nil]
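The same frame-local storage backs Regexp.last_match, which offers a less cryptic way to read the captures than the numbered variables:

```ruby
"goodbye" =~ /(g)(o)/

# Regexp.last_match reads the same frame-local MatchData that
# $~ and the numbered variables expose.
Regexp.last_match(1)  # => "g"
$1                    # => "g"
$2                    # => "o"
```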

Previously, I demonstrated how to set up an encrypted store in Mac OS, but didn’t describe why you might want to do such a thing.


Using an encrypted store like this can help keep your company’s sensitive passwords safe should your computer be compromised.
We use Capistrano to deploy to production servers; in our deploy.rb, cap variables are read from environment variables and then forwarded on to the production machines.

set :live_production_mysql_pass, ENV['PROD_MYSQL_PASS']

Now obviously, we don’t want these sensitive variables set all the time, only when it’s time to deploy. To achieve this, we simply execute a shell script, stored on the secure volume, which sets the appropriate variables.

Store Your Secrets

When the secure volume is mounted, a custom .bash_profile is loaded which bootstraps your original ~/.bash_profile and adds your secrets to the environment:

source ~/.bash_profile
export PROD_MYSQL_PASS='top-secret'
echo "Exported secure environment, please close this terminal when you're through"
export PS1="SECURE $PS1"

When this script is executed, the environment variables within it are set, and a warning to close the terminal is emitted.
In order to automate the steps of mounting the secure volume, launching the interactive shell, and unmounting the volume, we have to use a couple of different tools.

Mount, Execute, Eject

The first is hdid which mounts a volume by filename. The output of this command provides both the device file, like /dev/disk1 and the mount point, like /Volumes/Vault.

The mount point is used to construct the path to the custom bash profile which initializes sensitive environment variables in a new interactive shell, launched with a pristine environment. This results in an isolated terminal combining the secure and user-specific environment variables and settings. Additionally, the secure volume remains mounted for the lifespan of this secure session, allowing your deploy to read files like production ssh keys, or any other restricted resource, from this secure location.

Finally, when the interactive terminal exits, hdiutil ejects the disk device for the mounted volume.

Set It Up

The code to do all of this is stored in a bash function. Add the following lines to your ~/.bash_profile or equivalent and set the VAULT_DMG variable to the name of your encrypted disk image.

# Read Secure Volume credentials for deploy
function sv () {
    # Mount the image; hdid prints "device  mount-point" lines
    # (assumes the last line describes the mounted volume)
    SECURE_MOUNT_INFO=`hdid $VAULT_DMG | tail -1`;
    SECURE_MOUNT_DEVICE=`echo -e $SECURE_MOUNT_INFO | cut -d ' ' -f1`;
    SECURE_MOUNT_PATH=`echo -e $SECURE_MOUNT_INFO | cut -d ' ' -f2`;
    bash --init-file $SECURE_MOUNT_PATH/.bash_profile;
    hdiutil eject $SECURE_MOUNT_DEVICE &> /dev/null;
}