Archive for the ‘Uncategorized’ Category

Elasticsearch, Ruby and Unicorn


We have been using Ruby on Rails as well as Elasticsearch for a while. To avoid downtime during deployment, we have been using Unicorn more or less configured like this blog post describes. While running a single instance of Elasticsearch was pretty trivial with Karel’s new Elasticsearch Ruby Gem – moving to a Clustered setup forced us to understand the configuration of the Gem a bit better. I thought I’d sum up a few lessons learned here just in case it might be useful to someone:

There are quite a few options you can pass into the Elasticsearch client that allow you to leverage the clustered setup; my setup ended up looking like this:

elasticsearch_hosts = ENV['ELASTICSEARCH_HOSTS'].split(/,\s*/)
require "#{Rails.root}/lib/wrappers/elastic_client_wrapper.rb"

ELASTIC_CLIENT = Elasticsearch::Client.new(
  url: elasticsearch_hosts,
  log: Rails.env == 'development',
  transport_class: MyApp::ElasticClientWrapper,
  randomize_hosts: true,
  retry_on_failure: true,
  reload_connections: true,
  reload_on_failure: true,
  transport_options: {
    request: { open_timeout: 1, timeout: 45 }
  }
)

The :url option (it could also be the :hosts parameter) is basically an array of hostnames for the Elasticsearch cluster, which I load from the environment.
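To make the host parsing concrete, here is a made-up ELASTICSEARCH_HOSTS value being split on a comma plus optional whitespace (the hostnames are illustrative, not real nodes):

```ruby
# Hypothetical ELASTICSEARCH_HOSTS value; note the regex tolerates an
# optional space after each comma so the host strings stay clean
hosts = "http://es1:9200, http://es2:9200,http://es3:9200".split(/,\s*/)
# hosts is now ["http://es1:9200", "http://es2:9200", "http://es3:9200"]
```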

I’ve also specified :transport_class, which refers to my custom wrapper client that I use to handle errors (just to make sure the entire app doesn’t crash if the search engine cluster becomes unavailable – this might be a bit of overkill, but previous experience has taught me to wrap as many external services as possible like this).
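The wrapper itself isn’t shown here, but the general pattern is simple enough to sketch. This is a hypothetical stand-in, not the actual wrapper class – the class name, logging and nil-return behaviour are all assumptions:

```ruby
# Illustrative only: wrap an external service client so that failures are
# logged and return nil instead of raising, keeping the app alive when the
# backend is down.
class SafeSearchClient
  def initialize(client, logger: nil)
    @client = client
    @logger = logger
  end

  # Delegate every call to the wrapped client; swallow failures.
  def method_missing(name, *args, &block)
    @client.public_send(name, *args, &block)
  rescue StandardError => e
    @logger && @logger.warn("search backend unavailable: #{e.class}")
    nil # callers must be prepared for a nil response
  end

  def respond_to_missing?(name, include_private = false)
    @client.respond_to?(name, include_private) || super
  end
end
```

Whether to return nil or a null-object “empty result” is a design choice; either way the point is that a dead search cluster degrades search, rather than taking the whole app down with it.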

The :randomize_hosts, :retry_on_failure, :reload_connections and :reload_on_failure options are all described in more detail in the gem’s documentation – but you should at least understand them, and either set them to true or specify a custom numeric value where appropriate.

Finally, :transport_options are important. If you do not specify an open_timeout, the default Net::HTTP implementation used by Faraday will hang forever if it cannot open a connection to one of the servers. You should really test shutting down one of the nodes in the cluster and make sure your clients are not left hanging.
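For reference, the same two timeouts map directly onto plain Net::HTTP. The host name below is made up, and no request is actually performed:

```ruby
require 'net/http'

http = Net::HTTP.new('es-node-1.internal', 9200) # hypothetical cluster node
http.open_timeout = 1   # fail fast if a TCP connection cannot be opened
http.read_timeout = 45  # but allow long-running queries once connected
```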

While the :reload_connections option enables the client to reload host info from the cluster, if you are replacing nodes for one reason or another you will probably end up changing the value of the environment variable or .yml file your Rails app uses for the initial config. If you are using the no-downtime deployment setup for Unicorn, you need to make sure you actually reload the connection settings when reloading Unicorn. Similar to how you might reload ActiveRecord connections after forking Unicorn workers, I used this setup in my unicorn.rb:

after_fork do |server, worker|
  # Re-establish the ActiveRecord connection in each worker, as usual
  ActiveRecord::Base.establish_connection

  # Force the Elasticsearch client to re-read its connection settings, so
  # workers pick up the current host list on a no-downtime reload
  ELASTIC_CLIENT.transport.reload_connections! if defined?(ELASTIC_CLIENT)
end

Liberated devs don’t blog


Okay, this might turn into a weird post. But here it goes:

I was talking to Ole Morten the other day, and he mentioned he hadn’t blogged in a while. As you can see from my blog, there’s not been much going on here either. What Ole Morten and I have in common is a passion for new technology and entrepreneurship. If bureaucracy and politics are slowing us down, we are miserable – if we can spin up new products built on cutting-edge technology, we are happy. Simple as that.

So it occurred to me that most of my blogging has been going on when I haven’t been entirely happy with the state of my current project. I’ve been looking to (and blogging about) Agile, Kanban and Systems Thinking to help build the right things faster. I wanted to change the status quo – I wanted to move faster, wanted to pick the right battles, wanted to make the right choices. But then – when I first got the chance to do all of this – I suddenly stopped blogging.

It’s not like I haven’t learned anything new. I’ve learned tons. When you ship software every day, I believe you learn so much more and so much faster than when you do the odd deploy twice a year.

I could write about continuous deployment or auto-scaling on Amazon. I could talk about setting up your infrastructure programmatically with Chef, or launching a new product in days with Heroku. I could talk about Twitter Bootstrap, Ruby on Rails or Node.js. I could talk about anarchy, or just following your instinct. I could talk about how nice it is for every developer to have full access to, and full responsibility for, the production environment. I could talk about how much better a software project is without an architect. I could talk about how much fun it is being an entrepreneur. I could talk about how you can actually find some fun Ruby contracting gigs out there that will allow you to bootstrap your startup.

But I don’t. At least not yet. Maybe I’m too busy, or maybe my subconscious hasn’t fully processed all these new concepts yet, so I’m not ready to write about them. But I think – most of all – I’m having so much fun, so much fun that I don’t feel the urge to blog about something.

A blog post can no longer be an outlet for my current frustrations – because there aren’t that many frustrations these days. A blog post won’t help me reach out to like-minded people so that we can pick the battles together – because there aren’t that many battles left. A blog post simply doesn’t make that much sense in my current context – or at least it fulfils a very different purpose than it used to.

So I might start blogging again – who knows – but it will at least be for different reasons than before.

Ps. I’d love your comments on this topic. What are you using your blog to achieve? Does your blogging reflect your current frustrations and aspirations? Instead of blogging about it – have you tried to find a way to actually live some of those dreams?

Too grown up for Heroku?


Cheat sheet for my Too Grown up for Heroku? tutorial at Roots

Pre-requisites for Ubuntu 10.04

Skip this section unless you are starting with a blank Ubuntu Image

  1. sudo apt-get install git-core
  2. install rbenv
  3. install ruby-build as an rbenv plugin
  4. sudo apt-get install curl
  5. add to .bashrc not .bash_profile
  6. sudo apt-get install zlib1g-dev
  7. sudo apt-get install libssl-dev
  8. rbenv install 1.9.2-p290
  9. rbenv rehash
  10. gem install bundler

Setup Chef

This will get Chef set up and installed

  1. Go to
  2. Click on “Free Trial”
  3. Fill in your details
  4. Verify email
  5. git clone git:// chef-repo-demo
  6. cd chef-repo-demo/
  1. list organizations
  2. select your organization
  3. generate knife config
  4. mkdir .chef
  5. nano .chef/knife.rb
  6. paste in generated knife config
  7. on opscode: change password -> get private key -> get new private key
  8. regenerate validation key (this will download it), then move the validation key and private key to match what knife.rb specifies (i.e. .chef/)
  9. gem install chef
  10. (rbenv rehash)
  11. run “knife node list” or some other command to verify your setup is working
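For reference, the generated knife config you paste into .chef/knife.rb looks roughly like this – the user name, organization and key file names below are placeholders, not real values:

```ruby
# Hypothetical .chef/knife.rb – all names and paths are placeholders
current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "yourusername"
client_key               "#{current_dir}/yourusername.pem"
validation_client_name   "yourorg-validator"
validation_key           "#{current_dir}/yourorg-validator.pem"
chef_server_url          "https://api.opscode.com/organizations/yourorg"
cookbook_path            ["#{current_dir}/../cookbooks"]
```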

Set up a WebServer

This will set up a Webserver on AWS and deploy our example app to it


Create a security group

  1. EC2 -> Security Groups -> Create ->
  2. Add HTTP and SSH inbound (to
  3. Apply rule changes

Launch an instance

  1. -> choose your region -> Ubuntu 10.04 LTS EBS BOOT
  2. Choose type (bigger will install faster, I recommend medium for this tutorial)
  3. Give it a name
  4. Create new key pair
  5. Download the .pem key and add it to ~/.ssh/amazon/your_key.pem
  6. sudo chmod 0400 ~/.ssh/amazon/your_key.pem
  7. ssh-add ~/.ssh/amazon/your_key.pem
  8. Verify with ssh-add -l
  9. Choose the security group you made earlier
  10. Watch it launching

Make sure Chef installs the correct version of Ruby

  1. Copy the Gist into .chef/bootstrap/install_ruby_193-p125.erb
  2. knife bootstrap --distro ubuntu10.04-gems --template-file .chef/bootstrap/install_ruby_193-p125.erb --node-name demoprep-web1 -x ubuntu -i ~/.ssh/amazon/demoprep.pem --sudo
  3. Wait for it to install all packages etc. you should see a long trace in your terminal
  4. Verify with ‘knife node list’ (you can also ssh ubuntu@public.dns if you want to log into the machine)

Run the chef-client once

  1. knife ssh name:demoprep-web1 -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"

Install nginx and unicorn

  1. knife cookbook site install nginx
  2. knife cookbook site install unicorn
  3. knife cookbook site install apt
  4. knife cookbook site install git
  5. knife cookbook site install ssh_known_hosts
  6. knife cookbook site install chef-client
  7. create roles/webserver.rb as
  8. knife role from file roles/webserver.rb
  9. comment out default-site.erb section
  10. knife node run_list add demoprep-web1 'role[webserver]'
  11. knife cookbook upload runit apt bluepill ohai build-essential yum chef-client git nginx ssh_known_hosts unicorn
  12. run the chef client again (see above)
  13. check NGINX is running by going to the public dns in your browser
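For reference, a minimal roles/webserver.rb along the lines of step 7 might look like this – the exact run list is an assumption based on the cookbooks installed above:

```ruby
# Hypothetical roles/webserver.rb; adjust the run list to your cookbooks
name "webserver"
description "Nginx + Unicorn web server"
run_list(
  "recipe[apt]",
  "recipe[build-essential]",
  "recipe[git]",
  "recipe[nginx]",
  "recipe[unicorn]",
  "recipe[ssh_known_hosts]",
  "recipe[chef-client]"
)
```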

Checkout example app

  2. add group :development do gem "capistrano" end to the Gemfile
  3. bundle
  4. capify .
  5. cheat and do git checkout origin/chef
  7. edit deploy.rb

.ssh-keys and git

  1. if you don’t have .ssh/id_rsa: ssh-keygen -t rsa -C ""
  2. ssh -T

Some more tweaks

  1. knife cookbook create permissions
  2. chown -R ubuntu:ubuntu /var/www
  3. knife cookbook upload permissions
  4. knife role from file roles/webserver.rb
  5. create data_bags/ssh_known_hosts/github.json as
  6. replace the value of rsa with the rsa key you get when you do cat ~/.ssh/known_hosts | grep git
  7. knife data bag create ssh_known_hosts
  8. knife data bag from file ssh_known_hosts data_bags/ssh_known_hosts/github.json
  9. knife cookbook create bundler
  10. add gem_package "bundler" to default.rb
  11. knife cookbook upload bundler
  12. add bundler to roles/webserver.rb
  13. knife role from file roles/webserver.rb
  14. run chef-client


  1. cap deploy:setup
  2. cap deploy
  3. cap unicorn:start
  4. cap nginx:restart
  5. check the public url in your browser


  1. knife cookbook site install monit
  2. add “recipe[monit]”, to roles/webserver.rb
  3. add nginx-monit.conf.erb to cookbooks/nginx/templates/default from gist:
  4. add monitrc(nginx-monit) to nginx default.rb see gist:
  5. knife cookbook upload nginx monit
  6. knife role from file roles/webserver.rb
  7. run chef-client


Add a Munin server and make it monitor our web server

  1. knife cookbook site install munin
  2. create an environment: environments/production.rb from
  3. knife environment from file environments/production.rb
  4. knife node edit demoprep-web1 – change environment from _default to production
  5. create a new role roles/muninserver.rb – as
  6. knife role from file roles/muninserver.rb
  7. in AWS Console add 4949 access to the appropriate security group (could be same as above)
  8. launch a new instance on AWS (same security group, keypair etc.)
  9. knife bootstrap --distro ubuntu10.04-gems --template-file .chef/bootstrap/install_ruby_193-p125.erb --node-name demoprep-muninserver -x ubuntu -i ~/.ssh/amazon/demoprep.pem --sudo
  10. tweak cookbooks/munin/attributes/default.rb as of
  11. change from fqdn to ipaddress in cookbooks/munin/templates/default/munin.conf.erb
  12. knife cookbook upload apache2 munin
  13. create data_bags/users/munin.json from gist:
  14. knife data bag create users
  15. knife data bag from file users data_bags/users/munin.json
  16. knife node run_list add demoprep-muninserver 'role[muninserver]'
  17. knife node edit demoprep-muninserver
  18. change environment from _default to production
  19. knife ssh name:demoprep-muninserver -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"
  20. verify you can log in to the muninserver with munin/test
  21. wait for a few minutes for the index to be generated
  22. add “recipe[munin::client]”, to the webserver role
  23. knife role from file roles/webserver.rb
  24. knife ssh name:demoprep-web1 -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"
  25. wait for a few minutes, check the munin graphs


Make chef-client run as a daemon:

knife ssh name:NODE_NAME -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client -d -P /var/run/chef/ -L /var/log/chef-client.log -c /etc/chef/client.rb -i 300 -s 20"

Kanban – the book


I have quite recently had the pleasure of reading David Anderson’s latest book on Kanban. I thought I had a pretty good understanding of Kanban before I started reading it. However, I still had some unanswered questions – some of which, for a while, might have reduced my confidence in persuading my peers to just try it. After reading the book I got most of my questions answered, and at the time of writing we are just starting to experiment with a little bit of Kanban on my current project. Happy days!

Whether you know nothing about Kanban or consider yourself fairly well educated on the topic, I think the book is definitely worth reading. Instead of giving a full summary of the book, I thought I’d share some of the answers I got from reading it (as I understood them, although I might also be a bit influenced by other sources):

  • How do you really get started? Take your current process, visualize it (on a board) and apply WIP limits. Try measuring lead time, and maybe create a cumulative flow diagram. For longer term success, you should get some buy-in / agreement from up- and downstream stakeholders though.
  • What are some of the major selling points? Creating a predictably performing team. Reducing lead time. Optimizing throughput. Exposing bottlenecks.
  • How do you deal with blocked stories? Kanban should force the team to swarm on a blocked story until it is resolved. It might need to be escalated, but then again a manager should see the importance of helping to resolve the issue.
  • When do you release, plan, ..? Kanban allows you to decouple input cycles from output cycles. That is, you could release every Monday if you want, but perhaps only have meetings to fill up the input queue every two weeks. You could have a retrospective every third Friday if you want. Or you could even trigger these events on an as needed basis.
  • How do you become predictable when stories might vary in size, priority, ..? Classes of Service! For example: a ‘standard story’ will be finished in 14 days on average. ‘Expedite stories’ will be finished in 10 days on average, but you are only allowed to have one expedite story in play at any one time, and so forth. David describes other interesting examples of Classes of Service as well; I highly recommend reading about them.

I think I should stop here and leave something for you to read as well 😀