Elasticsearch, Ruby and Unicorn

20/10/2014

We have been using Ruby on Rails together with Elasticsearch for a while. To avoid downtime during deployment, we have been running Unicorn more or less configured as this blog post describes. While running a single instance of Elasticsearch was pretty trivial with Karel's Elasticsearch Ruby gem, moving to a clustered setup forced us to understand the gem's configuration a bit better. I thought I'd sum up a few lessons learned here, in case they are useful to someone:

There are quite a few options you can pass to the Elasticsearch client to take advantage of a clustered setup. My configuration ended up looking like this:


elasticsearch_hosts = ENV['ELASTICSEARCH_HOSTS'].split(/,\s*/)
require "#{Rails.root}/lib/wrappers/elastic_client_wrapper.rb"

ELASTIC_CLIENT = Elasticsearch::Client.new(
  url: elasticsearch_hosts,
  log: Rails.env == 'development',
  transport_class: MyApp::ElasticClientWrapper,
  randomize_hosts: true,
  retry_on_failure: true,
  reload_connections: true,
  reload_on_failure: true,
  transport_options: {
    request: { open_timeout: 1, timeout: 45 }
  }
)

The :url option (it could also be the :hosts parameter) takes an array of hostnames for the Elasticsearch cluster, which I load from the environment.

I've also specified :transport_class, which points to a custom wrapper class I use to handle errors, just to make sure the entire app doesn't crash if the search engine cluster becomes unavailable. This might be a bit of overkill, but previous experience has taught me to wrap as many external services as possible like this.

The :randomize_hosts, :retry_on_failure, :reload_connections and :reload_on_failure options are all better described here, but you should at least understand them, and set them to true or to a custom numeric value where appropriate.

Finally, :transport_options is important. If you do not specify an open_timeout, the default Net::HTTP adapter used by Faraday will hang forever when it cannot open a connection to one of the servers. You should really test shutting down one of the nodes in the cluster and make sure your clients do not hang.

While the :reload_connections option lets the client reload host information from the cluster, if you are replacing nodes for some reason you will probably end up changing the environment variable or .yml file your Rails app uses for its initial configuration. If you are using the no-downtime deployment setup for Unicorn, you need to make sure you actually reload the connection settings when reloading Unicorn. Similar to how you might re-establish ActiveRecord connections after forking Unicorn workers, I used this setup in my unicorn.rb:


after_fork do |server, worker|
  ELASTIC_CLIENT.transport.reload_connections!
end
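For completeness, the relevant section of a unicorn.rb combining this with the conventional ActiveRecord handling might look like the following sketch. The ActiveRecord calls are the standard pattern from the Unicorn examples, not something specific to this setup:

```ruby
# unicorn.rb (sketch): re-establish external connections around forking.
before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
  # The Elasticsearch-specific bit: pick up the current host list.
  ELASTIC_CLIENT.transport.reload_connections!
end
```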

Liberated devs don’t blog

23/07/2012

Okay, this might turn into a weird post. But here it goes:

I was talking to Ole Morten the other day, and he mentioned he hadn't blogged in a while. As you can see from my blog, there's not been much going on here either. What Ole Morten and I have in common is a passion for new technology and entrepreneurship. If bureaucracy and politics are slowing us down, we are miserable; if we can spin up new products built on cutting-edge technology, we are happy. Simple as that.

So it occurred to me that most of my blogging has been going on when I haven’t been entirely happy with the state of my current project. I’ve been looking to (and blogging about) Agile, Kanban and Systems Thinking to help build the right things faster. I wanted to change the status quo – I wanted to move faster, wanted to pick the right battles, wanted to make the right choices. But then – when I first got the chance to do all of this – I suddenly stopped blogging.

It’s not like I haven’t learned anything new. I’ve learned tons. When you ship software every day, I believe you learn so much more and so much faster than when you do the odd deploy twice a year.

I could write about continuous deployment or auto-scaling on Amazon. I could talk about setting up your infrastructure programmatically with Chef or launching a new product in days with Heroku. I could talk about Twitter Bootstrap, Ruby on Rails or Node.js. I could talk about anarchy, or just following your instinct. I could talk about how nice it is for every developer to have full access to, and full responsibility for, the production environment. I could talk about how much better a software project is without an architect. I could talk about how much fun it is being an entrepreneur. I could talk about how you can actually find some fun Ruby contracting gigs out there that will allow you to bootstrap your startup.

But I don't. At least not yet. Maybe I'm too busy, or maybe my subconscious hasn't fully processed all these new concepts yet, so I'm not ready to write about them. But I think, most of all, I'm having so much fun, so much fun that I don't feel the urge to blog about it.

A blog post can no longer be an outlet for my current frustrations – because there aren’t that many frustrations these days. A blog post won’t help me reach out to like-minded people so that we can pick the battles together – because there aren’t that many battles left. A blog post simply doesn’t make that much sense in my current context – or at least it fulfils a very different purpose than it used to.

So I might start blogging again – who knows – but it will at least be for different reasons than before.

Ps. I’d love your comments on this topic. What are you using your blog to achieve? Does your blogging reflect your current frustrations and aspirations? Instead of blogging about it – have you tried to find a way to actually live some of those dreams?

Too grown up for Heroku?

22/04/2012

Cheat sheet for my Too Grown up for Heroku? tutorial at Roots

Pre-requisites for Ubuntu 10.04

Skip this section unless you are starting with a blank Ubuntu Image

  1. sudo apt-get install git-core
  2. install rbenv (https://github.com/sstephenson/rbenv#section_2.1)
  3. install ruby-build as an rbenv plugin https://github.com/sstephenson/ruby-build
  4. sudo apt-get install curl
  5. add the rbenv init lines to .bashrc, not .bash_profile
  6. sudo apt-get install zlib1g-dev
  7. sudo apt-get install libssl-dev
  8. rbenv install 1.9.2-p290
  9. rbenv rehash
  10. gem install bundler

Setup Chef

This will get chef setup and installed

  1. Go to http://www.opscode.com/hosted-chef/
  2. Click on “Free Trial”
  3. Fill in your details
  4. Verify email
  5. git clone git://github.com/opscode/chef-repo.git chef-repo-demo
  6. cd chef-repo-demo/
Keys:
  1. list organizations
  2. select your organization
  3. generate knife config
  4. mkdir .chef
  5. nano .chef/knife.rb
  6. paste in generated knife config
  7. on opscode: change password -> get private key -> get new private key
  8. regenerate the validation key (this downloads it), then move the validation key and private key to where knife.rb expects them (i.e. .chef/)
  9. gem install chef
  10. (rbenv rehash)
  11. run “knife node list” or some other command to verify your setup is working

Set up a WebServer

This will set up a Webserver on AWS and deploy our example app to it

Prerequisites

Create a security group

  1. EC2 -> Security Groups -> Create ->
  2. Add HTTP and SSH inbound (to 0.0.0.0)
  3. Apply rule changes

Launch an instance

  1. http://alestic.com/ -> choose your region -> Ubuntu 10.04 LTS EBS BOOT
  2. Choose type (bigger will install faster, I recommend medium for this tutorial)
  3. Give it a name
  4. Create new key pair
  5. Download the .pem key and add it to ~/.ssh/amazon/your_key.pem
  6. sudo chmod 0400 ~/.ssh/amazon/your_key.pem
  7. ssh-add ~/.ssh/amazon/your_key.pem
  8. Verify with ssh-add -l
  9. Choose the security group you made earlier
  10. Watch it launching

Make sure Chef installs the correct version of Ruby

  1. Copy the Gist https://gist.github.com/2027622 into .chef/bootstrap/install_ruby_193-p125.erb
  2. knife bootstrap ec2-PUBLIC_DNS.compute-1.amazonaws.com --distro ubuntu10.04-gems --template-file .chef/bootstrap/install_ruby_193-p125.erb --node-name demoprep-web1 -x ubuntu -i ~/.ssh/amazon/demoprep.pem --sudo
  3. Wait for it to install all packages etc. you should see a long trace in your terminal
  4. Verify with ‘knife node list’ (you can also ssh ubuntu@public.dns if you want to log into the machine)

Run the chef-client once

  1. knife ssh name:demoprep-web1 -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"

Install nginx and unicorn

  1. knife cookbook site install nginx
  2. knife cookbook site install unicorn
  3. knife cookbook site install apt
  4. knife cookbook site install git
  5. knife cookbook site install ssh_known_hosts
  6. knife cookbook site install chef-client
  7. create roles/webserver.rb as https://gist.github.com/2415634
  8. knife role from file roles/webserver.rb
  9. comment out default-site.erb section
  10. knife node run_list add demoprep-web1 'role[webserver]'
  11. knife cookbook upload runit apt bluepill ohai build-essential yum chef-client git nginx ssh_known_hosts unicorn
  12. run the chef client again (see above)
  13. check NGINX is running by going to the public dns in your browser

Checkout example app

  1. https://github.com/erlingwl/sinatra-example
  2. add capistrano to the Gemfile's development group: gem "capistrano", :group => :development
  3. bundle
  4. capify .
  5. cheat and do git checkout origin/chef
  6. git@github.com:erlingwl/sinatra-example.git
  7. edit deploy.rb
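A minimal deploy.rb for this setup might look something like the sketch below. The tutorial's actual file lives on the example repository's chef branch, so treat the paths and hostname here as placeholder assumptions:

```ruby
# Sketch of a Capistrano (v2) deploy.rb; hostname and paths are
# placeholders, not the tutorial's real values.
require 'bundler/capistrano'

set :application, 'sinatra-example'
set :repository,  'git@github.com:erlingwl/sinatra-example.git'
set :scm,         :git
set :branch,      'chef'
set :user,        'ubuntu'
set :use_sudo,    false
set :deploy_to,   '/var/www/sinatra-example'

# The EC2 public DNS of the web server bootstrapped earlier:
server 'ec2-PUBLIC_DNS.compute-1.amazonaws.com', :app, :web, :primary => true
```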

.ssh-keys and git

  1. if you don't have .ssh/id_rsa: ssh-keygen -t rsa -C "your@email.com"
  2. ssh -T git@github.com

Some more tweaks

  1. knife cookbook create permissions
  2. chown -R ubuntu:ubuntu /var/www
  3. knife cookbook upload permissions
  4. knife role from file roles/webserver.rb
  5. create data_bags/ssh_known_hosts/github.json as https://gist.github.com/2415879
  6. replace the value of rsa with the rsa key you get when you do cat ~/.ssh/known_hosts | grep git
  7. knife data bag create ssh_known_hosts
  8. knife data bag from file ssh_known_hosts data_bags/ssh_known_hosts/github.json
  9. knife cookbook create bundler
  10. add ‘gem_package “bundler”‘ to default.rb
  11. knife cookbook upload bundler
  12. add bundler to roles/webserver.rb
  13. knife role from file roles/webserver.rb
  14. run chef-client

Deploy

  1. cap deploy:setup
  2. cap deploy
  3. cap unicorn:start
  4. cap nginx:restart
  5. check the public url in your browser

Monit

  1. knife cookbook site install monit
  2. add “recipe[monit]”, to roles/webserver.rb
  3. add nginx-monit.conf.erb to cookbooks/nginx/templates/default from gist: https://gist.github.com/2423078
  4. add monitc(nginx-monit) to nginx default.rb see gist: https://gist.github.com/2423089
  5. knife cookbook upload nginx monit
  6. knife role from file roles/webserver.rb
  7. run chef-client

Munin

Add a Munin server and make it monitor our web server

  1. knife cookbook site install munin
  2. create an environment: environments/production.rb from https://gist.github.com/2423772
  3. knife environment from file environments/production.rb
  4. knife node edit demoprep-web1 – change environment from _default to production
  5. create a new role roles/muninserver.rb – as https://gist.github.com/2423790
  6. knife role from file roles/muninserver.rb
  7. in AWS Console add 4949 access to the appropriate security group (could be same as above)
  8. launch a new instance on AWS (same security group, keypair etc.)
  9. knife bootstrap ec2-PUBLIC_DNS.compute-1.amazonaws.com --distro ubuntu10.04-gems --template-file .chef/bootstrap/install_ruby_193-p125.erb --node-name demoprep-muninserver -x ubuntu -i ~/.ssh/amazon/demoprep.pem --sudo
  10. tweak cookbooks/munin/attributes/default.rb as of https://gist.github.com/2424057
  11. change from fqdn to ipaddress in cookbooks/munin/templates/default/munin.conf.erb
  12. knife cookbook upload apache2 munin
  13. create data_bags/users/munin.json from gist:
  14. knife data bag create users
  15. knife data bag from file users data_bags/users/munin.json
  16. knife node run_list add demoprep-muninserver 'role[muninserver]'
  17. knife node edit demoprep-muninserver
  18. change environment from _default to production
  19. knife ssh name:demoprep-muninserver -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"
  20. verify you can log in to the muninserver with munin/test
  21. wait for a few minutes for the index to be generated
  22. add “recipe[munin::client]”, to the webserver role
  23. knife role from file roles/webserver.rb
  24. knife ssh name:demoprep-web1 -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client"
  25. wait for a few minutes, check the munin graphs

Endnotes

Make chef-client run as a daemon:

knife ssh name:NODE_NAME -a ec2.public_hostname -x ubuntu -i ~/.ssh/amazon/demoprep.pem "sudo chef-client -d -P /var/run/chef/client.pid -L /var/log/chef-client.log -c /etc/chef/client.rb -i 300 -s 20"

Systems Thinking

03/11/2010

It all started when I read The Fifth Discipline – The Art and Practice of The Learning Organization by Peter M. Senge. I was inspired to read it after my brilliant previous colleague Pat Kua did a workshop on Systems Thinking. (Dennis Stevens wrote a better summary about this book than I will ever be able to.)

I then continued by reading Freedom from Command & Control by John Seddon as well as attended a presentation from John Seddon on the same topic. While Peter Senge’s book was a very interesting read – John Seddon’s was the easiest one to grasp and the one that really opened my eyes.

Seddon brilliantly shows how the wrong targets and measurements encourage the wrong behaviour and create what he calls failure demand. Furthermore, managers and governments sometimes specify how people should execute their work, with the intention that these processes will then produce the best results. However, as you probably already know, it rarely works out that way; instead it leads to poor results and demotivated people. Managing cost in particular often leads to sub-optimizations, which is not good for the end result. (I find some of this is a good match with the Beyond Budgeting movement.)

I would love to use some of Seddon’s own examples to convince you, but I would encourage you to read his book instead. I will try however, to give you a few examples of where I have recognized these patterns in IT projects myself:

Managing cost:
Many IT projects seem to be managed on cost. This sometimes leads to testing only being applied at the end of a project (because that should mean fewer man-hours needed for testing). This obviously means bugs are only found at the end of the project. The later you find the bugs, the harder they are to fix, and the more expensive you can argue it is to fix them. One reason they are expensive to fix later is that the developers might have forgotten everything about that area of the code base; the ones who wrote the code might even be gone. There is also a big risk that parts of the application will have to be redesigned. So by managing cost, and trying to reduce the cost of one part of a process, the total cost of the project might end up higher than it could have been had the testing started on day one. One should look at the project as a whole and not try to optimize parts of it independently. And while you are at it: does the project itself make sense at all, if you look at it from a Systems perspective?

Targets and failure demand:
Measuring velocity is common in Agile projects these days. In Kanban you would often measure cycle time. The problem however, is that we rarely get to measure this end to end. I.e. from when the customer requests a feature until he actually starts using it. Let’s say you only measure cycle time from when a developer picks up a story until it is ready for System Test. The developer might have to make the story pass a few unit and acceptance tests, but bugs found later in the process might be raised as new bug stories. Hence in terms of the statistics it does not make sense for the developer to go that extra mile and build in the amount of quality needed for the story to pass System Testing. The developer keeps coding at a high speed – while new bugs are raised. From a Systems Thinking point of view – i.e. from the end customers point of view – this of course does not make sense. The bugs are essentially failure demand – and reducing the failure demand would probably allow the user to get to use his (bug-free) features earlier and cheaper.

To fix this, you should start by looking at the System that encouraged this behaviour in the first place. Namely remove the targets that created local optimizations. From a Systems Thinking perspective it is not the developer’s fault that he produced a lot of bugs – it was the targets he was measured on. So is it his manager’s fault then? Well, maybe, but how is she measured then?

Compliance:
Starting out with Agile, you might try to follow the book and hence end up doing, for instance, estimation. That is: Scrum and XP say we should do estimation, so if it doesn't work for us, we are probably doing something wrong and should read some more about Scrum and see how we could do estimation better.

Wait a minute: if estimation doesn't give you anything, then you shouldn't do it! Blindly believing in processes, or measuring people on compliance, is not the way to get good results. The important thing is the end result, not how you got there.

Eye opener
What I realized as many have before me, is that all these different tools, CI, pair programming, estimation, 7 types of waste, product owners etc. are all just tools. You should use them if they make sense in your context, but don’t use them just for the sake of it. It is not like Kanban is always better than Scrum. It is not like stand-ups is something you have to do. It is not like IT is the solution to every problem. It is not like people are stupid just because they do stupid things – they are to a certain extent a product of the system. Reflect on your end goal. Learn to see underlying structures. You will be amazed!

Is this it then?
No it is not. I believe I have finally learned by now that whenever I discover a better theory or a better tool, there is always something better waiting for me around the next corner. Take a look at Jurgen Appelo's stuff on Complex Systems or the Chaordic Mindset in Bob Marshall's the Marshall Model of Organisational Evolution, for instance.

Disclaimer
There is much more to Systems Thinking – and I have only barely started to understand it. Feedback is more than welcome!

Going Forward!

29/10/2010

Being a consultant has been great. I have learned a lot, both in terms of process and technology. However, if a company is doing great and has no problems recruiting people, it does not make a lot of sense for it to hire a bunch of consultants. So although you can come across some great projects, the chances are you will either end up trying to fix a broken process or working with legacy technology.

At Forward I will be able to use the optimal technology (of choice!) and work in a very lean/agile environment. Going from concept to production in a matter of hours/days instead of months or years will be liberating! I also believe it will be eye-opening and push my standards even higher.

Being located in Camden, my commute will almost be reduced to a stroll down the street. Life in London just became even better!

Distributed “Kanban”

04/07/2010

The distributed team I am currently working with has been trying out a little bit of Kanban. We have assigned a WIP limit to the In Development / For Review column(s). That is, if the total number of stories in the In Development and For Review columns adds up to (or exceeds!) the WIP limit, you should not pick up a new story until at least one story has been reviewed and closed off and we are again below the WIP limit. (Ideally, going forward, we should add WIP limits to other parts of the process too, but we had to start somewhere.)
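The rule amounts to a one-line check. A toy sketch (we never actually automated this, it lived on the board):

```ruby
# Toy sketch of the WIP rule above: only pick up a new story when
# In Development + For Review is strictly below the WIP limit.
def may_pick_up_new_story?(in_development, for_review, wip_limit)
  in_development + for_review < wip_limit
end

may_pick_up_new_story?(2, 1, 4)  # => true
may_pick_up_new_story?(2, 2, 4)  # => false: at the limit, review first
```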

While there are lots of interesting things to say about this – such as that it has made us more disciplined and probably reduced lead time (working on the stats) and might make the team more predictable –  I will focus on the distributed side of things in this post.

Communication is a challenge on every distributed project, I would say, and hence getting buy-in and understanding of the WIP limits at one location does not necessarily mean that every location understands or starts respecting the WIP limits immediately. As with every other change effort, one needs to use the full range of consulting skills and influence strategies.

A concrete challenge, though, is the time-zone differences. If one team finishes development and puts a story into For Review, their typical next step would often be to pick up a new story. But what if that would breach the WIP limit? Should they wait until the people who will do the technical and business reviews of the stories get in (given they work in a different time-zone)? Or should they break the WIP limit?

This might force you to add a bit of a buffer to your WIP limit. And that might make sense. However, initially this is what happened to us: Location B puts story 1 into For Review and picks up story 2. Location A reviews story 1, but it needs more development or bug-fixing. Location B gets in the next day, does some fixes on story 1, and finishes development on story 2. They even pick up story 3. Location A reviews story 1 and story 2, but neither is accepted. Hence Location B now has 3 stories in play. This could go on for some time.

Let’s say the capacity of Location B is to work on one story at a time. Does it make sense to add in a buffer of 2++ extra then? I tend to disagree. So what we ended up communicating to Location B was to focus on quality before quantity. Do not rush to pick up new stories, but try to finish stories with significant quality.

What happened was two things:

1. Location A started reviewing stories much quicker. Focus was first on cleaning out the For Review queue – then do some development. (In order to give Location B quicker feedback, and prevent them from picking up too much new work before stories had been successfully reviewed).

2. Location B came up with the brilliant idea of doing a local technical review first, before marking a story as For Review. This raised the quality and took some of the workload off Location A. It also means Location B is no longer rushing to pick up new work, but focusing on completing stories properly.

These two things allowed us to keep our WIP limit relatively tight. We do have a buffer for the stuff in For Review, but we try to keep it as small as possible.

Kanban – the book

16/06/2010

I recently had the pleasure of reading David Anderson's latest book on Kanban. I thought I had a pretty good understanding of Kanban before I started reading it. However, I still had some unanswered questions, some of which, for a while, reduced my confidence in persuading my peers to just try it. After reading the book I got most of my questions answered, and at the time of writing we are just starting to experiment with a little bit of Kanban on my current project. Happy days!

Whether you know nothing about Kanban or consider yourself fairly well educated on the topic, I think the book is definitely worth reading. Instead of giving a full summary, I thought I would share some of the answers I got from reading it (as I understood them, although I might also be a bit influenced by other sources):

  • How do you really get started? Take your current process, visualize it (on a board) and apply WIP limits. Try measuring lead time, and maybe create a cumulative flow diagram. For longer term success, you should get some buy-in / agreement from up- and downstream stakeholders though.
  • What are some of the major selling points? Create a predictable performing team. Reduce lead time. Optimize throughput. Expose bottlenecks.
  • How do you deal with blocked stories? Kanban should force the team to swarm on a blocked story, with the result of resolving it. It might need to be escalated, but then again a manager should see the importance of helping to resolve the issue.
  • When do you release, plan, ..? Kanban allows you to decouple input cycles from output cycles. That is, you could release every Monday if you want, but perhaps only have meetings to fill up the input queue every two weeks. You could have a retrospective every third Friday if you want. Or you could even trigger these events on an as needed basis.
  • How do you become predictable when stories might vary in size, priority ..? Classes of Service! For example: a ‘standard story’ will be finished in 14 days on average. ‘Expedite stories’ will be finished in 10 days on average, but you are only allowed to have one expedite story in play at any one time, and so forth. David describes other interesting examples of Classes of Service as well; I highly recommend reading this.

I think I should stop here and leave something for you to read as well 😀

Pair Programming Workshop at Agile Spain 2010

14/06/2010

I had the pleasure of facilitating a workshop at Agile Spain last week. Apparently this was the first Agile Spain ever. I must say the people at the conference were very enthusiastic and eager to learn more about Agile. I met lots of interesting people, and really appreciated how welcoming everyone was. A special thanks to those who translated eager Spanish discussions for me.

My workshop consisted of two parts, the first being my presentation called Pair Programming Strategies. My proposal to the conference was intended as a workshop around what we tend to do when our ideal pairing situation comes under pressure from various disturbing forces. My understanding is that the conference organizers will post the slides on Slideshare; meanwhile you can take a look at the PDF here: Pair Programming Strategies – Agile Spain 2010.

The second part was an Open Space discussion. It turned out that less than half of the participants had tried pair programming, and only a few were doing it on a daily basis. However, the Open Space format was very adaptive to the participants’ needs, and it seemed we ended up with a set of interesting topics. Here is a quick summary of the discussions, based on the notes I gathered from each group (thanks for writing this down!). (If you participated in the workshop and are reading this blog, please feel free to add some comments. I wish I had taped the summaries everyone gave at the end.)

How do you start (pair programming)

This group was curious about how you actually get started with pair programming. Here are some keywords from the discussion:

  • Physical setup: Big screen / 2 screens. Movable keyboard / 2 keyboards.
  • Write down what you are working on.
  • Driver / copilot / supervisor. Switch roles.
  • Responsibility – Collective code ownership
  • It could be exhausting
  • Works well with TDD. One writes the test, the other writes the code to pass it (ping-pong)

Performance impact

This group discussed whether pair programming is an effective way to work or not. Here are some keywords:

  • Just for core functionality or also easy tasks?
  • What about deadlocks?
  • More focus
  • Knowledge transfer
  • Share responsibilities
  • Less or more stressful?
  • Tight schedules

The study referred to in this Wikipedia article which observed “a 15% increase in time, but a 15% decrease in bugs”, was mentioned as an example of a quantitative measure on the effectiveness of pair programming.

Distributed Pairing

This group discussed how or whether it is possible to do distributed pair programming.

  • Network latency is a problem
  • Very high bandwidth required
  • Communication services, such as video/audio must be working all the time
  • Desktop sharing tools
  • Different timezones
  • Is it possible to reproduce benefits of pairing remotely?
  • Try it first with people you have paired with co-located earlier
  • Multiple monitors

If I remember correctly, the group concluded that you needed to create an environment that was as close to a co-located one as possible. Then you would hopefully achieve a flow that was approximately the same as in a co-located setting.

Disagreement

Eventually someone formed this last group, discussing disagreement. They discussed what to do about a deadlock, and suggested one should involve a third person.

I asked everyone to give me quick feedback on their way out; the result was 17 "+", 7 "+/-" and 0 "-". Among the suggestions for improvement were to find a room without tables and to make the session more dynamic. I am quite pleased with the feedback, but will do my best to improve. If you have more feedback, please leave a comment or send me an email (erlingwl (at) gmail (dot) com).

A big thanks to everyone who participated. I had a great time in Madrid 🙂

Beyond Budgeting

31/05/2010

Staying true to my Personal Kanban, I am obliged to blog about all books I read. Fortunately it will be a pleasure to blog about this book I have been reading lately.

Beyond Budgeting is all about the negative impacts of budgets and how several organizations have flourished without them. Getting rid of budgets is a drastic change, and clearly no serious organization would ever dream of doing that, you might be thinking now. However, the best example from the book is probably the Swedish bank Handelsbanken, which has been doing very well without budgets for more than 30 years. Furthermore, I have worked for an organization without budgets myself.

There are several drawbacks with budgets:

  • Cost – They cost a lot, many organizations might spend up to 6 months every year on creating budgets for the following year. There might be far better ways to spend your accountants’ time.
  • Politics and bureaucracy – Budgets can make people lose focus, and a lot of time might be spent playing political games and fighting bureaucracy.
  • Lost opportunities – Your budget targets might say you have to increase revenue by 10%. Well, that might force you to work hard, however, it might also distract you from grabbing an opportunity to increase revenue by 50%.

Removing budgets would typically allow you to empower your frontline workers, make information more visible to everyone, defer decisions until a more responsible moment (no more planning 1 year ahead), reduce waste and utilize your accountants’ competence in a much better way as well as grab more opportunities as they come along. This all is clearly music to lean and agile ears.

If you want to know more about how to move on beyond budgeting, I would highly recommend reading the book. It contains several case studies from different organizations that have replaced budgets successfully in more or less similar ways.

Personal Kanban

25/04/2010

A few weeks ago I attended Kanban for Just in Time Training (http://skillsmatter.com/podcast/agile-scrum/john-stevenson-kanban-for-just-in-time-training), a talk by John Stevenson (@JR0cket). He explained how he was using an online Kanban board to manage his own personal studying. Most interesting, I found, was his done-state definition: "Blogged". The idea is that, given a reasonable WIP limit, before you can start a new pet project or read a new book, you have to blog about the task you just finished. And that is exactly what I am doing now:

Using the iKan app on my iPhone, my Study Kanban board looks like this (WIP limits in brackets):

Backlog (4) -> Studying (2) -> Write blog post (2) -> Blogged / Done.

Already, this has forced me to focus on just a few topics at a time. Hopefully I will be disciplined enough to always write a blog post before moving anything into the Done state. As John mentioned, you force yourself to study a bit harder when you know you have to write a blog post about it later.

If I remember correctly, John had at least two Kanban boards for more or less different processes. I have gone down that road myself as well, with the following additional “Personal Economy etc.” Kanban board:

Ready (5) -> Doing (2) -> Done

I am currently reflecting on the concept of having several Kanban boards. The WIP limits are not very effective if I can get around them just by adding a new board. However, the two processes are significantly different. I am pretty sure people don’t want me to blog about my Tax returns etc. Perhaps having a shared backlog or similar could be useful.

Imagine the following scenario: I want to add a new book to my study backlog, but because it is full I have to finish another task first. If I had a shared backlog between my study and personal stuff, this could force me to prioritize and get started with some boring chores and actually get them done. I guess the only thing stopping me from trying this is my current tool set. Once again it seems a physical board could be the simplest solution. To be continued..