Who invented Chaos Engineering?
Chaos Engineering began at Netflix. That makes sense when you consider the complexity of the Netflix technology stack and the way the company has scaled over the last five years or so. Netflix built a number of tools to support this chaos-first approach, the most prominent being Chaos Monkey. First launched in 2011 and open-sourced in 2012, Chaos Monkey is a tool that randomly selects instances in production and pulls them down; a little like monkeys pulling off your windscreen wipers in a safari park. Chaos Monkey later became part of a wider suite of tools – called the Simian Army – that Netflix built to cause chaos in different parts of its infrastructure. Here are two other components used to simulate chaos:
- Chaos Gorilla causes big trouble by pulling down an entire AWS availability zone
- Latency Monkey delays communication, essentially simulating poor network performance
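The core idea behind Chaos Monkey is simple enough to sketch in a few lines. The snippet below is a hypothetical illustration, not Netflix's actual implementation: `instances` and `terminate` are placeholders for whatever instance inventory and kill mechanism (in practice, a cloud provider API call) your environment provides.

```python
import random

def chaos_monkey(instances, terminate, seed=None):
    """Pick one production instance at random and pull it down.

    `instances` is a non-empty list of instance identifiers;
    `terminate` is a callable that actually kills the instance
    (in real life, a cloud provider API call). `seed` makes a run
    reproducible, which helps when replaying an experiment.
    """
    rng = random.Random(seed)
    victim = rng.choice(instances)
    terminate(victim)
    return victim

# Toy usage: "terminating" just records the victim in a list.
killed = []
victim = chaos_monkey(["web-1", "web-2", "web-3"], killed.append, seed=42)
```

The point is not the randomness itself but the discipline it forces: if any instance can vanish at any moment, engineers have to design services that survive that loss.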
From that point Chaos Engineering grew. A number of large Silicon Valley organizations have adopted similar approaches. For example, Facebook’s Project Storm simulates data center failures on a huge scale, while Uber uses a tool called uDestroy. Slack has also recently spoken in detail about the importance of stress testing its software; the company is looking to build an engineering team dedicated to Chaos Engineering to improve Slack’s reliability.
One of the most interesting figures in Chaos Engineering is Kolton Andrus. Andrus used to work at Amazon and Google, but today he is the CEO and founder of Gremlin, a startup that “helps engineers build resilient systems”. Andrus helped develop the concept of Chaos Engineering while he was working at Netflix; Gremlin is his vehicle for making it accessible to others.
Chaos Engineering in practice
Now that the conceptual stuff is out of the way, here’s how Chaos Engineering works. It’s actually quite straightforward: Chaos Engineering simulates all sorts of unpredictable situations and scenarios to see how the system responds. It’s effectively a form of stress testing.
As we’ve seen, over the past few years companies have built their own tools to stress test their infrastructure. Gremlin, by contrast, offers this as a service: its product is described as ‘resiliency-as-a-service.’
Its product is a whole library of ‘attacks’ which replicate different types of outages within a system. These are what it calls ‘chaos experiments’, which allow you to ‘identify weak points in your system and fix them before they become a problem’. In this sense, Chaos Engineering is a bit like taking the principles of penetration testing and applying them to software testing more broadly. By simulating everything that could possibly go wrong, it allows you to make much better optimization decisions.
The principles of Chaos Engineering are documented here. This is effectively its ‘manifesto’. There’s a lot in there worth reading, but here are the 5 principles that any sort of testing or experimentation should aspire to:
- Base your testing hypothesis on steady state behavior. Consider your infrastructure holistically; making individual parts work is important, but not the priority.
- Simulate a variety of real-world events. This could be hardware or software failures, or simply external changes like spikes in traffic. What’s important is that they’re all unpredictable.
- Test in production. Your tests should be authentic.
- Automate! Testing can be laborious and require a lot of manual work. Make use of automation tools to run lots of different tests without taking up too much of your time.
- Don’t cause unnecessary pain. While it’s important that your stress-tests are authentic, the impact must be contained and minimized by the engineer.
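The principles above can be condensed into a minimal experiment loop. This is a hedged sketch, not Gremlin's API or any real tool: `steady_state`, `inject_fault`, and `rollback` are placeholder callables you would supply for your own system, and the `finally` block is what keeps the blast radius contained.

```python
def run_experiment(steady_state, inject_fault, rollback):
    """Run one chaos experiment against a steady-state hypothesis.

    steady_state() -> bool   checks the system's normal behavior
    inject_fault()           simulates a real-world failure
    rollback()               undoes the fault, containing the pain
    """
    if not steady_state():
        # Only experiment from a known-good baseline.
        return "aborted"
    held = False
    try:
        inject_fault()           # simulate a real-world event
        held = steady_state()    # did the hypothesis survive?
    finally:
        rollback()               # don't cause unnecessary pain
    return "hypothesis held" if held else "weakness found"

# Toy system: two replicas behind a load balancer; steady state
# means at least one replica is still up.
replicas = {"a": True, "b": True}
result = run_experiment(
    steady_state=lambda: any(replicas.values()),
    inject_fault=lambda: replicas.update(a=False),  # kill replica a
    rollback=lambda: replicas.update(a=True),       # bring it back
)
```

In the toy run, killing replica `a` leaves `b` serving traffic, so the steady-state hypothesis holds; killing the only replica of a single-instance service would instead surface a weakness. Automating this loop across many fault types is what turns one-off stress tests into Chaos Engineering.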
Why Chaos Engineering now?
Chaos Engineering isn’t particularly new. As you’ve seen, Netflix has been doing it since 2011. But it does feel more urgent and relevant today.
That’s because the complexity of the software infrastructure behind many of the biggest Silicon Valley companies is now mainstream. It’s normal. Cloud isn’t an exotic buzzword any more – it’s a reality (a reality that often has failures). Microservices are common – they’re a commonsense way of building better applications and websites.
Alongside this increased complexity, there is also a growing awareness of how much software outages can cost businesses. In a white paper, Gremlin makes a big deal of how much money is lost to outages. It cites BA’s system failure in summer 2017, which left passengers stranded all over the world and was estimated to have cost BA $135 million. It also refers to the Amazon S3 outage in March 2017, which is believed to have cost Amazon’s customers $150 million.
So – outages cost money. Yes, it’s marketing spiel from Gremlin, but it’s also true. It doesn’t take a genius to work out that if your eCommerce site is down for an hour, you’re going to lose a lot of money.
Because software performance is so tied up with business performance, it feels incredibly fragile. That’s why Chaos Engineering is perhaps more important and popular than ever. It’s a way of countering that fragility.
The key challenges of Chaos Engineering
Chaos Engineering poses many challenges to software engineering teams. First and foremost, it requires a big cultural change. If you’re intent on breaking everything, there are no rules about how things should work or what you’re trying to build. Instead you’re looking for the best way to build software that performs for the user.
More practically, Chaos Engineering isn’t that easy to do in a cost-effective manner. Everything Gremlin details in its white paper is very much true – of course outages cost a hell of a lot. But creative destruction and experimentation feels like an expensive route through software projects. It’s not hard to see how it might appear self-indulgent, especially to a company or organization where software isn’t properly understood.
And more to the point, how often do businesses actually do the smart thing when they’re building software? Long term projects are always difficult. So much software evolves pragmatically – often for the worse. Adding in an extra layer of experimentation and detailed testing is a weird mix of bacchanalian and hyper-organized, something that many organizations just couldn’t process or properly understand.
Chaos engineering and the future of software development
Chaos Engineering certainly looks like the future of software development. The only question is whether services like those provided by Gremlin will take off. To understand the true value of stress testing your infrastructure you do need at least a modicum of awareness of the complexity of your infrastructure. Indeed, you probably need to have a conversation about which services and dependencies are most business critical – or rather, which ones most impact the user. That’s something this TechCrunch piece addresses:
“Testing can… be very political. Finding the points of failure in a system might force deep conversations about a particular software architecture and its robustness in the face of tough situations. A particular company might be deeply invested in a specific technical roadmap (e.g. microservices) that chaos engineering tests show is not as resilient to failures as originally predicted.”
This means there is going to be a question mark over the extent to which Chaos Engineering ever really enters the mainstream. How many businesses want to have these conversations? It’s not just about the inclination – it’s also about the time and money. It’s an innovative software engineering approach that really calls people’s bluff when they talk about innovation. It asks difficult questions about how and why you innovate: do you do new things because you think you should? Is this new thing going to be good for the business? And how well will it work for users? Of course these questions are vital when you’re building software. But they rarely make building software easier.