Something that often comes up on the various nosql message boards and IRC channels, from new and experienced users alike, is the broad question of performance. How many ops/sec can I squeeze out of Riak/MongoDB/Cassandra/etc.? How many keys can it hold? How will performance degrade if most of the values I'm keeping are less than 100KB on Tuesdays, but on every other Thursday they spike to 500KB? Most of the time I have 80% reads vs 20% writes, but I want to know what would happen if that mix changes. Will it shred my disk? Do I have enough I/O for my load? I've seen all of those and then some out there in the wild... Ok, maybe not the alternate Thursdays, but you get my point.
Users need a uniform, simple-to-use mechanism to test their systems themselves. There are so many variables in play governing overall system performance that it is hard to get a straight answer from anybody, and more specifically, hard to get an answer that is right for you and your unique needs.
Earlier today I had the pleasure of sitting in on a webinar hosted by Basho, the makers of Riak. Shortly, Basho will release basho_bench (I believe that is the correct name), a framework for benchmarking Riak. This all dovetails nicely with a Basho blog post regarding the inevitable comparisons between the various nosql offerings. Beyond the many knobs and levers it provides for your demanding benchmarking needs, there are three features in particular that make this tool very useful.
Each basho_bench test is a configurable, simple text file. This will allow standard test patterns to be developed and shared amongst the community for various use cases. Basho_bench is also integrated with the R statistical analysis language. Every test dumps its results into its own self-contained folder, which R then uses to print out eye-candy graphs. Oooh... shiny. Most importantly, basho_bench can swap out the transport mechanism by which it connects to Riak. Because Riak itself supports multiple access methods (HTTP, protobuf, and the native Erlang client), this abstraction will allow basho_bench to be extended to benchmark other nosql key/value-like systems. I see the glimmer of a thrift interface in the distance... This single feature will go a long way toward making basho_bench a standard test suite in the nosql space.
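To give a flavor of what such a test file might look like, here is a minimal sketch in plain Erlang terms. The option names (mode, duration, concurrent, driver, operations, key_generator, value_generator) are my assumptions based on the webinar, not the released format, so treat this as illustrative only:

```erlang
%% Illustrative basho_bench test file -- option names are assumed,
%% not taken from a released tool. One Erlang term per line.
{mode, max}.                           % hit the system as fast as possible
{duration, 10}.                        % run length in minutes (no microbenchmarks)
{concurrent, 5}.                       % worker threads, akin to concurrent requests
{driver, basho_bench_driver_http_raw}. % transport: http (vs protobuf, etc.)
{operations, [{get, 4}, {put, 1}]}.    % the 80% read / 20% write mix
{key_generator, {sequential_int, 100000}}.
{value_generator, {fixed_bin, 100}}.   % 100-byte random binary payloads
```

Because the whole test is just a declarative term list like this, sharing a "standard" workload with the community is as easy as pasting a file.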
Keep your eyes open for the release of the basho_bench tool in the next week or so.
The following are some of my non-authoritative, off-the-cuff notes from the presentation. Many of them should be familiar from benchmarking in general, and some are specific to the options available in this new suite. The full slide stack should be available from Basho in the next week or so.
Performance measured in -
- Throughput - operations/sec
- Latency
Test typical and worst-case scenarios
Minimize variable changes between tests
Run early and often
Iterative testing process
Introducing basho_bench
- benchmark anything that is a key/value store (other nosql solutions)
- spins up multiple threads (akin to concurrent requests)
- driver specification (http, protobuf, etc)
- event generator (80% read / 20% write)
- key generator (incrementing integer)
- payload generator (various size, binary)
Microbenchmarks are bad
- benchmarks should be long running
- cache warm ups
- page flushes
- backend specific issues
Eye candy output via R integration
Key generation
- sequential ints
- pareto ints (simulate hot keys)
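In config-file terms (generator names assumed for illustration, not confirmed from the release), picking between the two key generators might look like:

```erlang
%% Assumed generator names, illustrative only.
{key_generator, {sequential_int, 1000000}}.       % keys 0..999999, in order
%% or, to simulate hot keys:
{key_generator, {truncated_pareto_int, 1000000}}. % Pareto-skewed: a small
                                                  % subset of keys gets most
                                                  % of the traffic
```

The Pareto option matters because real workloads are rarely uniform; a handful of hot keys can stress caches and vnodes very differently than a sequential sweep does.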
Value generation
- fixed length random bin data
- random length random bin data
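And the matching value-generator options might be sketched as (again, names are my assumptions):

```erlang
%% Assumed generator names, illustrative only.
{value_generator, {fixed_bin, 10000}}.      % always 10KB of random bytes
{value_generator, {uniform_bin, 100, 10000}}. % random size between 100B and 10KB
```

A variable-size generator is what lets you model the "100KB on Tuesdays, 500KB on alternate Thursdays" sort of payload mix rather than a single fixed object size.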
Benchmarking is Hard
- tool and system limits
- multi-variate space
- designing accurate tests
- don't take results out of context
- everything is relative
Gotchas
- file handler exhaustion
- swap thrashing (the sort of developer problem that only shows up 12hrs into a run)
Conduct your own tests, things to find out -
- gets vs puts vs deletes
- key distribution
- value size distribution