Ricon 2012 Recap

Back in July 2012, Basho's Technical Evangelist, Tom Santero, posted an invitation to the Riak mailing list for Ricon, a conference Basho was holding on distributed databases. Tom said:

 

...the goal is to put on a two day event that caters to developers working in, with and around distributed systems. While Riak will be a primary focus, we've invited several select speakers whose reputations and work in distributed systems precede them.

 

I've been in and around the non-traditional database world for a few years now, prodding and poking, checking out the various flavors, reading up on comparisons and perusing mailing lists. Although they have garnered a lot of attention - ranging from justified to hyperbolic - in the last few years, non-relational databases, often lumped under the conflated NoSQL moniker, are still a niche, fledgling set of technologies. Who would come to this conference? Basho's Riak simply has not had the same mindshare (see Google Trends: http://www.google.com/trends/explore#q=riak%2C%20couchdb%2C%20redis%2C%20mongodb&cmpt=q) that a Mongo, or even a Cassandra or Couch, has enjoyed, not to mention the overall Hadoop ecosystem. In truth, I was skeptical. Too often, a company-specific conference devolves into a marketing and sales fun fest that leaves you running for the exits.

Well, it turns out Ricon sold out at around 350 distributed database enthusiasts and, believe it or not, the conference did not devolve; it presented a very interesting mix of academic theory and practical implementation. Others thought so as well.

The Talks

Bookended by two preeminent speakers, Joseph Hellerstein and Eric Brewer, Ricon started and ended like an academic conference. Their talks were both theoretical and practical: they presented distributed database concepts like monotonicity and commutative replicated data types (CRDTs) in an approachable way, and coupled that with practical next steps for current and future development, like the work by Joe and his team on the Bloom language (not to be confused with Bloom filters) and Russell Sears' Stasis. These talks were about as product-agnostic as any at Ricon.

There were talks by Basho engineers that built on this and further bridged the gap between the theoretical and the practical as implemented in Riak. Sean Cribbs and Russell Brown showed us how some of those theoretical principles are being realized in code by releasing riak_dt, an experimental Riak branch that adds things like counters to standard Riak. Counters have been available in Cassandra for a while now and are something the Riak community has been interested in for some time. And by Riak community, I mean me... and possibly a few others. Bryan Fink gave a talk on Riak Pipe, which is the underpinning of Riak's map/reduce implementation. The Q&A after Bryan's talk was dominated by the possibility of leveraging Pipe to create a real-time processing environment in a similar vein to Storm.

One talk I was particularly interested in was Ryan Zezeski's Yokozuna presentation, which highlighted ongoing development of Riak's search capabilities. The new effort is a fresh start and a departure from the current implementation: Yokozuna replaces the current Lucene-like (a.k.a. not Lucene) search internals with actual Lucene by integrating Solr directly into Riak. There are quite a few advantages to be had here. As Ryan indicated in his talk, Basho is not in the business of search. By relying on Lucene to drive search, Basho effectively outsources the core search problem to a proven and well-known solution. Additionally, this replaces the current term-based partitioning mechanism with document-based partitioning. What that basically means is that your search index will live on the same node as your document, which has implications for most composite search queries. Part of Gary Flake's keynote delved into this particular consideration and how his company, Clipboard, deals with it currently.

Two non-Riak talks that were quite interesting were Dana Contreras's talk on Twitter's internal re-architecture around interconnected, independent services, and Accenture's talk on how to present "big data" as a concern, opportunity and solution to decision-makers in your organization. Dana's talk looked at how internal engineering and operational processes evolve in an organization as its code base and engineering head count grow. For Twitter, segmenting their internal services not only made for a cleaner code base but also allowed teams of people to focus on one area of the code base at a time. Accenture's talk was the most "businessy" of the conference, but I think it is something that developers and engineers should have some insight into. Market fit, risk, productization and cost are all concerns for your executive suite, and as a developer or consultant you need to speak to those concerns when pitching solutions. Accenture's analysis shows that in the last few years, mainly thanks to the popularity of Hadoop, executives in traditional industries are aware of "big data" and are open to entertaining solutions in the space.

Future directions

Where does Ricon go from here? There is always room for improvement. My main concern is that the content remain diverse in terms of products. Politics and money aside (is that even possible?), I would like to see continued participation from other vendors or even end users whose solutions employ other vendors' products if this conference is to continue billing itself as a distributed systems gathering. Riak is not the only solution in the space and it is worthwhile knowing how other solutions are implementing distributed principles. I definitely think, on the whole, an inclusive approach simply stands to grow the non-traditional distributed database/systems pie.

Chatter by conference attendees left me convinced that Ricon was a success. Ricon was well-executed, well-attended and actually interesting. But more importantly, it was relevant. Those of us at the conference actually work in this space. We are interested in the ongoing development of distributed solutions to a number of problems. The conference delivered on creating a space that brought us together to share solutions and learn about continuing advancements. For a new conference to have a successful maiden voyage is no small feat in my book. I, for one, am looking forward to the next one.


I would love to hear your thoughts on what would make a good distributed systems conference. Were you at Ricon? What did you think?

Using Riak's map/reduce for sorting

From a database perspective, Riak is a schemaless key/value datastore. The focus of this post is to show you how to do the equivalent of SQL's "ORDER BY date DESC" using Riak's map/reduce interface. Due to its schemaless, document-focused nature, Riak lacks internal indexing and, by extension, native sorting capabilities. Additionally, Riak does not have a single storage backend: the default is Bitcask, but Riak offers a number of different backends for specific use cases. This makes an internal general-purpose index implementation impractical, especially once you factor in the distributed nature of the platform.

So how does a sort actually work in this environment? Map/reduce. Riak implements map/reduce as its way of querying the cluster. Let's keep this description light and simply say: Riak brings your query (for the most part) to the nodes where your data lives. The map part of your query is distributed around the cluster to the nodes where the data resides and executed there, then the results are sent back to the originating node for the reduce phase. You can write your map/reduce query in two different languages - Erlang and JavaScript (SpiderMonkey is the internal JavaScript engine).

So now that you have a basic theoretical underpinning, how does this actually work in practice? I'm including here a snippet of a heavily commented JavaScript function that I use in one of my Node.js apps. The bridge between Node.js and Riak is a module called riak-js (disclosure: I've contributed some patches). Let's take a look; I'll see you on the other side.
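The full snippet isn't reproduced here, so below is a minimal sketch of its shape rather than the exact code; the bucket name, the example keys, the db client variable and the riak-js add/map/reduce/run chaining are illustrative assumptions.

var riak = require('riak-js');
var db = riak.getClient();

// Map phase: runs on the nodes that hold each key. It parses the stored JSON
// value and attaches the X-Riak-Last-Modified header as a sortable integer.
var map = function (v) {
  var o = JSON.parse(v["values"][0]["data"]);
  o.key = v["key"];
  o.lastModifiedParsed = Date.parse(v["values"][0]["metadata"]["X-Riak-Last-Modified"]);
  return [o];
};

// Reduce phase: runs on the coordinating node once all mapped results arrive.
// Sorts descending on the integer timestamp added in the map phase.
var reduceDescending = function (v) {
  v.sort(function (a, b) { return b['lastModifiedParsed'] - a['lastModifiedParsed'] });
  return v;
};

// Hand the job an explicit list of [bucket, key] pairs and run it.
db.add([["posts", "key1"], ["posts", "key2"], ["posts", "key3"]])
  .map(map)
  .reduce(reduceDescending)
  .run(function (err, results) {
    // results arrive sorted newest-first
    console.log(results);
  });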

Let's break this down. This function is part of a larger Node.js application that uses the fu router library lifted from node_chat, a quite approachable getting-to-know-node example application. No, you cannot copy and paste this code somewhere and have it work. What you should do is take a look at the map and reduceDescending functions. Those functions are written in JavaScript and sent over the wire to Riak. Let's go over some of the magic that makes this work.

Riak will gladly accept a bucket as its input to a map/reduce job. Although Basho has done a good amount of work to make this performant, simply passing a bucket will force an expensive list-keys operation internally; the more keys you have in your system, the longer it will take. Sometimes this is unavoidable or even desirable, but most likely you will want to explicitly pass keys to the map/reduce job. This is done in the format:

[ ["bucket","key1"],["bucket","key2"],["bucket","key3"],["bucket","key4"] ] 

Now, although I'm passing the keys here in order (key1... keyN), recall that Riak has no internal concept of ordering. The map phase will seek out the keys wherever they live, and the result is not guaranteed to be ordered. What is needed is to sort the result set in the reduce phase once all the data has been collected. In this case I will be sorting by the X-Riak-Last-Modified header, which is a date kept in the format "Tue, 31 Aug 2010 06:46:02 GMT". Well, that doesn't look like a sortable string, does it? The trick is to turn it into an int, as I do in the map function:

o.lastModifiedParsed = Date.parse(v["values"][0]["metadata"]["X-Riak-Last-Modified"]); 

Here the string date is pulled out of the header and converted via the native JavaScript function Date.parse() into an int. It is that int that allows the numeric sort in the reduce phase:

v.sort ( function(a,b) { return b['lastModifiedParsed'] - a['lastModifiedParsed'] } );

The form "b - a" is what dictates descending order; conversely, ascending order would be written as "a - b". Remember the value is embedded within a JavaScript object and needs to be accessed as such. This trick can be used with any integer value embedded in a JSON object: if my "key" were an int I could use that, or maybe a price or quantity value.
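For instance, assuming your documents carried a numeric "price" field (a hypothetical example, not from the app above), the same approach sorted ascending would look like this:

// Sort ascending on a numeric "price" field instead of the parsed date.
var reduceByPriceAscending = function (v) {
  v.sort(function (a, b) { return a['price'] - b['price'] });
  return v;
};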

Map/reduce is a bit tricky to wrap your mind around when coming from a relational/SQL background, but the new breed of NoSQL databases makes it easy to duplicate many of those features. Riak exposes a fully functional map/reduce implementation to get at all the nested parts of your complex JSON documents. So what are you waiting for? Get coding!

Notes from the Basho webinar on benchmarking Riak

Something that often comes up in the various NoSQL message boards and IRC channels, from new and experienced users alike, is the broad question of performance. How many ops/sec can I squeeze out of Riak/MongoDB/Cassandra/etc.? How many keys can it hold? How will performance degrade if most of the values I'm keeping are less than 100KB on Tuesdays but on every other Thursday they spike to 500KB? Most of the time I have 80% reads vs 20% writes, but I want to know what would happen if that mix changes. Will it shred my disk? Do I have enough I/O for my load? I've seen all those and then some out there in the wild... OK, maybe not the alternate Thursdays, but you get my point.

Users need a uniform, simple-to-use mechanism to test their systems themselves. There are so many variables governing overall system performance that it is hard to get a straight answer from anybody - and more specifically, hard to get an answer that is right for you and your unique needs.

Earlier today I had the pleasure of sitting in on a webinar hosted by Basho, the makers of Riak. Shortly, Basho will release basho_bench (I believe that is the correct name), a framework for benchmarking Riak. This all dovetails nicely with a Basho blog post regarding the inevitable comparisons between various NoSQL offerings. Beyond having many knobs and levers to tweak for your demanding benchmarking needs, there are three features I'll touch on that make this tool very useful.

Each basho_bench test is a configurable, simple text file. This will allow standard test patterns to be developed and shared amongst the community for various use cases. Basho_bench is also integrated with the R statistical analysis programming language: each test dumps its results into its own self-contained folder, which is then used by R to print out eye-candy graphs. Oooh... shiny. Most importantly, basho_bench can change the transport mechanism by which it connects to Riak. Because Riak itself supports multiple access methods (HTTP, protocol buffers and the native Erlang client), the framework will allow basho_bench to be extended to support benchmarking other NoSQL key/value systems. I see the glimmer of a Thrift interface in the distance... This single feature will go a long way toward making basho_bench a standard test suite in the NoSQL space.

 

Keep your eyes open for the release of the basho_bench tool in the next week or so.  

 

The following are some of my non-authoritative, off-the-cuff notes from the presentation. Many of them will be familiar from benchmarking in general, and some are specific to the options available in this new suite. The full slide deck should be available from Basho in the next week or so.

Performance measured in:

  • Throughput - operations/sec
  • Latency

Test typical and worst-case scenarios

Minimize variable changes between tests

Run early and often

Iterative testing process

Introducing basho_bench

  • benchmark anything that is a key/value store (other nosql solutions)
  • spins up multiple threads (akin to concurrent requests)
  • driver specification (http, protobuf, etc)
  • event generator (80% read / 20% write)
  • key generator (incrementing integer)
  • payload generator (various size, binary)

Microbenchmarks are bad

  • benchmarks should be long running
  • cache warm ups
  • page flushes
  • backend specific issues

Eye candy output via R integration

Key generation

  • sequential ints
  • pareto ints (simulate hot keys)

Value generation

  • fixed length random bin data
  • random length random bin data

Benchmarking is Hard

  • tool and system limits
  • multi-variate space
  • designing accurate tests
  • don't take results out of context
  • everything is relative

Gotchas

  • file handler exhaustion
  • swapping/thrashing (in one run, developer problems only showed up after 12hrs)

Conduct your own tests, things to find out

  • gets vs puts vs deletes
  • key distribution
  • value size distribution