
CouchDB Interview Questions


CouchDB uses an "optimistic concurrency" model. In the simplest terms, this just means that you send a document version along with your update, and CouchDB rejects the change if the current document version doesn't match what you've sent.

It's deceptively simple, really. You can reframe many normal transaction-based scenarios for CouchDB. You do need to throw out some of your RDBMS domain knowledge when learning CouchDB, though. It's helpful to approach problems from a higher level, rather than attempting to mold Couch to a SQL-based world.

Keeping track of inventory

The problem you outlined is primarily an inventory issue. If you have a document describing an item, and it includes a field for "quantity available", you can handle concurrency issues like this:

  •     Retrieve the document, take note of the _rev property that CouchDB sends along
  •     Decrement the quantity field, if it's greater than zero
  •     Send the updated document back, using the _rev property
  •     If the _rev matches the currently stored number, be done!
  •     If there's a conflict (when _rev doesn't match), retrieve the newest document version

In this instance, there are two possible failure scenarios to think about. If the most recent document version has a quantity of 0, you handle it just like you would in an RDBMS and alert the user that they can't actually buy what they wanted to purchase. If the most recent document version has a quantity greater than 0, you simply repeat the operation with the updated data, and start back at the beginning. This forces you to do a bit more work than an RDBMS would, and could get a little annoying if there are frequent, conflicting updates.
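
Here's a minimal sketch of that retry loop over CouchDB's HTTP API. The "shop" database and "item-42" document are hypothetical, and it assumes a JavaScript runtime with a global fetch (e.g. Node 18+):

const url = "http://localhost:5984/shop/item-42"; // hypothetical document

async function buyOne() {
  while (true) {
    const doc = await (await fetch(url)).json(); // includes _id and _rev
    if (doc.quantity <= 0) {
      throw new Error("sold out"); // alert the user, as described above
    }
    doc.quantity -= 1;
    const res = await fetch(url, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(doc), // carries the _rev we just read
    });
    if (res.ok) return;          // _rev matched the stored revision: done
    if (res.status !== 409) throw new Error("update failed: " + res.status);
    // 409 Conflict: someone updated the document first, so loop and retry
  }
}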

Now, the answer I just gave presupposes that you're going to do things in CouchDB in much the same way that you would in an RDBMS. I might approach this problem a bit differently:

I'd start with a "master product" document that includes all the descriptor data (name, picture, description, price, etc). Then I'd add an "inventory ticket" document for each specific instance, with fields for product_key and claimed_by. If you're selling a model of hammer, and have 20 of them to sell, you might have documents with keys like hammer-1, hammer-2, etc, to represent each available hammer.
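
As an off-the-cuff illustration, the two document types might look like this (all field values are made up):

// Master product document
{
  "_id": "hammer",
  "type": "product",
  "name": "Claw Hammer",
  "description": "16 oz claw hammer",
  "price": 12.50
}

// One inventory ticket per physical hammer
{
  "_id": "hammer-1",
  "type": "inventory_ticket",
  "product_key": "hammer",
  "claimed_by": null
}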

Then, I'd create a view that gives me a list of available hammers, with a reduce function that lets me see a "total". These are completely off the cuff, but should give you an idea of what a working view would look like.

Map

function (doc) {
  // Emit one row per unclaimed ticket, keyed by product
  if (doc.type == 'inventory_ticket' && doc.claimed_by == null) {
    emit(doc.product_key, { 'inventory_ticket': doc._id, '_rev': doc._rev });
  }
}

This gives me a list of available "tickets", by product key. I could grab a group of these when someone wants to buy a hammer, then iterate through, sending updates (using the _id and _rev) until I successfully claim one (previously claimed tickets will result in an update error).
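
Sketched out, that claiming step might look like this. The database name ("shop") and design document ("inventory") are assumptions, and the _rev from the view may be stale, which is exactly what the 409 check catches:

async function claimTicket(productKey, orderId) {
  const base = "http://localhost:5984/shop";
  const view = await (await fetch(
    base + "/_design/inventory/_view/available?reduce=false&key=" +
    encodeURIComponent(JSON.stringify(productKey))
  )).json();
  for (const row of view.rows) {
    const res = await fetch(base + "/" + row.value.inventory_ticket, {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        _rev: row.value._rev,
        type: "inventory_ticket",
        product_key: productKey,
        claimed_by: orderId,
      }),
    });
    if (res.ok) return row.value.inventory_ticket; // claimed this one
    // 409 Conflict: someone beat us to this ticket; try the next row
  }
  return null; // all tickets for this product are gone
}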

Reduce

function (keys, values, rereduce) {
  // On rereduce, sum the partial counts; otherwise count the rows
  return rereduce ? sum(values) : values.length;
}

This reduce function simply returns the total number of unclaimed inventory_ticket rows, so you can tell how many "hammers" are available for purchase. (The rereduce branch handles CouchDB re-running the reduce over its own partial results.)
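
Queried with group=true, that reduce yields one count per product_key (same assumed database and design-doc names as above; run inside an async context):

const counts = await (await fetch(
  "http://localhost:5984/shop/_design/inventory/_view/available?group=true"
)).json();
// e.g. { "rows": [ { "key": "hammer", "value": 17 }, ... ] }
console.log(counts.rows);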

Caveats

This solution represents roughly 3.5 minutes of total thinking for the particular problem you've presented. There may be better ways of doing this! That said, it does substantially reduce conflicting updates, and cuts down on the need to respond to a conflict with a new update. Under this model, you won't have multiple users attempting to change data in the primary product document. At the very worst, you'll have multiple users attempting to claim a single ticket, and if you've grabbed several of those from your view, you simply move on to the next ticket and try again.

To get on-write view-update semantics, you can create a little daemon script that runs alongside CouchDB and is specified in couch.ini, as described in ExternalProcesses. This daemon gets sent a notification each time the database is changed and could in turn trigger a view update every N document inserts or every Y seconds, whichever occurs first. The reason not to update the view for each document as it comes in is that doing so is horribly inefficient; CouchDB is designed to do view index updates very fast, so batching them is a good idea. See RegeneratingViewsOnUpdate for an example.
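
A rough sketch of such a daemon, assuming the ExternalProcesses convention that CouchDB writes one line to the script's stdin per database update; the names, thresholds, and view URL are all illustrative, and a global fetch (Node 18+) is assumed:

const readline = require("readline");

const BATCH = 100;        // refresh after this many updates...
const INTERVAL = 10000;   // ...or after 10 seconds, whichever comes first
let pending = 0;

async function refreshView() {
  pending = 0;
  // Querying any view in the design doc makes CouchDB fold in new documents
  await fetch("http://localhost:5984/shop/_design/inventory/_view/available?limit=0");
}

setInterval(function () { if (pending > 0) refreshView(); }, INTERVAL);

readline.createInterface({ input: process.stdin }).on("line", function () {
  if (++pending >= BATCH) refreshView();
});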

To get a list of all views in a database, you can do a GET /db/_all_docs?startkey=_design/&endkey=_design/ZZZZ (we will have a /db/_all_design_docs view to make the ZZZZ-hack go away).

That should solve your problem.

Yes, such a daemon should be shipped with CouchDB, but we haven't gotten around to working on the deployment infrastructure yet. Any contributions to this are very welcome. I think the developers' language of choice for helper scripts is Python, but any will do; use whatever suits you best.
 

On your local machine, set up an ssh tunnel to your server and tell it to forward requests on local port 5984 to the remote server's port 5984:

$ ssh -L5984:127.0.0.1:5984 ssh.example.com

Now you can connect to the remote CouchDB through http://localhost:5984/

It would be quite hard to give out any numbers that make much sense. From an architectural point of view, a CouchDB view is much like a (multi-column) index on a table in an RDBMS that just performs a quick look-up, so this should theoretically be pretty quick.

The major advantage of the architecture is, however, that it is designed for high traffic. No locking occurs in the storage module (MVCC and all that) allowing any number of parallel readers as well as serialized writes. With replication, you can even set up multiple machines for a horizontal scale-out and data partitioning (in the future) will let you cope with huge volumes of data. (See slide 13 of Jan Lehnardt's essay for more on the storage module or the whole post for detailed info in general). 

A few possible reasons:

1) Your reduce function is not reducing the input data to a small enough output. See Introduction_to_CouchDB_views#reduce_functions for more details.

2) If you have a lot of documents, or lots of large documents (going into the millions and gigabytes), the first time a view index is built simply takes as long as it needs to run through all of the documents.

3) If you use the emit() function in your view with doc as the second parameter, you effectively copy your entire dataset into the view index, which takes a lot of time. Consider rewriting your emit() call as emit(key, null); and querying the view with the ?include_docs=true parameter to get each document's data with the view, without having to copy the data into the view index (see the sketch after this list).

4) You are using Erlang release R11B (or 5.5.x). Update to at least R12B-3 (or 5.6.3). 
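
The emit(key, null) pattern from point 3, sketched out (the doc type and design-doc path are hypothetical):

function (doc) {
  if (doc.type == 'product') {
    emit(doc.name, null);  // index only the key, not the document body
  }
}

// Then pull the full documents in at query time instead:
//   GET /shop/_design/catalog/_view/by_name?include_docs=true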
