Jumping from one processor to two creates a world of parallel-execution issues; likewise, running two CouchDB servers creates consistency and locking issues for the app server using the datastore. Document datastores are now in use at real-world, large-scale websites - Foursquare on MongoDB, for example.
The goal is simple: consistency across more than one server - a webapp with multi-site redundancy. Each server runs some number of app servers and some number of document stores - usually a few app servers and one store. That sets up easily, even on a low-cost machine. How should a webapp be constructed so that I can start extra servers at will? Note that the servers may only have Internet-speed connections to each other rather than sitting in the same data center. The app servers can run independently of one another, with session data kept in a browser cookie; datastore consistency between servers is the key problem.
Assume two sites, A and B, with one app server and one datastore per site. Round-robin DNS is set up to distribute requests between the servers. When one datastore receives an update, it applies the update locally and distributes it to all other servers as fast as it can.
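In CouchDB, that fan-out can be wired up with continuous replication via the `_replicate` endpoint. A minimal sketch in Python - the site URLs and database name are hypothetical, and only the request body is built here; each site would POST it to its own local CouchDB for each peer:

```python
import json

def replication_doc(source, target, continuous=True):
    """Build the JSON body CouchDB expects at POST /_replicate.

    With continuous=True, CouchDB keeps pushing changes from source
    to target as they occur - the "distribute that update as fast
    as it can" behavior described above.
    """
    return json.dumps({"source": source, "target": target,
                       "continuous": continuous})

# Hypothetical site URLs; site A would POST this body to its own
# http://localhost:5984/_replicate, and symmetrically for site B.
body = replication_doc("http://site-a.example.com:5984/appdb",
                       "http://site-b.example.com:5984/appdb")
```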
A user requests the user profile page. Server A serves the page with user data retrieved from its datastore. The user changes the street address in the browser and presses save. The POST goes to server B, and the result is a redirect to the user profile page. That profile page is served by server A. Has the user record on server A been updated to reflect the change? Maybe. It will eventually become consistent, but we don't know when. Sometimes that matters, sometimes it doesn't.
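The stale-read window shows up clearly in a toy model - two in-memory stores standing in for the sites, with replication modeled as an explicit sync step (all names here are illustrative, not CouchDB's machinery):

```python
# Toy model of the two-site scenario: each site has its own store,
# and updates only propagate to peers when sync() runs.
class Site:
    def __init__(self):
        self.store = {}
        self.outbox = []            # updates not yet sent to peers

    def write(self, key, value):
        self.store[key] = value
        self.outbox.append((key, value))

    def read(self, key):
        return self.store.get(key)

def sync(src, dst):
    """Deliver src's pending updates to dst (replication catching up)."""
    for key, value in src.outbox:
        dst.store[key] = value
    src.outbox.clear()

a, b = Site(), Site()
a.write("user:1", {"street": "Old St"})
sync(a, b)                                # both sites now agree

b.write("user:1", {"street": "New St"})   # the POST lands on site B
stale = a.read("user:1")                  # the redirect reads from site A
sync(b, a)                                # replication catches up
fresh = a.read("user:1")
# stale still shows "Old St"; only after the sync does A show "New St"
```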
A more difficult example: an update goes to server B while a delete goes to server A. By the time server B communicates the update to server A, the record no longer exists. These are exactly the conflicts dealt with by distributed version control systems such as git.
The best research I have come across on keeping separate datastores in sync is Amazon's Dynamo project. Using three consistency controls - N, R, and W - different mixes of speed, durability, and consistency can be emphasized in the storage system.
The reason I am interested in CouchDB above other document storage projects is its single-mindedness about map/reduce. One of the shockers about Couch is that it does not map/reduce across multiple nodes on its own; a separate project, Lounge - which doesn't seem to have much momentum - exists for that.
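For concreteness, a CouchDB view is a pair of map/reduce functions stored in a design document. A minimal sketch, built here as a Python dict - the document fields and view name are hypothetical:

```python
import json

# CouchDB views live in a design document; the map function is
# JavaScript run by the server, and "_count" is a built-in reducer.
# This toy view counts user documents per city. Note the map/reduce
# runs within a single node - the limitation Lounge set out to fix.
design_doc = {
    "_id": "_design/users",
    "views": {
        "by_city": {
            "map": "function(doc) { if (doc.city) emit(doc.city, 1); }",
            "reduce": "_count",
        }
    },
}
body = json.dumps(design_doc)
# PUT body to http://localhost:5984/appdb/_design/users to install it,
# then GET .../appdb/_design/users/_view/by_city?group=true to query.
```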
Today I learned about BigCouch. I'm not sure if it's a significant change to the CouchDB code or more of a front-end helper (I suspect the former). It takes Dynamo-style storage controls and applies them to managing multiple CouchDB processes. Awesome.
From the BigCouch README with some rewording of my own:
- Q - sharding constant: the number of shards a database is split into, specified during database creation.
- N - replication constant. N defaults to 3 but can vary per database, just as Q can. Document updates are written to N different nodes.
- R - read quorum constant. A read request is sent to the N nodes that store the particular document, and the system returns the document to the requesting client once R successful reads have come back and agree on the revision. R defaults to 2. Lower R values generally mean faster reads at the expense of consistency; higher R values mean slower reads but more consistent, agreed-upon values.
- W - write quorum constant. When writing the N copies, the datastore responds to the writing client after W successful writes have completed; the remaining N-W writes are still attempted in the background, but the client receives a 201 Created status and can resume execution. W defaults to 2. Lower W values mean more write throughput; higher W values mean more data durability.
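The N/R/W interplay above can be sketched with three in-memory replicas - a toy model under my own assumptions, not BigCouch's actual machinery:

```python
# Toy Dynamo-style quorum: dicts stand in for nodes, revisions are
# plain integers, and node failures are modeled by an `alive` list.
N, R, W = 3, 2, 2

def write(replicas, key, value, rev, alive):
    """Attempt all N copies; the client is acked once W succeed.
    (BigCouch keeps retrying the stragglers in the background.)"""
    acks = 0
    for rep, up in zip(replicas, alive):
        if up:
            rep[key] = (rev, value)
            acks += 1
    return acks >= W          # True means the client saw 201 Created

def read(replicas, key, alive):
    """Ask all live replicas; succeed once R copies agree."""
    answers = [rep.get(key) for rep, up in zip(replicas, alive) if up]
    for ans in answers:
        if ans is not None and answers.count(ans) >= R:
            return ans
    return None               # read quorum not reached

replicas = [{}, {}, {}]
# Node 1 is down during the write: 2 acks still satisfy W, so the
# client unblocks even though one copy is missing.
ok = write(replicas, "doc1", "123 New St", rev=1, alive=[True, False, True])
# Later, all nodes answer a read; node 1 has no copy, but the other
# two agree on revision 1, satisfying R.
got = read(replicas, "doc1", alive=[True, True, True])
```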
Immediate consistency is ideal, and an RDBMS guarantees it. But I believe there are advantages to knowing where eventual consistency is good enough. With the address change, the next read from that user should show the updated value, since the user knows a change should have taken place; everyone else is happy with eventual consistency. This is app-specific, of course. For a social media site, let's say a few minutes is enough time for the system to sync up. I imagine interesting dashboard metrics could be built on how long it takes data to achieve consistency on all nodes.
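One common way to give the writing user that read-your-writes experience on top of eventual consistency is to pin their reads to the node that took the write until replication has plausibly caught up. A sketch, with hypothetical node names and an assumed worst-case sync window:

```python
import time

SYNC_WINDOW = 120.0   # assumed worst-case replication lag, seconds

def choose_read_node(session, default_node):
    """Route reads for this user.

    After a write, the app records which node took it (e.g. in the
    session cookie). Reads inside the sync window go back to that
    node, so the user sees their own update; everyone else's reads
    can go anywhere.
    """
    last = session.get("last_write")
    if last and time.time() - last["at"] < SYNC_WINDOW:
        return last["node"]        # pin to the node that saw the write
    return default_node            # otherwise any node will do

# The user just saved their address via site B; their next read is
# routed back to site B instead of wherever round-robin DNS points.
session = {"last_write": {"node": "site-b", "at": time.time()}}
node = choose_read_node(session, default_node="site-a")
```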