Tuesday, March 22, 2011

Distributed systems and dynamic typing

There's a very good blog post by Robert Harper, a CMU CS professor, called "Dynamic languages are static languages." It's very enlightening and I strongly encourage you to read it. If I understand it correctly, which I freely admit I may not, the general idea is that dynamic languages can be thought of as static languages that have a single all-encompassing type. In that regard, dynamic languages are a proper subset of static languages. If you think I misinterpreted his post and I'm confused, please flame me in the comments.

In a previous post, called "Parallelism is not concurrency", he opines on a pet peeve of mine, namely how the terms parallelism and concurrency are nonchalantly and incorrectly interchanged. Parallelism applies to deterministic computations in which the same sort of operation is carried out across similar data at the same time. Some examples of parallel operations include rendering 3D scenes on GPUs, or encoding/decoding blocks of compressed video. Concurrency, on the other hand, refers to the nondeterministic manner in which distributed systems operate, particularly ones where CPUs are separated by an unreliable network which can lose connectivity at any time.

In a third blog post, "Teaching FP to freshmen", Robert says he'll be teaching Standard ML to freshmen, and further argues he won't be teaching Object Oriented Programming because it's "both anti-modular and anti-parallel by its very nature, and hence unsuitable for a modern CS curriculum."

Three very interesting blog posts by the same person, clearly a well-educated computer science professor who knows more than I do. I mean, I like programming languages and I'm working on my own dynamically typed functional programming language, but Robert Harper wrote a book, one I certainly couldn't write. So who am I to judge?

Parallelism and Concurrency are Both Important

What I see missing from Robert Harper's writing is any attention paid to concurrency. He pays intense attention to parallelism, recognizes parallelism as important for the future, and strongly advocates functional languages as a great way to address the problems of parallelism. I have absolutely no bone to pick with him here and wish him great luck with his courses at CMU which address this problem domain.

However, my interests lie primarily in the realm of concurrent computing, and particularly in distributed computing. There I find dynamic languages especially important, and I think Robert Harper's article on static vs. dynamic languages omits some of the advantages that make dynamic languages work better in distributed systems.

This is a weighty subject, and one in which I don't think my own ideas alone make a particularly cogent argument. For some opening remarks, I will defer to Joe Armstrong, creator of the Erlang programming language, and his opinion from the book "Coders at Work" by Peter Seibel. As a bit of context to this quote, Joe is discussing the potential advantages that a static type system might confer upon Erlang. But then he gets to the disadvantages...
...the static type people say, "Well, we really rather like the benefits of dynamic types when we're marshaling data structures." We can't send an arbitrary program down a wire and reconstruct it at the other end because we need to know the type. And we have—Cardelli called it a system that's permanently inconsistent. We have systems that are growing and changing all the time, where the parts may be temporarily inconsistent. And as I change the code in a system, it's not atomic. Some of the nodes change, others don't. They talk to each other—at certain times they're consistent. At other times—when we go over a communication boundary—do we trust that the boundary is correct?

Type Systems and the CAP Theorem

There are two particularly sticky problems when it comes to the use of types in distributed systems. The first is the issue of serialization, or marshaling, of particular states; the second, universal agreement on what the types are, I'll get to below. One way or another, serialization is a solvable problem, both for statically typed and dynamically typed languages. I really don't want to delve too deep into this issue as it distracts from the larger point I'm trying to make, but in general, I think it's an easier problem to solve in dynamic type systems. I would also note that serialization formats which vomit their types all over the protocol and the codebase are outgrowths of static type systems. I'm looking at you, CORBA, SOAP, Protocol Buffers, and Thrift. On the flip side, formats which choose a minimal, semi-universal type system, such as JSON, BSON, BERT, and MsgPack, are all outgrowths of dynamic type systems. If I have a point to make here, it's that these formats are outgrowths of two very different ways of thinking about the same problem.

Marshaling is still a very important topic in distributed systems. Erlang largely abstracts this problem away from the programmer, allowing processes on distributed nodes to exchange Erlang "terms" in the exact same way they would exchange messages with a process on the same host. Serializing the data, transmitting it over the network, receiving it via TCP, decoding it, and delivering it to the appropriate Erlang process is completely transparent to the programmer. This is an extraordinarily powerful abstraction.
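To make this concrete, here's a minimal sketch; the kv_store process name and the node name are made up for illustration. Sending a message to a process on a remote node looks exactly like sending one to a local process:

    %% Sending a message to a locally registered process:
    kv_store ! {store, <<"name">>, <<"alice">>}.

    %% Sending the same message to a process registered on a remote node.
    %% Serialization, transmission over TCP, and decoding all happen
    %% transparently behind this one expression:
    {kv_store, 'storage@other_host'} ! {store, <<"name">>, <<"alice">>}.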

While statically typed languages could marshal data into a minimalistic JSON-like type system, they typically don't. Instead, statically typed languages generally prefer to vomit their types all over the protocol. The boilerplate code needed to marshal/unmarshal particular types can be generated automatically from a declaration of the types and methods of a particular protocol, such as the WSDL files used by SOAP. Again, users of statically typed languages could reduce the state of particular entities in their system to something that fits a minimalistic type system, but for static languages this is still a manual process, or one which requires code generation. In a language like Erlang, which is built from the ground up to be distributed, dynamic, and organized around a minimalistic type system, serialization and deserialization happen completely automatically.
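For instance, any Erlang term can be serialized with the built-in term_to_binary/1 and reconstructed with binary_to_term/1; no schema, IDL, or generated code is involved. The user tuple here is just an illustrative example:

    %% Serialize an arbitrary term to Erlang's external term format...
    Bin = term_to_binary({user, <<"alice">>, [{email, <<"alice@example.com">>}]}),

    %% ...and reconstruct it on the receiving side:
    {user, Name, Props} = binary_to_term(Bin).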

Why is this important in a distributed system? Because, to paraphrase Joe Armstrong, distributed systems are messy. Imagine an Erlang-like distributed system that's statically typed. In order for such a system to work effectively, all nodes must have the exact same code loaded, and therefore universal consensus on what the types in the system are. This has huge implications for the theoretical properties of such a system: in order for a distributed system to agree on the nature of all types, it must be consistent.

However, if you're familiar with the CAP theorem, you may recognize the inherent problem this causes. The CAP theorem gives you three options: a consistent fault-tolerant system, a highly available fault-tolerant system, or a consistent, highly available system which breaks at the first sign of a fault. Only the first and the last provide the consistency needed to ensure universal agreement on the types in the system, such that automatic marshaling/unmarshaling of static types is even possible. In a distributed system, you must either give up universal agreement on the types or sacrifice availability.

To quote Joe again, distributed systems are "growing and changing all the time," and their parts "may be temporarily inconsistent." While there's no guarantee that a distributed system built around a dynamic type system will work, inconsistent statically typed systems with disagreements about types are guaranteed not to work. Dynamic systems not only offer the possibility that your system continues to function with different versions of the code loaded at the same time, but also give the programmer the ability to plan for this contingency and mitigate it. The result may be errors, or it may work; incompatible type definitions are guaranteed to produce errors. In a distributed environment, dynamic type systems provide extensibility, whereas static type systems actively preclude it.
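As a sketch of what planning for that contingency can look like (the message shapes and the do_store function are hypothetical), a process in a dynamic language can pattern-match the message versions it knows about and explicitly handle anything unrecognized, rather than dying on the first message from a newer node:

    loop(State) ->
        receive
            %% v1 message, as sent by older nodes:
            {store, Key, Value} ->
                loop(do_store(Key, Value, State));
            %% v2 message, as sent by newer nodes; the extra options are
            %% simply ignored by this version of the code:
            {store, Key, Value, _Opts} ->
                loop(do_store(Key, Value, State));
            %% Anything unrecognized: log it and keep running instead of crashing:
            Unknown ->
                error_logger:warning_msg("unknown message: ~p~n", [Unknown]),
                loop(State)
        end.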

Something I've seen time and time again in systems like SOAP, and even Thrift and Protocol Buffers, is programmers attempting to escape the constraints of the type system, almost universally via ad hoc ways of storing key/value pairs. One SOAP API I'm working with now provides "Maps" with "Entry" tags that have a key attribute and an associated value. Another implementation provides an array of "NameValuePair" objects. These solutions are ugly, but in my opinion, their intentions are not. These are people seeking to extend running systems without completely redefining the protocol. That's very much a practical concern.

Distributed Applications Must Be Flexible

In order for distributed programming to work effectively, nodes need to be able to call functions on each other without programmers writing custom marshaling/unmarshaling code for each type. The marshaled data needs to be extensible, so that nodes running different versions of the code can still talk to each other in a forwards- and backwards-compatible manner.
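One way to get that kind of extensibility, sketched here with hypothetical field names, is to marshal entities as key/value pairs rather than fixed-position fields: old code reads only the keys it knows about, and unknown keys pass through harmlessly.

    %% A "user" marshaled as a property list rather than a fixed-arity tuple:
    User = [{name, <<"alice">>}, {email, <<"alice@example.com">>}],

    %% Old code reads only the keys it understands:
    Name = proplists:get_value(name, User),

    %% New code can add fields without breaking older readers:
    User2 = [{avatar_url, <<"http://example.com/a.png">>} | User].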

Protocols will change over time. Older versions of the code need to work with newer versions of the protocol, and vice versa: older versions of the protocol need to work with newer code. Nodes should be upgraded as practicality dictates. Perhaps your system administrator begins an upgrade and you lose access to a datacenter, because a janitor at your upstream ISP spilled a bucket of mopwater all over their datacenter's raised floor and caused a huge electrical disaster. Now your distributed application is running two different versions of the code, and it's suffered a network partition. Does that mean it should just break when the partition heals?

Erlang has shown us it's possible to recover from these kinds of conditions. Even when we can't change code atomically across the entirety of a distributed application, it should still continue to operate without breaking. Distributed systems should be able to grow and change all the time without rigid boundaries.

Tuesday, March 15, 2011

Relativity for Programmers (A.K.A. Oh CAP!!!)

When you first start to learn physics you typically learn Newtonian Mechanics. Objects have inertia. Force is mass times acceleration. For every action there is an equal and opposite reaction. These all seem straightforward.

Albert Einstein turned the world on its head. Everything is relative. Time is relative. Simultaneity is relative. There is no universal clock.

The same thing happened in the computer science world, not with a bang or the attention that was given to relativistic physics, but with only a small crowd paying attention. The CAP theorem has implications for computer scientists every bit as profound as relativity's were for physicists, yet hardly anyone has noticed.

Computer scientists may dream of a fully consistent database distributed across a network, capable of two-phase commit and typical consistency guarantees. Not many think about the implications of such a database.

Imagine you had your database split across two datacenters. Let's call them Foo and Bar. While at one point in time Foo and Bar could talk, now they can't. Perhaps a clumsy janitor spilled his mop bucket and destroyed all of the routing hardware between your datacenters. Some customers are accessing Foo and some are accessing Bar. If they perform a read, they'll get data that's possibly stale. But if they want to perform a write, what happens?

CAP Theorem

The CAP theorem is possibly the most important property of distributed systems you can understand. It's been motherfucking proven by computer scientists, bitches! So what does it state? The core idea is that consistency which is both instantaneous and global is impossible, except on a single node. As soon as you have more than one node storing data for you, you have to sacrifice either availability or consistency. Much like how, in relativistic physics, you can only infer the state of the universe from the old light arriving through your past light cone, nodes in a distributed system can only infer each other's state via the protocols they communicate with. Per the CAP theorem, these protocols fall into one of three forms:
  • CP: we have masters you have to listen to, bitch slaves. You can try to change the cluster state but all changes have to go through the masters, and if your master is split, you can't modify any state, you bitch submissive node.
  • AP: sure, we'll accept reads and writes. We'll try to eventually resolve any conflicts in the event of a netsplit. If we're cool we'll let you specify your own conflict resolution mechanism in the event of conflicting writes, otherwise we'll just do last writer wins.
  • CA: the network broke? Whoa. I was totally not expecting that. I was just happily doing bongrips until you barged in. I'm so high I don't even know what's going on.
If you're writing a datastore you consider to be "CA", you're doing it wrong. Fortunately, most datastores that claim "CA" properties are actually CP: in the event of a network partition, they sacrifice availability. Reads from the cluster hopefully still work, but writes won't; in a consistent system, writes during a partition should error or time out. Any system that behaves this way is in fact "CP": it provides consistency, but during a netsplit you lose availability, at least for writes. Ideally such systems still serve stale reads.

Calling your system CA is an admission that when a network partition occurs, and writes give each partition a separate view of the world, your system has no mechanism for reconciling those conflicting states once the netsplit resolves, and thus your datastore is just fucked. Short of a system administrator picking whichever view of the world they consider best and blowing away the state on the other nodes, the system state is inconsistent and irresolvable.

So what are the alternatives which are partition tolerant and don't break in the event that the janitor spills mopwater on the router that routes packets between your datacenters?

Option #1: sacrifice consistency for availability

This option lets clients read and write from any partition. When connectivity between partitions is restored, there's a plan for resolving all of the writes into a consistent world state. The simplest plan is last writer wins, which is used by Cassandra. Whoever was the last one to alter the cluster state provided the definitive cluster state, simply by virtue of writing last.
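A minimal sketch of last-writer-wins resolution (the timestamp tagging here is illustrative): every write carries a timestamp, and when partitions sync back up, the later write wins.

    %% Resolve two conflicting versions of a value, each tagged with the
    %% timestamp of its write. The later write simply wins:
    resolve({Value1, Time1}, {_Value2, Time2}) when Time1 >= Time2 ->
        {Value1, Time1};
    resolve(_Older, Newer) ->
        Newer.

So resolve({beach, 1}, {cactus, 2}) yields {cactus, 2}, whatever the client on the other partition thought it was looking at.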

In an eventual consistency model, clients may have inconsistent views of the world. This is unfortunately a fact of life. State may unexpectedly change from a particular client's point of view, because it was modified on a different partition and the partitions managed to sync up. The road ahead of you may be a beach, but after syncing up with another part of the world, it may turn into a cactus. Such is life.

Option #2: sacrifice availability for consistency

Say you have a master/slave data store, and someone writes to a slave. Wat do? Slave can't talk to master. What's the most obvious option? Error.

This is a "CP" system. Slaves can serve stale state, but if you want to change the state, you have to talk to the master for that state. State can be partitioned/sharded/durr between multiple masters, but if the master for the particular state a slave wants to modify isn't available, sorry, that's a paddlin', or rather an error.
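Here's a sketch of that from a slave's point of view; the globally registered master name and the {write, ...} call are hypothetical. Reads are served locally from possibly stale state, while writes are forwarded to the master and fail outright if it's unreachable:

    %% Reads come from local, possibly stale, state:
    read(Key, LocalState) ->
        proplists:get_value(Key, LocalState).

    %% Writes must go through the globally registered master. If the master
    %% is on the other side of a netsplit, availability is sacrificed:
    write(Key, Value) ->
        case global:whereis_name(master) of
            undefined ->
                {error, master_unreachable};
            Master ->
                gen_server:call(Master, {write, Key, Value})
        end.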

Option #3: highly available, and consistent at the partition level, netsplit fixed wat???

The only other option, the "CA" option, is that each partition pretends its own writes are consistent, but when the netsplit resolves there's no consistent way to reconcile the diverged state. That is to say, during a netsplit each partition is highly available and accepts writes, but as soon as the netsplit resolves, the cluster state is broken and irreconcilable.

In a quick survey of NoSQL datastores, Membase, MongoDB, and the hypothetical Redis Cluster all claim to be "CA". Are they really? Or do they just sacrifice availability in the event of a netsplit? If that's true, they're actually "CP", not "CA".

You really don't want to sacrifice partition tolerance. In fact, some might go as far as to say you CAN'T sacrifice partition tolerance. If you think your system is "CA", you probably don't understand the CAP theorem and its implications.

CAP is the law, and AP or CP are the only reasonable options 

If you're running an app on a distributed network, things are going to go wrong. You might idealistically claim that you can provide both consistency and availability, sacrificing partition tolerance. You can't.

Can you pick "CA"? Sure, if you're content with losing all your data the moment the network has any errors. That's probably not what you mean. If you're building software for a fault-tolerant world, it's probably going to be either:
  • Consistent but unavailable during netsplits (CP)
  • "Eventually" consistent but available during netsplits (AP)
The only other option is "broken after netsplits". The best option is probably AP: this is the approach pioneered by the Amazon Dynamo system which powered Amazon's shopping cart, and adopted by open source systems like Riak and Cassandra (copycats, perhaps, but awesome ones).