# Async Programming

11 thoughts
last posted Jan. 9, 2013, 12:02 p.m.

When it comes to software, "async" is both a programming model and an execution model.

As posted on Twitter 1, 2, 3


When people get into Twisted vs gevent arguments in the Python world, it's almost inevitably a matter of the Twisted fan arguing from an async-as-programming-model point of view, while the gevent fan is arguing from the view of async-as-execution-model.


To clarify what I mean by that, the core reason people are interested in any flavour of async programming is that typical modern operating systems can handle thousands (or tens of thousands) of concurrent IO operations in a single process, but usually only hundreds of threads.

For IO bound tasks, the challenge then is to ensure your application is bound by the concurrent number of IO operations, rather than by the number of available threads.


With a synchronous execution model, the OS level thread itself stops when an operation needs to wait for IO activity. Each concurrent IO operation consumes the resources of an entire OS thread, including the memory for its stack.

The power of an asynchronous execution model is that the individual IO operations are decoupled from OS level threads. So a single thread can dispatch multiple IO operations in parallel, and then receive notifications when the operations are complete. Most importantly, the record keeping for each concurrent IO operation is much less resource intensive than that for entire threads.
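The decoupling described above can be sketched with the stdlib selectors module: one thread registers several sockets and then waits on all of them with a single blocking call (the socket pairs here just simulate concurrent connections for illustration).

```python
# A minimal sketch of the async execution model: one thread uses the
# stdlib `selectors` module to wait on several sockets at once, instead
# of dedicating an OS thread (and its stack) to each connection.
import selectors
import socket

sel = selectors.DefaultSelector()

# Simulate three concurrent "connections" with local socket pairs.
pairs = [socket.socketpair() for _ in range(3)]
for i, (reader, writer) in enumerate(pairs):
    reader.setblocking(False)
    # The per-operation record keeping is just this registration entry,
    # far cheaper than a full thread stack.
    sel.register(reader, selectors.EVENT_READ, data=i)
    writer.send(b"ping %d" % i)

results = []
while len(results) < len(pairs):
    # One blocking call waits for *any* of the operations to complete.
    for key, _ in sel.select():
        results.append((key.data, key.fileobj.recv(1024)))

for reader, writer in pairs:
    reader.close()
    writer.close()

print(sorted(results))
```

The notification loop is the essence of every event loop implementation: the bookkeeping per pending operation is a registration entry, not a thread.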


At the programming model level, the two relevant competitors are preemptive multi-threading and cooperative multi-threading.

The concurrent.futures API in Python 3.2+ is built around preemptive multi-threading. There are no explicit switch points: you write your code assuming blocking calls, and if data is shared between threads, you need to use explicit locking to preserve data integrity.
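The preemptive model above can be sketched with concurrent.futures directly: the worker code is ordinary blocking-style Python, and because a thread switch can happen at any point, the shared counter needs an explicit lock.

```python
# A minimal sketch of the preemptive model: blocking-style code submitted
# to concurrent.futures, with an explicit lock protecting shared state
# because a thread switch can happen at any point.
import threading
from concurrent.futures import ThreadPoolExecutor

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write could interleave with
        # another thread's, losing updates.
        with lock:
            counter += 1

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, 10000) for _ in range(4)]
    for f in futures:
        f.result()  # re-raises any exception from the worker

print(counter)  # 40000
```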


Twisted, and other event loop based software, is built around the idea of cooperative multi-threading. As a programming model, cooperative multi-threading requires explicit switching points - between switching points, you can assume exclusive access to any data shared only with other cooperating threads.

The gain is that a lot of the complexity of locking from the preemptive model can simply go away. The downside is that software written assuming the preemptive model becomes harder to use (you have to spin it out to a separate OS level thread).
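The "locking complexity goes away" point can be illustrated with modern asyncio (the eventual stdlib descendant of tulip, used here only because it's readily runnable): between the explicitly marked suspension points, each coroutine has exclusive access to shared state, so the counter from the previous example needs no lock at all.

```python
# A minimal sketch of the cooperative model: `await` expressions are the
# only places a switch can occur, so shared state needs no lock as long
# as it isn't touched across a suspension point.
import asyncio

counter = 0

async def work(n):
    global counter
    for _ in range(n):
        # Between awaits this coroutine runs uninterrupted, so this
        # read-modify-write is safe without a lock.
        counter += 1
        await asyncio.sleep(0)  # explicitly marked switch point

async def main():
    await asyncio.gather(*(work(1000) for _ in range(4)))

asyncio.run(main())
print(counter)  # 4000
```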


The purpose of gevent is to bring the benefits of the async execution model to the preemptive multi-threading programming model. It achieves this goal admirably. However, there's no low level standard API for the C stack manipulation required to correctly support extension modules, so gevent is unlikely to ever become officially endorsed as part of the standard library. As a practical matter it's a hugely important piece of software (like Stackless Python before it), but that's a separate question from whether or not it's appropriate for stdlib inclusion.


The purpose of tulip, by contrast, is to bring a standard model for cooperative multi-threading into the Python standard library specification. Like functional programming, cooperative multi-threading is an important tool in a programmer's design toolkit, and objecting to the fact that it doesn't solve every problem (like being able to use a preemptive threading programming model with an async execution model, which is the problem gevent solves) is like complaining that a purely functional program doesn't cope well with mutable system state.

The key purpose of tulip is to allow the transport and protocol implementations to be shared between different event loop implementations, as well as to allow multiple event loops to cooperate within a single process.
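The transport/protocol separation can be sketched with modern asyncio (again standing in for tulip here): the protocol class only implements the event callbacks, while the event loop supplies whatever transport it likes, which is what lets the same protocol code run on different loop implementations.

```python
# A minimal sketch of the transport/protocol split: the Echo protocol
# never creates a socket itself; the event loop hands it a transport.
import asyncio

class Echo(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport  # supplied by the event loop

    def data_received(self, data):
        self.transport.write(data)  # echo back whatever arrives
        self.transport.close()

async def main():
    loop = asyncio.get_running_loop()
    # Port 0 asks the OS for any free port.
    server = await loop.create_server(Echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello")
    await writer.drain()
    reply = await reader.read()  # read until the server closes
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(main())
print(reply)  # b'hello'
```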


What, then, of the yield from based coroutine part of tulip? One of the core problems with callback-based cooperative multi-threading development is that it doesn't fit the way people think very well. Preemptive multi-threading piggybacks pretty well on our intuitive ideas of autonomous agents doing their own thing in parallel with each other. Event-driven programming with callbacks, on the other hand, can make it hard to see how a single operation flows from beginning to end.

Twisted's inlineCallbacks decorator uses generators to adapt between a callback based event loop and code that exhibits linear control flow with clearly marked points for possible suspension. tulip's coroutines serve exactly the same purpose - rather than dealing with callbacks explicitly, it's possible to write code that shows the end-to-end handling of the operation, while still having local markers indicating where the operation may be suspended.
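The contrast can be made concrete with a hypothetical "fetch then transform" operation written both ways (the names are illustrative, and modern asyncio stands in for tulip): in the callback version the operation's story is scattered across functions, while the coroutine version reads top to bottom with the suspension point explicitly marked.

```python
import asyncio

# Callback style: the control flow of one operation is spread across
# several functions, so its beginning-to-end story is hard to follow.
def fetch_cb(loop, on_done):
    def step_one():
        loop.call_soon(step_two, "raw data")
    def step_two(data):
        on_done(data.upper())
    loop.call_soon(step_one)

# Coroutine style: the same operation reads linearly, with `await`
# marking the only point where it may be suspended.
async def fetch_coro():
    await asyncio.sleep(0)  # explicitly marked suspension point
    data = "raw data"
    return data.upper()

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    fetch_cb(loop, fut.set_result)
    cb_result = await fut
    coro_result = await fetch_coro()
    return cb_result, coro_result

results = asyncio.run(main())
print(results)  # ('RAW DATA', 'RAW DATA')
```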


Cooperative multi-threading with explicitly marked suspension points is always going to be antithetical to designs that want to permit implicit IO operations (like lazily making a database query when a particular attribute is accessed, as is done by many Python ORMs). That's unavoidable, and why a tool like gevent will remain valuable even in a world with a standard Python event loop interface.


This turned into a python-notes essay