

Part 1: In Which We Begin at the Beginning

Preface

Someone recently posted to the Twisted mailing list asking for something like the “Twisted introduction for people on a deadline”. Full disclosure: this isn’t it. On the spectrum of introductions to Twisted and asynchronous programming in Python, it may be on the exact opposite end. So if you don’t have any time, or any patience, this isn’t the introduction you are looking for.

However, I also believe that if you are new to asynchronous programming, a quick introduction is simply not possible, at least if you are not a genius. I’ve used Twisted successfully for a number of years, and having thought about how I initially learned it (slowly), and what I found difficult, I’ve come to the conclusion that much of the challenge does not stem from Twisted per se, but rather from the acquisition of the “mental model” required to write and understand asynchronous code. Most of the Twisted source code is clear and well written, and the online documentation is good, at least by the standards of most free software. But without that mental model, reading the Twisted codebase, or code that uses Twisted, or even much of the documentation, will result in confusion and headache.

So the first parts of this introduction are designed to help you acquire that model and only later on will we introduce the features of Twisted. In fact, we will start without using Twisted at all, instead using simple Python programs to illustrate how an asynchronous system works. And once we get into Twisted, we will begin with very low-level aspects that you would not normally use in day-to-day programming. Twisted is a highly abstracted system and this gives you tremendous leverage when you use it to solve problems. But when you are learning Twisted, and particularly when you are trying to understand how Twisted actually works, the many levels of abstraction can cause trouble. So we will go from the inside out, starting with the basics.

And once you have the mental model in place, I think you will find reading the Twisted documentation, or just browsing the source code, to be much easier. So let’s begin.

The Models

We will start by reviewing two (hopefully) familiar models in order to contrast them with the asynchronous model. By way of illustration we will imagine a program that consists of three conceptually distinct tasks which must be performed to complete the program. We will make these tasks more concrete later on, but for now we won’t say anything about them except that the program must perform them. Note I am using “task” in the non-technical sense of “something that needs to be done”.

The first model we will look at is the single-threaded synchronous model, in Figure 1 below:

Figure 1: the synchronous model

This is the simplest style of programming. Each task is performed one at a time, with one finishing completely before another is started. And if the tasks are always performed in a definite order, the implementation of a later task can assume that all earlier tasks have finished without errors, with all their output available for use — a definite simplification in logic.
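To make Figure 1 concrete, here is a minimal sketch of the synchronous model in plain Python. The three task functions are just placeholders I have made up for illustration, not anything from Twisted:

    # A sketch of the synchronous model: three placeholder tasks,
    # each run to completion before the next one begins.

    def task1():
        print('task 1 finished')

    def task2():
        print('task 2 finished')

    def task3():
        print('task 3 finished')

    # Because the tasks run in a fixed order, task3 can assume that
    # task1 and task2 have already completed without errors.
    for task in (task1, task2, task3):
        task()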

We can contrast the synchronous model with another one, the threaded model illustrated in Figure 2:

Figure 2: the threaded model

In this model, each task is performed in a separate thread of control. The threads are managed by the operating system and may, on a system with multiple processors or multiple cores, run truly concurrently, or may be interleaved together on a single processor. The point is, in the threaded model the details of execution are handled by the OS and the programmer simply thinks in terms of independent instruction streams which may run simultaneously. Although the diagram is simple, in practice threaded programs can be quite complex because of the need for threads to coordinate with one another. Thread communication and coordination is an advanced programming topic and can be difficult to get right.
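For comparison, here is a minimal sketch of the same placeholder tasks in the threaded model, using Python’s standard threading module. It is only an illustration; a real threaded program would typically also need locks or queues to coordinate access to shared state:

    import threading

    # A sketch of the threaded model: each placeholder task runs in its
    # own thread of control, scheduled by the operating system.

    def task(n):
        print('task %d finished' % n)

    threads = [threading.Thread(target=task, args=(n,)) for n in (1, 2, 3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every thread to complete

The order in which the “finished” lines appear is up to the operating system’s scheduler, which is exactly the loss of control over interleaving described above.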

Some programs implement parallelism using multiple processes instead of multiple threads. Although the programming details are different, for our purposes it is the same model as in Figure 2.

Now we can introduce the asynchronous model in Figure 3:

Figure 3: the asynchronous model

In this model, the tasks are interleaved with one another, but in a single thread of control. This is simpler than the threaded case because the programmer always knows that when one task is executing, another task is not. Although in a single-processor system a threaded program will also execute in an interleaved pattern, a programmer using threads should still think in terms of Figure 2, not Figure 3, lest the program work incorrectly when moved to a multi-processor system. But a single-threaded asynchronous system will always execute with interleaving, even on a multi-processor system.

There is another difference between the asynchronous and threaded models. In a threaded system the decision to suspend one thread and execute another is largely outside of the programmer’s control. Rather, it is under the control of the operating system, and the programmer must assume that a thread may be suspended and replaced with another at almost any time. In contrast, under the asynchronous model a task will continue to run until it explicitly relinquishes control to other tasks. This is a further simplification from the threaded case.
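One way to see what “explicitly relinquishes control” means, without any Twisted at all, is to write each task as a Python generator that yields whenever it is willing to let other tasks run, and to drive the generators with a simple round-robin loop. This is only an illustration of the idea, not how Twisted itself is implemented:

    # A sketch of cooperative task switching in one thread: each task is a
    # generator that does a little work and then yields, voluntarily giving
    # up control; a round-robin loop interleaves the tasks.

    def task(name, steps):
        for i in range(steps):
            print('%s: step %d' % (name, i + 1))
            yield  # explicitly relinquish control here

    tasks = [task('task1', 2), task('task2', 3), task('task3', 1)]

    while tasks:
        current = tasks.pop(0)
        try:
            next(current)        # run the task until its next yield
            tasks.append(current)
        except StopIteration:
            pass                 # this task has finished

Note that a task switch can only happen at a yield; between yields, each task knows that no other task is running.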

Note that it is possible to mix the asynchronous and threaded models and use both in the same system. But for most of this introduction, we will stick to “plain vanilla” asynchronous systems with one thread of control.

The Motivation

We’ve seen that the asynchronous model is simpler than the threaded one because there is a single instruction stream and tasks explicitly relinquish control instead of being suspended arbitrarily. But the asynchronous model is clearly more complex than the synchronous case. The programmer must organize each task as a sequence of smaller steps that execute intermittently. And if one task uses the output of another, the dependent task must be written to accept its input as a series of bits and pieces instead of all together. Since there is no actual parallelism, it appears from our diagrams that an asynchronous program will take just as long to execute as a synchronous one, perhaps longer, as the asynchronous program might exhibit poorer locality of reference.

So why would you choose to use the asynchronous model? There are at least two reasons. First, if one or more of the tasks are responsible for implementing an interface for a human being, then by interleaving the tasks together the system can remain responsive to user input while still performing other work in the “background”. So while the background tasks may not execute any faster, the system will be more pleasant for the person using it.

However, there is a condition under which an asynchronous system will simply outperform a synchronous one, sometimes dramatically so, in the sense of performing all of its tasks in an overall shorter time. This condition holds when tasks are forced to wait, or block, as illustrated in Figure 4:

Figure 4: blocking in a synchronous program

In the figure, the gray sections represent periods of time when a particular task is waiting (blocking) and thus cannot make any progress. Why would a task be blocked? A frequent reason is that it is waiting to perform I/O, to transfer data to or from an external device. A typical CPU can handle data transfer rates that are orders of magnitude faster than a disk or a network link is capable of sustaining. Thus, a synchronous program that is doing lots of I/O will spend much of its time blocked while a disk or network catches up. Such a synchronous program is also called a blocking program for that reason.

Notice that Figure 4, a blocking program, looks a bit like Figure 3, an asynchronous program. This is not a coincidence. The fundamental idea behind the asynchronous model is that an asynchronous program, when faced with a task that would normally block in a synchronous program, will instead execute some other task that can still make progress. So an asynchronous program only “blocks” when no task can make progress and is thus called a non-blocking program. And each switch from one task to another corresponds to the first task either finishing, or coming to a point where it would have to block. With a large number of potentially blocking tasks, an asynchronous program can outperform a synchronous one by spending less overall time waiting, while devoting a roughly equal amount of time to real work on the individual tasks.
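Part 2 builds up real examples of this, but as a rough preview, the core mechanism in plain Python is the select() call: instead of blocking on a single connection, the program waits until any of several sockets is ready and then works on whichever ones have data. The host names and port below are placeholders, not real servers:

    import socket
    import select

    # A sketch of "only block when no task can make progress": wait on
    # several connections at once and service whichever ones are ready.

    addresses = [('server-a.example.com', 10000),
                 ('server-b.example.com', 10000)]

    sockets = [socket.create_connection(addr) for addr in addresses]

    while sockets:
        # select() blocks only until at least one socket has data to read
        readable, _, _ = select.select(sockets, [], [])
        for s in readable:
            data = s.recv(1024)
            if data:
                print('read %d bytes' % len(data))
            else:
                # an empty read means the peer closed the connection
                sockets.remove(s)
                s.close()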

Compared to the synchronous model, the asynchronous model performs best when:

  1. There are a large number of tasks so there is likely always at least one task that can make progress.
  2. The tasks perform lots of I/O, causing a synchronous program to waste lots of time blocking when other tasks could be running.
  3. The tasks are largely independent from one another so there is little need for inter-task communication (and thus for one task to wait upon another).

These conditions almost perfectly characterize a typical busy network server (like a web server) in a client-server environment. Each task represents one client request with I/O in the form of receiving the request and sending the reply. And client requests (being mostly reads) are largely independent. So a network server implementation is a prime candidate for the asynchronous model and this is why Twisted is first and foremost a networking library.

Onward and Upward

This is the end of Part 1. In Part 2, we will write some network programs, both blocking and non-blocking, as simply as possible (without using Twisted), to get a feel for how an asynchronous Python program actually works.
