Back to the future with async and await in Python 3.5

Piotr Groza • Dec 05

Python 3.5 was released on September 13th, a couple of weeks ago. Among its features it introduces new syntax focused on making asynchronous code easier to write – the await expression and the async def, async with and async for statements.

Days of future past

Major Python releases take place about every year and a half (Python 3.4 was released on March 16th 2014, 3.5 on September 13th 2015). I think I first heard about the attempt to include new syntax for asynchronous programs around March this year – PEP 0492, which proposes the syntax, is dated 9th of April 2015 and was accepted at the beginning of May – very close to the feature freeze of the release and its beta and release candidate stages. Until diving a bit into the subject I thought that such a late feature submission meant the changes were probably not significant, and that the main novelty of the 3.5 release would be the type hinting facilities partially derived from the mypy project. It turns out I was not completely right – the typing module is indeed part of the 3.5 release, but the first point of the 3.5 release highlights on python.org is the enhanced coroutine support.

Being at the top of the release notes does not necessarily mean that a feature is a big change though – the other syntax features – the matrix multiplication operator and improved unpacking support – do seem like just small (but useful) additions. I imagined the new async/await changes would just introduce an alternative to the already existing coroutines (with yield/yield from) – it turns out this is also not accurate. Even though under the hood both solutions use old generators, the two forms of specifying async routines are not interchangeable – let’s take a closer look to see what the new syntax brings.

Out with the old…

Let’s take a look at how async code looks in Python 3.4. We’re going to use the asyncio standard library module (previously known as tulip, introduced in Python 3.4 and available for 3.3 from PyPI) and the aiohttp library, which exposes HTTP client and server abstractions that work well with asyncio. Here’s the code:


import pprint  
import aiohttp  
import asyncio  
import logging  
import sys  
import threading  
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(asctime)s: %(message)s")


@asyncio.coroutine
def get_body(client, url, delay):  
    response = yield from client.get(url)
    logging.info('(not really) sleeping for %s, current_thread: %s', delay, threading.current_thread())
    yield from asyncio.sleep(delay)
    logging.info('status from %s: %s, current_thread: %s', url, response.status, threading.current_thread())
    return (yield from response.read())

if __name__ == '__main__':  
    loop = asyncio.get_event_loop()
    client = aiohttp.ClientSession(loop=loop)
    futures = [
        asyncio.ensure_future(get_body(client, 'http://docs.python.org', 3), loop=loop),
        asyncio.ensure_future(get_body(client, 'http://python.org', 2), loop=loop),
        asyncio.ensure_future(get_body(client, 'http://pypi.python.org', 1), loop=loop)
    ]
    logging.info(pprint.pformat(futures))
    results = loop.run_until_complete(asyncio.wait(futures))
    logging.info(pprint.pformat(results))
    client.close()

If you’re familiar with futures, promises, event loops and similar constructs from any language, this should look familiar. The workhorse of all event loop solutions is… the event loop – which we get from asyncio. We then use it to schedule computation and operate on future objects, from which we can get results once those are available. The gist of this is to make asynchronous code look similar to synchronous code – if you take a look at get_body, it does read well – we request an HTTP resource from a URL, sleep for a bit, then output the result. If this code was synchronous though, each call to client.get and sleep would actually block – so if we were doing this in one thread, it would take 6 seconds plus network operation time; if we were to do it in separate threads, it would of course take less, but we would waste 3 threads (which is not very much, but typical applications do not end at issuing 3 HTTP requests). But take a look at the output of this script:


2015-09-27 12:25:34,277: [<Task pending coro=<get_body() running at ...>>,
 <Task pending coro=<get_body() running at ...>>,
 <Task pending coro=<get_body() running at ...>>]
2015-09-27 12:25:35,067: (not really) sleeping for 3, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:35,212: (not really) sleeping for 1, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:35,634: (not really) sleeping for 2, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:36,218: status from http://pypi.python.org: 200, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:37,641: status from http://python.org: 200, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:38,078: status from http://docs.python.org: 200, current_thread: <_MainThread(MainThread, started 6952)>  
2015-09-27 12:25:38,083: ({<Task finished coro=<get_body() done> result=b'<!doctype h...y>\n\n'>,
   <Task finished coro=<get_body() done> result=b'\n'>,
   <Task finished coro=<get_body() done> result=b'<?xml versi...  \n\n'>},
 set())

You can see that it takes about 3 seconds to execute, and all the workers are attached to the same thread. Before futures and event loops became popular, the way to cope with this was to code up a complex system of callbacks, which often ended up unreadable. Today it’s much easier – the only thing you have to do is replace every blocking call with an async version, add yield from in front of it and you’re good to go. What happens under the hood (a very simplified hood) is that the asyncio event loop gathers every yield you make, puts it into a priority queue, and wakes you up when the result you’re waiting for is available. So we get the best of both worlds – the code still looks readable, it uses fewer resources (threads – though this is actually hidden from you) and, as long as it is I/O bound, it lets you operate in parallel without much penalty (when waiting for I/O, Python is as fast as C).
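To see the mechanics behind this, here is a toy sketch (no asyncio involved; fake_io and worker are made-up names) that drives a generator-based coroutine by hand, the way a real event loop would – it collects what the coroutine yields and resumes it with a result:

```python
def fake_io():
    # a stand-in for an async operation: yield a "request" token,
    # then receive the result back via send()
    result = yield 'io-request'
    return result

def worker():
    # `yield from` transparently forwards fake_io's yield outwards
    data = yield from fake_io()
    return data.upper()

# a toy "event loop": drive the generator by hand
gen = worker()
print(next(gen))             # runs until the first yield -> io-request
try:
    gen.send('payload')      # resume with the "I/O result"
except StopIteration as stop:
    print(stop.value)        # the coroutine's return value -> PAYLOAD
```

A real loop does the same dance, except the yielded objects are futures it knows how to wait on.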

There are some hidden caveats. For example, since the code is not really run in parallel, if one of your worker functions decides to hang, not much can be done (e.g. if I made the mistake of using time.sleep instead of asyncio.sleep, I would end up suspending all the coroutines for a certain amount of time). This leads us to another problem – calling a synchronous (blocking I/O) operation from an asynchronous one will impact the whole event loop – so it is difficult to combine the sync and async worlds – and since Python is a pretty old language, it has a bunch of sync-style libraries that you can’t just start using with asyncio.
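If you really must call a blocking function, asyncio does offer an escape hatch – loop.run_in_executor runs the call in a thread pool so the event loop itself stays responsive. A minimal sketch (safe_sleep is a made-up helper name):

```python
import asyncio
import time

async def safe_sleep(loop, delay):
    # time.sleep would freeze every coroutine on this loop;
    # run_in_executor moves the blocking call to a thread pool instead
    await loop.run_in_executor(None, time.sleep, delay)
    return 'slept %s' % delay

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(safe_sleep(loop, 0.1)))  # -> slept 0.1
finally:
    loop.close()
```

This doesn’t make the blocking library asynchronous – it just quarantines it on a worker thread.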

And in with the new…

I mentioned the readability of asynchronous code previously, but I wasn’t perfectly honest with you – the get_body coroutine does look readable, but only under one condition – you know what yield from does. If this was completely new to you, I bet the yield from asyncio.sleep(delay) line does not look so obvious. The idiom of using generators and yield for expressing async operations has been present in Python for years now (e.g. in gevent and tornado), so I guess it’s imprinted in the back of my head by now. Not every Python programmer knows it though, so PEP 0492 proposed a new syntax. Let’s rewrite the previous example using it:


import pprint  
import aiohttp  
import asyncio  
import logging  
import sys  
import threading  
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(asctime)s: %(message)s")


async def get_body(client, url, delay):  
    response = await client.get(url)
    logging.info('(not really) sleeping for %s, current_thread: %s', delay, threading.current_thread())
    await asyncio.sleep(delay)
    logging.info('status from %s: %s, current_thread: %s', url, response.status, threading.current_thread())
    return await response.read()

if __name__ == '__main__':  
    loop = asyncio.get_event_loop()
    client = aiohttp.ClientSession(loop=loop)
    futures = [
        asyncio.ensure_future(get_body(client, 'http://docs.python.org', 3), loop=loop),
        asyncio.ensure_future(get_body(client, 'http://python.org', 2), loop=loop),
        asyncio.ensure_future(get_body(client, 'http://pypi.python.org', 1), loop=loop)
    ]
    logging.info(pprint.pformat(futures))
    results = loop.run_until_complete(asyncio.wait(futures))
    logging.info(pprint.pformat(results))
    client.close()

The changes are easy to spot:

  • the method is defined with async def instead of def
  • the asyncio.coroutine decorator is no longer needed. It was previously mainly used to differentiate ‘normal’ generators from async code – there’s no such need if you use async def
  • all the yield from expressions are replaced with await (including in return statements)

That’s it – no more changes needed. Before you start sprinkling async/await all over your codebase though – let’s take a look at a few more examples of mixing old and new-style async code.


async def syntax_error():  
    yield from asyncio.sleep()
    return 1

That simply won’t work – it’s a syntax error:


    yield from asyncio.sleep()
       ^
SyntaxError: 'yield from' inside async function  

How about this:


@asyncio.coroutine
def gen_coro_1():  
    await asyncio.sleep(3)
    print(3)

Nope:


    await asyncio.sleep(3)
                ^
SyntaxError: invalid syntax  

It doesn’t mean you can’t mix the old and new worlds, though. The example below works just fine:


async def new_sleep(delay):  
    await asyncio.sleep(delay)
    print('new_sleep')

@asyncio.coroutine
def gen_coro_1():  
    yield from new_sleep(3)
    print('gen_coro_1')

if __name__ == '__main__':  
    loop = asyncio.get_event_loop()
    loop.run_until_complete(gen_coro_1())

The thing you need to remember: you can’t use yield or yield from in async def functions, and you can’t use await in yield-based coroutines.
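The mix also works in the other direction – an async def coroutine can await a generator-based coroutine, as long as that coroutine is properly decorated. A sketch using the low-level types.coroutine decorator (the building block that asyncio.coroutine wraps; old_style and new_style are made-up names):

```python
import asyncio
import types

@types.coroutine  # the low-level decorator asyncio.coroutine builds on
def old_style(delay):
    yield from asyncio.sleep(delay)
    return 'old'

async def new_style():
    # an async def coroutine may await a decorated generator coroutine
    legacy = await old_style(0.01)
    return legacy + '/new'

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(new_style()))
finally:
    loop.close()
```

The decorator is what marks the generator as awaitable – drop it, and the await line raises a TypeError.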

Monkeying with loops, loopers and loopholes

That’s not all of the new syntax – there’s also async for and async with – what are those for? Well, suppose you wanted to write an async iterator – a method that can pass control back to the event loop while it’s waiting for some data, you could write code like this:


import asyncio  
import logging  
import sys  
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(asctime)s: %(message)s")

class AsyncIter:  
    def __init__(self):
        self._data = list(range(10))
        self._index = 0

    async def __aiter__(self):
        return self

    async def __anext__(self):
        while self._index < 10:
            await asyncio.sleep(1)
            self._index += 1
            return self._data[self._index-1]
        raise StopAsyncIteration


async def do_loop():  
    async for x in AsyncIter():
        logging.info(x)


if __name__ == '__main__':  
    loop = asyncio.get_event_loop()
    futures = [asyncio.ensure_future(do_loop()), asyncio.ensure_future(do_loop())]
    loop.run_until_complete(asyncio.wait(futures))

Reading the code bottom to top, there are a few new things here. First, there’s an async for loop, which can only appear in an async def function. It does what a standard loop does – except each step of the iteration can be a coroutine that also yields to the event loop. Async iteration is itself a new protocol – it consists of __aiter__ and __anext__ methods, which are meant to do the same thing as the standard iteration protocol – except __anext__ can pass control back to the loop. This change does not look as simple as the previous ones – a whole new protocol and a new statement that has no equivalent in other languages with similar syntactic features. On top of that, the protocol + statement combo is duplicated for async context managers (async with, __aenter__, __aexit__) – making the whole new syntax a larger change than might be expected. I haven’t seen async for and async with equivalents in other languages containing similar syntactic support for writing cooperatively-parallel code – so why did Python decide to do it differently? PEP 0492 tries to address some of those concerns.
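The async with side is symmetrical – a class implements awaitable __aenter__ and __aexit__ methods, and both can suspend while the resource is being set up or torn down. A minimal sketch (AsyncResource is a made-up class; the sleeps stand in for real async setup/teardown):

```python
import asyncio

class AsyncResource:
    async def __aenter__(self):
        await asyncio.sleep(0.01)   # e.g. open a connection without blocking
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0.01)   # e.g. close it, also asynchronously
        return False                # don't swallow exceptions

async def use_resource():
    async with AsyncResource() as res:
        return type(res).__name__

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(use_resource()))  # -> AsyncResource
finally:
    loop.close()
```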

The first of the main reasons for the new protocols is to make it clear where the code can be suspended – admittedly, an explicit async does a good job at that. The second is to be able to write classes supporting both sync and async iteration/context management. Also, if you think about it (or google, which is a good substitute for thinking nowadays), replacing the code above with a yield from version might not be that trivial – the best solution I came up with was an explicit for loop (which defeats the purpose of having a special protocol). Also, with PEP 0479 in place (which turns a StopIteration escaping a generator into a RuntimeError), you can’t use the StopIteration exception for signaling the end of a loop (that’s also why there’s StopAsyncIteration in the async for code). Lastly, mixing the sync and async worlds is not really a good idea in the first place (when you block on a sync call, all your other coroutines are also halted), so the change’s authors wanted to express the divide between those two approaches as explicitly as possible.
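Supporting both styles in one class is straightforward, since the protocols use separate methods. A sketch (Numbers is a made-up class; note that since Python 3.5.2 __aiter__ is expected to be a plain method, not a coroutine):

```python
import asyncio

class Numbers:
    """A container supporting both `for` and `async for`."""
    def __init__(self, data):
        self._data = data

    def __iter__(self):                  # plain iteration protocol
        return iter(self._data)

    def __aiter__(self):                 # async iteration protocol
        self._it = iter(self._data)
        return self

    async def __anext__(self):
        try:
            value = next(self._it)
        except StopIteration:
            # PEP 479: StopIteration must not leak out of a coroutine
            raise StopAsyncIteration
        await asyncio.sleep(0)           # give the event loop a chance to run
        return value

async def collect():
    out = []
    async for x in Numbers([1, 2, 3]):
        out.append(x)
    return out

loop = asyncio.new_event_loop()
print(loop.run_until_complete(collect()))   # -> [1, 2, 3]
loop.close()
```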


Roads? Where we’re going, we don’t need roads.

As you can see, the changes supporting the new syntax are not limited to syntactic sugar – and I did not cover all of them. There is also the new __await__ magic method, new abstract base classes, a new base type (coroutine), C-API changes, decorators, deprecation warnings… and probably more to come (think async lambdas, comprehensions, even the possibility of combining coroutines with generators to have asynchronous iterators defined in the form of functions). Of course most of those changes are relevant only for library and framework authors, who, while being a large group, are easily outnumbered by library users. For them, the visible scope of changes is mostly limited to the new syntax. Speaking of library users though, we cannot miss the elephant in the room – asyncio and all its satellites are limited to Python 3.3+ – and any progress in the development of new libraries and the adoption of new asynchronous solutions is strictly connected to the adoption of Python 3. Right now, there already is a sizable asyncio library set (available for example at asyncio.org) – but the packages are nowhere near as popular as Python 2-compatible solutions. Time will tell if the new async gizmos will help drive the adoption of Python 3 – unfortunately, that’s a blocking call.

