
Python Concurrency 2: Using asyncio to Handle Concurrency


Python 3 has a native coroutine library built on generators, with built-in support for asynchronous I/O: this is asyncio, which entered the standard library in Python 3.4.

The asyncio package uses event-driven coroutines to implement concurrency.

Before asyncio entered the standard library, the project was code-named 'Tulip', so when searching for information on the Internet you will often see the name of that flower.

What is the event loop?

Wikipedia says the event loop is 'a programming construct that waits for and dispatches events or messages in a program.' In essence, an event loop says: 'when A happens, execute B.' The simplest illustration is the JavaScript event loop that lives in every browser. When you click something ('when A happens'), the click is handed to the JavaScript event loop, which checks whether an onclick callback is registered to handle the click ('execute B'). If a callback is registered, it is invoked with the details of the click. The event loop is called a loop because it continuously collects events and loops over them to dispatch them to their handlers.

For Python, asyncio was added to the standard library to provide an event loop. asyncio focuses on network services, where 'when A happens' corresponds to I/O becoming ready to be read and/or written (detected via the selectors module). Besides GUIs and I/O, event loops are often used to run code in other threads or subprocesses, with the event loop acting as the scheduling mechanism (as in cooperative multitasking). If you understand Python's GIL, the event loop is useful wherever the GIL can be released.
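
To make 'when A happens, execute B' concrete, here is a minimal, hypothetical sketch of a callback-based dispatcher. This is an illustration only, not how asyncio is implemented; the ToyEventLoop class and its methods are made up:

import collections

class ToyEventLoop:  # a toy 'when A happens, execute B' dispatcher; NOT asyncio, just an illustration
    def __init__(self):
        self.callbacks = {}                 # event name -> registered callback ("B")
        self.pending = collections.deque()  # events that have happened ("A")

    def register(self, event, callback):
        self.callbacks[event] = callback

    def fire(self, event, detail):
        self.pending.append((event, detail))

    def run(self):
        while self.pending:  # the loop: keep picking up events and dispatching them
            event, detail = self.pending.popleft()
            callback = self.callbacks.get(event)
            if callback:
                callback(detail)

loop = ToyEventLoop()
loop.register('click', lambda detail: print('clicked at', detail))
loop.fire('click', (10, 20))
loop.run()  # prints: clicked at (10, 20)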

Threads and Coroutines

Let's look at two pieces of code, one implemented using the threading module and the other using the asyncio package.

# spinner_thread.py
import threading
import itertools
import time
import sys


class Signal:  # This class defines a mutable object used to control the thread from the outside
    go = True


def spin(msg, signal):  # This function runs in a separate thread; signal is an instance of the Signal class defined above
    write, flush = sys.stdout.write, sys.stdout.flush
    for char in itertools.cycle('|/-\\'):  # itertools.cycle yields the elements of the sequence over and over again
        status = char + ' ' + msg
        write(status)
        flush()
        write('\x08' * len(status))  # Use backspace characters to move the cursor back to the beginning of the line
        time.sleep(.1)  # Refresh once every 0.1 seconds
        if not signal.go:  # If the go attribute is no longer True, exit the loop
            break
    write(' ' * len(status) + '\x08' * len(status))  # Clear the status line with spaces and move the cursor back to the beginning


def slow_function():  # Simulate a time-consuming operation
    # Pretend to wait for I/O for a while
    time.sleep(3)  # Calling sleep blocks the main thread, but it releases the GIL, so the spinner thread can run
    return 42


def supervisor():  # This function sets up the spinner thread, displays the thread object, runs the slow computation, and finally kills the thread
    signal = Signal()
    spinner = threading.Thread(target=spin,
                               args=('thinking!', signal))
    print('spinner object:', spinner)  # Display the thread object; output: spinner object: <Thread(Thread-1, initial)>
    spinner.start()  # Start the spinner thread
    result = slow_function()  # Run slow_function, blocking the main thread; meanwhile the spinner thread animates the spinner
    signal.go = False
    spinner.join()  # Wait for the spinner thread to finish
    return result


def main():
    result = supervisor()
    print('Answer:', result)


if __name__ == '__main__':
    main()

Run it, and the result will be roughly like this:

The output is an animation: the character before 'thinking!' keeps spinning (I increased the sleep time to make the screen recording).

Python does not provide an API to terminate a thread, so to shut one down you must send it a message. Here that message is the signal.go attribute: the main thread sets it to False, and the spinner thread notices and exits.
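
As a side note, the standard library's threading.Event offers a ready-made way to send that 'please stop' message. A minimal sketch under that assumption (it replaces the Signal class; this is not the original example's code):

import itertools
import sys
import threading
import time

def spin(msg, done):  # done is a threading.Event instead of the Signal class
    write, flush = sys.stdout.write, sys.stdout.flush
    for char in itertools.cycle('|/-\\'):
        status = char + ' ' + msg
        write(status)
        flush()
        write('\x08' * len(status))
        if done.wait(.1):  # sleeps up to 0.1 s and returns True as soon as the event is set
            break
    write(' ' * len(status) + '\x08' * len(status))

done = threading.Event()
spinner = threading.Thread(target=spin, args=('thinking!', done))
spinner.start()
time.sleep(3)   # pretend to do slow work in the main thread
done.set()      # send the 'please stop' message to the spinner thread
spinner.join()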

Now let's take a look at the version using the asyncio package:

# spinner_asyncio.py
# Display a text-based spinner animation using a coroutine
import asyncio
import itertools
import sys


@asyncio.coroutine  # Coroutines meant to be handled by asyncio should be decorated with @asyncio.coroutine
def spin(msg):
    write, flush = sys.stdout.write, sys.stdout.flush
    for char in itertools.cycle('|/-\\'):  # itertools.cycle yields the elements of the sequence over and over again
        status = char + ' ' + msg
        write(status)
        flush()
        write('\x08' * len(status))  # Use backspace characters to move the cursor back to the beginning of the line
        try:
            yield from asyncio.sleep(.1)  # Use yield from asyncio.sleep(.1) instead of time.sleep(.1); this sleep does not block the event loop
        except asyncio.CancelledError:  # If CancelledError is raised when spin wakes up, a cancellation request was made
            break
    write(' ' * len(status) + '\x08' * len(status))  # Clear the status line with spaces and move the cursor back to the beginning


@asyncio.coroutine
def slow_function():  # This function is now a coroutine; it uses sleep to pretend to do I/O, and yield from lets the event loop keep running
    # Pretend to wait for I/O for a while
    yield from asyncio.sleep(3)  # This expression hands control back to the main loop, which resumes this coroutine after the sleep ends
    return 42


@asyncio.coroutine
def supervisor():  # This function is also a coroutine, so it can drive slow_function with yield from
    spinner = asyncio.async(spin('thinking!'))  # asyncio.async(...) schedules the spin coroutine, wrapping it in a Task object, and returns immediately
    print('spinner object:', spinner)  # A Task object; output similar to: spinner object: <Task pending coro=<spin() running at spinner_asyncio.py:6>>
    # Drive slow_function() and get its return value when it ends. Meanwhile, the event loop keeps running,
    # because slow_function uses yield from asyncio.sleep(3) to hand control back to the main loop
    result = yield from slow_function()
    # A Task object can be cancelled; cancellation raises asyncio.CancelledError at the yield where the coroutine is currently suspended.
    # The coroutine can catch that exception and delay or even refuse cancellation
    spinner.cancel()
    return result


def main():
    loop = asyncio.get_event_loop()  # Get a reference to the event loop
    # Drive the supervisor coroutine to completion; the coroutine's return value is the return value of this call
    result = loop.run_until_complete(supervisor())
    loop.close()
    print('Answer:', result)


if __name__ == '__main__':
    main()

Unless you want to block the main thread, thereby freezing the event loop or the entire application, do not use time.sleep() in asyncio coroutines.

If the coroutine needs to do nothing for a period of time, you should use yield from asyncio.sleep(DELAY).
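
Here is a small sketch of the difference, using the same old-style @asyncio.coroutine / yield from API as the rest of this article (the one-second delays and the three-coroutine count are arbitrary choices for illustration):

import asyncio
import time

@asyncio.coroutine
def bad_wait():
    time.sleep(1)    # blocks the whole event loop: no other coroutine can run during this second
    return 'done'

@asyncio.coroutine
def good_wait():
    yield from asyncio.sleep(1)  # suspends only this coroutine; the loop keeps driving the others
    return 'done'

loop = asyncio.get_event_loop()
start = time.time()
# Three good_wait coroutines finish in about 1 second overall;
# three bad_wait coroutines would take about 3 seconds, because each time.sleep blocks the loop.
loop.run_until_complete(asyncio.wait([good_wait() for _ in range(3)]))
print('elapsed:', time.time() - start)
loop.close()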

Using the @asyncio.coroutine decorator is not strictly required, but it is recommended: it makes coroutines stand out in the code, and it helps with debugging by issuing a warning when a coroutine is garbage-collected without ever being yielded from (which means some operation was left unfinished and is probably a bug). This decorator does not prime the coroutine.

The execution results of these two scripts are basically the same. Now let's look at the main differences between the two supervisor functions, the core of each script:

  1. The asyncio.Task object is roughly equivalent to the threading.Thread object (a Task object is like a green thread in a library that implements cooperative multitasking).
  2. The Task object is used to drive coroutines, and the Thread object is used to call callable objects.
  3. The Task object is not instantiated manually but obtained by passing the coroutine to the asyncio.async(...) function or the loop.create_task(...) method.
  4. The Task object you get back is already scheduled to run; a Thread instance must be explicitly told to run by calling its start method.
  5. In the thread-based supervisor function, slow_function is a regular function, called directly by the thread, while the asynchronous version of slow_function is a coroutine, driven by yield from.
  6. There is no API to terminate a thread from the outside, because a thread could be interrupted at any point, leaving the system in an invalid state. To terminate a task, use the Task.cancel() instance method, which raises a CancelledError inside the coroutine; the coroutine can handle the termination request by catching the exception at the yield where it is suspended.
  7. The supervisor coroutine must be executed in the main function using the loop.run_until_complete method.

Coroutines have a key advantage over threads: with threads you must remember to hold locks to protect the critical sections of the program, so that a multi-step operation is not interrupted halfway and data is not left in an invalid state. Coroutines are protected against interruption by default: a coroutine must explicitly yield (with yield or yield from) before the rest of the program gets a chance to run.
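
A small sketch of that guarantee, using a made-up shared counter updated in two steps: because the coroutine never yields between the two steps, no other coroutine can interleave, so no lock is needed:

import asyncio

counter = {'value': 0}

@asyncio.coroutine
def increment(n):
    for _ in range(n):
        current = counter['value']      # step 1 of a two-step operation
        counter['value'] = current + 1  # step 2: no other coroutine can run in between,
                                        # because this coroutine never yields inside the loop
    yield from asyncio.sleep(0)         # the only point where control is given up

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait([increment(100000) for _ in range(10)]))
print(counter['value'])  # always 1000000; with preemptive threads and no lock, updates could be lost
loop.close()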

asyncio.Future: intentionally does not block

The asyncio.Future class has a basic interface consistent with the concurrent.futures.Future class, but the implementation is different and cannot be interchanged.

In the previous article, Python Concurrency 1: Using Futures to Handle Concurrency, we introduced concurrent.futures.Future: a future represents something whose execution has been scheduled. In the asyncio package, the BaseEventLoop.create_task(...) method takes a coroutine, schedules it to run, and returns an asyncio.Task instance, which is also an instance of asyncio.Future, because Task is a subclass of Future designed to wrap coroutines. (This is analogous to how Executor.submit(...) creates concurrent.futures.Future instances.)

Like the concurrent.futures.Future class, the asyncio.Future class also provides the following methods:

  1. The .done() method returns a Boolean indicating whether the Future has completed.
  2. The .add_done_callback() method takes a single argument, a callable, which is invoked when the Future completes.
  3. The .result() method takes no arguments, so you cannot specify a timeout. If you call .result() before the Future is done, an asyncio.InvalidStateError exception is raised.

In concurrent.futures.Future, by contrast, calling result() on a completed future returns the result of the callable, or re-raises the exception the callable raised. If f.result() is called before the future is done, it blocks the calling thread until a result is available; the result method also accepts an optional timeout argument, and if the future is not done within the specified time, a TimeoutError exception is raised.
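
For comparison, a short sketch of that blocking behaviour with concurrent.futures (the two-second job and one-second timeout are arbitrary values for illustration):

import time
from concurrent import futures

def slow():
    time.sleep(2)
    return 42

with futures.ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow)
    try:
        print(future.result(timeout=1))   # blocks the caller for up to 1 second
    except futures.TimeoutError:
        print('not done within 1 second')
    print(future.result())                # blocks until the result is ready, then prints 42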

When using asyncio.Future, we usually obtain the result with yield from rather than by calling the result() method: the yield from expression produces the result and resumes the suspended coroutine once that result is ready.

The purpose of the asyncio.Future class is to be used with yield from, so it is usually not necessary to use the following methods:

  1. There is no need to call my_future.add_done_callback(...), because you can simply put whatever should happen after the future completes right after the yield from my_future expression in the coroutine (coroutines can be suspended and resumed).
  2. There is no need to call my_future.result(), because the value produced by yield from is exactly the result: result = yield from my_future.

In the asyncio package, you can use yield from to produce results from asyncio.Future objects. This means that we can write as follows:

res = yield from foo() # foo can be a coroutine function or a normal function that returns a Future or task instance

The asyncio.async(...) function

asyncio.async(coro_or_future, *, loop=None)

This function unifies coroutines and Futures: the first parameter can be either of the two. If it is a Future or Task object, it is returned directly; if it is a coroutine, the async function will automatically call the loop.create_task(...) method to create a Task object. The loop parameter is optional and is used to pass the event loop; if it is not passed, the async function will obtain the loop object by calling the asyncio.get_event_loop() function.

BaseEventLoop.create_task(coro)

This method schedules the execution time of the coroutine and returns an asyncio.Task object. If it is called on a custom BaseEventLoop subclass, the returned object may be an instance of a class compatible with the Task class from an external library.

The BaseEventLoop.create_task() method is only available in Python 3.4.2 and later; in Python 3.3 you can only use the asyncio.async(...) function.
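A small sketch showing both ways of obtaining a Task with the old-style API used in this article (the tick coroutine is made up for illustration):

import asyncio

@asyncio.coroutine
def tick():
    yield from asyncio.sleep(.1)
    return 'tick'

loop = asyncio.get_event_loop()
task1 = asyncio.async(tick())     # works on Python 3.3/3.4; removed in later versions (use asyncio.ensure_future there)
task2 = loop.create_task(tick())  # available only on Python 3.4.2 and later
results = loop.run_until_complete(asyncio.gather(task1, task2))
print(results)  # ['tick', 'tick']
loop.close()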
If you want to experiment with futures and coroutines in the Python console or in small test scripts, you can use the following snippet:

import asyncio

def run_sync(coro_or_future):
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(coro_or_future)

a = run_sync(some_coroutine())

Using asyncio and the aiohttp package to download

Now that we have covered the basics of asyncio, it's time to rewrite the flag-downloading script from the previous article, Python Concurrency 1: Using Futures to Handle Concurrency.

Let's take a look at the code:

import asyncio

import aiohttp  # You need to install aiohttp: pip install aiohttp

from flags import save_flag, show, main, BASE_URL


@asyncio.coroutine  # As we have seen, coroutines should be decorated with @asyncio.coroutine
def get_flag(cc):
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    # The blocking operations are implemented as coroutines; the client code delegates to them with yield from, so they run asynchronously
    resp = yield from aiohttp.request('GET', url)
    # Reading the response body is also an asynchronous operation
    image = yield from resp.read()
    return image


@asyncio.coroutine
def download_one(cc):  # This function must also be a coroutine because it uses yield from
    image = yield from get_flag(cc)
    show(cc)
    save_flag(image, cc.lower() + '.gif')
    return cc


def download_many(cc_list):
    loop = asyncio.get_event_loop()  # Get a reference to the underlying event loop implementation
    to_do = [download_one(cc) for cc in sorted(cc_list)]  # Build a list of generator objects by calling download_one once per flag
    # Despite its name, wait is not a blocking function; it is a coroutine that finishes when all the coroutines passed to it are done
    wait_coro = asyncio.wait(to_do)
    res, _ = loop.run_until_complete(wait_coro)  # Run the event loop until wait_coro is done; this is where the script blocks while the event loop runs
    loop.close()  # Shut down the event loop
    return len(res)


if __name__ == '__main__':
    main(download_many)

Here is a brief description of how this code works:

  1. In the download_many function, we get the event loop and hand it the coroutine objects produced by calling download_one.
  2. The asyncio event loop activates each coroutine in turn.
  3. When a coroutine in the client code (get_flag) uses yield from to delegate to a coroutine in a library (aiohttp.request), control goes back to the event loop, which can then run other previously scheduled coroutines.
  4. The event loop uses low-level, callback-based APIs to get notified when a blocking operation has completed.
  5. When that happens, the main loop sends the result to the suspended coroutine.
  6. The coroutine then advances to the next yield from expression, for example yield from resp.read() in get_flag. The event loop takes control again, and steps 4 to 6 repeat until the event loop terminates.

In the download_many function we use the asyncio.wait(...) function, which is a coroutine; its argument is an iterable of futures or coroutines. wait wraps each coroutine in a Task object, so in the end every object handled by wait becomes an instance of the Future class.

Since wait is a coroutine function, calling it returns a coroutine/generator object; that is what the wait_coro variable holds.

wait takes two keyword-only arguments, timeout and return_when, which, if set, may cause it to return futures that have not completed.
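
A short sketch of those two parameters (the job coroutine, the delays, and the 0.5-second timeout are made up for illustration):

import asyncio

@asyncio.coroutine
def job(delay):
    yield from asyncio.sleep(delay)
    return delay

loop = asyncio.get_event_loop()
to_do = [job(.2), job(1), job(2)]
# Return as soon as the first coroutine finishes, or after 0.5 seconds, whichever comes first.
wait_coro = asyncio.wait(to_do, timeout=.5, return_when=asyncio.FIRST_COMPLETED)
done, pending = loop.run_until_complete(wait_coro)
print(len(done), 'done,', len(pending), 'still pending')  # the incomplete futures end up in `pending`
for task in pending:
    task.cancel()
loop.run_until_complete(asyncio.wait(pending))  # let the cancellations take effect before closing
loop.close()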

You may have noticed that we had to rewrite the get_flag function: the requests library used previously performs blocking I/O, so to work with the asyncio package the function had to be rewritten as an asynchronous version.

A little trick

If you find coroutine code hard to follow, you can take the advice of Python's creator, Guido van Rossum, and pretend the yield from is not there.

Taking the above code as an example:

@asyncio.coroutine
def get_flag(cc):
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    resp = yield from aiohttp.request('GET', url)
    image = yield from resp.read()
    return image

# The same function with the 'yield from' removed
def get_flag(cc):
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    resp = aiohttp.request('GET', url)
    image = resp.read()
    return image

# Now isn't it clearer?

Knowledge points

When using yield from in the asyncio package's API, there is a detail to note:

When we use the asyncio package, our asynchronous code consists of coroutines that are delegating generators driven by asyncio itself, and those generators ultimately delegate to coroutines in the asyncio package or in third-party libraries. This arrangement builds a pipeline that lets the asyncio event loop drive the library functions that perform the low-level asynchronous I/O.
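
A minimal sketch of such a pipeline, with made-up coroutine names: the event loop drives outer, outer delegates to inner, and inner delegates to asyncio.sleep, the library coroutine that actually communicates with the event loop:

import asyncio

@asyncio.coroutine
def inner():                   # delegates to a coroutine from the asyncio package
    yield from asyncio.sleep(.1)
    return 'payload'

@asyncio.coroutine
def outer():                   # delegating generator: the client code we write ourselves
    data = yield from inner()  # the pipeline: event loop -> outer -> inner -> asyncio.sleep
    return data.upper()

loop = asyncio.get_event_loop()
print(loop.run_until_complete(outer()))  # PAYLOAD
loop.close()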

Avoid blocking calls

Let's first look at a diagram that shows the latency of reading data from different storage media on the computer:

From this diagram, we can see that blocking calls are a huge waste of CPU. How can we avoid blocking calls from halting the entire application?

There are two methods:

  1. Run each blocking operation in a separate thread
  2. Convert each blocking operation into a non-blocking asynchronous call

Of course, we recommend the second approach, because with the first one the cost of dedicating a thread to each connection is too high.

We implement the second approach by using generators as coroutines. From the event loop's point of view, invoking a callback is similar to calling .send() on a suspended coroutine, and suspended coroutines use far less memory than threads.

Now, you should understand why the flags_asyncio.py script is much faster than flags.py.

flags.py downloads sequentially and synchronously, so each download waits billions of CPU cycles for its result. In flags_asyncio.py, when loop.run_until_complete is called in the download_many function, the event loop drives each download_one coroutine up to its yield from expression, which in turn drives each get_flag coroutine up to its first yield from expression, which calls aiohttp.request(). None of these calls block, so all requests are started within a fraction of a second.

Improve the asyncio download script

Now let's improve the previous flags_asyncio.py by adding exception handling and a counter:

import asyncio
import collections
from enum import Enum

import aiohttp
from aiohttp import web

from flags import save_flag, show, main, BASE_URL

DEFAULT_CONCUR_REQ = 5
MAX_CONCUR_REQ = 1000

Result = collections.namedtuple('Result', 'status data')
HTTPStatus = Enum('Status', 'ok not_found error')


# Custom exception used to wrap other HTTP or network exceptions and carry the country_code for error reporting
class FetchError(Exception):
    def __init__(self, country_code):
        self.country_code = country_code


@asyncio.coroutine
def get_flag(cc):
    # This coroutine has three possible outcomes:
    # 1. Return the downloaded image
    # 2. Raise web.HTTPNotFound when the HTTP response is 404
    # 3. Raise aiohttp.HttpProcessingError for any other HTTP status code
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    resp = yield from aiohttp.request('GET', url)
    if resp.status == 200:
        image = yield from resp.read()
        return image
    elif resp.status == 404:
        raise web.HTTPNotFound()
    else:
        raise aiohttp.HttpProcessingError(
            code=resp.status, message=resp.reason,
            headers=resp.headers
        )


@asyncio.coroutine
def download_one(cc, semaphore):
    # The semaphore parameter is an instance of the asyncio.Semaphore class,
    # a synchronization device used to limit the number of concurrent requests
    try:
        with (yield from semaphore):
            # Use the semaphore as a context manager in a yield from expression so the whole system is not blocked:
            # if the semaphore counter is at the maximum allowed, only this coroutine blocks.
            image = yield from get_flag(cc)
            # When the with block exits, the semaphore is released (its counter is incremented),
            # which may unblock another coroutine waiting on the same semaphore object
    except web.HTTPNotFound:
        status = HTTPStatus.not_found
        msg = 'not found'
    except Exception as exc:
        raise FetchError(cc) from exc
    else:
        save_flag(image, cc.lower() + '.gif')
        status = HTTPStatus.ok
        msg = 'ok'
    return Result(status, cc)


@asyncio.coroutine
def downloader_coro(cc_list):
    counter = collections.Counter()
    # Create an asyncio.Semaphore that allows at most MAX_CONCUR_REQ coroutines to be active at once
    semaphore = asyncio.Semaphore(MAX_CONCUR_REQ)
    # Call the download_one coroutine once per country code, building a list of coroutine objects
    to_do = [download_one(cc, semaphore) for cc in sorted(cc_list)]
    # Get an iterator that yields each future as it completes
    to_do_iter = asyncio.as_completed(to_do)
    for future in to_do_iter:
        # Iterate over the completed futures
        try:
            res = yield from future  # Get the result of the asyncio.Future object (we could also call future.result())
        except FetchError as exc:
            # Every exception raised is wrapped in a FetchError object
            country_code = exc.country_code
            try:
                # Try to get the error message from the original exception (__cause__)
                error_msg = exc.__cause__.args[0]
            except IndexError:
                # If the original exception has no message, use the name of the chained exception's class as the error message
                error_msg = exc.__cause__.__class__.__name__
            if error_msg:
                msg = '*** Error for {}: {}'
                print(msg.format(country_code, error_msg))
            status = HTTPStatus.error
        else:
            status = res.status
        counter[status] += 1
    return counter


def download_many(cc_list):
    loop = asyncio.get_event_loop()
    coro = downloader_coro(cc_list)
    counts = loop.run_until_complete(coro)
    loop.close()
    return counts


if __name__ == '__main__':
    main(download_many)

Because the requests launched by coroutines start very quickly, we need to prevent too many concurrent requests from being sent to the server and overloading it. To do so, we create an asyncio.Semaphore instance in the downloader_coro function and pass it to download_one.

A Semaphore object maintains an internal counter: calling the .acquire() coroutine method decrements the counter, and calling the .release() coroutine method increments it. The counter's initial value is set when the Semaphore is instantiated.

If the counter is greater than zero, calling .acquire() does not block; if the counter is zero, .acquire() blocks the calling coroutine until some other coroutine calls .release() on the same Semaphore object, incrementing the counter.
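
In the old yield from style used here, the context-manager form is roughly equivalent to calling those two methods explicitly. The following is a sketch, not the article's code; it assumes the imports, the semaphore, and the get_flag/save_flag helpers from the listing above:

@asyncio.coroutine
def download_one_explicit(cc, semaphore):
    yield from semaphore.acquire()       # may suspend this coroutine if the counter is already 0
    try:
        image = yield from get_flag(cc)
    finally:
        semaphore.release()              # increment the counter, waking one waiting coroutine
    save_flag(image, cc.lower() + '.gif')
    return cc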

In the above code, we did not manually call the .acquire() or .release() method, but instead used the semaphore as a context manager in the download_one function:

with (yield from semaphore):
 image = yield from get_flag(cc)

This code ensures that at any time, there will not be more than MAX_CONCUR_REQ get_flag coroutines started.

Use the asyncio.as_completed function

Since we need to use yield from to get the result of the future produced by asyncio.as_completed, the as_completed function must be called within a coroutine. Since download_many needs to be passed as a parameter to the non-coroutine main function, I have added a new downloader_coro coroutine so that the download_many function is only used to set up the event loop.

Use the Executor object to prevent blocking the event loop

Now let's go back to the chart of the latency of reading data from different storage media; one thing worth noticing is that access to the local file system also blocks.

In the code above, the save_flag function blocks the single thread shared by the client code and the asyncio event loop, so the whole application freezes while a file is being saved. To avoid this problem, we can use the run_in_executor method of the event loop object.

The asyncio event loop maintains a ThreadPoolExecutor object in the background, and we can call the run_in_executor method to send callable objects to it for execution.

Below is the modified code:

@asyncio.coroutine
def download_one(cc, semaphore):
    try:
        with (yield from semaphore):
            image = yield from get_flag(cc)
    except web.HTTPNotFound:
        status = HTTPStatus.not_found
        msg = 'not found'
    except Exception as exc:
        raise FetchError(cc) from exc
    else:
        # This is the modified part
        loop = asyncio.get_event_loop()  # Get a reference to the event loop
        loop.run_in_executor(None, save_flag, image, cc.lower() + '.gif')
        status = HTTPStatus.ok
        msg = 'ok'
    return Result(status, cc)

The first parameter of the run_in_executor method is an Executor instance; if set to None, it uses the default ThreadPoolExecutor instance of the event loop.
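
If the default pool is not suitable (for example, you want a dedicated pool for file writes), you can pass your own Executor as the first argument. A hedged sketch, with the pool size chosen arbitrarily and save_flag taken from the article's flags module:

import asyncio
from concurrent import futures

from flags import save_flag  # the blocking helper from the article's flags module

save_executor = futures.ThreadPoolExecutor(max_workers=10)  # pool size chosen arbitrarily

@asyncio.coroutine
def save_image(image, filename):
    loop = asyncio.get_event_loop()
    # Run the blocking save_flag call in our own thread pool instead of the loop's default one
    yield from loop.run_in_executor(save_executor, save_flag, image, filename)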

From callbacks to futures to coroutines

Before learning about coroutines, most of us had some experience with callbacks. So, compared with callbacks, what do coroutines improve?

Callback code style in Python:

def stage1(response1):
    request2 = step1(response1)
    api_call2(request2, stage2)

def stage2(response2):
    request3 = step2(response2)
    api_call3(request3, stage3)

def stage3(response3):
    step3(response3)

api_call1(request1, stage1)

Defects of the above code:

  1. It is prone to callback hell
  2. The code is difficult to read

Coroutines help a great deal with this problem. If the same flow is rewritten as asynchronous code using coroutines and yield from, it looks like this:

@asyncio.coroutine
def three_stages(request1):
    response1 = yield from api_call1(request1)
    request2 = step1(response1)
    response2 = yield from api_call2(request2)
    request3 = step2(response2)
    response3 = yield from api_call3(request3)
    step3(response3)

loop.create_task(three_stages(request1))

Compared with the callback version, this code is much easier to understand. If the asynchronous calls api_call1, api_call2, or api_call3 raise an exception, you can wrap the corresponding yield from expression in a try/except block to handle it.
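
For example, wrapping the first call (api_call1, step1 and the other names are the same hypothetical ones used above; the except clause should of course name the specific exceptions your API raises):

@asyncio.coroutine
def three_stages(request1):
    try:
        response1 = yield from api_call1(request1)  # exceptions raised by the asynchronous call surface here
    except Exception as exc:                        # catch the specific exception type your API defines
        print('stage 1 failed:', exc)
        return None
    request2 = step1(response1)
    response2 = yield from api_call2(request2)
    request3 = step2(response2)
    response3 = yield from api_call3(request3)
    step3(response3)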

You must get used to the yield from expression when working with coroutines, and coroutines cannot simply be called directly: their execution must be explicitly scheduled, or they must be activated with yield from inside another coroutine whose execution has already been scheduled. Without the final loop.create_task(three_stages(request1)) call, nothing would happen.

Next, let's demonstrate with a practical example:

Making multiple requests for each download

Let's modify the flag-downloading code above so that it also fetches the country name while downloading the flag, and uses that name when saving the image.
We solve this with coroutines and yield from:

@asyncio.coroutine
def http_get(url):
    resp = yield from aiohttp.request('GET', url)
    if resp.status == 200:
        ctype = resp.headers.get('Content-type', '').lower()
        if 'json' in ctype or url.endswith('json'):
            data = yield from resp.json()
        else:
            data = yield from resp.read()
        return data
    elif resp.status == 404:
        raise web.HTTPNotFound()
    else:
        raise aiohttp.HttpProcessingError(
            code=resp.status, message=resp.reason,
            headers=resp.headers)


@asyncio.coroutine
def get_country(cc):
    url = '{}/{cc}/metadata.json'.format(BASE_URL, cc=cc.lower())
    metadata = yield from http_get(url)
    return metadata['country']


@asyncio.coroutine
def get_flag(cc):
    url = '{}/{cc}/{cc}.gif'.format(BASE_URL, cc=cc.lower())
    return (yield from http_get(url))


@asyncio.coroutine
def download_one(cc, semaphore):
    try:
        with (yield from semaphore):
            image = yield from get_flag(cc)
        with (yield from semaphore):
            country = yield from get_country(cc)
    except web.HTTPNotFound:
        status = HTTPStatus.not_found
        msg = 'not found'
    except Exception as exc:
        raise FetchError(cc) from exc
    else:
        country = country.replace(' ', '_')
        filename = '{}--{}.gif'.format(country, cc)
        print(filename)
        loop = asyncio.get_event_loop()
        loop.run_in_executor(None, save_flag, image, filename)
        status = HTTPStatus.ok
        msg = 'ok'
    return Result(status, cc)

In this code we call get_flag and get_country in two separate with blocks controlled by the semaphore inside download_one, because it is good practice to hold semaphores and locks for the shortest time possible.

The return statement in get_flag wraps the yield from expression in parentheses because return yield from http_get(url) on its own is a syntax error; with the parentheses, the yield from expression is evaluated first and its value is what gets returned.

With the parentheses, the return statement is equivalent to:

image = yield from http_get(url)
return image

Without the parentheses, the parser rejects return yield from ... as a syntax error, so the code will not even compile; the parentheses are what make the combined expression legal.

Summary

In this article, we discussed:

  1. Compared a multi-threaded program and an asyncio version, explaining the relationship between multi-threading and asynchronous tasks
  2. Compared the differences between asyncio.Future class and concurrent.futures.Future class
  3. How to use asynchronous programming to manage high concurrency in network applications
  4. How coroutines improve on callbacks for writing asynchronous code

That's all for this article. I hope it helps with your learning.

