Evolution and Practice of Asynchronous HTTP Requests in Python: From requests to grequests

Nov 11, 2025 · Programming

Keywords: Python | Asynchronous Programming | HTTP Requests | grequests | aiohttp | Concurrency

Abstract: This article provides an in-depth exploration of the evolution of asynchronous HTTP requests in Python, focusing on the requests library's early built-in asynchronous capabilities and the grequests alternative. Through detailed code examples, it demonstrates how to use event hooks for response processing, compares performance differences among various asynchronous implementations, and presents alternative solutions using thread pools and aiohttp. Combining practical cases, the article helps developers understand core concepts of asynchronous programming and choose appropriate solutions.

Evolution Background of Asynchronous HTTP Requests in Python

Within the Python ecosystem, the development of HTTP request libraries has undergone significant evolution. Early versions of the requests library provided built-in asynchronous functionality, but as the library matured and its architecture was refined, these features were separated into the independent grequests project. This evolution reflects the Python community's continuous exploration and optimization of asynchronous programming patterns.

Original Implementation of requests Library Asynchronous Features

In versions prior to requests v0.13.0, developers could use the built-in async module (backed by gevent) to issue asynchronous HTTP requests. The core mechanism relied on event hooks, which processed each response as it completed.

The following example demonstrates how to use the original requests asynchronous functionality to retrieve content from multiple web pages:

# Historical example: works only with requests < 0.13.0.
# Note: 'async' became a reserved keyword in Python 3.7, so this
# import no longer even parses on modern interpreters.
from requests import async

urls = [
    'http://python-requests.org',
    'http://httpbin.org',
    'http://python-guide.org',
    'http://kennethreitz.com'
]

def process_response(response):
    # Event hook invoked as each response arrives
    print(f"URL: {response.url}")
    print(f"Content length: {len(response.content)}")

async_list = []
for url in urls:
    request_item = async.get(url, hooks={'response': process_response})
    async_list.append(request_item)

async.map(async_list)

Transition to grequests

As the requests library underwent architectural restructuring, asynchronous functionality was migrated to the independent grequests library. This separation allowed the core library to remain lightweight while providing a specialized solution for users requiring asynchronous capabilities.

The usage pattern of grequests closely resembles that of the original requests async module:

import grequests

urls = [
    'http://www.heroku.com',
    'http://tablib.org',
    'http://httpbin.org',
    'http://python-requests.org',
    'http://kennethreitz.com'
]

# Avoid naming the generator 'requests', which shadows the library name
reqs = (grequests.get(u) for u in urls)
responses = grequests.map(reqs)

for response in responses:
    if response is not None:
        print(f"Status: {response.status_code}")
        print(f"Content: {response.content[:100]}...")

Alternative Asynchronous Implementation Approaches

Beyond specialized asynchronous HTTP libraries, developers can leverage Python's standard library concurrency tools to achieve similar functionality. ThreadPoolExecutor offers a concise thread pool solution:

import requests
import concurrent.futures

def fetch_url(url):
    try:
        response = requests.get(url, timeout=10)
        return {
            'url': url,
            'status_code': response.status_code,
            'content': response.content,
            'success': True
        }
    except Exception as e:
        return {
            'url': url,
            'error': str(e),
            'success': False
        }

def concurrent_requests(urls, max_workers=5):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_to_url = {executor.submit(fetch_url, url): url for url in urls}
        
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                result = future.result()
                results.append(result)
            except Exception as exc:
                print(f"{url} generated exception: {exc}")
    
    return results

# Usage example
url_list = ['https://httpbin.org/get', 'https://api.github.com']
results = concurrent_requests(url_list)
for result in results:
    print(f"URL: {result['url']}, Success: {result['success']}")
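When results must line up with their inputs, `ThreadPoolExecutor.map` is a simpler alternative to `as_completed`: it yields results in input order. The following is a minimal sketch of that pattern; the helper name `concurrent_map` is illustrative, and any blocking worker (such as the `fetch_url` function above) can be passed in:

```python
import concurrent.futures

def concurrent_map(func, items, max_workers=5):
    """Apply func to each item concurrently, preserving input order.

    Unlike as_completed(), executor.map() yields results in the same
    order as the inputs, which simplifies pairing URLs with results.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(func, items))

# Demonstrated here with a pure function; in practice the worker
# would be a blocking fetch such as fetch_url
results = concurrent_map(len, ['https://httpbin.org/get', 'https://api.github.com'])
print(results)  # lengths of the two URL strings, in input order
```

Note that `executor.map` re-raises any worker exception when its result is consumed, so a worker that catches its own errors (as `fetch_url` does) keeps the whole batch from aborting.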

Modern Asynchronous Solution: aiohttp

For scenarios requiring genuine asynchronous I/O, the aiohttp library provides native asynchronous support based on asyncio. This approach excels in I/O-intensive tasks, fully leveraging modern Python's asynchronous features.

import aiohttp
import asyncio
import time

async def fetch_page(session, url):
    async with session.get(url) as response:
        content = await response.read()
        return {
            'url': url,
            'status': response.status,
            'content_length': len(content),
            'timestamp': time.time()
        }

async def fetch_multiple_pages(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_page(session, url) for url in urls]
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        successful_results = []
        for result in results:
            if not isinstance(result, Exception):
                successful_results.append(result)
        
        return successful_results

# Usage example
async def main():
    urls = [
        'https://httpbin.org/json',
        'https://api.github.com',
        'https://jsonplaceholder.typicode.com/posts/1'
    ]
    
    start_time = time.time()
    results = await fetch_multiple_pages(urls)
    end_time = time.time()
    
    print(f"Fetched {len(results)} pages in {end_time - start_time:.2f} seconds")
    for result in results:
        print(f"URL: {result['url']}, Size: {result['content_length']} bytes")

# Run asynchronous function
if __name__ == "__main__":
    asyncio.run(main())
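With a large URL list, an unbounded `asyncio.gather` can open every connection at once. A common refinement is to cap concurrency with `asyncio.Semaphore`. The sketch below uses only the standard library; the helper name `bounded_gather` and the `fake_fetch` stand-in for `fetch_page(session, url)` are illustrative assumptions, not aiohttp APIs:

```python
import asyncio

async def bounded_gather(coros, limit=10):
    """Run coroutines concurrently, at most `limit` at a time."""
    semaphore = asyncio.Semaphore(limit)

    async def run_with_limit(coro):
        async with semaphore:
            return await coro

    return await asyncio.gather(*(run_with_limit(c) for c in coros))

# Dummy coroutine standing in for a real fetch_page(session, url)
async def fake_fetch(url):
    await asyncio.sleep(0)  # simulate I/O
    return {'url': url, 'status': 200}

async def demo():
    urls = [f'https://example.com/{i}' for i in range(5)]
    return await bounded_gather((fake_fetch(u) for u in urls), limit=2)

results = asyncio.run(demo())
print(len(results))
```

In real aiohttp code the same effect can also be achieved at the transport level via `aiohttp.TCPConnector(limit=...)` on the session.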

Performance Comparison and Practical Recommendations

Different asynchronous implementation approaches exhibit markedly different performance characteristics. Because of the GIL, thread pools do not speed up CPU-intensive work (multiprocessing is the better fit there), but they handle blocking I/O well at moderate concurrency levels; event-loop-based asynchronous I/O, as in aiohttp, scales further in I/O-intensive scenarios with many simultaneous connections.

In practical projects, selecting an asynchronous approach should consider factors such as project complexity, team familiarity, performance requirements, and dependency management. For simple concurrency needs, ThreadPoolExecutor may suffice; for high-concurrency I/O scenarios, aiohttp is a better choice; and for projects requiring consistency with the requests API, grequests provides a smooth migration path.

Error Handling and Best Practices

Error handling in asynchronous programming requires special attention. The following example demonstrates robust error handling implementation in grequests:

import grequests

def exception_handler(request, exception):
    print(f"Request {request.url} failed: {exception}")
    return None

urls = [
    'http://valid-website.com',
    'http://invalid-website-that-does-not-exist.xyz',
    'http://another-valid-site.org'
]

reqs = (grequests.get(u) for u in urls)
responses = grequests.map(reqs, exception_handler=exception_handler)

for response in responses:
    if response is not None and response.status_code == 200:
        print(f"Success: {response.url}")
        # Process response content here
    else:
        print("Failed or invalid response")

Through proper error handling and timeout configuration, developers can build efficient and reliable asynchronous HTTP request systems. This architectural pattern finds extensive application value in modern web development, API integration, and data collection scenarios.
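One hedged sketch of the timeout-and-retry pattern mentioned above: the `retry_with_backoff` helper below is illustrative, and in real code the wrapped callable would be a blocking fetch such as `lambda: requests.get(url, timeout=10)`. It is demonstrated here with a local flaky function so the behavior is visible without network access:

```python
import time

def retry_with_backoff(func, retries=3, base_delay=0.1):
    """Call func(); on exception, retry with exponential backoff.

    Intended to wrap a blocking fetch, e.g.
    lambda: requests.get(url, timeout=10).
    """
    for attempt in range(retries):
        try:
            return func()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a local callable that fails twice, then succeeds
attempts = {'count': 0}

def flaky():
    attempts['count'] += 1
    if attempts['count'] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, retries=3, base_delay=0.01))  # prints "ok"
```

Exponential backoff spaces out retries so a struggling server is not hammered; combining it with an explicit timeout on every request prevents a single slow endpoint from stalling the whole batch.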

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.