Complete Guide to Converting SQL Query Results to Pandas Data Structures

Nov 20, 2025 · Programming

Keywords: SQL Query | Pandas | Data Conversion | DataFrame | SQLAlchemy

Abstract: This article provides a comprehensive guide on efficiently converting SQL query results into Pandas DataFrame structures. By analyzing the type characteristics of SQLAlchemy query results, it presents multiple conversion methods including DataFrame constructors and pandas.read_sql function. The article includes complete code examples, type parsing, and performance optimization recommendations to help developers quickly master core data conversion techniques.

Data Type Analysis of SQL Query Results

When executing database queries using SQLAlchemy, the returned result variable is actually a sqlalchemy.engine.result.ResultProxy object (renamed CursorResult in SQLAlchemy 1.4 and later). This object serves as a proxy for query results: it is not a directly manipulable data structure itself, but rather provides interfaces for accessing the rows.

To understand the characteristics of this object, we can analyze it through the following code:

from sqlalchemy import create_engine

# Create database connection engine
engine = create_engine('mysql://username:password@localhost/database_name')
connection = engine.connect()

# Execute query (legacy raw-string style; SQLAlchemy 2.0 requires wrapping the
# statement in sqlalchemy.text() and using named parameters)
result = connection.execute("SELECT * FROM sample_table WHERE id = %s", (1022,))

# Check result type
print(type(result))  # Output: <class 'sqlalchemy.engine.result.ResultProxy'>
print(hasattr(result, 'fetchall'))  # Output: True
print(hasattr(result, 'keys'))  # Output: True

The ResultProxy object provides several key methods: fetchall() returns a list of all rows, fetchone() returns a single row, and keys() returns a list of column names. These methods form the foundation for subsequent data conversion.
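The same fetch interface exists on the underlying DBAPI cursor that the ResultProxy wraps, so it can be explored without a MySQL server. A minimal sketch using the stdlib sqlite3 module (the table and values are illustrative; cursor.description plays the role that keys() plays on a ResultProxy):

```python
import sqlite3

# In-memory database so the example runs without a server
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE sample_table (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO sample_table VALUES (?, ?)",
                 [(1022, 'alpha'), (1023, 'beta')])

cur = conn.execute("SELECT * FROM sample_table WHERE id = ?", (1022,))

# Column names come from cursor.description (keys() on a ResultProxy)
columns = [desc[0] for desc in cur.description]
print(columns)         # ['id', 'name']

row = cur.fetchone()   # a single row as a tuple
print(row)             # (1022, 'alpha')

print(cur.fetchall())  # remaining rows: [] (only one row matched)
conn.close()
```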

Basic Conversion Methods

The simplest and most direct conversion method uses the pandas DataFrame constructor:

import pandas as pd
from sqlalchemy import create_engine

# Establish database connection
engine = create_engine('mysql://username:password@localhost/database_name')
connection = engine.connect()

# Execute query
dataid = 1022
query = """
    SELECT 
       sum(sales) AS total_sales,
       sum(profit) AS total_profit,
       sum(clicks) AS total_clicks,
       sum(impressions) AS total_impressions,
       100*sum(clicks)/sum(impressions) AS ctr,
       sum(profit)/sum(clicks) AS cpc
    FROM daily_report_cooked
    WHERE campaign_id = %s
"""

result = connection.execute(query, (dataid,))

# Convert to DataFrame (passing columns up front avoids a second assignment)
df = pd.DataFrame(result.fetchall(), columns=result.keys())

# Close connection
connection.close()

print(df.head())
print(f"DataFrame shape: {df.shape}")
print(f"Column names: {list(df.columns)}")

The core advantage of this method lies in its simplicity: the fetchall() method converts query results to a Python list, which is then used to create a DataFrame via the constructor, with column names correctly set using the keys() method.
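One pitfall with this pattern: if the query matches no rows, fetchall() returns an empty list, pd.DataFrame([]) has no columns at all, and assigning keys() to the columns of that (0, 0) frame raises a length-mismatch ValueError. Passing the column names to the constructor preserves the schema even for empty results. A pandas-only sketch, where rows and names stand in for fetchall() and keys():

```python
import pandas as pd

rows = []                                  # what fetchall() returns for an empty result
names = ['total_sales', 'total_profit']    # what keys() returns

df_bad = pd.DataFrame(rows)                # no rows AND no columns
df_ok = pd.DataFrame(rows, columns=names)  # no rows, schema preserved

print(df_bad.shape)         # (0, 0)
print(df_ok.shape)          # (0, 2)
print(list(df_ok.columns))  # ['total_sales', 'total_profit']
```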

Advanced Method Using pandas.read_sql

A more modern and efficient option is pandas' built-in read_sql function, which creates a DataFrame directly from a SQL query:

import pandas as pd
from sqlalchemy import create_engine

# Create engine
engine = create_engine('mysql://username:password@localhost/database_name')

# Define query
dataid = 1022
sql_query = """
    SELECT 
       sum(sales) AS total_sales,
       sum(profit) AS total_profit,
       sum(clicks) AS total_clicks,
       sum(impressions) AS total_impressions,
       100*sum(clicks)/sum(impressions) AS ctr,
       sum(profit)/sum(clicks) AS cpc
    FROM daily_report_cooked
    WHERE campaign_id = %s
"""

# Direct read to DataFrame
df = pd.read_sql(sql_query, engine, params=[dataid])

print(df.info())
print(f"Data types:\n{df.dtypes}")

This approach offers several significant advantages: automatic data type inference, better memory efficiency, and more concise code. Pandas automatically maps database data types to appropriate Pandas data types.
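The automatic type inference is easy to observe without a MySQL server: pandas.read_sql also accepts a stdlib sqlite3 connection. A runnable sketch (table name and values are illustrative):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE daily_report_cooked
                (campaign_id INTEGER, sales REAL, clicks INTEGER)""")
conn.executemany("INSERT INTO daily_report_cooked VALUES (?, ?, ?)",
                 [(1022, 19.5, 300), (1022, 21.0, 450)])
conn.commit()

# read_sql maps the REAL sum to float64 and the INTEGER sum to int64
df = pd.read_sql("SELECT sum(sales) AS total_sales, sum(clicks) AS total_clicks "
                 "FROM daily_report_cooked WHERE campaign_id = ?",
                 conn, params=(1022,))

print(df.dtypes)
conn.close()
```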

Data Type Handling and Optimization

In practical applications, proper data type handling is crucial. We can optimize data type conversion through the following approach:

import pandas as pd
from sqlalchemy import create_engine
from datetime import datetime

# Execute query and convert
engine = create_engine('mysql://username:password@localhost/database_name')

# Query with multiple data types
sql_query = """
    SELECT 
       campaign_id,
       campaign_name,
       sum(sales) AS total_sales,
       sum(clicks) AS total_clicks,
       start_date,
       end_date,
       100*sum(clicks)/sum(impressions) AS ctr
    FROM campaign_data
    WHERE campaign_id = %s
    GROUP BY campaign_id, campaign_name, start_date, end_date
"""

df = pd.read_sql(sql_query, engine, params=[1022])

# Manual data type optimization
df['total_sales'] = df['total_sales'].astype('float64')
df['total_clicks'] = df['total_clicks'].astype('int64')
df['ctr'] = df['ctr'].astype('float32')
df['start_date'] = pd.to_datetime(df['start_date'])
df['end_date'] = pd.to_datetime(df['end_date'])

print("Optimized data types:")
print(df.dtypes)
print(f"Memory usage: {df.memory_usage(deep=True).sum() / 1024 ** 2:.2f} MB")

By precisely controlling data types, we can significantly reduce memory usage and improve computational performance.
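Beyond explicit astype calls, pd.to_numeric with its downcast argument can choose the smallest sufficient dtype automatically. A self-contained sketch with made-up values (chosen to be exactly representable at the smaller precision):

```python
import pandas as pd

df = pd.DataFrame({
    'total_clicks': [300, 450, 120],  # small enough for a 16-bit integer
    'ctr': [2.5, 3.25, 1.75],
})

# downcast='integer' picks the smallest integer dtype that holds the values
df['total_clicks'] = pd.to_numeric(df['total_clicks'], downcast='integer')
# downcast='float' reduces float64 to float32 when no precision is lost
df['ctr'] = pd.to_numeric(df['ctr'], downcast='float')

print(df.dtypes)
```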

Error Handling and Best Practices

In production environments, robust error handling is essential:

import pandas as pd
from sqlalchemy import create_engine
from sqlalchemy.exc import SQLAlchemyError

def safe_sql_to_dataframe(sql_query, engine, params=None):
    """
    Safely convert SQL query to DataFrame
    """
    try:
        if params:
            df = pd.read_sql(sql_query, engine, params=params)
        else:
            df = pd.read_sql(sql_query, engine)
        
        # Validate data integrity
        if df.empty:
            print("Warning: Query returned empty result set")
        
        return df
        
    except SQLAlchemyError as e:
        print(f"Database error: {e}")
        return pd.DataFrame()
    except Exception as e:
        print(f"Unknown error: {e}")
        return pd.DataFrame()

# Usage example
engine = create_engine('mysql://username:password@localhost/database_name')
query = "SELECT * FROM non_existent_table"

df = safe_sql_to_dataframe(query, engine)
if not df.empty:
    print("Data conversion successful")
else:
    print("Data conversion failed or returned empty results")

Performance Comparison and Selection Recommendations

For different usage scenarios, choose the appropriate conversion method:

import pandas as pd
from sqlalchemy import create_engine
import time

engine = create_engine('mysql://username:password@localhost/database_name')

# Method 1: Using DataFrame constructor
start_time = time.time()
connection = engine.connect()
result = connection.execute("SELECT * FROM large_table LIMIT 10000")
df1 = pd.DataFrame(result.fetchall(), columns=result.keys())
connection.close()
time_method1 = time.time() - start_time

# Method 2: Using read_sql
start_time = time.time()
df2 = pd.read_sql("SELECT * FROM large_table LIMIT 10000", engine)
time_method2 = time.time() - start_time

print(f"Method 1 execution time: {time_method1:.4f} seconds")
print(f"Method 2 execution time: {time_method2:.4f} seconds")
# Note: dtypes may differ (the constructor path does not infer types), so equals() can be False
print(f"Data consistency check: {df1.equals(df2)}")

Generally, pd.read_sql is the better choice in most scenarios, particularly when handling large datasets, as it offers better memory management and performance characteristics.
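For result sets too large to hold in memory at once, read_sql's chunksize parameter returns an iterator of DataFrames rather than a single frame. A runnable sketch against an in-memory SQLite table (names and sizes are illustrative):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE large_table (id INTEGER, value REAL)")
conn.executemany("INSERT INTO large_table VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(10)])
conn.commit()

total_rows = 0
chunk_sizes = []
# Each iteration yields a DataFrame of at most 4 rows
for chunk in pd.read_sql("SELECT * FROM large_table", conn, chunksize=4):
    total_rows += len(chunk)
    chunk_sizes.append(len(chunk))

print(chunk_sizes)  # [4, 4, 2]
print(total_rows)   # 10
conn.close()
```

Processing each chunk and discarding it keeps peak memory proportional to the chunk size instead of the full result set.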

Conclusion

Converting SQL query results to Pandas DataFrames is a fundamental operation in modern data analysis workflows. By understanding the characteristics of ResultProxy objects, we can flexibly choose conversion methods. The basic method using DataFrame(result.fetchall()) with result.keys() is suitable for simple conversion needs, while pd.read_sql provides a more complete and optimized solution, especially when dealing with complex queries and large datasets. In practical applications, combining appropriate data type optimization and error handling enables the construction of robust and efficient data processing pipelines.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.