Complete Guide to Calling Python Scripts from Another Script with Argument Passing

Oct 27, 2025 · Programming

Keywords: Python script calling | argument passing | subprocess management | modular programming | code reuse

Abstract: This article provides a comprehensive exploration of various methods to call one Python script from another while passing arguments. It focuses on implementations using os.system, the subprocess module, the exec function, and the importlib module, analyzing the advantages, disadvantages, and suitable scenarios for each approach. Through detailed code examples and in-depth technical analysis, it helps developers choose the most appropriate solution for their needs, while discussing best practices in modular programming and performance considerations.

Introduction

In Python development, there is often a need to call one script from another while passing arguments, a requirement particularly common in scenarios such as modular programming, task distribution, and code reuse. This article systematically analyzes and compares multiple implementation methods based on high-quality Q&A from Stack Overflow and related technical articles.

Using the os.system Method

os.system is one of the most straightforward approaches, executing another Python script through the operating system's shell. Calling it is equivalent to running the script manually in a terminal, so arguments are passed exactly as they would appear on the command line.

import os
import sys

# Define the values to pass
values = [0, 1, 2, 3]

# Call the second script once per value; sys.executable is the interpreter
# running this script, which avoids picking up a different "python" from PATH
for value in values:
    os.system(f'"{sys.executable}" script2.py {value}')

The advantage of this method lies in its simplicity and intuitiveness, making it particularly suitable for calling scripts designed as command-line tools. The called script can normally receive parameters via sys.argv:

# script2.py
import sys

if len(sys.argv) > 1:
    argument = sys.argv[1]
    print(f"Received argument: {argument}")
    # Logic for processing the argument
else:
    print("No arguments received")

However, the os.system method has some limitations. It starts a new Python interpreter process for every call, which incurs startup overhead. Error handling is also awkward: the return value is the raw wait status of the child process rather than its exit code, stdout and stderr cannot be captured, and the status must be decoded to learn whether the script succeeded.
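Decoding that raw return value takes one extra step. A minimal sketch of checking an os.system call's outcome (child.py is a hypothetical script written inline here so the example is self-contained; os.waitstatus_to_exitcode requires Python 3.9+):

```python
import os
import sys

# Hypothetical child script: exits 0 if it received an argument, 2 otherwise
with open("child.py", "w") as f:
    f.write("import sys; sys.exit(0 if len(sys.argv) > 1 else 2)\n")

# os.system returns the child's raw wait status, not its exit code
status = os.system(f'"{sys.executable}" child.py some_arg')

# Decode the wait status into a plain exit code (Python 3.9+)
exit_code = os.waitstatus_to_exitcode(status)
print(f"child exited with code {exit_code}")
```

On success this reports exit code 0; drop the argument and the child's exit code 2 comes back instead, which is the only failure signal os.system provides.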

Using the subprocess Module

The subprocess module provides more powerful and flexible subprocess management capabilities. Compared to os.system, it offers better control over input/output and error handling.

import subprocess
import sys

# Define argument list
arguments = ["Geeks", "for", "Geeks"]

# Execute the script with the current interpreter using subprocess.run
result = subprocess.run(
    [sys.executable, "called_script.py"] + arguments,
    capture_output=True,
    text=True
)

# Check execution result
if result.returncode == 0:
    print("Script executed successfully:")
    print(result.stdout)
else:
    print("Script execution failed:")
    print(result.stderr)

The subprocess.run method provides rich options to control subprocess behavior. The capture_output parameter captures standard output and standard error, while text=True decodes them as strings rather than bytes. This method is particularly suitable when the caller needs to process the child's output or handle failures robustly.
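Two further options worth knowing are check=True, which raises CalledProcessError on a nonzero exit code, and timeout, which kills a child that runs too long. A sketch combining them (the called_script.py here is a hypothetical one-liner written inline so the example runs standalone):

```python
import subprocess
import sys

# Hypothetical called script that echoes its arguments
with open("called_script.py", "w") as f:
    f.write("import sys; print('got:', *sys.argv[1:])\n")

try:
    # check=True turns a nonzero exit code into an exception;
    # timeout raises TimeoutExpired if the child exceeds 30 seconds
    result = subprocess.run(
        [sys.executable, "called_script.py", "alpha", "beta"],
        capture_output=True, text=True, check=True, timeout=30,
    )
    print(result.stdout.strip())
except subprocess.CalledProcessError as exc:
    print("script failed:", exc.returncode, exc.stderr)
except subprocess.TimeoutExpired:
    print("script timed out")
```

This moves error handling from manual returncode checks into ordinary Python exception handling.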

Using the exec Function

The exec function dynamically executes code within the current Python interpreter, eliminating the need to start new processes and thus offering better performance.

# Define parameters to pass
arg1 = "Geeks"
arg2 = "for"
arg3 = "Geeks"

# Create global variable dictionary containing parameters
globals_dict = {
    'arg1': arg1,
    'arg2': arg2,
    'arg3': arg3
}

# Execute script and pass parameters
with open("called_script.py", "r") as file:
    script_content = file.read()
    exec(script_content, globals_dict)

In the called script, the injected names are available as ordinary global variables; using globals().get allows a fallback default when a name was not supplied:

# called_script.py
# Retrieve parameters from global variables
received_arg1 = globals().get('arg1', 'default_value')
received_arg2 = globals().get('arg2', 'default_value')
received_arg3 = globals().get('arg3', 'default_value')

print(f"Received parameters: {received_arg1}, {received_arg2}, {received_arg3}")

The advantage of this method is high execution efficiency, since no new process or inter-process communication is involved. However, exec runs arbitrary code with the caller's privileges, so it must never be applied to untrusted input, and care is needed to avoid variable-scope and naming conflicts.
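One way to contain those naming conflicts is to give exec its own private namespace and read results back out of it, rather than letting the executed code touch the caller's globals. A small sketch (the snippet string and names are illustrative):

```python
# Hypothetical snippet that the caller wants to execute
code = "result = arg1.upper() + '-' + arg2"

# A private namespace: inputs go in, and everything the snippet
# defines lands here instead of in the caller's globals()
namespace = {"arg1": "geeks", "arg2": "demo"}
exec(code, namespace)

# Read the snippet's output back from the same dict
print(namespace["result"])
```

Because the dict is local to the call, two different scripts executed this way cannot accidentally overwrite each other's variables.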

Using the importlib Module

The importlib module provides the ability to dynamically import Python modules. This approach imports the script as a module and then calls functions within it.

import importlib.util

# Define parameters
parameters = ["parameter1", "parameter2", "parameter3"]

# Dynamically import script
spec = importlib.util.spec_from_file_location("target_module", "target_script.py")
target_module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(target_module)

# Call function in module
target_module.process_arguments(*parameters)

The called script needs to be designed as an importable module:

# target_script.py
def process_arguments(*args):
    """Process incoming arguments"""
    for i, arg in enumerate(args, 1):
        print(f"Parameter {i}: {arg}")
    # Execute main business logic

# Support command-line invocation
if __name__ == "__main__":
    import sys
    process_arguments(*sys.argv[1:])

This method combines the advantages of modular programming, supporting both command-line invocation and module importation. It is the most Pythonic approach, promoting code reuse and modularization.
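When the target script already sits on sys.path, a plain import achieves the same result with less machinery than importlib. A sketch (the target_script.py file is written inline so the example runs standalone, and this hypothetical variant returns the formatted lines instead of printing them, purely so the result can be inspected):

```python
import pathlib
import sys

# Hypothetical target module, written inline for a self-contained example
pathlib.Path("target_script.py").write_text(
    "def process_arguments(*args):\n"
    "    return [f'Parameter {i}: {arg}' for i, arg in enumerate(args, 1)]\n"
)

# Make the current directory importable, then import normally
sys.path.insert(0, ".")
import target_script

# Arguments are passed as ordinary function arguments
lines = target_script.process_arguments("one", "two")
print("\n".join(lines))
```

importlib.util is mainly needed when the script's location is only known at runtime or is not on sys.path.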

Method Comparison and Selection Recommendations

Each method has its suitable scenarios:

os.system: Suitable for simple script calls, especially when the called script is designed as a standalone command-line tool. Its advantages are simplicity and directness; its disadvantages are the cost of launching a new interpreter process per call and the inability to capture output or handle errors beyond a raw status code.

subprocess: Provides the most comprehensive control capabilities, suitable for scenarios requiring output handling, error management, and complex execution environments. Performance is similar to os.system but with more powerful features.

exec: Offers the best performance, suitable for executing code within the same process. However, attention must be paid to variable scope and security issues.

importlib: Most aligned with Python's modular design principles, suitable for long-term maintenance projects. Promotes code reuse and good architectural design.

Best Practice Recommendations

In practical development, it is recommended to follow these best practices:

1. If possible, prioritize encapsulating functionality as importable functions rather than directly executing scripts. This provides maximum flexibility.

2. For scripts that need to remain independent, use the standard if __name__ == "__main__": pattern, enabling both command-line invocation and module importation.
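The dual-use pattern from point 2 is usually paired with a main(argv) function that takes its arguments explicitly. A sketch (mytool.py is a hypothetical script written inline so both entry points can be demonstrated in one runnable example):

```python
import pathlib
import subprocess
import sys

# Hypothetical mytool.py: main(argv) is callable programmatically,
# and the __main__ guard wires the same function to the command line
pathlib.Path("mytool.py").write_text(
    "import sys\n"
    "\n"
    "def main(argv=None):\n"
    "    args = sys.argv[1:] if argv is None else argv\n"
    "    for arg in args:\n"
    "        print(f'Received argument: {arg}')\n"
    "    return 0 if args else 1\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    sys.exit(main())\n"
)

sys.path.insert(0, ".")
import mytool

# Entry point 1: import the module and call main() directly
rc = mytool.main(["direct_arg"])

# Entry point 2: run the same file as a command-line script
proc = subprocess.run([sys.executable, "mytool.py", "cli_arg"],
                      capture_output=True, text=True)
print(rc, proc.stdout.strip())
```

Having main() accept an explicit argv also makes the script trivial to unit-test without touching sys.argv.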

3. Consider using context managers to temporarily modify sys.argv, which is useful in testing and specific scenarios:

import contextlib
import sys

@contextlib.contextmanager
def temporary_arguments(args):
    """Context manager for temporarily modifying command-line arguments"""
    original_args = sys.argv.copy()
    sys.argv = [sys.argv[0]] + args
    try:
        yield
    finally:
        sys.argv = original_args

# Usage example
with temporary_arguments(["test_arg"]):
    # Within this block, sys.argv contains test arguments
    print(f"Current arguments: {sys.argv}")

4. For performance-sensitive applications, avoid frequently starting new Python processes and consider using process pools or other concurrency mechanisms.
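When child processes cannot be avoided, at least running them concurrently hides much of the per-launch cost. A sketch using a thread pool to drive subprocess.run in parallel (square.py is a hypothetical stand-in written inline; this reduces total wall-clock time rather than the number of processes started):

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Hypothetical child script that squares its single argument
with open("square.py", "w") as f:
    f.write("import sys; print(int(sys.argv[1]) ** 2)\n")

def run_one(value):
    # Each call launches one child; threads let the launches overlap
    proc = subprocess.run(
        [sys.executable, "square.py", str(value)],
        capture_output=True, text=True,
    )
    return int(proc.stdout)

# Run up to four children at a time and collect results in input order
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_one, [0, 1, 2, 3]))

print(results)
```

Threads are sufficient here because each worker spends its time blocked waiting on a child process, not executing Python code.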

Conclusion

There are multiple ways to call one Python script from another while passing arguments, each with its own advantages and disadvantages. Choosing the appropriate method requires consideration of specific application scenarios, performance requirements, and code maintainability. In most cases, using the importlib module to import scripts as modules is the best choice as it promotes code modularization and reuse. For simple command-line tool calls, os.system or subprocess provide direct solutions. Regardless of the chosen method, understanding their working principles and limitations is key to ensuring code quality and performance.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.