Docker Build and Run in One Command: Optimizing Development Workflow

Nov 22, 2025 · Programming

Keywords: Docker | Containerization | Image Building | Development Workflow | Dockerfile

Abstract: This article provides an in-depth exploration of single-command solutions for building Docker images and running containers. By analyzing the combination of docker build and docker run commands, it focuses on the integrated approach using image tagging, while comparing the pros and cons of different methods. With comprehensive Dockerfile instruction analysis and practical examples, the article offers best practices to help developers optimize Docker workflows and improve development efficiency.

The Need for Integrated Docker Build and Run Commands

In modern software development, Docker has become the standard tool for containerized deployment. During development, frequent image building and container running are common operations. The traditional two-step approach—first using docker build to create an image, then docker run to start a container—while functionally complete, proves inefficient in rapid iteration development environments.

Core Solution: Build Tagging and Run Combination

The most practical single-command solution chains docker build and docker run with the shell's && operator, so the container starts only when the build succeeds:

docker build -t foo . && docker run -it foo
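The && operator is doing real work here: the run step executes only if the build exits successfully, so a failed build never launches a container from a stale image. The short-circuit behavior can be seen with plain shell commands:

```shell
# '&&' runs the right-hand command only when the left-hand command
# succeeds (exits with status 0).
false && echo "never printed"
true && echo "build succeeded, starting run step"

# ';' by contrast runs the second command unconditionally,
# which could start a container from an outdated image.
false ; echo "printed even after a failure"
```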

The advantages of this approach include:

- The image gets a readable tag (foo), so it remains available for reuse and management after the run.
- Because of &&, the container only starts when the build succeeds, so you never test against a stale image.
- The whole build-test cycle is a single shell-history entry, well suited to rapid iteration.

Dockerfile Key Instruction Analysis

To ensure smooth integrated build-run workflows, deep understanding of Dockerfile core instructions is essential:

FROM Instruction: Build Foundation

The FROM instruction serves as the Dockerfile starting point, defining the base image for build stages. Proper base image selection directly impacts final image size and security:

FROM ubuntu:22.04
# Or using lighter base images
FROM alpine:latest
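Beyond choosing a small base, the image reference itself can be hardened: a tag like latest is mutable and may change between builds, while a digest pins the exact image. A sketch (the digest below is a placeholder, not a real value):

```dockerfile
# Pinning by digest makes rebuilds reproducible; <digest> is a placeholder.
# The real digest can be read with:
#   docker inspect --format '{{index .RepoDigests 0}}' alpine:latest
FROM alpine@sha256:<digest>
```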

RUN Instruction: Executing Build Commands

The RUN instruction executes commands during build and commits results, supporting both shell and exec forms:

# Shell form
RUN apt-get update && apt-get install -y python3

# Exec form
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "python3"]
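One caveat with the exec form: it does not invoke a shell, so shell features such as && chaining and variable expansion are unavailable. When those are needed in exec form, the shell must be called explicitly, as in this sketch:

```dockerfile
# This would NOT chain commands ("&&" is passed to apt-get as an argument):
#   RUN ["apt-get", "update", "&&", "apt-get", "install", "-y", "python3"]
# Invoke a shell explicitly instead:
RUN ["/bin/sh", "-c", "apt-get update && apt-get install -y python3"]
```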

COPY and ADD Instructions: File Copying

COPY and ADD instructions copy files from build context into the image:

# Copy local files
COPY requirements.txt /app/

# Copy and extract archive files
ADD application.tar.gz /app/
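Both instructions read from the build context, which the client sends to the daemon in full, so it is worth excluding files the image does not need. A typical .dockerignore sketch (entries are illustrative):

```
# .dockerignore: paths excluded from the build context
.git
__pycache__/
*.pyc
.env
node_modules/
```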

CMD and ENTRYPOINT: Container Startup Behavior

These two instructions define default execution commands at container startup, crucial for runtime debugging:

# Define default startup command
CMD ["python3", "app.py"]

# Or use ENTRYPOINT to define executable entry
ENTRYPOINT ["python3"]
CMD ["app.py"]
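When both are set, Docker concatenates them: the container runs ENTRYPOINT followed by CMD, and any arguments given to docker run replace CMD while keeping the entry point. A toy shell function (a model of the rule, not Docker itself) makes the behavior concrete:

```shell
# Model of Docker's ENTRYPOINT/CMD combination (illustration only).
entrypoint="python3"
default_cmd="app.py"

effective_command() {
  if [ "$#" -gt 0 ]; then
    echo "$entrypoint $*"            # docker run args replace CMD
  else
    echo "$entrypoint $default_cmd"  # no args: CMD is the default
  fi
}

effective_command           # like: docker run foo
effective_command other.py  # like: docker run foo other.py
```

So docker run foo would execute python3 app.py, while docker run foo other.py would execute python3 other.py.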

Alternative Approach Analysis

Beyond the tagging-based method, another common approach captures the image ID printed by a quiet build (-q) and passes it straight to docker run:

docker run -it $(docker build -q .)

Comparison of these methods' advantages and disadvantages:

<table>
<tr><th>Approach</th><th>Advantages</th><th>Disadvantages</th></tr>
<tr><td>Tagging Approach</td><td>Image reusability, easy management</td><td>Requires manual tag naming</td></tr>
<tr><td>Hash Approach</td><td>Fully automated, no naming needed</td><td>Difficult image tracking, no reusability</td></tr>
</table>

Complete Practical Example

Below is a complete Python web application Dockerfile example demonstrating how to build testable images:

FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Integrated build and run command:

docker build -t my-python-app . && docker run -it -p 8000:8000 my-python-app

Advanced Techniques and Best Practices

Multi-stage Build Optimization

For complex applications, multi-stage builds significantly reduce final image size:

# Build stage: install production dependencies only
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

# Run stage: start from a smaller base and copy in the installed modules.
# Note: packages with native addons compiled against glibc (node:16) may
# not load on the musl-based alpine image; use matching bases in that case.
FROM node:16-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]

Environment Variable Configuration

Proper use of environment variables enhances image flexibility:

ENV NODE_ENV=production
ENV PORT=3000

# Pass arguments during build via --build-arg
ARG BUILD_VERSION=latest
ENV APP_VERSION=${BUILD_VERSION}
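Because ARG values exist only while the image is being built, copying them into ENV (as above) is what preserves them at run time. The override itself goes on the build command line; the version number here is illustrative:

```
docker build --build-arg BUILD_VERSION=1.2.3 -t my-app . && docker run -it my-app
```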

Container Cleanup Optimization

Add the --rm flag to the run command so test containers are removed automatically when they exit:

docker build -t test-app . && docker run --rm -it test-app

Development Workflow Integration

Integrating single commands into development scripts further enhances efficiency:

#!/bin/bash
# dev-test.sh
set -e

echo "Building and testing application..."
docker build -t ${1:-dev-app} . && \
docker run --rm -it -p 8080:8080 ${1:-dev-app}

Usage: ./dev-test.sh my-app
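The ${1:-dev-app} expansion is what makes the tag argument optional: it substitutes dev-app whenever $1 is unset or empty. The behavior can be checked in isolation, without Docker:

```shell
# ${1:-default} falls back to "default" when the first argument is missing.
tag_for() { echo "${1:-dev-app}"; }

tag_for          # no argument: default tag
tag_for my-app   # explicit argument wins
```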

Common Issues and Solutions

Build Cache Problems

Frequent rebuilds can pick up stale cached layers (for example, an outdated apt-get update result); force a clean build with --no-cache:

# Clear build cache
docker build --no-cache -t my-app . && docker run -it my-app

Port Conflict Handling

When the desired host port is already occupied, let Docker assign a free one by publishing only the container port:

docker build -t my-app . && docker run -it -p 8080 my-app

The host port Docker chose can then be looked up with docker port <container-id>.

Conclusion

The integrated build-and-run command, expressed as the docker build -t <tag> . && docker run <tag> pattern, provides an efficient solution for development and testing workflows. Combined with Dockerfile best practices such as proper instruction usage, multi-stage builds, and environment configuration, it significantly improves development efficiency. In real projects, integrating this pattern into CI/CD pipelines yields a comprehensive containerized development and testing setup.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.