Sharing Storage Between Kubernetes Pods: From Design Patterns to NFS Implementation

Dec 04, 2025 · Programming

Keywords: Kubernetes | Shared Storage | NFS | PersistentVolume | Microservices Architecture

Abstract: This article comprehensively examines the challenges and solutions for sharing storage between pods in Kubernetes clusters. It begins by analyzing design pattern considerations in microservices architecture, highlighting maintenance issues with direct filesystem access. The article then details Kubernetes-supported ReadWriteMany storage types, focusing on NFS as the simplest solution with configuration examples for PersistentVolume and PersistentVolumeClaim. Alternative options like CephFS, GlusterFS, and Portworx are discussed, along with practical deployment recommendations.

In the Kubernetes ecosystem, sharing storage between pods is a common yet challenging requirement. Many developers encounter scenarios where multiple pods need simultaneous read-write access to the same storage volume, such as data sharing between continuous integration servers and application servers. This article systematically explores solutions to this problem from three perspectives: architectural design, technology selection, and practical deployment.

Architectural Considerations: When is Shared Storage Necessary?

In microservices architecture (MSA), data encapsulation is a fundamental principle. Similar to encapsulation in object-oriented programming, each service should manage data within its domain and interact with other services through well-defined interfaces (such as APIs, message queues, or gRPC). Direct filesystem sharing resembles using global variables in traditional programming, potentially leading to maintenance issues and unintended side effects (as described by Hyrum's Law).

For example, in logging scenarios, a better approach is to establish a dedicated logging service that other services call via API, rather than directly writing to a shared filesystem. This makes modifications like log format changes or feature extensions (e.g., sending emails on errors) more controllable.

However, there are indeed scenarios where a filesystem supporting multiple concurrent writers offers better solutions than traditional MSA communication. These typically involve processing large amounts of unstructured data, legacy system integration, or specific performance requirements.

Kubernetes Volume Types and Access Modes

Kubernetes supports various volume types, but not all support multiple pods writing simultaneously. The key distinction lies in access modes, where ReadWriteMany mode allows a single volume to be mounted as read-write by multiple pods on multiple nodes.

Currently supported volume types with ReadWriteMany include:

  - NFS
  - CephFS
  - GlusterFS
  - Azure Files
  - Portworx volumes
  - CSI drivers that implement ReadWriteMany (for example, the AWS EFS CSI driver)

For Google Cloud Platform (GCE/GKE) users, it's important to note that standard GCE persistent disks do not support ReadWriteMany mode, prompting developers to seek alternatives.

NFS: The Simplest Shared Storage Solution

For most use cases, NFS (Network File System) provides the most straightforward and easily implementable shared storage solution. NFS support is an in-tree Kubernetes volume plugin that ships with Kubernetes itself, so no additional storage driver needs to be installed in the cluster.

The basic steps for deploying NFS shared storage are:

  1. Set up an NFS server: This can be deployed internally or externally to the cluster. For testing environments, refer to DigitalOcean's Ubuntu NFS setup tutorial; production environments should use professional NAS devices or cloud-managed services.
  2. Create a PersistentVolume: Define the Kubernetes resource for the NFS storage volume.
  3. Create a PersistentVolumeClaim: Pods claim storage usage through PVCs.
  4. Mount the volume in Deployments/Pods: Attach the PVC to pods requiring shared storage.
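For step 1, the server-side configuration for a test environment can be as small as a single export entry. Below is a minimal sketch of /etc/exports on the NFS server, assuming the export path /exports/shared-data used later in this article and a hypothetical 192.168.1.0/24 node network; adjust both to your environment:

# /etc/exports on the NFS server (example values)
# rw: allow read-write; sync: flush writes to disk before replying;
# no_subtree_check: skip subtree verification on each request
/exports/shared-data  192.168.1.0/24(rw,sync,no_subtree_check)

After editing the file, run exportfs -ra on the server to reload the export table.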

Configuration Example: NFS PersistentVolume and PersistentVolumeClaim

Below is a complete NFS storage configuration example. First, create the PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100  # NFS server IP address
    path: "/exports/shared-data"

Next, create the PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
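One subtlety worth noting: if the cluster has a default StorageClass, a PVC that omits storageClassName may trigger dynamic provisioning instead of binding to the pre-created PV. A common way to force static binding is to set storageClassName to the empty string; pinning the claim to an exact PV with volumeName is optional (a sketch, using the names defined above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  storageClassName: ""        # empty string disables dynamic provisioning
  volumeName: shared-nfs-pv   # optional: pin the claim to this exact PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi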

Finally, mount the volume in a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app-container
        image: myapp:latest
        volumeMounts:
        - mountPath: "/shared-data"
          name: shared-storage
      volumes:
      - name: shared-storage
        persistentVolumeClaim:
          claimName: shared-nfs-pvc

This approach allows multiple pods to simultaneously read and write files in the /shared-data directory.
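To confirm everything is wired together, the binding and shared access can be checked from the command line. A sketch, assuming the manifests above were saved as nfs-pv.yaml, nfs-pvc.yaml, and app-deployment.yaml (hypothetical filenames):

kubectl apply -f nfs-pv.yaml -f nfs-pvc.yaml -f app-deployment.yaml
kubectl get pv shared-nfs-pv     # STATUS should show Bound
kubectl get pvc shared-nfs-pvc
# Write a file from one replica, then read it back from the other
kubectl exec <first-pod> -- sh -c 'echo hello > /shared-data/test.txt'
kubectl exec <second-pod> -- cat /shared-data/test.txt

Here <first-pod> and <second-pod> are the two pod names reported by kubectl get pods -l app=myapp.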

Node Requirements and Considerations

Using NFS volumes requires cluster nodes to have appropriate NFS client tools installed. On most Linux distributions, this typically means installing the nfs-common or nfs-utils packages. For managed Kubernetes services (like GKE, EKS), verify that nodes support NFS connections; some cloud platforms may require using their managed NFS services (e.g., AWS EFS).
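On self-managed nodes, the client packages can be installed with the distribution's package manager, for example:

# Debian/Ubuntu nodes
sudo apt-get update && sudo apt-get install -y nfs-common

# RHEL/CentOS nodes
sudo yum install -y nfs-utils

Without these packages, the kubelet cannot mount the NFS volume and pods will remain stuck in ContainerCreating with mount errors in their events.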

Another important consideration is using subPath to isolate data between different pods. While multiple pods can mount the same volume, specifying different subPath values for each pod prevents file conflicts:

volumeMounts:
- mountPath: "/data"
  name: shared-volume
  subPath: "pod1-data"  # Different subpath for each pod
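A fixed subPath value cannot vary between replicas of a single Deployment, since all replicas share one pod template. Kubernetes' subPathExpr field, combined with the Downward API, can derive a per-pod directory from the pod name instead. A sketch of the relevant container spec:

containers:
- name: app-container
  image: myapp:latest
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - mountPath: "/data"
    name: shared-volume
    subPathExpr: "$(POD_NAME)"  # each replica writes under its own directory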

Alternative Solutions and Advanced Options

Beyond NFS, several other shared storage solutions are worth considering:

  - CephFS: a distributed POSIX filesystem suited to higher-concurrency workloads
  - GlusterFS: a scale-out network filesystem
  - Portworx: a commercial cloud-native storage platform with replication and encryption features
  - Cloud-managed file services such as AWS EFS, Azure Files, and Google Cloud Filestore

For scenarios requiring advanced features (like snapshots, cloning, encryption), consider storage operators like Rook, which simplify deployment and management of complex storage systems like Ceph in Kubernetes.

Practical Recommendations and Best Practices

When selecting a shared storage solution, follow these principles:

  1. Start Simple: For most use cases, NFS provides sufficient performance and reliability with simple configuration.
  2. Evaluate Actual Needs: Carefully analyze whether multiple pods truly need direct filesystem write access or if inter-service communication could suffice.
  3. Consider Performance Requirements: NFS performs well under low concurrency, but high-concurrency write scenarios may require CephFS or GlusterFS.
  4. Plan Data Isolation: Even with shared volumes, appropriately isolate data between services via directory structures or subPath.
  5. Test Failure Recovery: Ensure applications have appropriate fault tolerance mechanisms for NFS server failures.

With proper design and correct technology selection, shared storage in Kubernetes can form a solid foundation for building powerful distributed applications.

Copyright Notice: All rights in this article are reserved by the operators of DevGex. Reasonable sharing and citation are welcome; any reproduction, excerpting, or re-publication without prior permission is prohibited.