Ephemeral volumes are a Kubernetes feature that allows the creation of storage volumes that follow the lifecycle of a pod: when the pod is destroyed, the volume is destroyed with it. This differs from persistent volumes, which are often used to retain data even after the pod is no longer running.
In this article, I will focus on using generic ephemeral volumes on AWS EKS clusters using the Amazon EBS CSI driver.
Use Cases
Ephemeral volumes are not a new concept. Kubernetes has long used ephemeral volumes for certain features such as emptyDir, configMap, downwardAPI, and secrets. These features use the kubelet to manage such volumes using local storage on the node.
Ephemeral volumes are also useful for workloads that need temporary “scratch” space, for processing data or saving large cache files to disk, for example. Using emptyDir to handle such use-cases has the disadvantage of using the node’s local storage capacity. Pods using emptyDir, especially if missing a sizeLimit configuration, can result in the node’s disk utilization becoming too high. This can result in node-pressure eviction and cause interruptions of all pods running on the node as they are rescheduled on different nodes.
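At a minimum, you can bound how much node disk a pod's scratch space consumes by setting sizeLimit on the emptyDir. A sketch (names and sizes below are illustrative) might look like this:

```yaml
# Illustrative only: emptyDir still consumes the node's local disk,
# but sizeLimit caps usage; a pod exceeding the limit is evicted.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-emptydir
spec:
  containers:
    - name: worker
      image: busybox:1.28
      command: [ "sleep", "1000000" ]
      volumeMounts:
        - mountPath: /scratch
          name: scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi
```

This protects the node, but the storage still comes out of the node's local capacity, which is the limitation the rest of this article addresses.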
Without ephemeral volumes, you would need to configure nodes with larger storage capacity (paying for this extra capacity whether used or not) or use StatefulSets to assign per-pod storage.
Of course, you could configure a persistent volume with a “Delete” reclaim policy in a pod specification, but this presents a problem for pods created by Deployments. If a Deployment has multiple replicas, you can easily run into a “Multi-Attach error” when using EBS volumes as PersistentVolumes in a Deployment spec, since those volumes use the ReadWriteOnce access mode: the volume can be mounted read-write by a single node only.
# This will NOT achieve the desired result of a per-pod "scratch" storage volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/var/www/pvc"
              name: mypd
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: myclaim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
When using EBS-backed volumes, the error might look similar to this:
Multi-Attach error for volume "pvc-d01d376e-0fc3-4e99-9e8a-c78eebec3010" Volume is already used by pod(s) nginx-deployment-6f9b9c4c8f-nhmqw
Furthermore, the EBS CSI driver only supports ReadWriteOnce, since EBS volumes are inherently not distributed file systems. You would have to use something like the Amazon EFS CSI driver, which provisions an AWS Elastic File System – a costly and complicated setup. EFS was also designed for shared, persistent storage: to use it as temporary pod “scratch” storage, you would have to add configuration to ensure files are cleaned up after pods terminate. This is a rabbit hole you probably want to avoid.
Ephemeral Volume Types
You may have read the Kubernetes docs on CSI ephemeral volumes and checked the list of CSI Drivers that support CSI ephemeral volumes:

You might have also been disappointed to read that the Amazon Elastic Block Store (EBS) CSI driver only supports persistent volumes. Do not despair, however: you can still use “generic” ephemeral volumes!
Ephemeral volumes come in two varieties:
- CSI “inline” Ephemeral Volumes – Implemented entirely in a CSI driver that supports this mode. The driver manages the complete lifecycle of the volume, creating ephemeral volumes directly, without the need for a PersistentVolume (PV) or PersistentVolumeClaim (PVC). In this regard, they are more efficient and can offer driver-specific configuration options.
- Generic Ephemeral Volumes – Implemented natively within Kubernetes. Kubernetes creates a PersistentVolumeClaim from a template embedded in the pod spec and deletes it along with the pod, so any storage driver that supports dynamic provisioning – including the EBS CSI driver – can be used. In other words, Kubernetes does the “heavy lifting” of managing the volume’s lifecycle, while the driver provisions the storage.
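For contrast, a CSI inline ephemeral volume is declared directly under a csi: key in the pod spec. The sketch below uses a placeholder driver name, since the EBS CSI driver does not support this mode:

```yaml
# Sketch of CSI inline ephemeral volume syntax.
# "inline.csi.example.com" is a hypothetical driver name, not a real driver;
# the EBS CSI driver does NOT support this mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-inline-csi-vol
spec:
  containers:
    - name: app
      image: busybox:1.28
      command: [ "sleep", "1000000" ]
      volumeMounts:
        - mountPath: /data
          name: inline-vol
  volumes:
    - name: inline-vol
      csi:
        driver: inline.csi.example.com
        volumeAttributes:
          size: 1Gi
```

Note there is no PVC anywhere: the driver named under csi: is solely responsible for creating and tearing down the volume.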
You can read more about the support for ephemeral local volumes and their availability in the kubernetes-csi project docs.
Using Generic Ephemeral Volumes with AWS EBS CSI
First, ensure you have a StorageClass installed that uses the EBS CSI provisioner. My output looks like this:
kubectl get storageclass
NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp3 (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   320d
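If your cluster has no such StorageClass, one similar to the gp3 class shown above can be created. The parameters below are typical EBS CSI settings (assumed, not prescriptive), so adjust them for your cluster:

```yaml
# Assumed typical settings for an EBS-backed StorageClass;
# the is-default-class annotation makes it the cluster default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3          # EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

WaitForFirstConsumer matters here: it delays provisioning until the pod is scheduled, so the EBS volume is created in the same availability zone as the node.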
Next, we’ll create a Pod, adding a volume using the generic ephemeral volume syntax:
kind: Pod
apiVersion: v1
metadata:
  name: pod-with-generic-ephemeral-vol
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
        - mountPath: "/scratch"
          name: scratch-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-volume-claim
          spec:
            accessModes: [ "ReadWriteOnce" ]
            # You can specify a storage class explicitly, or use the default
            #storageClassName: "gp3"
            resources:
              requests:
                storage: 1Gi
After creating the pod, examine the PVC and PV in the pod’s namespace to see the creation of the ephemeral volume in action:
kubectl get pvc
NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pod-with-generic-ephemeral-vol-scratch-volume   Bound    pvc-dc7189bc-6585-4e96-b633-64c06e54e7da   1Gi        RWO            gp3            19s
kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                   STORAGECLASS   REASON   AGE
pvc-dc7189bc-6585-4e96-b633-64c06e54e7da   1Gi        RWO            Delete           Bound    default/pod-with-generic-ephemeral-vol-scratch-volume   gp3                     2s
Note the naming convention of the PersistentVolumeClaim, which combines the pod name and volume name. When using a Deployment with multiple replicas, each pod will have a unique name, so this would not cause a naming conflict.
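To illustrate, the earlier Deployment can be rewritten with a generic ephemeral volume (a sketch based on the pod spec above), giving each replica its own EBS-backed scratch volume instead of fighting over a single PVC:

```yaml
# Each replica gets its own PVC named <pod-name>-scratch-volume,
# created and deleted with the pod, so no Multi-Attach error occurs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /var/www/pvc
              name: scratch-volume
      volumes:
        - name: scratch-volume
          ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes: [ "ReadWriteOnce" ]
                resources:
                  requests:
                    storage: 1Gi
```

ReadWriteOnce is no longer a problem here because each volume is attached to exactly one pod, and Kubernetes deletes each PVC when its pod goes away.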
Let’s jump onto the pod and have a look:
kubectl exec -it pod-with-generic-ephemeral-vol -- /bin/sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  80.0G      7.8G     72.2G  10% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.7G         0      7.7G   0% /sys/fs/cgroup
/dev/nvme1n1            973.4M     24.0K    957.4M   0% /scratch
/dev/nvme0n1p1           80.0G      7.8G     72.2G  10% /etc/hosts
/dev/nvme0n1p1           80.0G      7.8G     72.2G  10% /dev/termination-log
/dev/nvme0n1p1           80.0G      7.8G     72.2G  10% /etc/hostname
Our ephemeral “scratch” volume is mounted on /scratch, as configured in the mountPath of our Pod spec.
Finally, let’s destroy the pod to confirm that the volume is also destroyed and follows the lifecycle of the pod:
kubectl delete pod pod-with-generic-ephemeral-vol
pod "pod-with-generic-ephemeral-vol" deleted
kubectl get pvc
No resources found in default namespace.
kubectl get pv
No resources found
Summary
We’ve demonstrated using ephemeral volumes with the AWS EBS CSI driver. I think the official documentation could be a bit clearer about the differences between CSI ephemeral and generic ephemeral volumes, since each is implemented in a different way.
There is a great post on the official Kubernetes blog that does a good job of explaining why the two types of ephemeral volumes evolved, some use cases, and their implementation differences. By the way, ephemeral volumes are also supported on EKS Fargate.