Docker storage drivers control how images and containers are stored on your filesystem. They’re the mechanism that lets you create images, start containers, and modify writable layers. Here are the differences between each driver and the situations where they should be used.

What Are Storage Drivers For?

The active storage driver determines how Docker manages your images and containers. The available drivers implement different strategies for handling image layers, and each has its own performance characteristics depending on the storage scenario at hand.

Storage drivers are intrinsically linked to a container’s “writable layer.” This term refers to the topmost level of a container’s filesystem which you can modify by running commands, writing files, and adding software at runtime.

Although persistent Docker container data should always be stored in volumes, changes to the container’s own filesystem are often inevitable. You might be writing temporary files, storing environment variables into a config file, or caching data for later reference.
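For instance, a web server’s content directory belongs in a volume, while anything written elsewhere stays in the writable layer. A quick illustration (the container name, volume name, and paths are arbitrary):

    docker run -d --name web -v site-data:/usr/share/nginx/html nginx:latest
    # Files under /usr/share/nginx/html persist in the "site-data" volume;
    # writes anywhere else, such as /tmp, land in the writable layer and are
    # lost when the container is removed.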

These operations all result in the running container’s filesystem deviating from the one defined by its image. Your choice of storage driver determines how that difference is stored and applied.

What Happens When You Start a Container?

When a new container starts, Docker first pulls the layers of its image, which were created when the image was built from its Dockerfile. The layers are stored on your host machine so you don’t need to pull the image again until you want to fetch updates. As part of the pull process, Docker identifies and reuses layers it already has, avoiding redundant downloads.
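You can see this reuse in docker pull’s output, where layers you already have are reported as “Already exists” instead of being downloaded again (the layer digests below are illustrative):

    docker pull nginx:latest
    # latest: Pulling from library/nginx
    # a2abf6c4d29d: Already exists   <- layer reused from an earlier pull
    # 5e0b432e8ba9: Pull complete    <- layer downloaded fresh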

Once the image layers are available, Docker will launch the container and add an extra layer on top. This is the writable layer that the container can modify. All lower layers are immutable and derived from their Dockerfile definitions.
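You can check how much data a container has written to its writable layer with docker ps --size. The SIZE column reports the writable layer alone, while the “virtual” figure adds in the read-only image layers beneath it (sample output, sizes illustrative):

    docker ps --size
    # CONTAINER ID   IMAGE          ...   SIZE
    # 4e6f8c2aa0b1   nginx:latest   ...   1.09kB (virtual 187MB)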

The writable layer works well, with little overhead, when you’re simply adding files to a container’s filesystem: new files land in the writable layer at the top of the stack. Modifying existing files is more troublesome, though, as they live in lower read-only layers but now need to be written to.

Docker’s approach is “copy-on-write”: the file is copied out of its original layer and into the writable layer at the point of modification. This is an I/O-intensive operation which can lead to performance degradation.
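You can watch this in action with docker diff, which lists the files a container has added (A), changed (C), or deleted (D) relative to its image. A minimal sketch using an arbitrary nginx container:

    docker run -d --name demo nginx:latest
    docker exec demo sh -c 'echo "# tweak" >> /etc/nginx/nginx.conf'
    docker diff demo
    # C /etc/nginx
    # C /etc/nginx/nginx.conf

The modified file was copied up out of its read-only image layer the moment it was written to.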

The individual storage drivers are responsible for implementing copy-on-write support. Each driver offers its own trade-off between performance and disk usage efficiency.

Available Storage Drivers

Docker uses a pluggable storage driver architecture and provides several options by default. Storage drivers don’t affect individual images or containers – you can run any Docker image irrespective of the selected driver.

The active storage driver is a runtime-level setting that’s defined in the Docker daemon’s configuration file. Some storage drivers require special filesystem provisioning before you can use them. You then add your selected storage driver to /etc/docker/daemon.json:
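    {
        "storage-driver": "overlay2"
    }

Restart the daemon with sudo systemctl restart docker to apply the change. Be aware that each driver stores its data separately under /var/lib/docker, so switching drivers hides your existing images and containers until you switch back – pull or rebuild anything you need afterwards.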

You can check your current driver by running docker info | grep "Storage Driver". On most modern systems, it’ll default to overlay2.

Here’s a rundown of the options you can choose between.

overlay2

The overlay2 driver is now the default on all actively supported Linux distributions. It requires an ext4 or xfs backing filesystem; xfs volumes must be formatted with d_type support (ftype=1).

overlay2 offers a good balance between performance and efficiency for copy-on-write operations. When a copy-on-write is needed, the driver searches through the image’s layers to find the right file, starting from the topmost layer. Results are cached to accelerate the process next time.

Once the file’s been found, it’s copied up into the container’s writable layer, where the requested changes are applied. From here on, the container sees only the new copied version of the file; the original in the lower image layer is hidden from it.

overlay2 operates at the file level as opposed to the block level. This makes efficient use of memory, as files shared between layers only need to be cached once, but it can result in large writable layers when many changes are made – even a small change to a big file copies the entire file up.
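You can inspect the layer stack overlay2 has assembled for a container with docker inspect – the read-only image layers appear as the “lower” directories and the writable layer as the “upper” directory (output abbreviated, hashes replaced with placeholders):

    docker inspect --format '{{ json .GraphDriver.Data }}' demo
    # {"LowerDir": "/var/lib/docker/overlay2/<hash>/diff:...",
    #  "MergedDir": "/var/lib/docker/overlay2/<hash>/merged",
    #  "UpperDir": "/var/lib/docker/overlay2/<hash>/diff",
    #  "WorkDir": "/var/lib/docker/overlay2/<hash>/work"}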

Alternatives to this driver include aufs and the older overlay. Neither is recommended on modern Linux distributions where overlay2 is supported.

btrfs and zfs

These two drivers work at the block level and are ideal for write-intensive operations. They each require their respective backing filesystem.

Use of these drivers results in your /var/lib/docker directory being stored on a btrfs or zfs volume. Each image layer gets its own directory in the subvolumes folder. Space is allocated to directories on-demand as it’s needed, keeping disk utilization low until copy-on-write operations occur.

Image base layers are stored as subvolumes on the filesystem. Other layers become snapshots, containing only the differences they introduce. Writable layer modifications are handled at the block level, adding another space-efficient snapshot.

You can create snapshots of subvolumes and other snapshots at any time. These snapshots continue to share unchanged data, minimizing overall storage consumption.

Using one of these drivers can give you a better experience with heavily write-intensive containers. If you’re writing a lot of temporary files or caching many operations on-disk, btrfs or zfs can outperform overlay2. Which one you should use depends on your backing filesystem – generally zfs is preferred as the more modern alternative to btrfs.
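As a sketch of the setup, assuming /var/lib/docker already resides on a ZFS dataset (the pool name below is illustrative), you select the driver in daemon.json, restart Docker, and each layer then appears as its own dataset:

    {
        "storage-driver": "zfs"
    }

    sudo zfs list -r zpool-docker
    # one dataset per image layer, plus one per container writable layer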

fuse-overlayfs

This storage driver provides a way to run Docker in rootless mode on a machine that lacks support for the overlay2 driver. However, as all currently targeted Linux distributions now work with overlay2, fuse-overlayfs is no longer needed or recommended.

This driver works by implementing an overlay filesystem using FUSE. As a user-space filesystem, it works in rootless mode but incurs performance penalties compared to a kernel-level driver like overlay2.
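If you do find yourself needing it, rootless Docker reads its configuration from a per-user file instead of /etc/docker/daemon.json. A minimal sketch, assuming a rootless installation managed by systemd:

    mkdir -p ~/.config/docker
    echo '{"storage-driver": "fuse-overlayfs"}' > ~/.config/docker/daemon.json
    systemctl --user restart docker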

vfs

The vfs driver is included for test purposes only and shouldn’t be used in production. The performance of this driver is documented as poor.

vfs can be useful in some niche scenarios because it doesn’t use copy-on-write at all. Instead, each layer gets its own complete on-disk directory: creating a new layer means making a full “deep copy” of the previous layer’s directory.

Consequently vfs works with all filesystems and benefits from simplicity and ease of inspection. It suffers from being I/O-intensive and prone to causing high disk usage, as each new layer triggers a full copy of the layer below it.
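You can see this layout for yourself on a test daemon configured with the vfs driver – every layer shows up as a complete directory tree under Docker’s data root:

    sudo ls /var/lib/docker/vfs/dir
    # one full directory per layer, each a deep copy of the layer below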

devicemapper

This was once the recommended driver for CentOS and RHEL but it has lost its place to overlay2 on more recent kernel releases. The driver requires a direct-lvm backing filesystem. devicemapper should no longer be used – it’s deprecated and will be removed entirely in the future.

Summary

Docker’s storage drivers are used to manage image layers and the writable portion of a container’s filesystem. Although changes to a container’s filesystem are lost when the container is deleted, they still need to be persisted while the container is running. It’s the storage driver that provides this mechanism.

Each driver possesses a different set of optimizations that makes it more or less suitable for different scenarios. Nowadays overlay2 is the default driver and the recommended option for most workloads, although alternative options like btrfs, zfs, and fuse-overlayfs have some more advanced features and may be required in certain cases.