1 - Volumes
On-disk files in a container are ephemeral, which presents some problems for non-trivial applications when running in containers. One problem is the loss of files when a container crashes: the kubelet restarts the container, but with a clean state. A second problem occurs when sharing files between containers running together in a Pod. The Kubernetes volume abstraction solves both of these problems.
Background
Docker has a concept of volumes, though it is somewhat looser and less managed. A Docker volume is a directory on disk or in another container. Docker provides volume drivers, but the functionality is somewhat limited.

Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts.

At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.

To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts. Volumes cannot mount within other volumes (but see Using subPath for a related mechanism). Also, a volume cannot contain a hard link to anything in a different volume.

Types of Volumes
Kubernetes supports several types of volumes.

awsElasticBlockStore (deprecated)
FEATURE STATE: An awsElasticBlockStore volume mounts an Amazon Web Services (AWS) EBS volume into your pod. Unlike emptyDir, which is erased when a pod is removed, the contents of an EBS volume are persisted and the volume is unmounted. This means that an EBS volume can be pre-populated with data, and that data can be shared between pods.

There are some restrictions when using an awsElasticBlockStore volume:
- the nodes on which pods are running must be AWS EC2 instances
- those instances need to be in the same region and availability zone as the EBS volume
- EBS only supports a single EC2 instance mounting a volume
Creating an AWS EBS volume
Before you can use an EBS volume with a pod, you need to create it.
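For example, you can create an EBS volume with the AWS CLI (the zone, size, and volume type here are example values you should adjust):

```shell
aws ec2 create-volume --availability-zone=eu-west-1a --size=10 --volume-type=gp2
```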
Make sure the zone matches the zone you brought up your cluster in. Check that the size and EBS volume type are suitable for your use.

AWS EBS configuration example
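A minimal sketch of a Pod that mounts an EBS volume (the Pod name, image, and volume ID are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: "<volume id>"
      fsType: ext4
```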
If the EBS volume is partitioned, you can supply the optional field partition: "<partition number>" to specify which partition to mount on.

AWS EBS CSI migration
FEATURE STATE: The CSIMigration feature for awsElasticBlockStore, when enabled, redirects all plugin operations from the existing in-tree plugin to the ebs.csi.aws.com container storage interface (CSI) driver. In order to use this feature, the AWS EBS CSI driver must be installed on the cluster.

AWS EBS CSI migration complete
FEATURE STATE: To disable the awsElasticBlockStore storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginAWSUnregister flag to true.

azureDisk (deprecated)
FEATURE STATE: The azureDisk volume type mounts a Microsoft Azure Data Disk into a pod. For more details, see the azureDisk volume plugin.
azureDisk CSI migration
FEATURE STATE: The CSIMigration feature for azureDisk, when enabled, redirects all plugin operations from the existing in-tree plugin to the disk.csi.azure.com container storage interface (CSI) driver. In order to use this feature, the Azure Disk CSI driver must be installed on the cluster.

azureDisk CSI migration complete
FEATURE STATE: To disable the azureDisk storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginAzureDiskUnregister flag to true.

azureFile (deprecated)
FEATURE STATE: The azureFile volume type mounts a Microsoft Azure File volume (SMB 2.1 and 3.0) into a pod. For more details, see the azureFile volume plugin.

azureFile CSI migration
FEATURE STATE: The CSIMigration feature for azureFile, when enabled, redirects all plugin operations from the existing in-tree plugin to the file.csi.azure.com container storage interface (CSI) driver. In order to use this feature, the Azure File CSI driver must be installed on the cluster.

The Azure File CSI driver does not support using the same volume with different fsgroups. If Azure File CSI migration is enabled, using the same volume with different fsgroups won't be supported at all.

azureFile CSI migration complete
FEATURE STATE: To disable the azureFile storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginAzureFileUnregister flag to true.

cephfs
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a cephfs volume can be pre-populated with data, and that data can be shared between pods. The cephfs volume can be mounted by multiple writers simultaneously.

See the CephFS example for more details.

cinder (deprecated)
FEATURE STATE: The cinder volume type is used to mount an OpenStack Cinder volume into your pod.

Cinder volume configuration example
OpenStack CSI migration
FEATURE STATE: The CSIMigration feature for Cinder redirects all plugin operations from the existing in-tree plugin to the cinder.csi.openstack.org container storage interface (CSI) driver. The OpenStack Cinder CSI driver must be installed on the cluster.

To disable the in-tree Cinder plugin from being loaded by the controller manager and the kubelet, you can enable the InTreePluginOpenStackUnregister feature gate.

configMap
A ConfigMap provides a way to inject configuration data into pods. The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod.

When referencing a ConfigMap, you provide the name of the ConfigMap in the volume. You can customize the path to use for a specific entry in the ConfigMap. The following configuration shows how to mount a ConfigMap onto a Pod:
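A minimal sketch, assuming a ConfigMap named log-config with a log_level key (both names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: busybox:1.28
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: log-config
        items:
          - key: log_level
            path: log_level
```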
The log-config ConfigMap is mounted as a volume, and all contents stored in its log_level entry are mounted into the Pod at path /etc/config/log_level.

downwardAPI
A downwardAPI volume makes downward API data available to applications. Within the volume, you can find the exposed data as read-only files in plain text format.

See Expose Pod Information to Containers Through Files to learn more.

emptyDir
An emptyDir volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

Some uses for an emptyDir are:
- scratch space, such as for a disk-based merge sort
- checkpointing a long computation for recovery from crashes
- holding files that a content-manager container fetches while a webserver container serves the data
Depending on your environment, emptyDir volumes are stored on whatever medium backs the node, such as disk, SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead.

emptyDir configuration example
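A minimal sketch of a Pod using an emptyDir volume (Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
```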
fc (fibre channel)
An fc volume type allows an existing fibre channel block storage volume to mount in a Pod. You can specify single or multiple target world wide names (WWNs) using the parameter targetWWNs in your Volume configuration. If multiple WWNs are specified, targetWWNs expects that those WWNs are from multi-path connections.

See the fibre channel example for more details.

gcePersistentDisk (deprecated)
FEATURE STATE: A gcePersistentDisk volume mounts a Google Compute Engine (GCE) persistent disk (PD) into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be shared between pods.

There are some restrictions when using a gcePersistentDisk:
- the nodes on which Pods are running must be GCE VMs
- those VMs need to be in the same GCE project and zone as the persistent disk
One feature of GCE persistent disk is concurrent read-only access to a persistent disk. A gcePersistentDisk volume permits multiple consumers to simultaneously mount a persistent disk as read-only. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

Using a GCE persistent disk with a Pod controlled by a ReplicaSet will fail unless the PD is read-only or the replica count is 0 or 1.

Creating a GCE persistent disk
Before you can use a GCE persistent disk with a Pod, you need to create it.
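For example, using the gcloud CLI (disk name, size, and zone are example values):

```shell
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
```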
GCE persistent disk configuration example
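A minimal sketch of a Pod that mounts the disk created above (Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
```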
Regional persistent disks
The Regional persistent disks feature allows the creation of persistent disks that are available in two zones within the same region. In order to use this feature, the volume must be provisioned as a PersistentVolume; referencing the volume directly from a pod is not supported.

Manually provisioning a Regional PD PersistentVolume
Dynamic provisioning is possible using a StorageClass for GCE PD. Before creating a PersistentVolume, you must create the persistent disk:
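For example, with the gcloud CLI (region and replica zones are example values):

```shell
gcloud compute disks create --size=500GB my-data-disk \
  --region us-central1 \
  --replica-zones us-central1-a,us-central1-b
```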
Regional persistent disk configuration example
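A sketch of a PersistentVolume for a regional PD, using node affinity to pin the volume to the two replica zones (names and zones are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: data-disk
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-central1-a
          - us-central1-b
```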
GCE CSI migration
FEATURE STATE: The CSIMigration feature for GCE PD, when enabled, redirects all plugin operations from the existing in-tree plugin to the pd.csi.storage.gke.io container storage interface (CSI) driver. In order to use this feature, the GCE PD CSI driver must be installed on the cluster.

GCE CSI migration complete
FEATURE STATE: To disable the gcePersistentDisk storage plugin from being loaded by the controller manager and the kubelet, set the InTreePluginGCEUnregister flag to true.

gitRepo (deprecated)
A gitRepo volume is an example of a volume plugin. This plugin mounts an empty directory and clones a git repository into this directory for your Pod to use.

Here is an example of a gitRepo volume:
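A sketch of such a Pod (repository URL and revision are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /mypath
      name: git-volume
  volumes:
  - name: git-volume
    gitRepo:
      repository: "git@somewhere:me/my-git-repository.git"
      revision: "22f1d8406d464b0c0874075539c1f2e96c253775"
```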
glusterfs (deprecated)
FEATURE STATE: A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be shared between pods. GlusterFS can be mounted by multiple writers simultaneously.

See the GlusterFS example for more details.

hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for a hostPath are:
- running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a container; use a hostPath of /sys
- allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as
In addition to the required path property, you can optionally specify a type for a hostPath volume.

The supported values for field type are:
- "" (empty string, default): no checks will be performed before mounting the hostPath volume
- DirectoryOrCreate: if nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with kubelet
- Directory: a directory must exist at the given path
- FileOrCreate: if nothing exists at the given path, an empty file will be created there as needed with permission set to 0644, having the same group and ownership with kubelet
- File: a file must exist at the given path
- Socket: a UNIX socket must exist at the given path
- CharDevice: a character device must exist at the given path
- BlockDevice: a block device must exist at the given path
Watch out when using this type of volume, because:
- HostPaths can expose privileged system credentials (such as for the kubelet) or privileged APIs (such as container runtime sockets), which can be used for container escape or to attack other parts of the cluster
- Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
- The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume
hostPath configuration example
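A minimal sketch (Pod name, image, and host path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on the host
      path: /data
      # this field is optional
      type: Directory
```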
hostPath FileOrCreate configuration example
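A sketch showing FileOrCreate together with DirectoryOrCreate, so the parent directory is created before the file (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: registry.k8s.io/test-webserver:latest
    volumeMounts:
    - mountPath: /var/local/aaa
      name: mydir
    - mountPath: /var/local/aaa/1.txt
      name: myfile
  volumes:
  - name: mydir
    hostPath:
      # Ensure the file directory is created first.
      path: /var/local/aaa
      type: DirectoryOrCreate
  - name: myfile
    hostPath:
      path: /var/local/aaa/1.txt
      type: FileOrCreate
```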
iscsi
An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an iscsi volume are preserved and the volume is merely unmounted. This means that an iscsi volume can be pre-populated with data, and that data can be shared between pods.

A feature of iSCSI is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

See the iSCSI example for more details.

local
A local volume represents a mounted local storage device such as a disk, partition or directory.

Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported.

Compared to hostPath volumes, local volumes are used in a durable and portable manner without manually scheduling pods to nodes. The system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.

However, local volumes are subject to the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, then the local volume becomes inaccessible by the pod, and the pod using this volume is unable to run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.

The following example shows a PersistentVolume using a local volume and nodeAffinity:
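A sketch of such a PersistentVolume (names, path, and hostname are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
```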
You must set a PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.

PersistentVolume volumeMode can be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device.

When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer. Delaying volume binding ensures that the PersistentVolumeClaim binding decision will also be evaluated with any other node constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.

An external static provisioner can be run separately for improved management of the local volume lifecycle. Note that this provisioner does not support dynamic provisioning yet. For an example on how to run an external local provisioner, see the local volume provisioner user guide.

nfs
An nfs volume allows an existing NFS (Network File System) share to be mounted into a Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an nfs volume are preserved and the volume is merely unmounted. This means that an NFS volume can be pre-populated with data, and that data can be shared between pods. NFS can be mounted by multiple writers simultaneously.
See the NFS example for an example of mounting NFS volumes with PersistentVolumes.

persistentVolumeClaim
A persistentVolumeClaim volume is used to mount a PersistentVolume into a Pod. PersistentVolumeClaims are a way for users to "claim" durable storage (such as a GCE PersistentDisk or an iSCSI volume) without knowing the details of the particular cloud environment.

See the information about PersistentVolumes for more details.

portworxVolume (deprecated)
FEATURE STATE: A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare metal Linux nodes.

A portworxVolume can be dynamically created through Kubernetes or it can also be pre-provisioned and referenced inside a Pod.
For more details, see the Portworx volume examples.

Portworx CSI migration
FEATURE STATE: The CSIMigration feature for Portworx, when enabled, redirects all plugin operations from the existing in-tree plugin to the pxd.portworx.com container storage interface (CSI) driver. The Portworx CSI driver must be installed on the cluster.

projected
A projected volume maps several existing volume sources into the same directory. For more details, see projected volumes.

rbd
An rbd volume allows a Rados Block Device (RBD) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a pod is removed, the contents of an rbd volume are preserved and the volume is merely unmounted. This means that an RBD volume can be pre-populated with data, and that data can be shared between pods.

A feature of RBD is that it can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a volume with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, RBD volumes can only be mounted by a single consumer in read-write mode. Simultaneous writers are not allowed.

See the RBD example for more details.

RBD CSI migration
FEATURE STATE: The CSIMigration feature for RBD, when enabled, redirects all plugin operations from the existing in-tree plugin to the rbd.csi.ceph.com CSI driver. In order to use this feature, the Ceph CSI driver must be installed on the cluster.

secret
A secret volume is used to pass sensitive information, such as passwords, to Pods. You can store secrets in the Kubernetes API and mount them as files for use by pods without coupling to Kubernetes directly. secret volumes are backed by tmpfs (a RAM-backed filesystem) so they are never written to non-volatile storage.

For more details, see Configuring Secrets.

vsphereVolume (deprecated)
A vsphereVolume is used to mount a vSphere VMDK volume into your Pod. The contents of a volume are preserved when it is unmounted. It supports both VMFS and VSAN datastores.

For more information, see the vSphere volume examples.

vSphere CSI migration
FEATURE STATE: In-tree vsphereVolume operations are redirected to the csi.vsphere.vmware.com CSI driver. The vSphere CSI driver must be installed on the cluster. You can find additional advice on how to migrate in-tree vsphereVolume in VMware's documentation.

As of Kubernetes v1.25, vSphere releases less than 7.0u2 are not supported for the (deprecated) in-tree vSphere storage driver. You must run vSphere 7.0u2 or later in order to either continue using the deprecated driver, or to migrate to the replacement CSI driver. If you are running a version of Kubernetes other than v1.25, consult the documentation for that version of Kubernetes.

vSphere CSI migration complete
FEATURE STATE: To turn off the vsphereVolume plugin from being loaded by the controller manager and the kubelet, set the InTreePluginvSphereUnregister feature flag to true. You must install a csi.vsphere.vmware.com CSI driver on all worker nodes.

Using subPath
Sometimes, it is useful to share one volume for multiple uses in a single pod. The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root.

The following example shows how to
configure a Pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. This sample subPath configuration is not recommended for production use.

The PHP application's code and assets map to the volume's html folder and the MySQL database is stored in the volume's mysql folder. For example:
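A sketch of such a Pod (names, images, and the password are illustrative placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "rootpasswd"
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: php:7.0-apache
    volumeMounts:
    - mountPath: /var/www/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    persistentVolumeClaim:
      claimName: my-lamp-site-data
```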
Using subPath with expanded environment variables
FEATURE STATE: Use the subPathExpr field to construct subPath directory names from downward API environment variables. The subPath and subPathExpr properties are mutually exclusive.

In this example, a Pod uses subPathExpr to create a directory pod1 within the hostPath volume /var/log/pods, using the pod name from the downward API:
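A sketch of such a Pod (the host directory /var/log/pods/pod1 ends up mounted at /logs in the container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: busybox:1.28
    command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      # The variable expansion uses round brackets (not curly brackets).
      subPathExpr: $(POD_NAME)
  restartPolicy: Never
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
```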
Resources
The storage media (such as Disk or SSD) of an emptyDir volume is determined by the medium of the filesystem holding the kubelet root dir (typically /var/lib/kubelet). There is no limit on how much space an emptyDir or hostPath volume can consume, and no isolation between containers or between pods.
To learn about requesting space using a resource specification, see how to manage resources.

Out-of-tree volume plugins
The out-of-tree volume plugins include Container Storage Interface (CSI), and also FlexVolume (which is deprecated). These plugins enable storage vendors to create custom storage plugins without adding their plugin source code to the Kubernetes repository.

Previously, all volume plugins were "in-tree". The "in-tree" plugins were built, linked, compiled, and shipped with the core Kubernetes binaries. This meant that adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.

Both CSI and FlexVolume allow volume plugins to be developed independent of the Kubernetes code base, and deployed (installed) on Kubernetes clusters as extensions. For storage vendors looking to create an out-of-tree volume plugin, please refer to the volume plugin FAQ.

csi
Container Storage Interface (CSI) defines a standard interface for container orchestration systems (like Kubernetes) to expose arbitrary storage systems to their container workloads.

Please read the CSI design proposal for more information.

Once a CSI compatible volume driver is deployed on a Kubernetes cluster, users may use the csi volume type to attach or mount the volumes exposed by the CSI driver.

A csi volume can be used in a Pod in three different ways:
- through a reference to a PersistentVolumeClaim
- with a generic ephemeral volume
- with a CSI ephemeral volume if the driver supports that
The following fields are available to storage administrators to configure a CSI persistent volume:
- driver: a string value that specifies the name of the volume driver to use
- volumeHandle: a string value that uniquely identifies the volume name returned from the CSI volume plugin's CreateVolume call
- readOnly: an optional boolean value indicating whether the volume is to be "ControllerPublished" (attached) as read only; default is false
- fsType: if the PV's volumeMode is Filesystem, this field may be used to specify the filesystem that should be used to mount the volume
- volumeAttributes: a map of string to string that specifies static properties of a volume
- controllerPublishSecretRef: a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI ControllerPublishVolume and ControllerUnpublishVolume calls
- nodeStageSecretRef: a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodeStageVolume call
- nodePublishSecretRef: a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume call
CSI raw block volume support
FEATURE STATE:
Vendors with external CSI drivers can implement raw block volume support in Kubernetes workloads. You can set up your PersistentVolume/PersistentVolumeClaim with raw block volume support as usual, without any CSI specific changes.

CSI ephemeral volumes
FEATURE STATE: You can directly configure CSI volumes within the Pod specification. Volumes specified in this way are ephemeral and do not persist across pod restarts. See Ephemeral Volumes for more information.

For more information on how to develop a CSI driver, refer to the kubernetes-csi documentation.

Windows CSI proxy
FEATURE STATE: CSI node plugins need to perform various privileged operations like scanning of disk devices and mounting of file systems. These operations differ for each host operating system. For Linux worker nodes, containerized CSI node plugins are typically deployed as privileged containers. For Windows worker nodes, privileged operations for containerized CSI node plugins are supported using csi-proxy, a community-managed, stand-alone binary that needs to be pre-installed on each Windows node.

For more details, refer to the deployment guide of the CSI plugin you wish to deploy.

Migrating to CSI drivers from in-tree plugins
FEATURE STATE: The CSIMigration feature directs operations against existing in-tree plugins to corresponding CSI plugins (which are expected to be installed and configured). As a result, operators do not have to make any configuration changes to existing Storage Classes, PersistentVolumes or PersistentVolumeClaims (referring to in-tree plugins) when transitioning to a CSI driver that supersedes an in-tree plugin.

The operations and features that are supported include: provisioning/delete, attach/detach, mount/unmount and resizing of volumes.

In-tree plugins that support CSIMigration and have a corresponding CSI driver implemented are listed in Types of Volumes.
The following in-tree plugins support persistent storage on Windows nodes:
- awsElasticBlockStore
- azureDisk
- azureFile
- gcePersistentDisk
- vsphereVolume
flexVolume (deprecated)
FEATURE STATE: FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. The FlexVolume driver binaries must be installed in a pre-defined volume plugin path on each node, and in some cases the control plane nodes as well.

Pods interact with FlexVolume drivers through the flexVolume in-tree volume plugin.

The following FlexVolume plugins, deployed as PowerShell scripts on the host, support Windows nodes:
- SMB
- iSCSI
Mount propagation
Mount propagation allows for sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.

Mount propagation of a volume is controlled by the mountPropagation field in containers[*].volumeMounts. Its values are:
- None: this volume mount will not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container will be visible on the host. This is the default mode.
- HostToContainer: this volume mount will receive all subsequent mounts that are mounted to this volume or any of its subdirectories.
- Bidirectional: this volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the container will be propagated back to the host and to all containers of all pods that use the same volume. Bidirectional mount propagation can be dangerous: it can damage the host operating system, and therefore it is allowed only in privileged containers.
Configuration
Before mount propagation can work properly on some deployments (CoreOS, RedHat/Centos, Ubuntu), mount share must be configured correctly in Docker as shown below.

Edit your Docker's systemd service file and set MountFlags=shared. Or, remove MountFlags=slave if present. Then restart the Docker daemon.
What's next
Follow an example of deploying WordPress and MySQL with Persistent Volumes.

2 - Persistent Volumes
This document describes persistent volumes in Kubernetes. Familiarity with volumes is suggested.

Introduction
Managing storage is a distinct problem from managing compute instances. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).

While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.

See the detailed walkthrough with working examples.

Lifecycle of a volume and claim
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs follows this lifecycle:

Provisioning
There are two ways PVs may be provisioned: statically or dynamically.

Static
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

Dynamic
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This
provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves.

To enable dynamic storage provisioning based on storage class, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server.

Binding
A user creates, or in the case of dynamic provisioning, has already created, a PersistentVolumeClaim with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.

Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.

Using
Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a Pod. For volumes that support multiple access modes, the user specifies which mode is desired when using their claim as a volume in a Pod.

Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods
and access their claimed PVs by including a persistentVolumeClaim section in a Pod's volumes block. See Claims As Volumes for more details on this.

Storage Object in Use Protection
The purpose of the Storage Object in Use Protection feature is to ensure that PersistentVolumeClaims (PVCs) in active use by a Pod and PersistentVolumes (PVs) that are bound to PVCs are not removed from the system, as this may result in data loss.

If a user deletes a PVC in active use by a Pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any Pods. Also, if an admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC.

You can see that a PVC is protected when the PVC's status is Terminating and the Finalizers list includes kubernetes.io/pvc-protection.
You can see that a PV is protected when the PV's status is Terminating and the Finalizers list includes kubernetes.io/pv-protection.
Reclaiming
When a user is done with their volume, they can delete the PVC objects from the API that allows reclamation of the resource. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. Currently, volumes can either be Retained, Recycled, or Deleted.

Retain
The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps:

1. Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
2. Manually clean up the data on the associated storage asset accordingly.
3. Manually delete the associated storage asset.
If you want to reuse the same storage asset, create a new PersistentVolume with the same storage asset definition.

Delete
For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.
Recycle
If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.

However, an administrator can configure a
custom recycler Pod template using the Kubernetes controller manager command line arguments as described in the reference. The custom recycler Pod template must contain a volumes specification.
However, the particular path specified in the custom recycler Pod template in the volumes part is replaced with the particular path of the volume that is being recycled.

PersistentVolume deletion protection finalizer
FEATURE STATE: Finalizers can be added on a PersistentVolume to ensure that PersistentVolumes having a Delete reclaim policy are deleted only after the backing storage is deleted.

The newly introduced finalizers kubernetes.io/pv-controller and external-provisioner.volume.kubernetes.io/finalizer are only added to dynamically provisioned volumes.

The finalizer kubernetes.io/pv-controller is added to in-tree plugin volumes.
The finalizer external-provisioner.volume.kubernetes.io/finalizer is added for CSI volumes.
When the CSIMigration feature is enabled for a specific in-tree volume plugin, the kubernetes.io/pv-controller finalizer is replaced by the external-provisioner.volume.kubernetes.io/finalizer finalizer.

Reserving a PersistentVolume
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.

By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.

The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
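A sketch of such a pre-bound claim (names and namespace are illustrative; the elided spec fields stay elided):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: foo-pvc
  namespace: foo
spec:
  storageClassName: "" # Empty string must be explicitly set otherwise default StorageClass will be set
  volumeName: foo-pv
  ...
```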
This method does not guarantee any binding privileges to the PersistentVolume. If other PersistentVolumeClaims could use the PV that you specify, you
first need to reserve that storage volume. Specify the relevant PersistentVolumeClaim in the claimRef field of the PV so that other PVCs can not bind to it.
This is useful if you want to consume PersistentVolumes that have their reclaim policy set to Retain, including cases where you are reusing an existing PV.

Expanding Persistent Volumes Claims
FEATURE STATE: Support for expanding PersistentVolumeClaims (PVCs) is enabled by default. You can expand the following types of volumes:
- azureDisk
- azureFile
- awsElasticBlockStore
- cinder (deprecated)
- csi
- flexVolume (deprecated)
- gcePersistentDisk (deprecated)
- glusterfs (deprecated)
- rbd
- portworxVolume (deprecated)
You can only expand a PVC if its storage class's allowVolumeExpansion field is set to true.
To request a larger volume for a PVC, edit the PVC object and specify a larger size. This triggers expansion of the volume that backs the underlying PersistentVolume. A new PersistentVolume is never created to satisfy the claim. Instead, an existing volume is resized.

CSI Volume expansion
FEATURE STATE: Support for expanding CSI volumes is enabled by default but it also requires a specific CSI driver to support volume expansion. Refer to documentation of the specific CSI driver for more information.

Resizing a volume containing a file system
You can only resize volumes containing a file system if the file system is XFS, Ext3, or Ext4.

When
a volume contains a file system, the file system is only resized when a new Pod is using the PersistentVolumeClaim in ReadWrite mode. File system expansion is either done when a Pod is starting up or when a Pod is running and the underlying file system supports online expansion.

FlexVolumes (deprecated since Kubernetes v1.23) allow resize if the driver is configured with the RequiresFSResize capability set to true. The FlexVolume can be resized on Pod restart.

Resizing an in-use PersistentVolumeClaim
FEATURE STATE: In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC. Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded. This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod that uses the PVC before the expansion can complete.

Similar to other volume types, FlexVolume volumes can also be expanded when in-use by a Pod.

Recovering from Failure when Expanding Volumes
If a user specifies a new size that is too big to be satisfied by the underlying storage system, expansion of the PVC will be continuously retried until the user or cluster administrator takes some action. This can be undesirable, and hence Kubernetes provides the following methods of recovering from such failures.
If expanding underlying storage fails, the cluster administrator can manually recover the Persistent Volume Claim (PVC) state and cancel the resize requests. Otherwise, the resize requests are continuously retried by the controller without administrator intervention:

1. Mark the PersistentVolume (PV) that is bound to the PVC with the Retain reclaim policy.
2. Delete the PVC. Since the PV has the Retain reclaim policy, no data is lost when the PVC is recreated.
3. Delete the claimRef entry from the PV spec, so that a new PVC can bind to it. This should make the PV Available.
4. Re-create the PVC with a smaller size than the PV and set the volumeName field of the PVC to the name of the PV. This should bind the new PVC to the existing PV.
5. Don't forget to restore the reclaim policy of the PV.
FEATURE STATE: If the feature gates ExpandPersistentVolumes and RecoverVolumeExpansionFailure are both enabled in your cluster, and expansion has failed for a PVC, you can retry expansion with a smaller size than the previously requested value. To request a new expansion attempt with a smaller proposed size, edit .spec.resources for that PVC and choose a value that is less than the value you previously tried.

Note that, although you can specify a lower amount of storage than what was requested previously, the new value must still be higher than .status.capacity.

Types of Persistent Volumes
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins:
- csi - Container Storage Interface (CSI)
- fc - Fibre Channel (FC) storage
- hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)
- iscsi - iSCSI (SCSI over IP) storage
- local - local storage devices mounted on nodes
- nfs - Network File System (NFS) storage
The following types of PersistentVolume are deprecated. This means that support is still available but will be removed in a future Kubernetes release:
- awsElasticBlockStore - AWS Elastic Block Store (EBS)
- azureDisk - Azure Disk
- azureFile - Azure File
- cinder - Cinder (OpenStack block storage)
- flexVolume - FlexVolume
- gcePersistentDisk - GCE Persistent Disk
- glusterfs - Glusterfs volume
- portworxVolume - Portworx volume
- rbd - Rados Block Device (RBD) volume
- vsphereVolume - vSphere VMDK volume
Older versions of Kubernetes also supported the following in-tree PersistentVolume types:
- photonPersistentDisk - Photon controller persistent disk (not available starting v1.15)
- scaleIO - ScaleIO volume (not available starting v1.21)
- flocker - Flocker storage (not available starting v1.25)
- quobyte - Quobyte volume (not available starting v1.25)
- storageos - StorageOS volume (not available starting v1.25)
Persistent Volumes
Each PV contains a spec and status, which is the specification and status of the volume. The name of a PersistentVolume object must be a valid DNS subdomain name.
Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's capacity attribute.

Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.

Volume Mode
FEATURE STATE: Kubernetes supports two volumeModes of PersistentVolumes: Filesystem and Block.
A volume with volumeMode: Filesystem is mounted into Pods into a directory. If the volume is backed by a block device and the device is empty, Kubernetes creates a filesystem on the device before mounting it for the first time.

You can set the value of volumeMode to Block to use a volume as a raw block device. Such a volume is presented into a Pod as a block device, without any filesystem on it. This mode is useful to provide a Pod the fastest possible way to access a volume, without any filesystem layer between the Pod and the volume. On the other hand, the application running in the Pod must know how to handle a raw block device. See Raw Block Volume Support for an example of how to use a volume with volumeMode: Block in a Pod.

Access Modes
A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.

The access modes are:

ReadWriteOnce
the volume can be mounted as read-write by a single node. ReadWriteOnce access mode still can allow multiple pods to access the volume when the pods are running on the same node.

ReadOnlyMany
the volume can be mounted as read-only by many nodes.

ReadWriteMany
the volume can be mounted as read-write by many nodes.

ReadWriteOncePod
the volume can be mounted as read-write by a single Pod. Use ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read that PVC or write to it. This is only supported for CSI volumes and Kubernetes version 1.22+.

The blog article Introducing Single Pod Access Mode for PersistentVolumes covers this in more detail.

In the CLI, the access modes are abbreviated to:
- RWO - ReadWriteOnce
- ROX - ReadOnlyMany
- RWX - ReadWriteMany
- RWOP - ReadWriteOncePod
Class
A PV can have a class, which is specified by setting the storageClassName attribute to the name of a StorageClass. A PV of a particular class can only be bound to PVCs requesting that class. A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release.

Reclaim Policy
Current reclaim policies are:
- Retain - manual reclamation
- Recycle - basic scrub (rm -rf /thevolume/*)
- Delete - associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.

Mount Options
A Kubernetes administrator can specify additional mount options for when a Persistent Volume is mounted on a node.

The following volume types support mount options:
- awsElasticBlockStore
- azureDisk
- azureFile
- cephfs
- cinder (deprecated)
- gcePersistentDisk
- glusterfs
- iscsi
- nfs
- rbd
- vsphereVolume
Mount options are not validated. If a mount option is invalid, the mount fails.

In the past, the annotation volume.beta.kubernetes.io/mount-options was used instead of the mountOptions attribute. This annotation is still working; however, it will become fully deprecated in a future Kubernetes release.

Node Affinity
A PV can specify node affinity to define constraints that limit what nodes this volume can be accessed from. Pods that use a PV will only be scheduled to nodes that
are selected by the node affinity. To specify node affinity, set nodeAffinity in the .spec of a PV. The PersistentVolume API reference has more details on this field.

Phase
A volume will be in one of the following phases:
- Available - a free resource that is not yet bound to a claim
- Bound - the volume is bound to a claim
- Released - the claim has been deleted, but the resource is not yet reclaimed by the cluster
- Failed - the volume has failed its automatic reclamation
The CLI will show the name of the PVC bound to the PV.

PersistentVolumeClaims
Each PVC contains a spec and status, which is the specification and status of the claim. The name of a PersistentVolumeClaim object must be a valid DNS subdomain name.
Access Modes
Claims use the same conventions as volumes when requesting storage with specific access modes.

Volume Modes
Claims use the same convention as volumes to indicate the consumption of the volume as either a filesystem or block device.

Resources
Claims, like Pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to both volumes and claims.

Selector
Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:
- matchLabels - the volume must have a label with this value
- matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
All of the requirements, from both matchLabels and matchExpressions, are ANDed together: they must all be satisfied in order to match.

Class
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.

PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class. A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
See retroactive default StorageClass assignment for more details.

Depending on installation method, a default StorageClass may be deployed to a Kubernetes cluster by addon manager during installation.

When a PVC specifies a selector in addition to requesting a StorageClass, the requirements are ANDed together: only a PV of the requested class and with the requested labels may be bound to the PVC.

In the past, the annotation volume.beta.kubernetes.io/storage-class was used instead of the storageClassName attribute. This annotation is still working; however, it won't be supported in a future Kubernetes release.

Retroactive default StorageClass assignment
FEATURE STATE: You can create a
PersistentVolumeClaim without specifying a storageClassName for the new PVC, and it remains unset until a default StorageClass becomes available.

When a default StorageClass becomes available, the control plane identifies any existing PVCs without storageClassName. For the PVCs that either have an empty value for storageClassName or do not have this key, the control plane then updates those PVCs to set storageClassName to match the new default StorageClass.

In order to keep binding to PVs with storageClassName set to "" (while a default StorageClass is present), you need to set the storageClassName of the associated PVC to "".

This behavior helps administrators change the default StorageClass by removing the old one first and then creating or setting another one. This brief window while there is no default causes PVCs without storageClassName created at that time to not have any default, but due to the retroactive default StorageClass assignment this way of changing defaults is safe.

Claims As Volumes
Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the Pod using the claim. The cluster finds the claim in the Pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is then mounted to the host and into the Pod.
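A minimal sketch of a Pod consuming a claim as a volume (Pod, claim, and mount names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```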
A Note on Namespaces
PersistentVolume binds are
exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (ROX, RWX) is only possible within one namespace.

PersistentVolumes typed hostPath
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage. See an example of hostPath typed volume.

Raw Block Volume Support
FEATURE STATE: The following volume plugins support raw block volumes, including dynamic provisioning where applicable:
- awsElasticBlockStore (deprecated)
- azureDisk (deprecated)
- cinder (deprecated)
- csi
- fc (fibre channel)
- gcePersistentDisk (deprecated)
- iscsi
- local volume
- rbd (deprecated)
- vsphereVolume
PersistentVolume using a Raw Block Volume
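A sketch of such a PersistentVolume, here backed by fibre channel (names and WWN are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  persistentVolumeReclaimPolicy: Retain
  fc:
    targetWWNs: ["50060e801049cfd1"]
    lun: 0
    readOnly: false
```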
PersistentVolumeClaim requesting a Raw Block Volume
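A minimal sketch of the matching claim (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
```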
Pod specification adding Raw Block Device path in container
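A sketch of a Pod consuming the block volume via volumeDevices rather than volumeMounts (names, image, and device path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: [ "tail -f /dev/null" ]
      volumeDevices:
        # devicePath is where the raw block device appears inside the container
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc
```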
Binding Block Volumes
If a user requests a raw block volume by indicating this using the volumeMode field in the PersistentVolumeClaim spec, the binding rules differ slightly from previous releases that didn't consider this mode as part of the spec. An unspecified volumeMode defaults to Filesystem, and a PV and PVC bind only when their effective volume modes match (Filesystem with Filesystem, Block with Block).
Volume Snapshot and Restore Volume from Snapshot Support
FEATURE STATE: Volume snapshots only support the out-of-tree CSI volume plugins. For details, see Volume Snapshots. In-tree volume plugins are deprecated. You can read about the deprecated volume plugins in the Volume Plugin FAQ.

Create a PersistentVolumeClaim from a Volume Snapshot
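A sketch of a claim restored from a snapshot via dataSource (names and storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restore-pvc
spec:
  storageClassName: csi-hostpath-sc
  dataSource:
    name: new-snapshot-test
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```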
Volume Cloning
Volume Cloning is only available for CSI volume plugins.

Create PersistentVolumeClaim from an existing PVC
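A sketch of a claim cloned from another PVC via dataSource (names and storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: my-csi-plugin
  dataSource:
    name: existing-src-pvc-name
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```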
Volume populators and data sources
FEATURE STATE: Kubernetes supports custom volume populators. To use custom volume populators, you must enable the AnyVolumeDataSource feature gate for the kube-apiserver and kube-controller-manager.

Volume populators take advantage of a PVC spec field called dataSourceRef. Unlike the dataSource field, which can only contain either a reference to another PersistentVolumeClaim or to a VolumeSnapshot, the dataSourceRef field can contain a reference to any object in the same namespace, except for core objects other than PVCs. For clusters that have the feature gate enabled, use of the dataSourceRef is preferred over dataSource.

Data source references
The dataSourceRef field behaves almost the same as the dataSource field. If either one is specified while the other is not, the API server will give both fields the same value.

There are two differences between the dataSource field and the dataSourceRef field:
- The dataSource field ignores invalid values (as if the field was blank), while the dataSourceRef field never ignores values and will cause an error if an invalid value is used.
- The dataSourceRef field may contain different types of objects, while the dataSource field only allows PVCs and VolumeSnapshots.

Users should always use dataSourceRef on clusters that have the feature gate enabled, and fall back to dataSource on clusters that do not.

Using volume populators
Volume populators are controllers that can create non-empty volumes, where the contents of the volume are determined by a Custom Resource. Users create a populated volume by referring to a Custom Resource using the dataSourceRef field.
Because volume populators are external components, attempts to create a PVC that uses one can fail if not all the correct components are installed. External controllers should generate events on the PVC to provide feedback on the status of the creation, including warnings if the PVC cannot be created due to some missing component. You can install the alpha volume data source validator controller into your cluster. That controller generates warning Events on a PVC in the case that no populator is registered to handle that kind of data source. When a suitable populator is installed for a PVC, it's the responsibility of that populator controller to report Events that relate to volume creation and issues during the process. Writing Portable ConfigurationIf you're writing configuration templates or examples that run on a wide range of clusters and need persistent storage, it is recommended that you use the following pattern:
What's next
API references
Read about the APIs described in this page:
- PersistentVolume
- PersistentVolumeClaim
3 - Projected Volumes
This document describes projected volumes in Kubernetes. Familiarity with volumes is suggested.

Introduction
A projected volume maps several existing volume sources into the same directory.

Currently, the following types of volume sources can be projected:
- secret
- downwardAPI
- configMap
- serviceAccountToken
All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.

Example configuration with a secret, a downwardAPI, and a configMap
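A sketch of such an all-in-one projected volume (Pod, secret, and ConfigMap names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox:1.28
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
            - key: username
              path: my-group/my-username
      - downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "cpu_limit"
              resourceFieldRef:
                containerName: container-test
                resource: limits.cpu
      - configMap:
          name: myconfigmap
          items:
            - key: config
              path: my-group/my-config
```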
Example configuration: secrets with a non-default permission mode set
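A sketch with an explicit per-projection mode (secret names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: container-test
    image: busybox:1.28
    volumeMounts:
    - name: all-in-one
      mountPath: "/projected-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret
          items:
            - key: username
              path: my-group/my-username
      - secret:
          name: mysecret2
          items:
            - key: password
              path: my-group/my-password
              # 511 in decimal is 0777 in octal notation
              mode: 511
```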
Each projected volume source is listed in the spec under sources. The parameters are nearly the same, with two exceptions: for secrets, the secretName field has been changed to name to be consistent with ConfigMap naming, and the defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection.
serviceAccountToken projected volumes
When the TokenRequestProjection feature is enabled, you can inject the token for the current service account into a Pod at a specified path. For example:
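A sketch of such a Pod (Pod name, audience, and expiration are illustrative values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-token-test
spec:
  containers:
  - name: container-test
    image: busybox:1.28
    volumeMounts:
    - name: token-vol
      mountPath: "/service-account"
      readOnly: true
  serviceAccountName: default
  volumes:
  - name: token-vol
    projected:
      sources:
      - serviceAccountToken:
          audience: api
          expirationSeconds: 3600
          path: token
```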
The example Pod has a projected volume containing the injected service account token. Containers in this Pod can use that token to access the Kubernetes API server, authenticating with the identity of
the pod's ServiceAccount. The audience field contains the intended audience of the token. A recipient of the token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. This field is optional and it defaults to the identifier of the API server. The expirationSeconds is the expected duration of validity of the service account token. It defaults to 1 hour and must be at least 10 minutes (600 seconds). The path field specifies a relative path to the mount point of the projected volume.

SecurityContext interactions
The proposal for file permission handling in projected service account volume enhancement introduced the projected files having the correct owner permissions set.

Linux
In Linux pods that have a projected volume and RunAsUser set in the Pod securityContext, the projected files have the correct ownership set, including container user ownership.

Windows
In Windows pods that have a projected volume and RunAsUsername set in the Pod securityContext, the ownership is not enforced due to the way user accounts are managed in Windows. Windows stores and manages local user and group accounts in a database file called Security Account Manager (SAM). Each container maintains its own instance of the SAM database, to which the host has no visibility while the container is running. Windows containers are designed to run the user mode portion of the OS in isolation from the host, hence the maintenance of a virtual SAM database. As a result, the kubelet running on the host does not have the ability to dynamically configure host file ownership for virtualized container accounts.

By default, the projected files will have the following ownership as shown for an example projected volume file.
This implies all administrator users like ContainerAdministrator will have the ability to read, write and modify files.

4 - Ephemeral Volumes
This document describes ephemeral volumes in Kubernetes. Familiarity with volumes is suggested, in particular PersistentVolumeClaim and PersistentVolume.

Some applications need additional storage but don't care whether that data is stored persistently across restarts. For example, caching services are often limited by memory size and can move infrequently used data into storage that is slower than memory with little impact on overall performance. Other applications expect some read-only input data to be present in files, like configuration data or secret keys.

Ephemeral volumes are designed for these use cases. Because volumes follow the Pod's lifetime and get created and deleted along with the Pod, Pods can be stopped and restarted without being limited to where some persistent volume is available.

Ephemeral volumes are specified inline in the Pod spec, which simplifies application deployment and management.

Types of ephemeral volumes
Kubernetes supports several different kinds of ephemeral volumes for different purposes:
- emptyDir: empty at Pod startup, with storage coming locally from the kubelet base directory (usually the root disk) or RAM
- configMap, downwardAPI, secret: inject different kinds of Kubernetes data into a Pod
- CSI ephemeral volumes: similar to the previous volume kinds, but provided by special CSI drivers which specifically support this feature
- generic ephemeral volumes, which can be provided by all storage drivers that also support persistent volumes

emptyDir, configMap, downwardAPI, and secret are provided as local ephemeral storage. They are managed by the kubelet on each node.
CSI ephemeral volumes must be provided by third-party CSI storage drivers. Generic ephemeral volumes can be provided by third-party CSI storage drivers, but also by any other storage driver that supports dynamic provisioning. Some CSI drivers are written specifically for CSI ephemeral volumes and do not support dynamic provisioning: those then cannot be used for generic ephemeral volumes. The advantage of using third-party drivers is that they can offer functionality that Kubernetes itself does not support, for example storage with different performance characteristics than the disk that is managed by kubelet, or injecting different data. CSI ephemeral volumesFEATURE STATE: Conceptually,
CSI ephemeral volumes are similar to configMap, downwardAPI and secret volume types: the storage is managed locally on each node and is created together with other local resources after a Pod has been scheduled onto a node. Kubernetes has no concept of rescheduling Pods anymore at this stage. Volume creation has to be unlikely to fail, otherwise Pod startup gets stuck. In particular, storage capacity aware Pod scheduling is not supported for these volumes.

Here's an example manifest for a Pod that uses CSI ephemeral storage:
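A sketch of such a manifest (the driver name and volumeAttributes are illustrative placeholders for a driver installed in your cluster):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-inline-vol
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-csi-inline-vol
      csi:
        driver: inline.storage.kubernetes.io
        volumeAttributes:
          foo: bar
```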
The volumeAttributes determine what volume is prepared by the driver. These attributes are specific to each driver and not standardized. See the documentation of each CSI driver for further instructions.

CSI driver restrictions
CSI ephemeral volumes allow users to provide volumeAttributes directly to the CSI driver as part of the Pod spec. A CSI driver allowing volumeAttributes that are typically restricted to administrators is NOT suitable for use in an inline ephemeral volume.

Cluster administrators who need to restrict the CSI drivers that are allowed to be used as inline volumes within a Pod spec may do so by:
- Removing Ephemeral from volumeLifecycleModes in the CSIDriver spec, which prevents the driver from being used as an inline ephemeral volume
- Using an admission webhook to restrict how this driver is used
Generic ephemeral volumes
FEATURE STATE: Generic ephemeral volumes are similar to emptyDir volumes in the sense that they provide a per-pod directory for scratch data that is usually empty after provisioning. But they may also have additional features:
- Storage can be local or network-attached.
- Volumes can have a fixed size that Pods are not able to exceed.
- Volumes may have some initial data, depending on the driver and parameters.
- Typical operations on volumes are supported, assuming that the driver supports them, including snapshotting, cloning, resizing, and storage capacity tracking.
Example:
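A sketch of a Pod with a generic ephemeral volume (Pod name, volume name, and storage class are illustrative; the claim template mirrors a normal PVC spec):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-app
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
      - mountPath: "/scratch"
        name: scratch-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              type: my-frontend-volume
          spec:
            accessModes: [ "ReadWriteOnce" ]
            storageClassName: "scratch-storage-class"
            resources:
              requests:
                storage: 1Gi
```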
Lifecycle and PersistentVolumeClaim
The key design idea is that the parameters for a volume claim are allowed inside a volume source of the Pod. Labels, annotations and the whole set of fields for a PersistentVolumeClaim are supported. When such a Pod gets created, the ephemeral volume controller then creates an actual PersistentVolumeClaim object in the same namespace as the Pod and ensures that the PersistentVolumeClaim gets deleted when the Pod gets deleted.

That triggers volume binding and/or provisioning, either immediately if the
StorageClass uses immediate volume binding or when the Pod is tentatively scheduled onto a node (WaitForFirstConsumer volume binding mode). The latter is recommended for generic ephemeral volumes because then the scheduler is free to choose a suitable node for the Pod. With immediate binding, the scheduler is forced to select a node that has access to the volume once it is available.

In terms of resource ownership, a Pod that has generic ephemeral storage is the owner of the PersistentVolumeClaim(s) that provide that ephemeral storage. When the Pod is deleted, the Kubernetes garbage collector deletes the PVC,
which then usually triggers deletion of the volume because the default reclaim policy of storage classes is to delete volumes. You can create quasi-ephemeral local storage using a StorageClass with a reclaim policy of retain: the storage outlives the Pod, and in this case, you need to ensure that volume clean up happens separately.

While these PVCs exist, they can be used like any other PVC. In particular, they can be referenced as data source in volume cloning or snapshotting. The PVC object also holds the current status of the volume.

PersistentVolumeClaim naming
Naming of the automatically created PVCs is deterministic: the name is a combination of Pod name and volume name, with a hyphen (-) in the middle. In the example above, the PVC name will be my-app-scratch-volume.

The deterministic naming also introduces a potential conflict between different Pods (a Pod "pod-a" with volume "scratch" and another Pod with name "pod" and volume "a-scratch" both end up with the same PVC name "pod-a-scratch") and between Pods and manually created PVCs.

Such conflicts are detected: a PVC is only used for an ephemeral volume if it was created for the Pod. This check is based on the ownership relationship. An existing PVC is not overwritten or modified. But this does not resolve the conflict because without the right PVC, the Pod cannot start.

Security
Enabling the GenericEphemeralVolume feature allows users to create PVCs indirectly if they can create Pods, even if they do not have permission to create PVCs directly. Cluster administrators must be aware of this. If this does not fit their security model, they should use an admission webhook that rejects objects like Pods that have a generic ephemeral volume.

The normal namespace quota for PVCs still applies, so even if users are allowed to use this new mechanism, they cannot use it to circumvent other policies.

What's next
Ephemeral volumes managed by kubelet
See local ephemeral storage.

CSI ephemeral volumes
Generic ephemeral volumes
5 - Storage Classes
This document describes the concept of a StorageClass in Kubernetes. Familiarity with volumes and persistent volumes is suggested.

Introduction
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.

The StorageClass Resource
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.

The name of a StorageClass object is significant, and is how users can request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created.

Administrators can specify a default StorageClass only for PVCs that don't request any particular class to bind to: see the PersistentVolumeClaim section for details.
Provisioner
Each StorageClass has a provisioner that determines what volume plugin is used for provisioning PVs. This field must be specified.
You are not restricted to specifying the "internal" provisioners listed here (whose names are prefixed with "kubernetes.io" and shipped alongside Kubernetes). You can also run and specify external provisioners, which are independent programs that follow a specification defined by Kubernetes. Authors of external provisioners have full discretion over where their code lives, how the provisioner is shipped, how it needs to be run, what volume plugin it uses (including Flex), etc. The repository kubernetes-sigs/sig-storage-lib-external-provisioner houses a library for writing external provisioners that implements the bulk of the specification. Some external provisioners are listed under the repository kubernetes-sigs/sig-storage-lib-external-provisioner. For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner. Reclaim PolicyPersistentVolumes that are dynamically created by a StorageClass will have the reclaim policy
specified in the reclaimPolicy field of the class, which can be either Delete or Retain. If no reclaimPolicy is specified when a StorageClass object is created, it will default to Delete.

PersistentVolumes that are created manually and managed via a StorageClass will have whatever reclaim policy they were assigned at creation.

Allow Volume Expansion
FEATURE STATE: PersistentVolumes can be configured to be expandable. This feature,
when set to true, allows the users to resize the volume by editing the corresponding PVC object.

The following types of volumes support volume expansion, when the underlying StorageClass has the field allowVolumeExpansion set to true:
- gcePersistentDisk
- awsElasticBlockStore
- Cinder
- glusterfs
- rbd
- Azure File
- Azure Disk
- Portworx
- FlexVolume
- CSI

Note: You can only use the volume expansion feature to grow a Volume, not to shrink it.
Mount Options
PersistentVolumes that are dynamically created by a StorageClass will have the mount options specified in the mountOptions field of the class.

If the volume plugin does not support mount options but mount options are specified, provisioning will fail. Mount options are not validated on either the class or PV. If a mount option is invalid, the PV mount fails.

Volume Binding Mode
The volumeBindingMode field controls when volume binding and dynamic provisioning should occur. When unset, Immediate mode is used by default.
The Immediate mode indicates that volume binding and dynamic provisioning occurs once the PersistentVolumeClaim is created. For storage backends that are topology-constrained and not globally accessible from all Nodes in the cluster, PersistentVolumes will be bound or provisioned without knowledge of the Pod's scheduling requirements. This may result in unschedulable Pods.

A cluster administrator can address this issue by specifying the WaitForFirstConsumer mode, which will delay the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. PersistentVolumes will be selected or provisioned conforming to the topology that is specified by the Pod's scheduling constraints.

The following plugins support WaitForFirstConsumer with dynamic provisioning:
- AWSElasticBlockStore
- GCEPersistentDisk
- AzureDisk
The following plugins support WaitForFirstConsumer with pre-created PersistentVolume binding:
- All of the above
- Local
FEATURE STATE: CSI volumes are also supported with dynamic provisioning and pre-created PVs, but you'll need to look at the documentation for a specific CSI driver to see its supported topology keys and examples.
Allowed Topologies
When a cluster operator specifies the WaitForFirstConsumer volume binding mode, it is no longer necessary to restrict provisioning to specific topologies in most situations. However, if still required, allowedTopologies can be specified.

This example demonstrates how to restrict the
topology of provisioned volumes to specific zones and should be used as a replacement for the zone and zones parameters for the supported plugins.
Parameters
Storage Classes have parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the provisioner. When a parameter is omitted, some default is used.

There can be at most 512 parameters defined for a StorageClass. The total length of the parameters object including its keys and values cannot exceed 256 KiB.

AWS EBS
GCE PD
If replication-type is set to none, a regular (zonal) PD will be provisioned.

If replication-type is set to regional-pd, a Regional Persistent Disk will be provisioned.

Glusterfs
NFS
Kubernetes doesn't include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS. Here are some examples:
- NFS Ganesha server and external provisioner
- NFS subdir external provisioner
OpenStack Cinder
vSphere
There are two types of provisioners for vSphere storage classes:
- CSI provisioner: csi.vsphere.vmware.com
- vCP provisioner: kubernetes.io/vsphere-volume
In-tree provisioners are deprecated. For more information on the CSI provisioner, see Kubernetes vSphere CSI Driver and vSphereVolume CSI migration.

CSI Provisioner
The vSphere CSI StorageClass provisioner works with Tanzu Kubernetes clusters. For an example, refer to the vSphere CSI repository.

vCP Provisioner
The following examples use the VMware Cloud Provider (vCP) StorageClass provisioner.
There are a few vSphere examples that you can try out for persistent volume management inside Kubernetes for vSphere.

Ceph RBD
Azure Disk
Azure Unmanaged Disk storage class
Azure Disk storage class (starting from v1.7.2)
Azure File
During storage provisioning, a secret named by secretName is created for the mounting credentials. If the cluster has enabled both RBAC and Controller Roles, add the create permission of resource secret for the clusterrole system:controller:persistent-volume-binder.

In a multi-tenancy context, it is strongly recommended to set the value for secretNamespace explicitly; otherwise the storage account credentials may be read by other users.

Portworx Volume
Local
FEATURE STATE:
Local volumes do not currently support dynamic provisioning, however a StorageClass
should still be created to delay volume binding until Pod scheduling. This is specified by the WaitForFirstConsumer volume binding mode.

Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.

6 - Dynamic Volume Provisioning
Dynamic volume provisioning allows storage volumes to be created on-demand. Without dynamic
provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. The dynamic provisioning feature eliminates the need for cluster administrators to pre-provision storage. Instead, it automatically provisions storage when users create PersistentVolumeClaim objects.

Background
The implementation of dynamic volume provisioning is based on the API object StorageClass from the API group storage.k8s.io. A cluster administrator can define as many StorageClass objects as needed, each specifying a volume plugin (aka provisioner) that provisions a volume and the set of parameters to pass to that provisioner when provisioning.

More information on storage classes can be found here.

Enabling Dynamic Provisioning
To enable dynamic provisioning, a cluster administrator needs to pre-create one or more StorageClass objects for users. StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked. The name of a StorageClass object must be a valid DNS subdomain name.

The following manifest creates a storage class "slow" which provisions standard disk-like persistent disks.
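A sketch of such a class (the GCE PD provisioner is used for illustration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```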
The following manifest creates a storage class "fast" which provisions SSD-like persistent disks.
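And the corresponding sketch for the SSD-backed class:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```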
Using Dynamic Provisioning
Users request dynamically provisioned storage by including a storage class in their PersistentVolumeClaim. To do this, the user specifies the name of a StorageClass in the storageClassName field of the PersistentVolumeClaim. This field must match the name of a StorageClass configured by the administrator.

To select the "fast" storage class, for example, a user would create the following PersistentVolumeClaim:
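A minimal sketch (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 30Gi
```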
This claim results in an SSD-like Persistent Disk being automatically provisioned. When the claim is deleted, the volume is destroyed.

Defaulting Behavior
Dynamic provisioning can be enabled on a cluster such that all claims are dynamically provisioned if no storage class is specified. A cluster administrator can enable this behavior by:
- marking one StorageClass object as default
- making sure that the DefaultStorageClass admission controller is enabled on the API server
An administrator can mark a specific StorageClass as default by adding the storageclass.kubernetes.io/is-default-class annotation to it. When a default StorageClass exists in a cluster and a user creates a PersistentVolumeClaim with storageClassName unspecified, the DefaultStorageClass admission controller automatically adds the storageClassName field pointing to the default storage class.

Note that there can be at most one default storage class on a cluster, or a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.

Topology Awareness
In Multi-Zone clusters, Pods can be spread across Zones in a Region. Single-Zone storage backends should be provisioned in the Zones where Pods are scheduled. This can be accomplished by setting the Volume Binding Mode.

7 - Volume Snapshots
In Kubernetes, a VolumeSnapshot represents a snapshot of a volume on a storage system. This document assumes that you are already familiar with Kubernetes persistent volumes.

Introduction
Similar to how API resources PersistentVolume and PersistentVolumeClaim are used to provision volumes for users and administrators, VolumeSnapshotContent and VolumeSnapshot API resources are provided to create volume snapshots for users and administrators.

A VolumeSnapshotContent is a snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a PersistentVolume is a cluster resource.

A VolumeSnapshot is a request for a snapshot of a volume by a user. It is similar to a PersistentVolumeClaim.
Volume snapshots provide Kubernetes users with a standardized way to copy a volume's contents at a particular point in time without creating an entirely new volume. This functionality enables, for example, database administrators to backup databases before performing edit or delete modifications.

Users need to be aware of the following when using this feature:
- The API objects VolumeSnapshot, VolumeSnapshotContent, and VolumeSnapshotClass are CRDs, not part of the core API.
- VolumeSnapshot support is only available for CSI drivers.
- As part of the deployment process of VolumeSnapshot, the Kubernetes team provides a snapshot controller to be deployed into the control plane, and a sidecar helper container called csi-snapshotter to be deployed together with the CSI driver. The snapshot controller watches VolumeSnapshot and VolumeSnapshotContent objects and is responsible for the creation and deletion of VolumeSnapshotContent objects. The sidecar csi-snapshotter watches VolumeSnapshotContent objects and triggers CreateSnapshot and DeleteSnapshot operations against a CSI endpoint.
- CSI drivers may or may not have implemented the volume snapshot functionality. See the documentation of your CSI driver for details.
- Installation of the CRDs and snapshot controller is the responsibility of the Kubernetes distribution.
Lifecycle of a volume snapshot and volume snapshot content
VolumeSnapshotContents are resources in the cluster. VolumeSnapshots are requests for those resources. The interaction between VolumeSnapshotContents and VolumeSnapshots follows this lifecycle:
Provisioning Volume Snapshot
There are two ways snapshots may be provisioned: pre-provisioned or dynamically provisioned.

Pre-provisioned
A cluster administrator creates a number of VolumeSnapshotContents. They carry the details of the real volume snapshot on the storage system, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.

Dynamic
Instead of using a pre-existing snapshot, you can request that a snapshot be dynamically taken from a PersistentVolumeClaim. The VolumeSnapshotClass specifies storage provider-specific parameters to use when taking a snapshot.

Binding
The snapshot controller handles the binding of a VolumeSnapshot object with an appropriate VolumeSnapshotContent object, in both pre-provisioned and dynamically provisioned scenarios. The binding is a one-to-one mapping.

In the case of pre-provisioned binding, the VolumeSnapshot will remain unbound until the requested VolumeSnapshotContent object is created.

Persistent Volume Claim as Snapshot Source Protection
The purpose of this protection is to ensure that in-use PersistentVolumeClaim API objects are not removed from the system while a snapshot is being taken from it (as this may result in data loss).

While a snapshot is being taken of a PersistentVolumeClaim, that PersistentVolumeClaim is in-use. If you delete a PersistentVolumeClaim API object in active use as a snapshot source, the PersistentVolumeClaim object is not removed immediately. Instead, removal of the PersistentVolumeClaim object is postponed until the snapshot is readyToUse or aborted.

Delete
Deletion is triggered by deleting the VolumeSnapshot object, and the DeletionPolicy will be followed. If the DeletionPolicy is Delete, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the DeletionPolicy is Retain, then both the underlying snapshot and VolumeSnapshotContent remain.

VolumeSnapshots
Each VolumeSnapshot contains a spec and a status.
A volume snapshot can request a particular class by specifying the name of a
VolumeSnapshotClass using the attribute volumeSnapshotClassName. If nothing is set, then the default class is used if available.

For pre-provisioned snapshots, you need to specify a volumeSnapshotContentName as the source for the snapshot. The volumeSnapshotContentName source field is required for pre-provisioned snapshots.
Volume Snapshot Contents
Each VolumeSnapshotContent contains a spec and status. In dynamic provisioning, the snapshot common controller creates VolumeSnapshotContent objects.
For pre-provisioned snapshots, you (as cluster administrator)
are responsible for creating the VolumeSnapshotContent object.
Converting the volume mode of a Snapshot
If the VolumeSnapshots API installed on your cluster supports the sourceVolumeMode field, then the API has the capability to prevent unauthorized users from converting the mode of a volume.

To check if your cluster has capability for this feature, run the following command:
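Assuming the external snapshotter CRDs are installed, a check along these lines lets you inspect whether the CRD schema includes sourceVolumeMode:

```shell
kubectl get crd volumesnapshotcontent -o yaml
```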
If you want to allow users to create a VolumeSnapshot from an existing VolumeSnapshotContent, but with a different volume mode than the source, add the annotation snapshot.storage.kubernetes.io/allow-volume-mode-change: "true" to the VolumeSnapshotContent that corresponds to the VolumeSnapshot.

For pre-provisioned snapshots, spec.sourceVolumeMode needs to be populated by the cluster administrator.
Provisioning Volumes from Snapshots
You can provision a new volume, pre-populated with data from a snapshot, by using the dataSource field in the PersistentVolumeClaim object.

For more details, see Volume Snapshot and Restore Volume from Snapshot.

8 - Volume Snapshot Classes
This document describes the concept of VolumeSnapshotClass in Kubernetes. Familiarity with volume snapshots and storage classes is suggested.

Introduction
Just like StorageClass provides a way for administrators to describe the "classes" of storage they offer when provisioning a volume, VolumeSnapshotClass provides a way to describe the "classes" of storage when provisioning a volume snapshot.

The VolumeSnapshotClass Resource
Each VolumeSnapshotClass contains the fields driver, deletionPolicy, and parameters, which are used when a VolumeSnapshot belonging to the class needs to be dynamically provisioned.

The name of a VolumeSnapshotClass object is significant, and is how users can request a particular class. Administrators set the name and other parameters of a class when first creating VolumeSnapshotClass objects, and the objects cannot be updated once they are created.
Administrators can specify a default VolumeSnapshotClass for VolumeSnapshots that don't request any particular class to bind to, by adding the snapshot.storage.kubernetes.io/is-default-class: "true" annotation to the class.
Driver
Volume snapshot classes have a driver that determines what CSI volume plugin is used for provisioning VolumeSnapshots. This field must be specified.

DeletionPolicy
Volume snapshot classes have a deletionPolicy. It enables you to configure what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is to be deleted. The deletionPolicy of a volume snapshot class can either be Retain or Delete. This field must be specified.

If
the deletionPolicy is Delete, then the underlying storage snapshot will be deleted along with the VolumeSnapshotContent object. If the deletionPolicy is Retain, then both the underlying snapshot and VolumeSnapshotContent remain.

Parameters
Volume snapshot classes have parameters that describe volume snapshots belonging to the volume snapshot class. Different parameters may be accepted depending on the driver.

9 - CSI Volume Cloning
This document describes the concept of cloning existing CSI Volumes in Kubernetes. Familiarity with Volumes is suggested.

Introduction
The
CSI Volume Cloning feature adds support for specifying existing
PVCs in the dataSource field to indicate a user would like to clone a Volume.

A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a "new" empty Volume, the back end device creates an exact duplicate of the specified Volume.

The implementation of cloning, from the perspective of the Kubernetes API, adds the ability to specify an existing PVC as a dataSource during new PVC creation. The source PVC must be bound and available (not in use).

Users need to be aware of the following when using this feature:
- Cloning support (VolumePVCDataSource) is only available for CSI drivers.
- Cloning support is only available for dynamic provisioners.
- CSI drivers may or may not have implemented the volume cloning functionality.
- You can only clone a PVC when it exists in the same namespace as the destination PVC (source and destination must be in the same namespace).
- Cloning is supported with a different Storage Class: the destination volume can be the same or a different storage class as the source, and using the default storage class with storageClassName omitted in the spec is supported.
- Cloning can only be performed between two volumes that use the same VolumeMode setting (if you request a block mode volume, the source MUST also be block mode).

Provisioning
Clones are provisioned like any other PVC with the exception of adding a dataSource that references an existing PVC in the same namespace. For example:
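A sketch of such a clone request (names, namespace, and storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
  namespace: myns
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: cloning
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1
```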
The result is a new PVC with the name clone-of-pvc-1 that contains the exact same content as the specified source pvc-1.

Usage
Upon availability of the new PVC, the cloned PVC is consumed the same as any other PVC. It's also expected at this point that the newly created PVC is an independent object. It can be consumed, cloned, snapshotted, or deleted independently and without consideration for its original dataSource PVC. This also implies that the source is not linked in any way to the newly created clone; it may also be modified or deleted without affecting the newly created clone.

10 - Storage Capacity
Storage capacity is limited and may vary depending on the node on which a pod runs: network-attached storage might not be accessible by all nodes, or storage is local to a node to begin with.

FEATURE STATE: This page describes how Kubernetes keeps track of storage capacity and how the scheduler uses that information to schedule Pods onto nodes that have access to enough storage capacity for the remaining missing volumes. Without storage capacity tracking, the scheduler may choose a node that doesn't have enough capacity to provision a volume, and multiple scheduling retries will be needed.

Before you begin
Kubernetes v1.25 includes cluster-level API support for storage capacity tracking. To use this you must also be using a CSI driver that supports capacity tracking. Consult the documentation for the CSI drivers that you use to find out whether this support is available and, if so, how to use it. If you are not running Kubernetes v1.25, check the documentation for that version of Kubernetes.

API
There are two API extensions for this feature:
- CSIStorageCapacity objects: these get produced by a CSI driver in the namespace where the driver is installed. Each object contains capacity information for one storage class and defines which nodes have access to that storage.
- The CSIDriver.spec.storageCapacity field: when set to true, the Kubernetes scheduler will consider storage capacity for volumes that use the CSI driver.
Scheduling
Storage capacity information is used by the Kubernetes scheduler if:
- a Pod uses a volume that has not been created yet,
- that volume uses a StorageClass with the WaitForFirstConsumer volume binding mode, and
- the CSIDriver object for the driver has storageCapacity set to true.
In that case, the scheduler only considers nodes for the Pod which have enough storage available to them. This check is very simplistic and only compares the size of the volume against the capacity listed in CSIStorageCapacity objects with a topology that includes the node.

For volumes
with Immediate volume binding mode, the storage driver decides where to create the volume, independently of Pods that will use the volume. The scheduler then schedules Pods onto nodes where the volume is available after the volume has been created.

For CSI ephemeral volumes, scheduling always happens without considering storage capacity. This is based on the assumption that this volume type is only used by special CSI drivers which are local to a node and do not need significant resources there.

Rescheduling
When a node has been selected for a Pod with WaitForFirstConsumer volumes, that decision is still tentative. The next step is that the CSI storage driver gets asked to create the volume with a hint that the volume is supposed to be available on the selected node.

Because Kubernetes might have chosen a node based on out-dated capacity information, it is possible that the volume cannot really be created. The node selection is then reset and the Kubernetes scheduler tries again to find a node for the Pod.

Limitations
Storage capacity tracking increases the chance that scheduling works on the first try, but cannot guarantee this because the scheduler has to decide based on potentially out-dated information. Usually, the same retry mechanism as for scheduling without any storage capacity information handles scheduling failures.

One situation where scheduling can fail permanently is when a Pod uses multiple volumes: one volume might have been created already in a topology segment which then does not have enough capacity left for another volume. Manual intervention is necessary to recover from this, for example by increasing capacity or deleting the volume that was already created.

What's next
11 - Node-specific Volume Limits
This page describes the maximum number of volumes that can be attached to a Node for various cloud providers.

Cloud providers like Google, Amazon, and Microsoft typically have a limit on how many volumes can be attached to a Node. It is important for Kubernetes to respect those limits. Otherwise, Pods scheduled on a Node could get stuck waiting for volumes to attach.

Kubernetes default limits
The Kubernetes scheduler has default limits on the number of volumes that can be attached to a Node:
- Amazon Elastic Block Store (EBS): 39 volumes per Node
- Google Persistent Disk: 16 volumes per Node
- Microsoft Azure Disk Storage: 16 volumes per Node
Custom limits
You can change these limits by setting the value of the KUBE_MAX_PD_VOLS environment variable, and then starting the scheduler. CSI drivers might have a different procedure; see their documentation on how to customize their limits.

Use caution if you set a limit that is higher than the default limit. Consult the cloud provider's documentation to make sure that Nodes can actually support the limit you set.

The limit applies to the entire cluster, so it affects all Nodes.

Dynamic volume limits
FEATURE STATE: Dynamic volume limits are supported for the following volume types:
- Amazon EBS
- Google Persistent Disk
- Azure Disk
- CSI
For volumes managed by in-tree volume plugins, Kubernetes automatically determines the Node type and enforces the appropriate maximum number of volumes for the node. For example:
- On Google Compute Engine, up to 127 volumes can be attached to a node, depending on the node type.
- For Amazon EBS disks on M5, C5, R5, T3 and Z1D instance types, Kubernetes allows only 25 volumes to be attached to a Node. For other instance types on Amazon Elastic Compute Cloud (EC2), Kubernetes allows 39 volumes to be attached to a Node.
- On Azure, up to 64 disks can be attached to a node, depending on the node type.
- For CSI drivers that advertise a volume attach limit for a node (using NodeGetInfo), the scheduler honors that limit.
12 - Volume Health Monitoring
FEATURE STATE: CSI volume health monitoring allows CSI Drivers to detect abnormal volume conditions from the underlying storage systems and report them as events on PVCs or Pods.

Volume health monitoring
Kubernetes volume health monitoring is part of how Kubernetes implements the Container Storage Interface (CSI). The volume health monitoring feature is implemented in two components: an External Health Monitor controller, and the kubelet.

If a CSI Driver supports the Volume Health Monitoring feature from the controller side, an event will be reported on the related PersistentVolumeClaim (PVC) when an abnormal volume condition is detected on a CSI volume.

The External Health Monitor controller also watches for node failure events. You can enable node failure monitoring by setting the enable-node-watcher flag to true.

If a CSI Driver supports the Volume Health Monitoring feature from the node side, an Event will be reported on every Pod using the PVC when an abnormal volume condition is detected on a CSI volume. In addition, Volume Health information is exposed as Kubelet VolumeStats metrics. A new metric kubelet_volume_stats_health_status_abnormal is added. This metric includes two labels: namespace and persistentvolumeclaim. The count is either 1 or 0: 1 indicates the volume is unhealthy, 0 indicates the volume is healthy.

What's next
See the CSI driver documentation to find out which CSI drivers have implemented this feature.

13 - Windows Storage
This page provides a storage overview specific to the Windows operating system.

Persistent storage
Windows has a layered filesystem driver to mount container layers and create a copy filesystem based on NTFS. All file paths in the container are resolved only within the context of that container:
- With Docker, volume mounts can only target a directory in the container, and not an individual file. This limitation does not apply to containerd.
- Volume mounts cannot project files or directories back to the host filesystem.
- Read-only filesystems are not supported because write access is always required for the Windows registry and SAM database. However, read-only volumes are supported.
- Volume user-masks and permissions are not available. Because the SAM is not shared between the host and container, there's no mapping between them. All permissions are resolved within the context of the container.
As a result, the following storage functionality is not supported on Windows nodes:
- Volume subpath mounts: only the entire volume can be mounted in a Windows container
- Subpath volume mounting for Secrets
- Host mount projection
- Read-only root filesystem (mapped volumes still support readOnly)
- Block device mapping
- Memory as the storage medium (for example, emptyDir.medium set to Memory)
- File system features like uid/gid and per-user Linux filesystem permissions
- Setting secret permissions with DefaultMode (due to UID/GID dependency)
- NFS based storage/volume support
- Expanding the mounted volume (resizefs)
Kubernetes volumes enable complex applications, with data persistence and Pod volume sharing requirements, to be deployed on Kubernetes. Management of persistent volumes associated with a specific storage back-end or protocol includes actions such as provisioning/de-provisioning/resizing of volumes, attaching/detaching a volume to/from a Kubernetes node, and mounting/dismounting a volume to/from individual containers in a pod that needs to persist data.

Volume management components are shipped as Kubernetes volume plugins. The following broad classes of Kubernetes volume plugins are supported on Windows:
- FlexVolume plugins (deprecated)
- CSI Plugins
In-tree volume plugins
The following in-tree plugins support persistent storage on Windows nodes:
- awsElasticBlockStore
- azureDisk
- azureFile
- gcePersistentDisk
- vsphereVolume