Local Storage Provisioner Service
There is a service that can automatically discover partitions mounted and linked into a particular directory in the kubelet container, or block devices linked there, and present them as Persistent Volumes: the Local Static Provisioner. As there are services that can make good use of such volumes and have other means of storage redundancy built in, I set this up similar to the way described in Getting Started. As this provisioner cannot resize volumes, it will select the next larger volume for any claim. For example, a 20 GB claim will select a 29.4 GiB volume, a 30 GB claim a 30.2 GiB volume, and so on (see the example claim at the end of this section).
For flatcar-linux we can follow the advice on the sig-storage-local-static-provisioner website: mount formatted block storage on /mnt/local-disks/&lt;UUID&gt;. The UUID is used to make sure that a mixed-up block device fails to mount instead of exposing its data to the wrong host.
- Create a block device for an acdh-clusterX node on the VM host, for example as sketched below. Note that the size of the block device should be a little larger than the desired even number of GiB (example: for a 20 GiB volume create a 21 GiB disk), as there is a difference in how disk size is calculated.
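  A minimal sketch of this step, assuming the VM host uses libvirt/QEMU; the pool, volume, and domain names here are placeholders, not taken from the actual setup:

  ```sh
  # Hypothetical names; assumes a libvirt storage pool called "default".
  virsh vol-create-as default acdh-clusterX-data1.img 21G --format raw
  # Attach the new volume to the node VM; the target device name "sdd" is an assumption.
  virsh attach-disk acdh-clusterX \
    /var/lib/libvirt/images/acdh-clusterX-data1.img sdd --persistent
  ```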
- Format the volume on the respective flatcar node. Use ext4 or xfs depending on the needs of the service (for example elasticsearch/opensearch recommends ext4):
  ```sh
  sudo mkfs.ext4 /dev/sdd
  ```
- Reserved blocks for root are not very useful in Kubernetes, so set them to 0:
  ```sh
  sudo tune2fs -r 0 /dev/disk/by-uuid/<UUID>
  ```
- Get the UUID. It is part of the output of mkfs.ext4. It is also, for example, available using:
  ```sh
  ls -l /dev/disk/by-uuid/*
  ```
- Create a UUID directory in /mnt/local-disks/.
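  The mount unit below expects this directory to exist, for example:

  ```sh
  sudo mkdir -p /mnt/local-disks/<UUID>
  ```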
- Create a mount unit to mount the filesystem. The unit's filename needs to match the mount point and is encoded with systemd-escape:
  ```sh
  sudo cp /etc/systemd/system/var-lib-rancher.mount \
    "/etc/systemd/system/$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
  sudo vi "/etc/systemd/system/$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
  # change directory name and device name:
  # [Unit]
  # Description=Mount local storage at /mnt/local-disks/<UUID>
  # Before=local-fs.target
  #
  # [Mount]
  # What=/dev/disk/by-uuid/<UUID>
  # Where=/mnt/local-disks/<UUID>
  # Type=ext4 or xfs
  #
  # [Install]
  # WantedBy=local-fs.target
  sudo systemctl daemon-reload
  sudo systemctl enable "$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
  sudo systemctl start "$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
  ```
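As a quick sanity check (not part of the original steps, but both tools ship with standard util-linux/coreutils), confirm the filesystem is mounted where the provisioner will look for it:

```sh
# Show the mount and its source device.
findmnt /mnt/local-disks/<UUID>
# Show usable capacity after formatting and reserved-block tuning.
df -h /mnt/local-disks/<UUID>
```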
Add the sig-storage-local-static-provisioner chart repository:
```yaml
apiVersion: catalog.cattle.io/v1
kind: ClusterRepo
metadata:
  name: sig-storage-local-static-provisioner
spec:
  url: https://kubernetes-sigs.github.io/sig-storage-local-static-provisioner/
```
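One way to create this resource, assuming kubectl access to the cluster and that the manifest above is saved as clusterrepo.yaml (a filename chosen here for illustration):

```sh
kubectl apply -f clusterrepo.yaml
```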
values.yaml (also usable for the Rancher app):
```yaml
additionalVolumeMounts: []
additionalVolumes: []
affinity: {}
classes:
  - blockCleanerCommand:
      - /scripts/shred.sh
      - '2'
    fsType: ext4
    hostDir: /mnt/local-disks
    name: local-disks
    namePattern: '*'
    volumeMode: Filesystem
enableWindows: false
fullnameOverride: ''
image: registry.k8s.io/sig-storage/local-volume-provisioner:v2.6.0
initContainers: []
mountDevVolume: true
nameOverride: ''
nodeSelector: {}
nodeSelectorWindows: {}
podAnnotations: {}
podLabels: {}
privileged: true
rbac:
  create: true
resources: {}
serviceAccount:
  create: true
  name: ''
serviceMonitor:
  additionalLabels: {}
  enabled: false
  interval: 10s
  namespace: null
  relabelings: []
setPVOwnerRef: false
tolerations: []
useJobForCleaning: false
useNodeNameOnly: false
```
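If installing with the Helm CLI instead of through the Rancher Apps UI, a sketch might look like this; the release name and namespace are assumptions, and the exact chart name should be verified with `helm search repo`:

```sh
helm repo add sig-storage-local-static-provisioner \
  https://kubernetes-sigs.github.io/sig-storage-local-static-provisioner/
# Release name "local-static-provisioner" and namespace "kube-system" are assumptions.
helm install local-static-provisioner \
  sig-storage-local-static-provisioner/local-static-provisioner \
  --namespace kube-system -f values.yaml
```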
Now the DaemonSet created with this chart waits for a storage class. Create that using the following K8s definition:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-disks
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
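To illustrate the size-selection behavior mentioned at the start of this section, here is a sketch of a claim against this storage class (the claim name is a placeholder). Because of volumeBindingMode: WaitForFirstConsumer, it stays Pending until a pod that uses it is scheduled; at that point it binds to the smallest discovered local volume that is at least as large as the request, e.g. a 20 Gi request would bind the 29.4 GiB volume from the example above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-disks
  resources:
    requests:
      storage: 20Gi           # binds to the next larger discovered volume
```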