Local Storage Provisioner Service
There is a service that can discover partitions that are mounted and linked into a particular directory in the kubelet container, or block devices linked there, and automatically present them as Persistent Volumes: the Local Static Provisioner. As there are services that can make good use of such volumes and have other means of storage redundancy built in, I set this up similar to the way described in Getting Started. Because this provisioner cannot change the size of the volumes, it will select the next larger volume for any claim. For example, a 20 GB claim will select a 29.4 GiB volume, a 30 GB claim a 30.2 GiB volume, and so on.
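For illustration, a hypothetical claim like the following (the name is made up; the storageClassName matches the local-disks class defined in the values.yaml below) would bind to the next larger discovered volume, not to exactly 20Gi:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-disks   # class created by the provisioner config below
  resources:
    requests:
      storage: 20Gi   # binds to the smallest discovered volume >= 20Gi
```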
For Flatcar Linux we can follow the advice on the sig-storage-local-static-provisioner website: mount formatted block storage on /mnt/local-disks/<UUID>. Using the filesystem UUID in the path makes sure that mixing up block devices will fail instead of exposing data to the wrong host.
- Create a block device for an acdh-clusterX node on the VM host. Note that the size of the block device should be a little larger than the desired even number of GiB (example: for a 20 GiB volume create a 21 GiB disk), as there is a difference in how disk size is calculated (see the end-to-end sketch after this list).
- Format the volume on the respective Flatcar node. Use ext4 or xfs depending on the needs of the service (for example elasticsearch/opensearch recommends ext4):

```sh
sudo mkfs.ext4 /dev/sdd
```

- Reserved blocks for root are not very useful in Kubernetes, so set them to 0:

```sh
sudo tune2fs -r 0 /dev/disk/by-uuid/<UUID>
```

- Get the UUID. It is part of the output of mkfs.ext4. It is also available, for example, using:

```sh
ls -l /dev/disk/by-uuid/*
```

- Create a UUID directory in /mnt/local-disks/ (the mkdir command appears in the sketch after this list).
- Create a mount unit to mount the filesystem. The filename needs to match the mount point and is encoded with systemd-escape:
```sh
sudo cp /etc/systemd/system/var-lib-rancher.mount "/etc/systemd/system/$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
sudo vi /etc/systemd/system/"$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
# change directory name and device name:
# [Unit]
# Description=Mount local storage at /mnt/local-disks/<UUID>
# Before=local-fs.target
#
# [Mount]
# What=/dev/disk/by-uuid/<UUID>
# Where=/mnt/local-disks/<UUID>
# Type=ext4 or xfs
#
# [Install]
# WantedBy=local-fs.target
sudo systemctl daemon-reload
sudo systemctl enable "$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
sudo systemctl start "$(systemd-escape --path /mnt/local-disks/<UUID>).mount"
```
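To tie the steps together, here is a minimal end-to-end sketch for one hypothetical 21 GiB disk. The VM host part assumes libvirt/QEMU and a made-up image path, so adapt it to the actual hypervisor; <UUID> stands for the filesystem UUID reported by mkfs.ext4:

```sh
# On the VM host (assumption: libvirt/QEMU; qemu-img's "21G" means 21 GiB):
qemu-img create -f qcow2 /var/lib/libvirt/images/acdh-cluster1-local0.qcow2 21G
# ...then attach the disk to the node VM, e.g. with virsh attach-disk.

# On the Flatcar node: create the mount point named after the UUID
# (before starting the mount unit from the previous step):
sudo mkdir -p /mnt/local-disks/<UUID>
# Verify the filesystem is mounted once the unit is started:
findmnt /mnt/local-disks/<UUID>

# From a machine with cluster access: the provisioner should discover the
# mount and expose it as a Persistent Volume shortly afterwards:
kubectl get pv
```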
values.yaml (also for the Rancher app)
```yaml
classes:
  - blockCleanerCommand:
      - /scripts/blkdiscard.sh
      - '2'
    fsType: ext4
    hostDir: /mnt/local-disks
    name: local-disks
    namePattern: '*'
    volumeMode: Filesystem
    storageClass: true
common:
  additionalHostPathVolumes: {}
  mountDevVolume: null
  rbac:
    create: true
    pspEnabled: false
  serviceAccount:
    create: true
    name: storage-local-static-provisioner
  setPVOwnerRef: false
  useAlphaAPI: false
  useJobForCleaning: false
  useNodeNameOnly: false
  configMapName: local-provisioner-config
  podSecurityPolicy: false
daemonset:
  affinity: {}
  image: quay.io/external_storage/local-volume-provisioner:v2.4.0
  initContainers: null
  nodeSelector: {}
  podAnnotations: {}
  podLabels: {}
  privileged: null
  resources: {}
  tolerations: []
  name: local-volume-provisioner
  serviceAccount: local-storage-admin
serviceMonitor:
  additionalLabels: {}
  enabled: false
  interval: 10s
  namespace: null
  relabelings: []
prometheus:
  operator:
    enabled: true
    serviceMonitor:
      interval: 10s
      namespace: cattle-prometheus
      selector:
        prometheus: cluster-monitoring
```
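For reference, one way to apply these values: a sketch that assumes the chart is rendered from a checkout of the sig-storage-local-static-provisioner repository, as in its Getting Started guide (the output file name is arbitrary):

```sh
git clone --branch v2.4.0 https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git
cd sig-storage-local-static-provisioner
# Render the chart with the values above and apply the result:
helm template ./helm/provisioner -f /path/to/values.yaml > provisioner_generated.yaml
kubectl apply -f provisioner_generated.yaml
```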