# Options for ZFS on RAID


Differences

This shows you the differences between two versions of the page.

Link to this comparison view

Both sides previous revisionPrevious revision
Next revision
Previous revision
nas:zfs:options_for_zfs_on_raid [2024/12/22 21:45] adminnas:zfs:options_for_zfs_on_raid [2025/02/12 11:13] (current) admin
Line 37: Line 37:
Install instructions:
  
```bash
sudo dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
sudo dnf config-manager --enable zfs-testing
```
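With the repository enabled, the kernel module is built by DKMS when the `zfs` package is installed. A minimal sketch (the exact package set is an assumption based on the upstream EL instructions):
```bash
# Kernel headers are required for the DKMS build; zfs pulls in the rest
sudo dnf install -y epel-release kernel-devel
sudo dnf install -y zfs
```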
In the MOK utility, enroll the key using the password you just set.
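The password referenced here is the one set when staging the key with `mokutil` before rebooting; a sketch assuming DKMS's default public key path (`/var/lib/dkms/mok.pub`, which may differ on your system):
```bash
# Stage the DKMS signing key; mokutil asks for a one-time password,
# which the MOK manager will request again on the next boot
sudo mokutil --import /var/lib/dkms/mok.pub
sudo reboot
```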
  
It is possible to set the size for the [ARC](https://openzfs.readthedocs.io/en/latest/performance-tuning.html#adaptive-replacement-cache) using module parameters in `/etc/modprobe.d/zfs.conf`:
```
options zfs zfs_arc_max=8589934592 zfs_arc_min=8589934592
```
This limits the ARC to 8 GB and reserves those 8 GB as soon as the module loads.
If not set, automatic sizing usually does a good job, but it allows the ARC to grow to half of the system's memory.
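To verify the limits took effect, the live counters can be read from the kstat interface; the same parameters are also writable at runtime through sysfs:
```bash
# Current ARC size and its configured bounds, in bytes
awk '$1 == "size" || $1 == "c_min" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# Adjust the limit without reloading the module
echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```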
```bash
sudo modprobe zfs # this will not work on an EFI Secure Boot system without the MOK enrollment
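# Confirm the module loaded and Secure Boot accepted the signature
lsmod | grep -w zfs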
  
# …
# Pacemaker manages pool import and mounting, so disable the stock ZFS units
sudo systemctl disable --now zfs-share.service zfs-import-cache.service zfs-mount.service
  
sudo dnf install -y https://github.com/simar0at/resource-agents/releases/download/4.16.0-1/resource-agents-4.16.0-1.el9.x86_64.rpm

pcs resource create nfsshare ocf:heartbeat:ZFS pool=nfsshare op monitor OCF_CHECK_LEVEL="0" timeout="30s" interval="5s" --group nfsgroup
pcs resource create zfs-scrub-monthly-nfsshare systemd\:zfs-scrub-monthly@nfsshare.timer --group nfsgroup
pcs resource create nfsshare-mirror ocf:heartbeat:ZFS pool=nfsshare-mirror op monitor OCF_CHECK_LEVEL="0" timeout="30s" interval="5s" --group nfsgroup
pcs resource create zfs-scrub-monthly-nfsshare-mirror systemd\:zfs-scrub-monthly@nfsshare-mirror.timer --group nfsgroup
pcs resource group add nfsgroup nfsshare --before clustered-nfs
pcs resource group add nfsgroup zfs-scrub-monthly-nfsshare --after nfsshare
pcs resource group add nfsgroup nfsshare-mirror --before clustered-nfs
pcs resource group add nfsgroup zfs-scrub-monthly-nfsshare-mirror --after nfsshare-mirror
```
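A quick sanity check that the cluster imported the pools and that the group ordering came out as intended:
```bash
sudo pcs status                            # resources should be Started on the active node
sudo zpool list nfsshare nfsshare-mirror   # both pools imported on that node
```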
  
```bash
sudo dnf install python3-coloredlogs python3-jsonschema python3-isodate python3-croniter python3-paramiko
sudo dnf install -y https://github.com/simar0at/zettarepl/releases/download/24.10.1/python3-zettarepl-24.10.1-2.noarch.rpm
  
sudo vi /etc/systemd/system/zettarepl.service
```
```ini
# …
[Service]
Environment=PYTHONPATH=/usr/lib/python3/dist-packages/
ExecStart=/usr/bin/zettarepl run /nfsshare/config/zettarepl.yaml
```
Note: there is no [Install] section; the service will be started by Pacemaker.
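Since nothing ever runs `systemctl enable` on this unit, a static check is a cheap way to catch typos before Pacemaker tries to start it:
```bash
sudo systemd-analyze verify /etc/systemd/system/zettarepl.service
```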
```sh
sudo mkdir /nfsshare/config
sudo nano /nfsshare/config/zettarepl.yaml
```
```yaml
# …
periodic-snapshot-tasks:
  # Each task in zettarepl must have a unique id so it can be referenced
  nfsshare-qh:
    # Dataset to make snapshots of
    dataset: nfsshare
    # You must explicitly specify if you want recursive or non-recursive
    # snapshots
    # …
    schedule:
      minute: "*/15"    # Every 15 minutes
  nfsshare-hour:
    dataset: nfsshare
    recursive: true
    #exclude:
    # - nfsshare/xyz
    lifetime: P1D
    #allow-empty: false
    # …
      hour: "*"
replication-tasks:
  nfsshare-nfsshare-mirror:
    # Either push or pull
    direction: push
    # …
      type: local
    # Source dataset
    source-dataset: nfsshare
    # Target dataset
    target-dataset: nfsshare-mirror
    # "recursive" and "exclude" work exactly like they work for periodic
    # snapshot tasks
    # …
    # exclude all child snapshots that your periodic snapshot tasks exclude
    periodic-snapshot-tasks:
      - nfsshare-qh
      - nfsshare-hour
    # If true, replication task will run automatically either after bound
    # periodic snapshot task or on schedule
    # …
```

```sh
sudo systemctl daemon-reload
sudo pcs resource create zettarepl systemd\:zettarepl.service
sudo pcs resource group add nfsgroup zettarepl --after nfsshare-mirror
```
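Once the resource is started, the quarter-hourly task should produce its first snapshots within 15 minutes, which is easy to confirm:
```bash
# Snapshots on the source pool, oldest first
zfs list -t snapshot -o name,creation -s creation nfsshare
# Follow zettarepl's output in the journal
sudo journalctl -u zettarepl.service -f
```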
  
A patch for the `ocf:heartbeat:exportfs` resource agent:
  
```diff
--- /usr/lib/ocf/resource.d/heartbeat/exportfs.orig     2025-01-18 20:59:11.511427785 +0100
+++ /usr/lib/ocf/resource.d/heartbeat/exportfs  2025-01-18 21:00:19.526826188 +0100
@@ -339,6 +339,7 @@
         fi
…
 {
```
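Assuming the diff is saved as `exportfs.patch` (the file name is just an example), it can be applied while keeping a backup of the pristine agent; note that the next `resource-agents` package update will overwrite the change:
```bash
sudo patch -b /usr/lib/ocf/resource.d/heartbeat/exportfs < exportfs.patch  # -b keeps a .orig backup
```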
## On ZFS virtual device types

[Write up](https://klarasystems.com/articles/openzfs-understanding-zfs-vdev-types/)
* LOG (SLOG): may be useful for NFS exports, since it absorbs synchronous writes. Can be very small (32 GB) but should have very low latency; see the sketch after this list.
* L2ARC: a read cache; rarely useful in practice.
* SPECIAL: if enabled, it holds all of the pool's metadata and therefore becomes a point of failure, so it should be redundant. Can also hold small files. Should be an SSD.
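As a sketch of the LOG case above (the device paths are placeholders), a mirrored SLOG can be added to a live pool:
```bash
# Mirror the SLOG: losing an unmirrored log device can cost in-flight sync writes
sudo zpool add nfsshare log mirror \
  /dev/disk/by-id/nvme-example-1 /dev/disk/by-id/nvme-example-2
sudo zpool status nfsshare
```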