Setting up an OpenSearch cluster
Kubernetes YAML (this assumes the OpenSearch Kubernetes operator, which provides the opensearch.opster.io/v1 CRDs, is already installed in the cluster):
apiVersion: v1
kind: Secret
metadata:
  annotations:
    field.cattle.io/description: Access credentials for Ceph S3 storage
  name: s3-credentials
  namespace: opensearch
stringData:
  accessKey: <from S3 service>
  secretKey: <from S3 service>
---
apiVersion: v1
kind: Secret
metadata:
  annotations:
    field.cattle.io/description: Access credentials for the opensearch admin user
  name: opensearch-admin-credentials-prod
  namespace: opensearch
stringData:
  username: admin
  password: <generate a random one>
---
apiVersion: v1
kind: Secret
metadata:
  name: opensearch-securityconfig-prod
  namespace: opensearch
stringData:
  internal_users.yml: |
    _meta:
      type: "internalusers"
      config_version: 2
    admin:
      hash: "<bcrypt the password from above>"
      reserved: true
      backend_roles:
        - "admin"
      description: "Admin user managed by the Kubernetes operator"
---
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: opensearch-prod
  namespace: opensearch
spec:
  bootstrap:
    resources: {}
  confMgmt: {}
  dashboards:
    enable: true
    opensearchCredentialsSecret:
      name: opensearch-admin-credentials-prod
    replicas: 1
    resources:
      limits:
        cpu: '1'
        memory: 2Gi
      requests:
        cpu: '1'
        memory: 1Gi
    service:
      type: ClusterIP
    tls:
      caSecret: {}
      enable: true
      generate: true
      secret: {}
    version: 2.8.0
  general:
    drainDataNodes: true
    httpPort: 9200
    keystore:
      - keyMappings:
          accessKey: s3.client.default.access_key
          secretKey: s3.client.default.secret_key
        secret:
          name: s3-credentials
    pluginsList:
      - repository-s3
    serviceName: opensearch-prod
    version: 2.8.0
  initHelper: {}
  nodePools:
    - additionalConfig:
        http.cors.allow-credentials: 'true'
        http.cors.allow-origin: >-
          /https?:\/\/(localhost|dboefrontend(-test)?\.cluster\.machine-deck\.jeffries-tube\.at)(:[0-9]+)?/
        http.cors.enabled: 'true'
        s3.client.default.endpoint: s3.cluster.machine-deck.jeffries-tube.at
        s3.client.default.path_style_access: 'true'
        s3.client.default.protocol: https
      component: masters
      diskSize: 25Gi
      jvm: '-Xmx2g -Xms2g'
      persistence:
        pvc:
          accessModes:
            - ReadWriteOnce
          storageClass: local-disks
      replicas: 3
      resources:
        limits:
          cpu: '2'
          memory: 4Gi
        requests:
          cpu: '2'
          memory: 4Gi
      roles:
        - data
        - cluster_manager
  security:
    config:
      adminCredentialsSecret:
        name: opensearch-admin-credentials-prod
      adminSecret: {}
      securityConfigSecret:
        name: opensearch-securityconfig-prod
    tls:
      http:
        caSecret: {}
        generate: true
        secret: {}
      transport:
        caSecret: {}
        generate: true
        perNode: true
        secret: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: opensearch-api
  namespace: opensearch
spec:
  rules:
    - host: opensearch-api.cluster.machine-deck.jeffries-tube.at
      http:
        paths:
          - backend:
              service:
                name: opensearch-prod-masters
                port:
                  number: 9200
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - opensearch-api.cluster.machine-deck.jeffries-tube.at
      secretName: opensearch-api-tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: opensearch-dashboards
  namespace: opensearch
spec:
  rules:
    - host: opensearch.cluster.machine-deck.jeffries-tube.at
      http:
        paths:
          - backend:
              service:
                name: opensearch-prod-dashboards
                port:
                  number: 5601
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - opensearch.cluster.machine-deck.jeffries-tube.at
      secretName: opensearch-dashboards-tls
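To fill in the admin hash, the password can be bcrypted with the hash tool shipped in the OpenSearch image; a minimal sketch, assuming Docker is available locally and the tool sits at its usual path in the official 2.x image (the manifest file name is just an example):

# Generate the bcrypt hash to paste into internal_users.yml
docker run --rm opensearchproject/opensearch:2.8.0 \
  /usr/share/opensearch/plugins/opensearch-security/tools/hash.sh -p '<the admin password>'
# Apply all of the manifests above
kubectl apply -f opensearch-cluster.yaml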
Troubleshooting
- Make sure the sysctl setting vm.max_map_count is at least 262144 on all cluster nodes (see the sysctl commands after this list)
- If something goes wrong, start over from scratch; the first install tends to hang when certain race conditions are hit (see the cleanup commands after this list):
  - delete the OpenSearchCluster definition
  - delete the PVCs
  - delete the released local-disk PVs
- For Ceph S3: when registering the snapshot repository, there is usually no bucket path, just the bucket name (see the repository example after this list)
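Checking and setting vm.max_map_count can be done with plain sysctl on each node; a sketch (the sysctl.d file name is arbitrary):

# Check the current value
sysctl vm.max_map_count
# Set it immediately
sudo sysctl -w vm.max_map_count=262144
# Persist it across reboots
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-opensearch.conf
sudo sysctl --system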
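For the start-over case, a rough cleanup sequence with kubectl could look like this (resource names follow the manifests above; check what actually exists before deleting anything):

# Remove the cluster definition; the operator then tears down its workloads
kubectl -n opensearch delete opensearchcluster opensearch-prod
# List and delete the PVCs that were provisioned for the node pool
kubectl -n opensearch get pvc
kubectl -n opensearch delete pvc <pvc-name>
# Find released local-disk PVs and delete them
kubectl get pv | grep Released
kubectl delete pv <pv-name>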
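Once the cluster is up, the S3 snapshot repository can be registered against Ceph via the REST API; a sketch, assuming the admin credentials from above and a hypothetical repository name (note: bucket name only, no base_path):

curl -k -u admin:<password> -X PUT \
  "https://opensearch-api.cluster.machine-deck.jeffries-tube.at/_snapshot/ceph-s3-repo" \
  -H 'Content-Type: application/json' \
  -d '{"type": "s3", "settings": {"bucket": "<bucket name>"}}'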