Commit 737cfb6

Add kubevirt configuration and troubleshoot rabbitmq disk issue
1 parent 44e6eb4 commit 737cfb6

File tree

3 files changed: +110 −0 lines


41-kubernetes-single-computer.md

Lines changed: 10 additions & 0 deletions

````diff
@@ -120,3 +120,13 @@ volumes:
     persistentVolumeClaim:
       claimName: asreview-storage
 ```
+
+## Multi-node minikube
+
+If you are using a multi-node minikube setup (for testing reasons, hopefully), also run the following:
+
+```bash
+minikube addons disable storage-provisioner
+kubectl delete storageclasses.storage.k8s.io standard
+kubectl apply -f kubevirt-hostpath-provisioner.yml
+```
````
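After swapping the provisioner, it is worth confirming the change took effect. A quick check (a sketch; it assumes the `kubevirt-hostpath-provisioner.yml` manifest from this commit has been applied, which labels its pods `k8s-app: kubevirt-hostpath-provisioner`):

```shell
# List StorageClasses; the kubevirt provisioner should now be the default,
# since the built-in minikube "standard" class was deleted above.
kubectl get storageclass

# The DaemonSet should run one provisioner pod per minikube node.
kubectl -n kube-system get pods -l k8s-app=kubevirt-hostpath-provisioner -o wide
```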

42-kubernetes-cloud-provider.md

Lines changed: 8 additions & 0 deletions

````diff
@@ -59,3 +59,11 @@ volumes:
       server: NFS_SERVICE_IP
       path: "/"
 ```
+
+## StorageClass provisioner
+
+If your cluster does not have a StorageClass provisioner, you can try the following:
+
+```bash
+kubectl apply -f kubevirt-hostpath-provisioner.yml
+```
````
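To confirm that dynamic provisioning actually works after applying the manifest, one option is a throwaway claim (the PVC name here is hypothetical; note that with `volumeBindingMode: WaitForFirstConsumer` the claim stays `Pending` until a pod mounts it):

```shell
# Create a hypothetical 1Gi test claim against the default StorageClass.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: provisioner-smoke-test
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# With WaitForFirstConsumer the claim shows Pending until a pod consumes it.
kubectl get pvc provisioner-smoke-test

# Clean up the test claim.
kubectl delete pvc provisioner-smoke-test
```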
kubevirt-hostpath-provisioner.yml

Lines changed: 92 additions & 0 deletions (new file)

```yaml
# https://stackoverflow.com/questions/75175620/why-cant-my-rabbitmq-cluster-on-k8s-multi-node-minikube-create-its-mnesia-dir
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubevirt.io/hostpath-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubevirt-hostpath-provisioner
subjects:
  - kind: ServiceAccount
    name: kubevirt-hostpath-provisioner-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubevirt-hostpath-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubevirt-hostpath-provisioner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]

  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubevirt-hostpath-provisioner-admin
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubevirt-hostpath-provisioner
  labels:
    k8s-app: kubevirt-hostpath-provisioner
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kubevirt-hostpath-provisioner
  template:
    metadata:
      labels:
        k8s-app: kubevirt-hostpath-provisioner
    spec:
      serviceAccountName: kubevirt-hostpath-provisioner-admin
      containers:
        - name: kubevirt-hostpath-provisioner
          image: quay.io/kubevirt/hostpath-provisioner
          imagePullPolicy: Always
          env:
            - name: USE_NAMING_PREFIX
              value: "false" # change to true, to have the name of the pvc be part of the directory
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: PV_DIR
              value: /tmp/hostpath-provisioner
          volumeMounts:
            - name: pv-volume # root dir where your bind mounts will be on the node
              mountPath: /tmp/hostpath-provisioner/
      #nodeSelector:
      #- name: xxxxxx
      volumes:
        - name: pv-volume
          hostPath:
            path: /tmp/hostpath-provisioner/
```
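One way to apply this manifest and wait until the provisioner is up on every node (a sketch; it assumes the manifest is saved as `kubevirt-hostpath-provisioner.yml`, the name used by the commands in the docs above):

```shell
# Apply the StorageClass, RBAC, ServiceAccount, and DaemonSet in one go.
kubectl apply -f kubevirt-hostpath-provisioner.yml

# Block until the DaemonSet has a ready pod on each node.
kubectl -n kube-system rollout status daemonset/kubevirt-hostpath-provisioner
```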
