failed to provision volume with StorageClass #116

Open
sd2020bs opened this issue May 4, 2021 · 4 comments
sd2020bs commented May 4, 2021

I have LINSTOR with 2 nodes using LVM-thin storage pools:

linstor storage-pool list
┊ StoragePool ┊ Node           ┊ Driver   ┊ PoolName          ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊
┊ data        ┊ linstor-ctrl   ┊ LVM_THIN ┊ drbdpool/thinpool ┊    19.96 GiB ┊     19.96 GiB ┊ True         ┊ Ok    ┊
┊ data        ┊ linstor-satel1 ┊ LVM_THIN ┊ drbdpool/thinpool ┊    19.96 GiB ┊     19.96 GiB ┊ True         ┊ Ok    ┊

and a resource group named linstor-basic-storage.
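A setup like this is typically created with LINSTOR CLI commands along the following lines; the node, pool, and thin-pool names simply mirror the listing above, and the exact invocations may have differed:

# LVM-thin backed storage pool "data" on each diskful node
linstor storage-pool create lvmthin linstor-ctrl data drbdpool/thinpool
linstor storage-pool create lvmthin linstor-satel1 data drbdpool/thinpool

# resource group referenced by the StorageClass below
linstor resource-group create linstor-basic-storage --storage-pool data --place-count 2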
I deployed the LINSTOR CSI driver in my Kubernetes cluster and created a StorageClass and a PVC, but kubectl describe pvc my-first-linstor-volume reports this error:

Warning ProvisioningFailed 13m (x16 over 51m) linstor.csi.linbit.com_linstor-csi-controller-0_b3c98016-4e61-4650-ab86-2c1167f5d047 failed to provision volume with StorageClass "linstor-basic-storage": error generating accessibility requirements: no available topology found
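This "no available topology found" error generally means the external-provisioner sees no node topology registered for the LINSTOR CSI driver. A quick way to check, assuming the driver runs in kube-system as in the logs further down, is:

# does any node report topology keys for the LINSTOR CSI driver?
kubectl get csinode -o yaml | grep -B2 -A4 linstor.csi.linbit.com

# are the CSI node pods healthy?
kubectl get pods -n kube-system | grep linstor-csi-node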
My StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-basic-storage
provisioner: linstor.csi.linbit.com
parameters:
  placementCount: "2"
  storagePool: "data"
  resourceGroup: "linstor-basic-storage"
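Whether the storagePool and resourceGroup named here exist on the LINSTOR side as expected can be double-checked on the controller:

linstor storage-pool list
linstor resource-group list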

My PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-first-linstor-volume
spec:
  storageClassName: linstor-basic-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

How can I solve this problem?


sd2020bs commented May 5, 2021

I have found the reason: the linstor-csi-node pods are down.
linstor-csi-node-bvtkh 1/2 CrashLoopBackOff 35 15h
linstor-csi-node-rp9bf 1/2 CrashLoopBackOff 31 15h
linstor-csi-node-zrdh7 1/2 CrashLoopBackOff 37 15h
kubectl logs linstor-csi-node-zrdh7 -n kube-system csi-node-driver-registrar
time="2021-05-05T03:18:04Z" level=debug msg="curl -X 'GET' -H 'Accept: application/json' 'http://192.168.1.150:3370/v1/nodes/node1/storage-pools'"
time="2021-05-05T03:18:04Z" level=debug msg="Status code not within 200 to 400, but 404 (Not Found)\n"
time="2021-05-05T03:18:04Z" level=error msg="method failed" func="github.com/sirupsen/logrus.(*Entry).Error" file="/go/pkg/mod/github.com/sirupsen/logrus@v1.4.2/entry.go:297" error="failed to retrieve node topology: failed to get storage pools for node: 404 Not Found" linstorCSIComponent=driver method=/csi.v1.Node/NodeGetInfo nodeID=node1 provisioner=linstor.csi.linbit.com req= resp="" version=v0.12.1
time="2021-05-05T03:18:17Z" level=debug msg="method called" func="github.com/sirupsen/logrus.(*Entry).Debug" file="/go/pkg/mod/github.com/sirupsen/logrus@v1.4.2/entry.go:277" linstorCSIComponent=driver method=/csi.v1.Identity/GetPluginInfo nodeID=node1 provisioner=linstor.csi.linbit.com req= resp="name:"linstor.csi.linbit.com" vendor_version:"v0.12.1" " version=v0.12.1

But I don't understand why the $(KUBE_NODE_NAME) variable is used to look up storage pools. My storage pools are on different VMs.
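The 404 suggests the controller does not know a node called node1 at all. Whether the Kubernetes node is registered with the LINSTOR controller can be checked directly, using the controller address from the log above:

# nodes known to the LINSTOR controller
linstor node list

# same information via the REST API the CSI driver queries
curl http://192.168.1.150:3370/v1/nodes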

WanzenBug (Member) commented:

Hello!

You don't need a storage pool on the Kubernetes node, but you do need at least a satellite running on the host. This can run directly on the host or as a DaemonSet (like the one configured by the piraeus-operator). The LINSTOR satellite is responsible for creating the DRBD diskless resource that the node attaches to; the CSI node pod just prepares/mounts the device created by the satellite.
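For a cluster managed by hand, registering a Kubernetes worker as a satellite looks roughly like this; the node name must match the Kubernetes node name, the IP is a placeholder, and the piraeus-operator performs this step automatically when its satellite DaemonSet is used:

# with the linstor-satellite service running on the worker, register it on the controller
linstor node create node1 192.168.1.151 --node-type satellite
linstor node list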


sd2020bs commented May 5, 2021

Does this storage work like OpenEBS with block devices? I mean, must every Kubernetes node have the same disks? Can this storage work in a Kubernetes cluster with external pools (on non-clustered VMs)?

WanzenBug (Member) commented:

I mean, must every Kubernetes node have the same disks?

No, you can have completely different disks and storage pools on every node.

Can this storage work in a Kubernetes cluster with external pools?

Yes. The only requirement is that your Kubernetes cluster nodes are part of the overall LINSTOR cluster (i.e. they have to have a satellite configured). The Kubernetes nodes don't need a disk configured.
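For illustration: on a node that runs a satellite but has no storage pool, LINSTOR attaches the volume as a diskless DRBD resource, which the CSI driver triggers automatically when a pod using the volume is scheduled there. Done manually it would look roughly like this (resource name is hypothetical; newer clients spell the flag --drbd-diskless):

# attach an existing resource to node1 without local backing storage
linstor resource create node1 my-resource --diskless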
