Describe the bug
We are trying to install this CSI driver on Kubernetes 1.24.6; all nodes run Ubuntu 22.04. Installation from your manifests succeeds. We then create the PVs and a pod. A disk of the required size is created on the node (/dev/sdd, 12Mi), but it is not mounted. The logs of the csi-vcd-nodeplugin container are as follows:
```
I0120 02:35:17.154778 1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:1:0] => [/dev/sdd]
I0120 02:35:17.160375 1 node.go:508] Obtained matching disk [/dev/sdd]
I0120 02:35:17.213342 1 node.go:155] Mounting device [/dev/sdd] to folder [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount] of type [ext4] with flags [[rw]]
time="2023-01-20T02:35:17Z" level=info msg="attempting to mount disk" fsType=ext4 options="[rw defaults]" source=/dev/sdd target=/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount
time="2023-01-20T02:35:17Z" level=info msg="mount command" args="-t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount" cmd=mount
time="2023-01-20T02:35:17Z" level=error msg="mount Failed" args="-t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount" cmd=mount error="exit status 32" output="mount: /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount: mount point does not exist.\n"
time="2023-01-20T02:35:17Z" level=info msg="checking if disk is formatted using lsblk" args="[-n -o FSTYPE /dev/sdd]" disk=/dev/sdd
E0120 02:35:17.243250 1 driver.go:172] GRPC error: function [/csi.v1.Node/NodeStageVolume] req [&csi.NodeStageVolumeRequest{VolumeId:"pvc-72ad8d45-a48c-4898-a0ba-41e9fcb8adc3", PublishContext:map[string]string{"diskID":"pvc-72ad8d45-a48c-4898-a0ba-41e9fcb8adc3", "diskUUID":"6000c29b-da53-f4bc-f0e4-9af9c4f15aea", "filesystem":"ext4", "vmID":"dev-k8s-worker02"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount", VolumeCapability:(*csi.VolumeCapability)(0xc000042500), Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:5b42562f-d15e-42b6-96af-f7e3d7b2636e", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1674143428117-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"SATA"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]: [rpc error: code = Internal desc = unable to format and mount device [/dev/sdd] at path [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount] with fs [[rw]] and flags [mount failed: exit status 32
mounting arguments: -t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount
output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount: mount point does not exist.
]: [%!v(MISSING)]]
```
If we go to the node where the pod is scheduled, manually mounting /dev/sdd to another directory, for example /mnt, succeeds. If we create the 2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount directories (the paths from the log) by hand under /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com, they are deleted again after a few seconds.
Has anyone faced a similar problem?
Reproduction steps
1. Install the CSI driver from the provided manifests on Kubernetes 1.24.6 (Ubuntu 22.04 nodes).
2. Create a PV/PVC and a pod that uses it.
3. The disk is created and attached to the node, but NodeStageVolume fails with the error shown above.
Expected behavior
Disk /dev/sdd should be mounted at /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount.
Additional context
Kubernetes version: 1.24.6
CSI Version: 1.3.1
Node OS: Ubuntu 22.04 LTS