Mount failed: exit status 32 (mount point does not exist) #132

Open
autokilla47 opened this issue Jan 20, 2023 · 5 comments
Labels: bug (Something isn't working)

@autokilla47

Describe the bug

We are trying to install this CSI driver on Kubernetes 1.24.6; all nodes run Ubuntu 22.04. Installation from your manifests succeeds. Next, we create a PVC and a pod. A disk of the requested size is created and attached to the node (/dev/sdd, 12Mi), but it is not mounted. The csi-vcd-nodeplugin container logs the following:

I0120 02:35:17.154778       1 node.go:479] Checking file: [/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:0:1:0] => [/dev/sdd]
I0120 02:35:17.160375       1 node.go:508] Obtained matching disk [/dev/sdd]
I0120 02:35:17.213342       1 node.go:155] Mounting device [/dev/sdd] to folder [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount] of type [ext4] with flags [[rw]]
time="2023-01-20T02:35:17Z" level=info msg="attempting to mount disk" fsType=ext4 options="[rw defaults]" source=/dev/sdd target=/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount
time="2023-01-20T02:35:17Z" level=info msg="mount command" args="-t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount" cmd=mount
time="2023-01-20T02:35:17Z" level=error msg="mount Failed" args="-t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount" cmd=mount error="exit status 32" output="mount: /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount: mount point does not exist.\n"
time="2023-01-20T02:35:17Z" level=info msg="checking if disk is formatted using lsblk" args="[-n -o FSTYPE /dev/sdd]" disk=/dev/sdd
E0120 02:35:17.243250       1 driver.go:172] GRPC error: function [/csi.v1.Node/NodeStageVolume] req [&csi.NodeStageVolumeRequest{VolumeId:"pvc-72ad8d45-a48c-4898-a0ba-41e9fcb8adc3", PublishContext:map[string]string{"diskID":"pvc-72ad8d45-a48c-4898-a0ba-41e9fcb8adc3", "diskUUID":"6000c29b-da53-f4bc-f0e4-9af9c4f15aea", "filesystem":"ext4", "vmID":"dev-k8s-worker02"}, StagingTargetPath:"/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount", VolumeCapability:(*csi.VolumeCapability)(0xc000042500), Secrets:map[string]string(nil), VolumeContext:map[string]string{"busSubType":"VirtualSCSI", "busType":"SCSI", "diskID":"urn:vcloud:disk:5b42562f-d15e-42b6-96af-f7e3d7b2636e", "filesystem":"ext4", "storage.kubernetes.io/csiProvisionerIdentity":"1674143428117-8081-named-disk.csi.cloud-director.vmware.com", "storageProfile":"SATA"}, XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}]: [rpc error: code = Internal desc = unable to format and mount device [/dev/sdd] at path [/var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount] with fs [[rw]] and flags [mount failed: exit status 32
mounting arguments: -t ext4 -o rw,defaults /dev/sdd /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount
output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount: mount point does not exist.
]: [%!v(MISSING)]]

If I go to the node where the pod is scheduled, manually mounting /dev/sdd to a directory such as /mnt succeeds. If I try to create the 2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount directories (the path from the log) under /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com, they are deleted after a few seconds.
Has anyone faced a similar problem?

Reproduction steps

  1. Install the CSI driver
  2. Create a StorageClass:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
  name: vcd-disk-dev
provisioner: named-disk.csi.cloud-director.vmware.com
reclaimPolicy: Delete
parameters:
  storageProfile: "SATA"
  filesystem: "ext4"
  3. Create a PersistentVolumeClaim:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Mi
  storageClassName: "vcd-disk-dev"
  4. Create a pod that uses the PersistentVolumeClaim:
---
apiVersion: v1
kind: Pod
...
spec:
  volumes:
    - name: my-pod-storage
      persistentVolumeClaim:
        claimName: my-pvc1
  containers:
    - name: my-pod-container
...
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pod-storage

Expected behavior

Disk /dev/sdd should be mounted at /var/lib/kubelet/plugins/kubernetes.io/csi/named-disk.csi.cloud-director.vmware.com/2aa75d1684f43d1029e9db0bd05c5739be8957e9a70d85d9449c69f8e34c145f/globalmount.

Additional context

Kubernetes version: 1.24.6
CSI Version: 1.3.1
Node OS: Ubuntu 22.04 LTS

@autokilla47 added the bug label on Jan 20, 2023
@ofedotov51

Hi! My OKD 4.11.0-0.okd-2022-11-05-030711 cluster has the same bug.

@dtarasov7 (Contributor)

This PR fixes the error: #134
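
For context while the fix is being picked up: mount(8) exits with status 32 here because the staging target directory does not exist at the moment the node plugin runs the mount command shown in the logs. Below is a minimal sketch, in Go, of the general pattern that avoids this (create the staging path before mounting). The function ensureAndMount and its signature are hypothetical and are not copied from PR #134 or from the driver's source; treat it as an illustration of the failure mode only.

package mountutil

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureAndMount creates the staging directory if it is missing and then
// mounts the device on it, mirroring the "mount -t <fs> -o <opts> <dev> <dir>"
// invocation visible in the csi-vcd-nodeplugin logs above.
func ensureAndMount(device, stagingPath, fsType string, options []string) error {
	// The 0o750 mode is an assumption; the essential point is that the
	// directory must exist before mount is called, otherwise mount(8)
	// fails with exit status 32 ("mount point does not exist").
	if err := os.MkdirAll(stagingPath, 0o750); err != nil {
		return fmt.Errorf("unable to create staging path [%s]: %w", stagingPath, err)
	}
	args := []string{"-t", fsType, "-o", strings.Join(options, ","), device, stagingPath}
	out, err := exec.Command("mount", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("mount failed: %w, output: [%s]", err, string(out))
	}
	return nil
}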

@autokilla47 (Author)

I checked the changes in PR #134 and they solved my problem.

@ofedotov51

Is there a ready-made image available that includes this fix?
