Comment
Author: Admin | 2025-04-28
I have an application running in a pod in Kubernetes. I would like to store some output log files on a persistent storage volume. To do that, I created a volume over NFS and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently read-only.

The following is the JSON file I used to create the volume:

```json
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "task-pv-test"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "nfs": {
      "server": ,
      "path": "/export"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  }
}
```

The following is the pod configuration file:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: 
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false
```

Is there a way to change the permissions?

UPDATE

Here are the PVC and NFS configs.

PVC:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-test-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
```

NFS CONFIG:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nfs-client-provisioner-557b575fbc-hkzfp",
    "generateName": "nfs-client-provisioner-557b575fbc-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
    "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
    "resourceVersion": "27228",
    "creationTimestamp": "2018-04-17T12:26:35Z",
    "labels": {
      "app": "nfs-client-provisioner",
      "pod-template-hash": "1136131967"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "ReplicaSet",
        "name": "nfs-client-provisioner-557b575fbc",
        "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "nfs-client-root",
        "nfs": {
          "server": ,
          "path": "/Kubernetes"
        }
      },
      {
        "name": "nfs-client-provisioner-token-fdd2c",
        "secret": {
          "secretName": "nfs-client-provisioner-token-fdd2c",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "nfs-client-provisioner",
        "image": "quay.io/external_storage/nfs-client-provisioner:latest",
        "env": [
          {
            "name": "PROVISIONER_NAME",
            "value": "/Kubernetes"
          },
          {
            "name": "NFS_SERVER",
            "value": 
          },
          {
            "name": "NFS_PATH",
            "value": "/Kubernetes"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "nfs-client-root",
            "mountPath": "/persistentvolumes"
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "nfs-client-provisioner",
    "serviceAccount": "nfs-client-provisioner",
    "nodeName": "det-vkube-s02",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Running",
    "hostIP": ,
    "podIP": ,
    "startTime": "2018-04-17T12:26:35Z",
    "qosClass": "BestEffort"
  }
}
```

I have just removed some of the status fields.
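For reference, below is a minimal, hypothetical sketch of the kind of securityContext that is commonly added to a pod spec so that the container runs with a UID/GID matching the owner of the files on the NFS export. The UID/GID values, the busybox image, and the write-test command are placeholders, not part of the setup above; whether this helps also depends on how the export is configured on the NFS server (rw vs. ro, root_squash, etc.):

```yaml
# Sketch only: placeholder values, not the actual configuration in use.
kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  securityContext:
    runAsUser: 1000    # assumed UID that owns the files on the NFS export
    runAsGroup: 1000   # assumed matching GID
    fsGroup: 1000      # note: Kubernetes does not chown NFS mounts via fsGroup,
                       # so the UID/GID above must already have write access server-side
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: busybox   # placeholder; the original image name is omitted above
      # quick write test against the mounted share
      command: ["sh", "-c", "touch /home/write-test && sleep 3600"]
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false
```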