1.24 breaking change - No Secret by default for Service Account token #31
Comments
I've tested on 1.25 and this does not seem to be an issue. Granted there are no tokens, the patcher is still able to sync the secret and patch the service accounts with:

imagePullSecrets:
  - name: image-pull-secret

I was able to pull an image from a private registry using the appropriate secret and no token in the service account. I do not believe the patcher should be updated to create the tokens; that seems out of scope. Perhaps I'm missing something.
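For context, the end state the commenter describes — a service account patched with the registry secret — would look roughly like this (a sketch; the secret name is taken from the comment above and the namespace is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-namespace   # hypothetical namespace
# The patcher appends the registry credential secret here,
# so pods using this service account can pull private images.
imagePullSecrets:
  - name: image-pull-secret
```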
This commit addresses the issue described in titansoft-pte-ltd#31, where pods were unable to find the specified ConfigMap. By explicitly defining the service account token in the Pod specification, we ensure that the token is available to the Pod, thereby mitigating the issue. Related: titansoft-pte-ltd#31
We started our deployment on a k3s cluster hosted by Civo, where the manifest initially worked as expected. At some point, though, the deployment failed with the following error:

MountVolume.SetUp failed for volume "cdk8s-plugin-cm" : configmap "cdk8s-plugin-cm" not found

After some digging, we discovered that explicitly specifying the service account token resolved the issue, contrary to the current behavior that relies on Kubernetes' auto-mounting of service account tokens. This seems to align with the original poster's experience on Kubernetes 1.25. Here is the modified YAML for the Deployment spec:

# ... (other parts of the Deployment)
spec:
  # ...
  template:
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: explicit-token
          projected:
            sources:
              - secret:
                  name: imagepullsecret-patcher-token
                  items:
                    - key: token
                      path: token
      containers:
        - name: imagepullsecret-patcher
          # ...
          volumeMounts:
            - name: explicit-token
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount/
              readOnly: true

(Note: the field is automountServiceAccountToken in camelCase; the snake_case spelling is not a valid Pod spec key.) This approach avoids the issue for all providers. To see the full changes, you can view the pull request here: [PR]

Related Stack Overflow question: https://stackoverflow.com/questions/69038012/mountvolume-setup-failed-for-volume-kube-api-access-fcz9j-object-default
Hello contributors,
I have used this Patcher in my environment to automate the propagation of Docker credentials across the cluster for existing and new namespaces. However, in Kubernetes v1.24, a change to the Token APIs may break some functionality of this module. I have tried to describe the scenario below. Related issue
Description of Issue
Kubernetes is moving to short-lived tokens as a security improvement. As of the TokenRequest changes in Kubernetes v1.24, a Secret containing a Service Account's API access token is no longer auto-generated; it must be created manually for each new Service Account. Instead, when a new Pod is created, kubelet uses the TokenRequest API to generate a token that is specific to that Pod (and its Service Account/Namespace?). The token is mounted into the Pod by a projected volume and refreshed regularly before it expires.
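The bound-token mount that kubelet generates looks roughly like this in the resulting Pod spec — a sketch of the mechanism with illustrative values, not copied from any particular Pod:

```yaml
# Volume injected by kubelet in place of the legacy token Secret mount.
volumes:
  - name: kube-api-access   # name chosen by kubelet; shown here for illustration
    projected:
      sources:
        - serviceAccountToken:
            path: token
            expirationSeconds: 3600   # short-lived; kubelet rotates it before expiry
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt
```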
Replicate the Issue
$ kubectl describe sa default
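On Kubernetes v1.24 and later, the output of the command above typically lists no auto-generated token, whereas v1.23 and earlier showed a generated token secret. An abbreviated sample of what to look for (field wording varies by version):

```
$ kubectl describe sa default
Name:                default
Namespace:           default
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
```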
Proposed Change
The Patcher should also create a "Service Account Token Secret" for each Service Account included in its scope, if one does not already exist. This means that new Service Accounts do not need to have this Token Secret manually created. The Secret containing registry credentials is then accessible by the Service Account and its Pods. Relevant documentation
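Under that proposal, the object the Patcher would create for each in-scope Service Account is the manually-created token Secret described in the Kubernetes documentation: a Secret of type kubernetes.io/service-account-token annotated with the Service Account's name, which the control plane then populates with a token. Sketched here with hypothetical names:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-token   # hypothetical name the patcher might choose
  annotations:
    # Binds this Secret to the Service Account; the token controller
    # fills in the token, ca.crt, and namespace data fields.
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
```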
Disclaimer: I am relatively new to Kubernetes and am not a Go developer. I hope I have presented the issue clearly. Please let me know if more information is needed or an alternative solution exists. Thank you!