Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred #31500
-
I'm trying to install Superset on-premise. I'm using the versions reported by minikube version, kubectl version and helm version, and I run:

helm upgrade --install --values my-values.yaml --debug superset superset/superset

But I just get the error from the title. Reading this log I see the error. Any idea?
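Since the error in the title points at a failed post-upgrade hook, the hook resources the chart created can be inspected directly (for this chart it turns out to be the DB init Job, see the answer below). A minimal sketch, assuming the release is named superset and lives in the default namespace; <job-name> is a placeholder for whichever Job shows up as failed:

```bash
# List the hook manifests attached to the release
helm get hooks superset

# Show the Jobs in the namespace and their completion status
kubectl get jobs -n default

# Describe the failed Job to see its events and the pods it created
kubectl describe job <job-name> -n default
```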
-
Hey @Standard-IO! 👋 I'm here to help you with your tech issues and questions. Let's squash those bugs together! I found a similar open issue titled "Unable to install superset using helm and kubernetes" that might be relevant to your problem. However, it doesn't have any comments or solutions provided [1]. Regarding your issue with the [...]
By addressing these areas, you may be able to resolve the timeout issue during the Helm upgrade process.
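Since the reply refers to a timeout during the Helm upgrade, one generic lever is Helm's own timeout, which also applies to hook Jobs. A minimal sketch, assuming the same release, chart, and values file as in the question:

```bash
# Raise the time Helm waits for Kubernetes operations, including hook Jobs
# (the default is 5m0s); --wait additionally waits for resources to become ready.
helm upgrade --install \
  --values my-values.yaml \
  --timeout 10m0s \
  --wait \
  --debug \
  superset superset/superset
```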
-
I found related issues/questions (#28125, #27575, #28782) on this topic, but none of them shows a clear way to debug the problem, maybe because it is very tied to Kubernetes and the Helm charts. The failing resource is a Kubernetes Job that initializes the Postgres DB.

We can make the install/upgrade print more information by adding --debug:

helm upgrade --install --values my-values.yaml --debug superset superset/superset

After realizing that the error comes from a Job, we can list the related pods with the following command and pick the one in an error state (a consolidated sketch of the whole loop follows the fix at the end of this reply):

kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default superset-866f4645d7-29r89 1/1 Running 0 74s
default superset-init-db-ms85r 0/1 Error 0 74s
default superset-postgresql-0 1/1 Running 0 6h49m
default superset-redis-master-0 1/1 Running 0 6h49m
default superset-worker-59c7866db5-hzwt6 1/1 Running 0 74s

In my case the failing pod was superset-init-db-ms85r, so I checked its logs:

kubectl logs superset-init-db-ms85r

The displayed errors showed what was failing in the DB initialization, and the fix was to set the following bootstrapScript in my values file:
bootstrapScript: |
  #!/bin/bash
  python -m pip install -U psycopg2-binary
  if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi

After these steps everything works fine.
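For reference, here is the whole debugging loop in one place. This is a minimal sketch assuming the release is called superset, the chart is superset/superset, everything runs in the default namespace, and the values live in my-values.yaml; the pod name superset-init-db-ms85r changes on every run, and the Job name superset-init-db is inferred from that pod name:

```bash
# 1. Run the upgrade with --debug so Helm prints rendered manifests and hook output
helm upgrade --install --values my-values.yaml --debug superset superset/superset

# 2. If it fails on a hook, list the pods and look for the init Job pod in Error state
kubectl get pods -A

# 3. Read the logs of the failed init pod (use the name from the listing above)
kubectl logs superset-init-db-ms85r

# 4. Fix the cause in my-values.yaml (e.g. the bootstrapScript above) and rerun.
#    If the old hook Job lingers and blocks re-creation, delete it first.
kubectl delete job superset-init-db
helm upgrade --install --values my-values.yaml --debug superset superset/superset
```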