The situation
I have a Kubernetes pod stuck in the "Terminating" state that resists deletion:
NAME                             READY   STATUS        RESTARTS   AGE
...
funny-turtle-myservice-xxx-yyy   1/1     Terminating   1          11d
...
Here funny-turtle is the name of the Helm release, which has since been deleted.
What I have tried
Tried to delete the pod (kubectl delete pod funny-turtle-myservice-xxx-yyy).
Output:
pod "funny-turtle-myservice-xxx-yyy" deleted
Outcome: the pod still shows up in the same Terminating state.
Also tried with --force --grace-period=0; same outcome, plus an extra warning:
Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Tried to read the logs (kubectl logs ...).
Outcome:
Error from server (NotFound): nodes "ip-xxx.yyy.compute.internal" not found
Tried to delete the Kubernetes deployment, but it does not exist.
So, reasoning from the error message that kubectl logs printed, I assume this pod somehow got "disconnected" from the AWS API: the node it was scheduled on no longer seems to exist.
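To make that hypothesis concrete: a pod stuck like this typically has a deletionTimestamp set and either finalizers still attached or a nodeName pointing at a machine that is gone. A minimal sketch of the check, run against an invented sample of what kubectl get pod -o json would return (the finalizer value and timestamp are placeholders; the real manifest would come from the cluster):

```shell
# Invented sample of `kubectl get pod funny-turtle-myservice-xxx-yyy -o json`
# output; the finalizer and timestamp are made-up placeholders.
cat > /tmp/stuck-pod.json <<'EOF'
{
  "metadata": {
    "name": "funny-turtle-myservice-xxx-yyy",
    "deletionTimestamp": "2018-01-01T00:00:00Z",
    "finalizers": ["example.com/some-finalizer"]
  },
  "spec": {
    "nodeName": "ip-xxx.yyy.compute.internal"
  }
}
EOF

# A pod with a deletionTimestamp that still lists finalizers, or whose
# nodeName no longer matches a live node, will sit in Terminating forever.
python3 - <<'EOF'
import json

pod = json.load(open("/tmp/stuck-pod.json"))
print("node:", pod["spec"].get("nodeName"))
print("finalizers:", pod["metadata"].get("finalizers", []))
print("deletionTimestamp:", pod["metadata"].get("deletionTimestamp"))
EOF
```

On a real cluster the same fields would be read with kubectl get pod funny-turtle-myservice-xxx-yyy -o json.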
I'll take any suggestions or guidance to explain what happened here and how I can get rid of it.
EDIT 1
Tried to see if the "ghost" node was still there (kubectl delete node ip-xxx.yyy.compute.internal), but it does not exist.
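For completeness, that check amounts to looking for the pod's node in the node list. Sketched here against an invented sample of kubectl get nodes output (the node names in the sample are placeholders; only the absence of the pod's node matters):

```shell
# Invented sample of `kubectl get nodes` output; the stuck pod's node is
# deliberately absent, matching what the cluster reported.
cat > /tmp/nodes.txt <<'EOF'
NAME                          STATUS   AGE
ip-aaa.yyy.compute.internal   Ready    30d
ip-bbb.yyy.compute.internal   Ready    30d
EOF

# The node the stuck pod was scheduled on:
NODE="ip-xxx.yyy.compute.internal"

if grep -q "$NODE" /tmp/nodes.txt; then
  echo "node present"
else
  echo "node gone"   # this branch fires: the "ghost" node really is gone
fi
```

On a real cluster the equivalent would be kubectl get nodes | grep ip-xxx.yyy.compute.internal.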