Cluster stuck in DELETE_IN_PROGRESS due to keystone error 403 Forbidden #485

MaximMonin opened this issue Jan 29, 2025 · 0 comments

Deleting an existing cluster from the Horizon interface or via the Magnum API leaves the cluster stuck in DELETE_IN_PROGRESS.

https://github.com/vexxhost/magnum-cluster-api/blob/main/magnum_cluster_api/driver.py#L230

            # NOTE(mnaser): We delete the application credentials at this stage
            #               to make sure CAPI doesn't lose access to OpenStack.
            try:
                osc.keystone().client.application_credentials.find(
                    name=cluster.uuid,
                    user=cluster.user_id,
                ).delete()
            except keystoneauth1.exceptions.http.NotFound:
                pass

By adding:

            except keystoneauth1.exceptions.http.Forbidden:
                pass

the issue is fixed and the cluster is deleted successfully.
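For clarity, here is a minimal sketch of how the patched block could look. The `osc` and `cluster` objects are the ones already in scope in driver.py; only the extra except clause and the comments are new:

    import keystoneauth1.exceptions.http

    # Sketch of the proposed change in magnum_cluster_api/driver.py;
    # `osc` and `cluster` are the objects already used there.
    try:
        osc.keystone().client.application_credentials.find(
            name=cluster.uuid,
            user=cluster.user_id,
        ).delete()
    except keystoneauth1.exceptions.http.NotFound:
        # Credential is already gone, nothing left to clean up.
        pass
    except keystoneauth1.exceptions.http.Forbidden:
        # The default Keystone policy rejects the lookup when the caller is
        # not the owner of the application credential; skip it so the cluster
        # deletion can continue instead of hanging in DELETE_IN_PROGRESS.
        pass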

Here are the relevant parts of the logs on the Keystone side:

  1. magnum-cluster-api tries to find the application credentials and fails with 403.
  2. An API client using the same project token (the one used for the delete cluster command) successfully retrieves the application credentials.
127.0.0.1 - - [29/Jan/2025:12:04:46 +0000] "GET /v3/users/c81850bf7c5f4667b40d9e428a3635cf/application_credentials?name=9c20ee6d-cc8f-4517-98e1-c794a3784142 HTTP/1.1" 403 441 "-" "python-keystoneclient"
127.0.0.1 - - [29/Jan/2025:12:05:16 +0000] "GET /v3/users/c81850bf7c5f4667b40d9e428a3635cf/application_credentials?name=9c20ee6d-cc8f-4517-98e1-c794a3784142 HTTP/1.1" 200 1510 "-" "axios/1.7.9"
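For anyone trying to reproduce this outside of Magnum, the same 403 can be triggered with a plain project-scoped session. This is only a sketch: the auth URL, username, password, project, and domain values below are placeholders, not taken from the issue; only the user and credential IDs come from the log lines above.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as keystone_client

    # Hypothetical project-scoped credentials; replace with real values.
    auth = v3.Password(
        auth_url="https://keystone.example.com/v3",
        username="some-user",
        password="secret",
        project_name="some-project",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    keystone = keystone_client.Client(session=session.Session(auth=auth))

    # Same call the driver makes; with the default
    # identity:list_application_credentials rule this raises
    # keystoneauth1.exceptions.http.Forbidden (HTTP 403) whenever the token's
    # user is not the owner of the credential and is not system-scoped.
    keystone.application_credentials.find(
        name="9c20ee6d-cc8f-4517-98e1-c794a3784142",
        user="c81850bf7c5f4667b40d9e428a3635cf",
    )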

We are using the default Keystone policy:

# List application credentials for a user.
# GET  /v3/users/{user_id}/application_credentials
# HEAD  /v3/users/{user_id}/application_credentials
# Intended scope(s): system, project
#"identity:list_application_credentials": "(role:reader and system_scope:all) or rule:owner"

I think the OpenStack context is wrong in the update_cluster_status procedure: the token used for the lookup is apparently neither system-scoped nor owned by cluster.user_id, so the default policy denies the request with 403.
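One rough diagnostic sketch for confirming that; it assumes the keystoneclient instance exposes its keystoneauth session as .session, which is an assumption about this driver, not something verified here:

    # Hypothetical check inside the driver: compare the identity behind the
    # Keystone session with the owner of the cluster's application credential.
    keystone = osc.keystone().client
    caller_user_id = keystone.session.get_user_id()  # assumption: .session is the keystoneauth1 Session

    if caller_user_id != cluster.user_id:
        # The default rule:owner policy will reject
        # GET /v3/users/{cluster.user_id}/application_credentials with 403.
        print("token user %s != cluster owner %s" % (caller_user_id, cluster.user_id))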
