
add stop job for aws #96

Merged — 1 commit, merged Jan 26, 2025

Conversation

@dudizimber (Collaborator) commented Jan 26, 2025

User description

fix #95


PR Type

Enhancement, Tests


Description

  • Added AWS EKS and STS client integration for Kubernetes credential retrieval in K8sRepository.

  • Introduced multi-cloud support by adding cloudProvider parameter across various methods and schemas.

  • Updated OmnistrateRepository to include cloudProvider field and consistent clusterId formatting.

  • Enhanced handleFreeInstance logic to handle cloud provider-specific operations.

  • Added CloudProviders enum and schema field for defining supported cloud providers.

  • Updated tests to validate multi-cloud support in K8sRepository.

  • Added AWS-related dependencies (@aws-sdk/client-eks, @aws-sdk/client-sts, aws-eks-token) and updated pnpm-lock.yaml.

  • Updated Cloud Build configurations to include AWS_ROLE_ARN for AWS integration.
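The `CloudProviders` enum and `cloudProvider` plumbing described above can be sketched as follows. This is a minimal illustration; the PR's actual enum lives in `OmnistrateInstance.ts` and may differ in detail, and the type guard is an assumed helper, not code from the PR:

```typescript
// Minimal sketch (assumed shapes): the CloudProviders enum the PR introduces,
// plus a type guard useful wherever raw provider strings enter the system.
enum CloudProviders {
  GCP = 'gcp',
  AWS = 'aws',
}

// Narrow an untrusted string to the enum before routing cloud-specific logic.
function isCloudProvider(value: string): value is CloudProviders {
  return (Object.values(CloudProviders) as string[]).includes(value);
}
```

Methods like `getFalkorDBLastQueryTime` can then accept `CloudProviders` rather than a bare string, so unsupported values are rejected at the boundary.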


Changes walkthrough 📝

Enhancement (4 files)

K8sRepository.ts — Added AWS EKS support and multi-cloud handling in K8sRepository.
backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts (+76/-10)
  • Added AWS EKS and STS client integration for retrieving Kubernetes credentials.
  • Introduced _getEKSCredentials method to handle AWS-specific credential fetching.
  • Modified _getK8sConfig to support both GCP and AWS cloud providers.
  • Updated methods to accept cloudProvider as a parameter for multi-cloud support.

OmnistrateRepository.ts — Added cloud provider field to OmnistrateRepository.
backend/jobs/free-tier-shutdown/src/repositories/omnistrate/OmnistrateRepository.ts (+2/-1)
  • Added cloudProvider field to the resource instance mapping.
  • Modified clusterId formatting logic for consistency.

index.ts — Integrated cloud provider parameter in free instance handling.
backend/jobs/free-tier-shutdown/src/index.ts (+8/-2)
  • Updated handleFreeInstance to include cloudProvider when fetching last used time.
  • Adjusted method calls to pass cloudProvider for multi-cloud support.

OmnistrateInstance.ts — Added cloud provider enum and schema field.
backend/jobs/free-tier-shutdown/src/schemas/OmnistrateInstance.ts (+7/-0)
  • Introduced CloudProviders enum for defining supported cloud providers.
  • Added cloudProvider field to the OmnistrateInstanceSchema.

Tests (1 file)

K8sRepository.test.ts — Updated K8sRepository test for multi-cloud support.
backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.test.ts (+1/-1)
  • Updated test case to include cloudProvider parameter for getFalkorDBInfo.

Configuration changes (2 files)

cloudbuild.dev.yaml — Updated dev Cloud Build configuration for AWS integration.
backend/jobs/free-tier-shutdown/cloudbuild.dev.yaml (+2/-1)
  • Added AWS_ROLE_ARN environment variable to the dev Cloud Build configuration.

cloudbuild.prod.yaml — Updated prod Cloud Build configuration for AWS integration.
backend/jobs/free-tier-shutdown/cloudbuild.prod.yaml (+2/-1)
  • Added AWS_ROLE_ARN environment variable to the prod Cloud Build configuration.

Dependencies (2 files)

package.json — Added AWS SDK and EKS token dependencies.
backend/jobs/free-tier-shutdown/package.json (+3/-0)
  • Added @aws-sdk/client-eks and @aws-sdk/client-sts dependencies for AWS integration.
  • Added aws-eks-token dependency for generating EKS tokens.

pnpm-lock.yaml — Added AWS SDK dependencies and utility libraries.
backend/pnpm-lock.yaml (+1237/-63)
  • Added new dependencies for AWS SDK, including @aws-sdk/client-eks and @aws-sdk/client-sts.
  • Included additional libraries such as aws-eks-token and crypto-js.
  • Updated dependency versions and added new entries for various utility libraries.
  • Introduced new typed array and intrinsic-related utilities.

    Summary by CodeRabbit

    • New Features
      • Added support for AWS EKS alongside existing GCP GKE infrastructure
      • Introduced cloud provider context for instance management
    • Dependencies
      • Added AWS SDK clients for EKS and STS
      • Added AWS EKS token management library
    • Configuration Updates
      • Updated deployment configurations to include AWS role ARN
      • Modified Kubernetes repository to handle multi-cloud credentials
    • Schema Changes
      • Added cloud provider enumeration for type safety
      • Enhanced instance schema to include cloud provider information

    @dudizimber dudizimber linked an issue Jan 26, 2025 that may be closed by this pull request

    vercel bot commented Jan 26, 2025

    falkordb-dbaas: ✅ Ready — preview updated Jan 26, 2025 10:12am (UTC)


    coderabbitai bot commented Jan 26, 2025

    Walkthrough

    This pull request introduces multi-cloud support for the free-tier shutdown job, specifically extending functionality to include AWS EKS alongside existing GCP GKE support. The changes span multiple files in the backend jobs directory, adding AWS SDK dependencies, modifying repository methods to handle cloud provider-specific credentials, and updating schemas and deployment configurations to accommodate the new cloud provider context.

    Changes

    File Change Summary

    • backend/jobs/free-tier-shutdown/cloudbuild.dev.yaml — Added _AWS_ROLE_ARN substitution variable and environment variable for the AWS role
    • backend/jobs/free-tier-shutdown/cloudbuild.prod.yaml — Added _AWS_ROLE_ARN substitution variable and environment variable for the AWS role
    • backend/jobs/free-tier-shutdown/package.json — Added AWS SDK dependencies: @aws-sdk/client-eks, @aws-sdk/client-sts, aws-eks-token
    • backend/jobs/free-tier-shutdown/src/index.ts — Updated handleFreeInstance to include the cloudProvider parameter
    • backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts — Major refactoring to support AWS EKS credential retrieval; added the _getEKSCredentials method
    • backend/jobs/free-tier-shutdown/src/repositories/omnistrate/OmnistrateRepository.ts — Modified getInstancesFromTier to extract cloudProvider from instance data
    • backend/jobs/free-tier-shutdown/src/schemas/OmnistrateInstance.ts — Added CloudProviders enum and cloudProvider property to the schema

    Sequence Diagram

    sequenceDiagram
        participant OmnistrateRepo as OmnistrateRepository
        participant K8sRepo as K8sRepository
        participant EKSClient as AWS EKS Client
        participant GKEClient as GCP GKE Client
    
        OmnistrateRepo->>OmnistrateRepo: Extract cloud provider
        OmnistrateRepo->>K8sRepo: Call with cloud provider
        alt Cloud Provider is AWS
            K8sRepo->>EKSClient: Get AWS Credentials
        else Cloud Provider is GCP
            K8sRepo->>GKEClient: Get GCP Credentials
        end
        K8sRepo-->>OmnistrateRepo: Return Cluster Information
    


    Suggested labels

    enhancement

    Poem

    🐰 Hopping through clouds, GCP and AWS so bright,
    Our free-tier job now has multi-cloud might!
    With SDKs dancing and roles set just right,
    Credentials flowing, a technological delight!
    Kubernetes clusters, no longer confined,
    A rabbit's code journey, brilliantly designed! 🚀



    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    🎫 Ticket compliance analysis ✅

    95 - PR Code Verified

    Compliant requirements:

    • Add AWS support for the free tier shutdown job
    • Integrate AWS EKS and STS client for Kubernetes credential retrieval
    • Add multi-cloud support with cloudProvider parameter
    • Update OmnistrateRepository to handle cloud provider field
    • Update Cloud Build configurations for AWS integration

    Requires further human verification:

    • Verify AWS credentials and permissions work correctly in production environment
    • Test the AWS EKS cluster access and token generation in real environment
    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 PR contains tests
    🔒 Security concerns

    Sensitive information exposure:
    The code retrieves and handles sensitive AWS credentials (AccessKeyId, SecretAccessKey, SessionToken) but doesn't ensure they are properly secured in memory or logs. Consider implementing secure credential handling practices and ensuring logs don't expose these values.
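As an illustrative mitigation (not part of the PR), sensitive fields can be masked before any object is handed to a logger; the field names below match the STS response shape quoted in this review:

```typescript
// Illustrative helper (not in the PR): masks the AWS credential fields named
// above so structured log lines never contain them verbatim.
const SENSITIVE_KEYS = new Set(['AccessKeyId', 'SecretAccessKey', 'SessionToken']);

function redactCredentials(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : value;
  }
  return out;
}
```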

    ⚡ Recommended focus areas for review

    Error Handling

    The AWS credential retrieval and token generation lacks proper error handling and validation. Missing try-catch blocks and error checks could lead to runtime failures.

    private async _getEKSCredentials(clusterId: string, region: string) {
      // get ID token from default GCP SA
      const targetAudience = process.env.AWS_TARGET_AUDIENCE;
    
      const res = await axios.get(
        'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=' +
          targetAudience,
        {
          headers: {
            'Metadata-Flavor': 'Google',
          },
        },
      );
    
      const idToken = res.data;
    
      const sts = new STSClient({ region });
    
      const { Credentials } = await sts.send(
        new AssumeRoleWithWebIdentityCommand({
          RoleArn: process.env.AWS_ROLE_ARN,
          RoleSessionName: 'free-tier-shutdown',
          WebIdentityToken: idToken,
        }),
      );
    
      const eks = new EKSClient({
        credentials: {
          accessKeyId: Credentials?.AccessKeyId,
          secretAccessKey: Credentials?.SecretAccessKey,
          sessionToken: Credentials?.SessionToken,
        },
        region,
      });
    
      const { cluster } = await eks.send(new DescribeClusterCommand({ name: clusterId }));
    
      // eslint-disable-next-line @typescript-eslint/no-var-requires
      const EKSToken = require('aws-eks-token');
      EKSToken.config = {
        accessKeyId: Credentials?.AccessKeyId,
        secretAccessKey: Credentials?.SecretAccessKey,
        sessionToken: Credentials?.SessionToken,
        region,
      };
    
      const token = await EKSToken.renew(clusterId);
    
      return {
        endpoint: cluster.endpoint,
        certificateAuthority: cluster.certificateAuthority.data,
        accessToken: token,
      };
    }
    Resource Cleanup

    The AWS STS and EKS clients are created but never properly closed or cleaned up, which could lead to resource leaks.

    const sts = new STSClient({ region });
    
    const { Credentials } = await sts.send(
      new AssumeRoleWithWebIdentityCommand({
        RoleArn: process.env.AWS_ROLE_ARN,
        RoleSessionName: 'free-tier-shutdown',
        WebIdentityToken: idToken,
      }),
    );
    
    const eks = new EKSClient({
      credentials: {
        accessKeyId: Credentials?.AccessKeyId,
        secretAccessKey: Credentials?.SecretAccessKey,
        sessionToken: Credentials?.SessionToken,
      },
      region,
    });
    
    const { cluster } = await eks.send(new DescribeClusterCommand({ name: clusterId }));
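The cleanup the reviewer asks for can be expressed as a generic try/finally wrapper. AWS SDK v3 clients (including `STSClient` and `EKSClient`) expose a `destroy()` method that releases open handles; the wrapper below is an illustrative sketch, not code from the PR:

```typescript
// Generic cleanup sketch: run a unit of work against a client and guarantee
// destroy() is invoked whether the work resolves or throws.
interface Destroyable {
  destroy(): void;
}

async function withClient<C extends Destroyable, R>(
  client: C,
  use: (c: C) => Promise<R>,
): Promise<R> {
  try {
    return await use(client);
  } finally {
    client.destroy(); // release sockets/handles on every code path
  }
}
```

In `_getEKSCredentials`, this pattern would wrap the `sts.send(...)` and `eks.send(...)` calls so neither client outlives its single use.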


    qodo-merge-pro bot commented Jan 26, 2025

    CI Feedback 🧐

    (Feedback updated until commit 8ea9627)

    A test triggered by this PR failed. Here is an AI-generated analysis of the failure:

    Action: apply

    Failed stage: Retrieve artifacts from last plan [❌]

    Failed test name: gcp-full-infra-test-plan.yaml

    Failure summary:

    The action failed due to two main issues:

  • No matching workflow artifacts were found for PR #96 (artifacts-96)
  • Terraform backend initialization failed with the error "Backend initialization required", indicating
    that tofu init needs to be run with either -reconfigure or -migrate-state flags before proceeding

  • Relevant error logs:
    1:  ##[group]Operating System
    2:  Ubuntu
    ...
    
    182:  workflow_search: false
    183:  workflow_conclusion: success
    184:  repo: FalkorDB/falkordb-dbaas
    185:  name_is_regexp: false
    186:  allow_forks: true
    187:  check_artifacts: false
    188:  search_artifacts: false
    189:  skip_unpack: false
    190:  if_no_artifact_found: fail
    ...
    
    208:  ==> Repository: FalkorDB/falkordb-dbaas
    209:  ==> Artifact name: artifacts-96
    210:  ==> Local path: artifacts
    211:  ==> Workflow name: gcp-full-infra-test-plan.yaml
    212:  ==> Workflow conclusion: success
    213:  ==> PR: 96
    214:  ==> Commit: 8ea9627f3ac37d86f8c83b72e90b14f95e4123f6
    215:  ==> Allow forks: true
    216:  ##[error]no matching workflow run found with any artifacts?
    ...
    
    248:  TF_VAR_falkordb_cpu: 500m
    249:  TF_VAR_falkordb_memory: 1Gi
    250:  TF_VAR_persistence_size: 11Gi
    251:  TF_VAR_falkordb_replicas: 2
    252:  TF_VAR_backup_schedule: 0 * * * *
    253:  TF_VAR_dns_domain: t-1.cloud.falkordb.io
    254:  ##[endgroup]
    255:  ╷
    256:  │ Error: Backend initialization required, please run "tofu init"
    ...
    
    267:  │ run
    268:  │ "tofu init" with either the "-reconfigure" or "-migrate-state" flags to
    269:  │ use the current configuration.
    270:  │
    271:  │ If the change reason above is incorrect, please verify your configuration
    272:  │ hasn't changed and try again. At this point, no changes to your existing
    273:  │ configuration or state have been made.
    274:  ╵
    275:  ##[error]Process completed with exit code 1.
    ...
    
    291:  CLOUDSDK_CORE_PROJECT: pipelines-development-f7a2434f
    292:  CLOUDSDK_PROJECT: pipelines-development-f7a2434f
    293:  GCLOUD_PROJECT: pipelines-development-f7a2434f
    294:  GCP_PROJECT: pipelines-development-f7a2434f
    295:  GOOGLE_CLOUD_PROJECT: pipelines-development-f7a2434f
    296:  CLOUDSDK_METRICS_ENVIRONMENT: github-actions-setup-gcloud
    297:  CLOUDSDK_METRICS_ENVIRONMENT_VERSION: 2.1.0
    298:  ##[endgroup]
    299:  ##[error]Process completed with exit code 1.
    


    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Possible issue
    Add credential validation checks

    Add null checks for AWS credentials to prevent potential runtime errors. The current
    code assumes Credentials will always be defined, which could lead to crashes.

    backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts [58-65]

    +if (!Credentials?.AccessKeyId || !Credentials?.SecretAccessKey) {
    +  throw new Error('Failed to obtain AWS credentials');
    +}
     const eks = new EKSClient({
       credentials: {
    -    accessKeyId: Credentials?.AccessKeyId,
    -    secretAccessKey: Credentials?.SecretAccessKey,
    -    sessionToken: Credentials?.SessionToken,
    +    accessKeyId: Credentials.AccessKeyId,
    +    secretAccessKey: Credentials.SecretAccessKey,
    +    sessionToken: Credentials.SessionToken,
       },
       region,
     });
    Suggestion importance[1-10]: 8

    Why: The suggestion addresses a critical issue by adding validation for AWS credentials, which could prevent runtime crashes in production. The current code uses optional chaining but doesn't explicitly handle undefined credentials.

    Add metadata request error handling

    Add error handling for the metadata server request. The current implementation
    doesn't handle potential network failures or invalid responses when fetching the ID
    token.

    backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts [36-44]

    -const res = await axios.get(
    -  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=' +
    -    targetAudience,
    -  {
    -    headers: {
    -      'Metadata-Flavor': 'Google',
    -    },
    -  },
    -);
    +let res;
    +try {
    +  res = await axios.get(
    +    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=' +
    +      targetAudience,
    +    {
    +      headers: {
    +        'Metadata-Flavor': 'Google',
    +      },
    +      timeout: 5000,
    +    },
    +  );
    +} catch (error) {
    +  throw new Error(`Failed to fetch ID token: ${error.message}`);
    +}
    +if (!res.data) {
    +  throw new Error('Empty response from metadata server');
    +}
    Suggestion importance[1-10]: 7

    Why: The suggestion improves error handling for a critical authentication step by adding timeout, response validation, and proper error propagation. This would make the system more robust and provide better error messages.

    Security
    Store sensitive IAM role as secret

    Consider using a more secure method to pass the AWS role ARN by storing it as a
    secret rather than an environment variable, since it contains sensitive IAM
    information.

    backend/jobs/free-tier-shutdown/cloudbuild.dev.yaml [36]

    -AWS_ROLE_ARN=${_AWS_ROLE_ARN}
    +AWS_ROLE_ARN=projects/${_PROJECT_NUMBER}/secrets/AWS_ROLE_ARN/versions/latest
    Suggestion importance[1-10]: 8

    Why: Moving sensitive AWS IAM role ARN from environment variable to secret storage significantly improves security by preventing exposure in logs and environment inspections.



    @coderabbitai coderabbitai bot left a comment


    Actionable comments posted: 3

    🧹 Nitpick comments (5)
    backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts (3)

    32-85: Add robust error handling and consider EKS token import location.

    1. Validate edge cases for axios calls, STS, and EKS commands, to gracefully handle failures such as invalid tokens or insufficient AWS permissions.
    2. Since aws-eks-token is a direct dependency, a top-level import statement could be more conventional than require, unless lazy loading is intentional.

    87-95: Allow easy extension for future cloud providers.

    Using a simple ternary for 'gcp' vs. 'aws' is sufficient, but if you anticipate supporting more providers, consider a more extensible pattern or switch case for improved maintainability.
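One way to realize the suggested extensible pattern is a lookup table keyed by provider, so adding a provider is a one-line change rather than another branch. Names below are illustrative stand-ins for the real `_getGKECredentials` / `_getEKSCredentials` methods:

```typescript
// Illustrative dispatch table (assumed names): maps a provider string to its
// credential-fetching routine; unknown providers fail fast.
type CredentialFetcher = (clusterId: string, region: string) => Promise<string>;

const credentialFetchers: Record<string, CredentialFetcher> = {
  // Stand-ins for the real GKE/EKS credential methods.
  gcp: async (clusterId) => `gke-token-for-${clusterId}`,
  aws: async (clusterId) => `eks-token-for-${clusterId}`,
};

async function getCredentials(provider: string, clusterId: string, region: string): Promise<string> {
  const fetch = credentialFetchers[provider];
  if (!fetch) throw new Error(`Unsupported cloud provider: ${provider}`);
  return fetch(clusterId, region);
}
```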


    279-287: Use cloudProvider enum for type safety.

    Leveraging cloudProvider: 'gcp' | 'aws' is cohesive, though referencing the CloudProviders enum type directly might add clarity while aligning with the schema. This method’s logic is otherwise well-structured and consistent with getFalkorDBLastQueryTime.

    backend/jobs/free-tier-shutdown/src/index.ts (1)

    26-32: Consider adding specific error handling for cloud provider-specific failures.

    The function call looks good, but consider adding specific error handling for cloud provider-specific failures to improve debugging and error reporting.

    -    const lastUsedTime = await k8sRepo.getFalkorDBLastQueryTime(
    -      cloudProvider,
    -      clusterId,
    -      region,
    -      instanceId,
    -      instance.tls,
    -    );
    +    let lastUsedTime;
    +    try {
    +      lastUsedTime = await k8sRepo.getFalkorDBLastQueryTime(
    +        cloudProvider,
    +        clusterId,
    +        region,
    +        instanceId,
    +        instance.tls,
    +      );
    +    } catch (error) {
    +      logger.error(
    +        { error, cloudProvider, clusterId, region, instanceId },
    +        `Failed to get last query time for ${cloudProvider} instance`,
    +      );
    +      throw error;
    +    }
    backend/jobs/free-tier-shutdown/package.json (1)

    22-23: Consider pinning AWS SDK versions.

    For better stability and predictability, consider pinning the AWS SDK versions instead of using caret ranges. This helps prevent unexpected breaking changes during deployments.

    Apply this diff to pin the versions:

    -    "@aws-sdk/client-eks": "^3.735.0",
    -    "@aws-sdk/client-sts": "^3.734.0",
    +    "@aws-sdk/client-eks": "3.735.0",
    +    "@aws-sdk/client-sts": "3.734.0",
    -    "aws-eks-token": "^1.0.5",
    +    "aws-eks-token": "1.0.5",

    Also applies to: 32-32

    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between 69245ff and 8ea9627.

    ⛔ Files ignored due to path filters (1)
    • backend/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
    📒 Files selected for processing (8)
    • backend/jobs/free-tier-shutdown/cloudbuild.dev.yaml (2 hunks)
    • backend/jobs/free-tier-shutdown/cloudbuild.prod.yaml (2 hunks)
    • backend/jobs/free-tier-shutdown/package.json (2 hunks)
    • backend/jobs/free-tier-shutdown/src/index.ts (1 hunks)
    • backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.test.ts (1 hunks)
    • backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts (4 hunks)
    • backend/jobs/free-tier-shutdown/src/repositories/omnistrate/OmnistrateRepository.ts (2 hunks)
    • backend/jobs/free-tier-shutdown/src/schemas/OmnistrateInstance.ts (2 hunks)
    🔇 Additional comments (14)
    backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts (6)

    7-9: Nice addition of AWS and axios imports.

    Introducing STS/EKS clients and axios is appropriate for retrieving AWS credentials and cluster details. Make sure you have consistent error-handling strategies across all cloud clients (GCP and AWS) for better resilience.


    22-22: Verify cluster naming assumptions.

    Appending "c-" and removing dashes from the cluster ID is a specialized naming pattern. Ensure downstream references or naming constraints won't break if cluster IDs deviate from this pattern.
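For reference, the naming rule flagged above amounts to the following (an assumed reconstruction from the review comment; verify against the actual `OmnistrateRepository` code):

```typescript
// Assumed reconstruction of the cluster naming rule: prefix "c-" and strip
// all dashes from the raw Omnistrate cluster ID.
function formatClusterId(rawId: string): string {
  return 'c-' + rawId.replace(/-/g, '');
}
```

Any downstream consumer that parses cluster names would need to tolerate this transformation.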


    26-26: Good practice forcing HTTPS.

    Appending https:// to the cluster endpoint ensures secure communication with GKE. This is a sound approach for GCP clusters.


    103-103: Confirm endpoint correctness for AWS clusters.

    For AWS-based clusters, verify that k8sCredentials.endpoint is valid. It might need "https://" prefix for the server field if the EKS endpoint lacks it.
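A defensive normalization along the lines the comment suggests (illustrative only — EKS typically returns endpoints that already carry the scheme):

```typescript
// Only prepend a scheme when the endpoint lacks one, so endpoints that
// already include https:// are left untouched.
function ensureHttps(endpoint: string): string {
  return /^https?:\/\//.test(endpoint) ? endpoint : `https://${endpoint}`;
}
```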


    109-109: Check if setting authProvider to undefined is intended.

    When cloudProvider is 'aws', authProvider becomes undefined. This is valid, but confirm that no custom auth provider is needed for AWS contexts or consider a descriptive name (e.g., "aws-iam").


    242-258: Clean integration of the new cloud provider parameter.

    The updated signature introducing cloudProvider looks consistent. The conditional logic to retrieve AWS or GCP credentials is straightforward, and fallback to getFalkorDBInfo upon no graphs found remains lexically clear.

    backend/jobs/free-tier-shutdown/src/schemas/OmnistrateInstance.ts (2)

    3-6: Enum provides clarity for cloud provider selection.

    Defining CloudProviders with 'gcp' and 'aws' is straightforward and ensures maintainability and type safety. Consider adding doc comments if you plan to expand the enum in the future.


    18-19: Great addition of a strict enum field to the schema.

    Using Type.Enum(CloudProviders) ensures that the cloudProvider value is validated at runtime, reducing the risk of invalid inputs.

    backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.test.ts (1)

    10-10: Test update aligns with new method signature.

    Including 'gcp' as the first argument matches the repository’s updated parameter order and ensures the test accurately reflects multi-cloud support.

    backend/jobs/free-tier-shutdown/cloudbuild.prod.yaml (2)

    36-36: LGTM! Environment variable addition looks good.

    The AWS role ARN is properly configured as an environment variable, consistent with the existing pattern.


    61-61: Verify AWS IAM role permissions.

    Please ensure that the AWS IAM role FreeTierJobRole follows the principle of least privilege and has only the necessary permissions to perform the free-tier shutdown operations.

    Run the following script to check the role's permissions:

    ✅ Verification successful

    AWS IAM role implementation follows security best practices

    The AWS role FreeTierJobRole is:

    • Properly scoped for EKS cluster access only
    • Uses secure OIDC federation with GCP service accounts
    • Maintains separate roles for development and production environments
    🏁 Scripts executed

    The following scripts were executed for the analysis:

    Script:

    #!/bin/bash
    # Description: Check AWS IAM role permissions using the AWS CLI
    
    # Get role details and policies
    aws iam get-role --role-name FreeTierJobRole
    aws iam list-role-policies --role-name FreeTierJobRole
    aws iam list-attached-role-policies --role-name FreeTierJobRole
    
    # Get policy details for attached policies
    for policy in $(aws iam list-attached-role-policies --role-name FreeTierJobRole --query 'AttachedPolicies[*].PolicyArn' --output text); do
      echo "Policy: $policy"
      aws iam get-policy-version --policy-arn $policy --version-id $(aws iam get-policy --policy-arn $policy --query 'Policy.DefaultVersionId' --output text)
    done
    

    Length of output: 667


    Script:

    #!/bin/bash
    # Search for AWS SDK usage and IAM-related files
    echo "=== Searching for AWS SDK imports ==="
    rg -l "require\(['\"](aws-sdk|@aws-sdk)" 
    
    echo -e "\n=== Searching for AWS API calls ==="
    rg "aws-sdk.*\.(EC2|IAM|Organizations)" -A 5
    
    echo -e "\n=== Looking for IAM role definitions ==="
    fd -e tf -e yaml -e json . | grep -i "iam\|role"
    

    Length of output: 411


    Script:

    #!/bin/bash
    echo "=== Checking free-tier-shutdown job files ==="
    fd . backend/jobs/free-tier-shutdown -t f -x cat {}
    
    echo -e "\n=== Searching for AWS configurations ==="
    rg -l "AWS" backend/jobs/free-tier-shutdown
    
    echo -e "\n=== Looking for environment variables ==="
    rg "AWS_" backend/jobs/free-tier-shutdown
    

    Length of output: 34049

    backend/jobs/free-tier-shutdown/cloudbuild.dev.yaml (2)

    36-36: LGTM! Environment variable addition looks good.

    The AWS role ARN is properly configured as an environment variable, consistent with the production configuration.


    36-36: Verify configuration consistency across environments.

    The development configuration correctly mirrors the production setup with environment-specific values:

    • Development AWS account: 637423310747
    • Production AWS account: 148761665810

    Please ensure that:

    1. The role permissions are identical across both environments
    2. The development role has been tested with the same operations that will be performed in production

    Run the following script to compare role configurations:

    Also applies to: 61-61

    backend/jobs/free-tier-shutdown/package.json (1)

    22-23: Verify AWS SDK version numbers.

    The specified versions (3.735.0 and 3.734.0) appear to be incorrect, as they are higher than the latest published AWS SDK versions. Please verify them and update to the correct versions.

    @@ -20,10 +20,16 @@ async function handleFreeInstance(
    logger.info(`Handling free instance ${instance.id}`);
    try {
    // 2. For each instance, get the last used time from k8s
    -    const { clusterId, region, id: instanceId } = instance;
    +    const { cloudProvider, clusterId, region, id: instanceId } = instance;

    🛠️ Refactor suggestion

    Add validation for cloudProvider value.

    Consider validating the cloudProvider value against allowed values to prevent runtime errors.

    -    const { cloudProvider, clusterId, region, id: instanceId } = instance;
    +    const { cloudProvider, clusterId, region, id: instanceId } = instance;
    +    if (!['AWS', 'GCP'].includes(cloudProvider)) {
    +      throw new Error(`Unsupported cloud provider: ${cloudProvider}`);
    +    }
    📝 Committable suggestion

    ‼️ IMPORTANT
    Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    Suggested change
    const { cloudProvider, clusterId, region, id: instanceId } = instance;
    const { cloudProvider, clusterId, region, id: instanceId } = instance;
    if (!['AWS', 'GCP'].includes(cloudProvider)) {
    throw new Error(`Unsupported cloud provider: ${cloudProvider}`);
    }
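Since this PR also introduces a `CloudProviders` enum, the allowed values could be derived from it rather than hardcoded. A sketch under the assumption that the enum uses lowercase string values; the actual definition in the PR may differ:

```typescript
// Hypothetical mirror of the enum added in this PR; the real values may differ.
enum CloudProviders {
  AWS = 'aws',
  GCP = 'gcp',
}

function assertCloudProvider(value: string): CloudProviders {
  // Object.values of a string enum returns its string members.
  if (!(Object.values(CloudProviders) as string[]).includes(value)) {
    throw new Error(`Unsupported cloud provider: ${value}`);
  }
  return value as CloudProviders;
}
```

This keeps the validation in one place, so adding a provider to the enum automatically widens the check.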

    @@ -96,6 +96,7 @@ export class OmnistrateRepository {
    resourceId: Object.entries(d?.['consumptionResourceInstanceResult']?.['detailedNetworkTopology']).filter(
    (ob) => (ob[1] as unknown)?.['main'],
    )[0][0],
    +    cloudProvider: d?.['consumptionResourceInstanceResult']?.['cloud_provider'],

    🛠️ Refactor suggestion

    Add type validation for cloudProvider field.

    Consider adding type validation for the cloudProvider field to ensure type safety and prevent runtime errors.

    +import { CloudProviders } from '../../schemas/OmnistrateInstance';
    +
     export class OmnistrateRepository {
       // ... existing code ...
    
       async getInstancesFromTier(
         serviceId: string,
         environmentId: string,
         tierId: string,
       ): Promise<OmnistrateInstanceSchemaType[]> {
         // ... existing code ...
    
         return response.data['resourceInstances']
           .map((d: unknown) => {
    +        const cloudProvider = d?.['consumptionResourceInstanceResult']?.['cloud_provider'];
    +        if (!Object.values(CloudProviders).includes(cloudProvider)) {
    +          this._options.logger.warn(
    +            { cloudProvider },
    +            'Unknown cloud provider, instance will be filtered out',
    +          );
    +          return null;
    +        }
             return ({
               // ... other fields ...
    -          cloudProvider: d?.['consumptionResourceInstanceResult']?.['cloud_provider'],
    +          cloudProvider,
             });
           })
    +      .filter(Boolean)
           .filter(
             (instance) => instance.productTierId === tierId && instance.status === 'RUNNING',
           ) as OmnistrateInstanceSchemaType[];
       }

    Committable suggestion skipped: line range outside the PR's diff.

    @@ -84,7 +84,7 @@
    return response.data['resourceInstances']
    .map((d: unknown) => ({
    id: d?.['consumptionResourceInstanceResult']?.['id'],
    -    clusterId: 'c-' + d?.['deploymentCellID']?.replace('-', ''),
    +    clusterId: d?.['deploymentCellID'],

    💡 Codebase verification

    The clusterId format change breaks GCP cluster name construction

    The change from a concatenated string to the raw deploymentCellID breaks the GCP cluster name construction in K8sRepository._getGKECredentials. The method expects to transform the clusterId by adding the 'c-' prefix, but it now receives the raw deploymentCellID, which results in an incorrect cluster name format.

    • backend/jobs/free-tier-shutdown/src/repositories/k8s/K8sRepository.ts: Update _getGKECredentials to handle raw deploymentCellID format.
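A minimal sketch of the fix suggested for `_getGKECredentials`: reapply the transform that was removed from `OmnistrateRepository` before building the GKE cluster name. Note that `String.prototype.replace` with a string pattern removes only the first hyphen, matching the original code:

```typescript
// Rebuild the GKE cluster name from the raw deploymentCellID, mirroring
// the removed `'c-' + deploymentCellID.replace('-', '')` transform.
function toGkeClusterName(deploymentCellId: string): string {
  // As in the original code, only the first '-' is removed.
  return 'c-' + deploymentCellId.replace('-', '');
}
```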
    🔗 Analysis chain

    Verify the impact of clusterId format change.

    The clusterId format has changed from a concatenated string to using the raw deploymentCellID. Please verify that this change doesn't break existing functionality.

    🏁 Scripts executed

    The following scripts were executed for the analysis:

    Script:

    #!/bin/bash
    # Description: Search for usages of clusterId to verify compatibility with the new format
    
    # Search for clusterId usage patterns
    rg -A 5 'clusterId'
    
    # Search for specific GCP cluster ID patterns to ensure we haven't missed any legacy format dependencies
    rg -A 5 'c-[a-zA-Z0-9-]+'
    

    Length of output: 66332

    @dudizimber dudizimber merged commit 75c97c2 into main Jan 26, 2025
    7 of 9 checks passed
    @dudizimber dudizimber deleted the 95-add-aws-support-for-the-free-tier-shutdown-job branch January 26, 2025 11:13

    Successfully merging this pull request may close these issues.

    add aws support for the free tier shutdown job