# capi-ci

Hello, this is the CAPI team's CI repo. It houses Concourse configuration settings for our CI environments. Check it out!

https://concourse.app-runtime-interfaces.ci.cloudfoundry.org/teams/capi-team/pipelines/capi

:pushpin: This repository has gone through a complete overhaul. The old artifacts can still be found on the [legacy branch](https://github.com/cloudfoundry/capi-ci/tree/legacy).

## Environments

See [pipeline.yml](https://github.com/cloudfoundry/capi-ci/blob/main/ci/pipeline.yml) for more details. All environments are short-lived. The webserver is now "Puma" for all environments.
```
Elsa: biggest and most "real" environment
  · HA / Multi-AZ
  · Windows cell
  · Encrypted database
  · Clustered database
  · Runtime CredHub (assisted mode)
  · Database: MySQL
  · Platform: GCP
  · Blobstore: GCP blobstore
  · Runs rotate-cc-database-key errand for smoke testing

Kiki: used for testing that db migrations are backwards compatible
  · Database: PostgreSQL
  · Platform: GCP
  · Blobstore: WebDAV

Asha: used for running CATS and CAPI-BARA tests on MySQL
  · Database: MySQL
  · Platform: GCP
  · Blobstore: WebDAV

Olaf: used for running CATS and CAPI-BARA tests on AWS with MySQL
  · Database: MySQL
  · Platform: AWS
  · Blobstore: S3

Scar: used for running CATS and CAPI-BARA tests on PostgreSQL
  · Database: PostgreSQL
  · Platform: Azure
  · Blobstore: Azure Blob Storage

Gyro: used for testing experimental features on GCP / PostgreSQL
  · Database: PostgreSQL
  · Platform: GCP
  · Blobstore: WebDAV
```

### What's Up with Kiki

Kiki starts with an older version of cf-deployment. It then runs the new migrations, but keeps the old Cloud Controller code. This catches any backwards-incompatible migrations.

This is important because Cloud Controller instances do rolling upgrades. For example: if you write a migration that drops a table, old CC instances that depend on that table existing will crash during the rolling deploy.

### Renewing the Scar Client Secret

The "Scar" environment runs on Azure. It uses an application client secret for authentication. If the secret expires, you can renew it by following these steps:

1. Log on to the Azure portal: https://portal.azure.com/
2. Make sure the "Cloud Foundry Foundation" directory is selected in the top right corner.
3. Navigate to "Microsoft Entra ID" > "Manage" > "App registrations" > "All applications" > "sp-ari-bbl" > "Client secrets".
4. Click on "New client secret". Choose 12 months for the expiration and click "Add".
5. Copy the value of the secret. (The secret ID is not needed, just the value.)
6. Log on to the ARI WG Concourse CredHub using the [start-credhub-cli.sh](https://github.com/cloudfoundry/app-runtime-interfaces-infrastructure/blob/main/terragrunt/scripts/concourse/start-credhub-cli.sh) script.
7. Run the following command to set the new secret value in CredHub (it will prompt for the value):

   ```
   credhub set -n /concourse/capi-team/capi-bbl-scar-azure-client-secret -t password
   ```

8. Make sure the [bbl-up-scar-psql](https://concourse.app-runtime-interfaces.ci.cloudfoundry.org/teams/capi-team/pipelines/capi/jobs/bbl-up-scar-psql) Concourse job runs. It updates the secret in the BOSH Azure CPI.
9. Now the [scar-psql-deploy-cf](https://concourse.app-runtime-interfaces.ci.cloudfoundry.org/teams/capi-team/pipelines/capi/jobs/scar-psql-deploy-cf) job should succeed again. Delete the old expired secret in the Azure portal to keep things tidy.

## Pipelines

### capi

This pipeline is responsible for testing, building, and releasing capi-release. For guidance on releasing CAPI, see [this document](https://github.com/cloudfoundry/capi-release/blob/develop/docs/releasing-capi.md).

#### capi-release

This is where the majority of testing for capi-release components lives.

- Runs unit tests for Cloud Controller and bridge components
- Builds capi-release release candidates and deploys them to Elsa, Kiki, Asha, Olaf, and Scar
- Runs appropriate integration tests for each environment
- Bumps the `ci-passed` branch of capi-release
- Updates the release candidate in the v3 docs every time the `ci-passed` branch is updated
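The rolling-upgrade hazard that the Kiki environment guards against can be illustrated with a small, self-contained sketch. This uses plain Python and sqlite3 purely for illustration (the real Cloud Controller runs Ruby/Sequel migrations against MySQL or PostgreSQL); the table and function names are hypothetical:

```python
import sqlite3

# One shared database, as seen by both old and new Cloud Controller
# instances during a rolling deploy.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE buildpacks (name TEXT)")
db.execute("INSERT INTO buildpacks VALUES ('ruby_buildpack')")

def old_cc_list_buildpacks():
    # Code path still running on not-yet-upgraded CC instances.
    return [row[0] for row in db.execute("SELECT name FROM buildpacks")]

# Before the migration runs, the old code works fine.
assert old_cc_list_buildpacks() == ["ruby_buildpack"]

# A backwards-incompatible migration: the new code no longer needs the
# table, so the migration drops it immediately.
db.execute("DROP TABLE buildpacks")

# Old instances are still serving traffic during the rolling deploy,
# and now their queries fail.
try:
    old_cc_list_buildpacks()
except sqlite3.OperationalError as e:
    print("old CC instance failed:", e)  # "no such table: buildpacks"
```

The safe pattern is to stop referencing the table in application code first, and ship the actual `DROP TABLE` migration only once no running instance depends on it — typically one release later.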
#### bump-dependencies

Automatically bumps the golang version for capi-release components every time a new [golang-release](https://github.com/bosh-packages/golang-release) is available. Also bumps Valkey and nginx.

#### ship-it

Jobs responsible for cutting a capi-release.

- Bump API versions
- Update API docs
- Release capi-release

#### bbl-up

Updates the bosh deployments for all the pipeline environments (using `bbl up`).

#### bbl-destroy

Theoretically useful for destroying broken bosh deployments for all the pipeline environments. Often doesn't work because the directors are in such a bad state. There are also jobs to manually release pool resources for the following environments: Elsa, Asha, and Scar.

### bosh-lites

Pipeline responsible for managing the development [bosh-lite pool](https://github.com/cloudfoundry/capi-env-pool/).

- Create new bosh-lites if there is room in the pool
- Delete released bosh-lites

#### Using Pooled Environments

There are a number of helpful scripts in [capi-workspace](https://github.com/cloudfoundry/capi-workspace) for using the bosh-lite pool; most notably `claim_bosh_lite`, `unclaim_bosh_lite`, and `print_env_info`. See [the commands list](https://github.com/cloudfoundry/capi-workspace#capi-commands) for a full list of useful commands for interacting with the pool.