This guide covers building container images and deploying HyperFleet API to Kubernetes using Helm.
Build and push container images:
```bash
# Build container image with default tag
make image

# Build with custom tag
make image IMAGE_TAG=v1.0.0

# Build and push to default registry
make image-push

# Build and push to personal Quay registry (for development)
QUAY_USER=myuser make image-dev
```

The default container image is:

```
quay.io/openshift-hyperfleet/hyperfleet-api:latest
```
To use a custom container registry:
```bash
# Build with custom registry
make image \
  IMAGE_REGISTRY=your-registry.io/yourorg \
  IMAGE_TAG=v1.0.0

# Push to custom registry
podman push your-registry.io/yourorg/hyperfleet-api:v1.0.0
```

HyperFleet API is configured via environment variables.
`OPENAPI_SCHEMA_PATH` - Path to the OpenAPI specification used for spec validation.
The API validates cluster and nodepool spec fields against an OpenAPI schema. This allows different providers (GCP, AWS, Azure) to have different spec structures.
- Default: Uses `openapi/openapi.yaml` from the repository
- Custom: Set via the `OPENAPI_SCHEMA_PATH` environment variable for provider-specific schemas

```bash
export OPENAPI_SCHEMA_PATH=/path/to/custom-schema.yaml
```

Database:
- `DB_HOST` - PostgreSQL hostname (default: `localhost`)
- `DB_PORT` - PostgreSQL port (default: `5432`)
- `DB_NAME` - Database name (default: `hyperfleet`)
- `DB_USER` - Database username (default: `hyperfleet`)
- `DB_PASSWORD` - Database password (required)
- `DB_SSLMODE` - SSL mode: `disable`, `require`, `verify-ca`, `verify-full` (default: `disable`)
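For example, a local development shell might export the following before starting the server (the values shown are illustrative):

```bash
# Point the API at a local PostgreSQL instance (illustrative values)
export DB_HOST=localhost
export DB_PORT=5432
export DB_NAME=hyperfleet
export DB_USER=hyperfleet
export DB_PASSWORD=changeme   # required; there is no default
export DB_SSLMODE=disable     # prefer verify-full against a managed database
```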
Authentication:
- `AUTH_ENABLED` - Enable JWT authentication (default: `true`)
- `OCM_URL` - OpenShift Cluster Manager API URL (default: `https://api.openshift.com`)
- `JWT_ISSUER` - JWT token issuer URL (default: `https://sso.redhat.com/auth/realms/redhat-external`)
- `JWT_AUDIENCE` - JWT token audience (default: `https://api.openshift.com`)
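For local development without SSO access, authentication can be switched off, as the end-to-end example at the bottom of this guide does with `auth.enableJwt=false`. Never do this in production:

```bash
# Development only: bypass JWT validation
export AUTH_ENABLED=false
```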
Server:
- `PORT` - API server port (default: `8000`)
- `HEALTH_PORT` - Health endpoints port (default: `8080`)
- `METRICS_PORT` - Metrics endpoint port (default: `9090`)
Logging:
- `LOG_LEVEL` - Logging level: `debug`, `info`, `warn`, `error` (default: `info`)
- `LOG_FORMAT` - Log format: `json`, `text` (default: `json`)
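When troubleshooting, raising verbosity and switching to text output can help:

```bash
export LOG_LEVEL=debug
export LOG_FORMAT=text   # human-readable output for local debugging
```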
Adapter Requirements (REQUIRED):
Configure which adapters must be ready for resources to be marked as "Ready". These environment variables are required - the application will not start without them.
- `HYPERFLEET_CLUSTER_ADAPTERS` - JSON array of required cluster adapters (e.g., `["validation","dns","pullsecret","hypershift"]`)
- `HYPERFLEET_NODEPOOL_ADAPTERS` - JSON array of required nodepool adapters (e.g., `["validation","hypershift"]`)
Option 1: Using structured values (Helm only, recommended)
```yaml
# values.yaml
adapters:
  cluster:
    - validation
    - dns
    - pullsecret
    - hypershift
  nodepool:
    - validation
    - hypershift
```

Option 2: Direct environment variable (non-Helm)

```bash
export HYPERFLEET_CLUSTER_ADAPTERS='["validation","dns","pullsecret","hypershift"]'
export HYPERFLEET_NODEPOOL_ADAPTERS='["validation","hypershift"]'
```

Note: Empty arrays (`[]`) are valid if you want no adapters to be required for the Ready state.
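For example, to mark resources Ready without waiting on any adapters:

```bash
export HYPERFLEET_CLUSTER_ADAPTERS='[]'
export HYPERFLEET_NODEPOOL_ADAPTERS='[]'
```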
The project includes a Helm chart for Kubernetes deployment with configurable PostgreSQL support.
Deploy with built-in PostgreSQL for development and testing:
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --create-namespace \
  --set adapters.cluster='{validation,dns,pullsecret,hypershift}' \
  --set adapters.nodepool='{validation,hypershift}'
```

This creates:
- HyperFleet API deployment
- PostgreSQL StatefulSet
- Services for both components
- ConfigMaps and Secrets
Deploy with an external database (recommended for production). First, create a Secret containing the connection details:

```bash
kubectl create secret generic hyperfleet-db-external \
  --namespace hyperfleet-system \
  --from-literal=db.host=<your-db-host> \
  --from-literal=db.port=5432 \
  --from-literal=db.name=hyperfleet \
  --from-literal=db.user=hyperfleet \
  --from-literal=db.password=<your-password>
```

Then install the chart with the external database enabled:

```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set database.postgresql.enabled=false \
  --set database.external.enabled=true \
  --set database.external.secretName=hyperfleet-db-external \
  --set adapters.cluster='{validation,dns,pullsecret,hypershift}' \
  --set adapters.nodepool='{validation,hypershift}'
```

Deploy with a custom container image:
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set image.registry=quay.io \
  --set image.repository=myuser/hyperfleet-api \
  --set image.tag=v1.0.0 \
  --set adapters.cluster='{validation,dns,pullsecret,hypershift}' \
  --set adapters.nodepool='{validation,hypershift}'
```

Upgrade to a new version:
```bash
helm upgrade hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set image.tag=v1.1.0
```

Remove the deployment:
```bash
helm uninstall hyperfleet-api --namespace hyperfleet-system
```
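If the built-in PostgreSQL provisioned PersistentVolumeClaims, `helm uninstall` will typically leave them behind (Kubernetes does not delete PVCs created through StatefulSet volume claim templates). List and delete them explicitly if you also want the data gone:

```bash
# Find any leftover database volumes, then delete them by name
kubectl get pvc --namespace hyperfleet-system
kubectl delete pvc <pvc-name> --namespace hyperfleet-system
```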
| Parameter | Description | Default |
|---|---|---|
| `image.registry` | Container registry | `quay.io` |
| `image.repository` | Image repository | `openshift-hyperfleet/hyperfleet-api` |
| `image.tag` | Image tag | `latest` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `adapters.cluster` | Required cluster adapters (REQUIRED) | - |
| `adapters.nodepool` | Required nodepool adapters (REQUIRED) | - |
| `auth.enableJwt` | Enable JWT authentication | `true` |
| `database.postgresql.enabled` | Enable built-in PostgreSQL | `true` |
| `database.external.enabled` | Use external database | `false` |
| `database.external.secretName` | Secret containing database credentials | `hyperfleet-db-external` |
| `serviceMonitor.enabled` | Enable Prometheus Operator ServiceMonitor | `false` |
| `serviceMonitor.interval` | Metrics scrape interval | `30s` |
| `serviceMonitor.scrapeTimeout` | Metrics scrape timeout | `10s` |
| `serviceMonitor.labels` | Additional labels for Prometheus selector | `{}` |
| `serviceMonitor.namespace` | Namespace for ServiceMonitor (if different) | `""` |
| `replicaCount` | Number of API replicas | `1` |
| `resources.limits.cpu` | CPU limit | `500m` |
| `resources.limits.memory` | Memory limit | `512Mi` |
| `podDisruptionBudget.enabled` | Enable PodDisruptionBudget | `false` |
| `podDisruptionBudget.minAvailable` | Minimum available pods during disruption | `1` |
| `podDisruptionBudget.maxUnavailable` | Maximum unavailable pods during disruption | - |
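The table above covers the most common parameters; to see every default the chart ships with:

```bash
helm show values ./charts/
```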
Create a `values.yaml` file:
```yaml
# values.yaml
image:
  registry: quay.io
  repository: myuser/hyperfleet-api
  tag: v1.0.0

auth:
  enableJwt: true

database:
  postgresql:
    enabled: false
  external:
    enabled: true
    secretName: hyperfleet-db-external

# Required: specify adapter requirements
adapters:
  cluster:
    - validation
    - dns
    - pullsecret
    - hypershift
  nodepool:
    - validation
    - hypershift

replicaCount: 3

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
```

Deploy with custom values:
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --values values.yaml
```

```bash
# Get deployment status
helm status hyperfleet-api --namespace hyperfleet-system

# List all releases
helm list --namespace hyperfleet-system

# Check pods
kubectl get pods --namespace hyperfleet-system

# Check services
kubectl get svc --namespace hyperfleet-system
```

```bash
# View API logs
kubectl logs -f deployment/hyperfleet-api --namespace hyperfleet-system

# View logs from all pods
kubectl logs -f -l app=hyperfleet-api --namespace hyperfleet-system

# View PostgreSQL logs (if using built-in)
kubectl logs -f statefulset/hyperfleet-postgresql --namespace hyperfleet-system
```

```bash
# Describe pod for events and status
kubectl describe pod <pod-name> --namespace hyperfleet-system

# Check deployment events
kubectl get events --namespace hyperfleet-system --sort-by='.lastTimestamp'

# Exec into pod for debugging
kubectl exec -it deployment/hyperfleet-api --namespace hyperfleet-system -- /bin/sh

# Check secrets
kubectl get secrets --namespace hyperfleet-system

# Verify ConfigMaps
kubectl get configmaps --namespace hyperfleet-system
```

The deployment includes:
- Liveness probe: `GET /healthz` (port 8080) - Returns 200 if the process is alive
- Readiness probe: `GET /readyz` (port 8080) - Returns 200 when ready to receive traffic, 503 during startup/shutdown
- Metrics: `GET /metrics` (port 9090) - Prometheus metrics endpoint
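You can exercise these endpoints directly with a port-forward, using the paths and ports listed above:

```bash
# Forward the health and metrics ports from one API pod
kubectl port-forward deployment/hyperfleet-api 8080:8080 9090:9090 \
  --namespace hyperfleet-system &

curl -i http://localhost:8080/healthz          # 200 if the process is alive
curl -i http://localhost:8080/readyz           # 200 when ready, 503 during startup/shutdown
curl -s http://localhost:9090/metrics | head   # Prometheus metrics output
```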
Scale replicas:
```bash
# Manual scaling
kubectl scale deployment hyperfleet-api --replicas=3 --namespace hyperfleet-system

# Via Helm
helm upgrade hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set replicaCount=3
```

Enable autoscaling via Helm values (`autoscaling.enabled=true`).
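A sketch of enabling it, assuming the chart follows the common `autoscaling.minReplicas`/`autoscaling.maxReplicas` convention (check the chart's values for the exact keys):

```bash
helm upgrade hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set autoscaling.enabled=true \
  --set autoscaling.minReplicas=2 \
  --set autoscaling.maxReplicas=5
```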
Prometheus metrics are available at `http://<service>:9090/metrics`.
For clusters with Prometheus Operator, enable the ServiceMonitor to automatically discover and scrape metrics:
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true
```

If your Prometheus requires specific labels for service discovery, add them:
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.labels.release=prometheus
```

To create the ServiceMonitor in a different namespace (e.g., `monitoring`):
```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set serviceMonitor.enabled=true \
  --set serviceMonitor.namespace=monitoring
```
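To confirm the ServiceMonitor exists and carries the expected labels (the resource name is assumed here to follow the release name):

```bash
kubectl get servicemonitor hyperfleet-api --namespace hyperfleet-system -o yaml
```

If you set `serviceMonitor.namespace`, look in that namespace instead.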
For production deployments:

- Use an external managed database (Cloud SQL, RDS, Azure Database)
- Enable authentication with `auth.enableJwt=true`
- Set resource limits and use multiple replicas
- Use specific image tags instead of `latest`
- Enable monitoring and regular database backups
- Enable a PodDisruptionBudget with `podDisruptionBudget.enabled=true` for high availability during node maintenance
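Putting these recommendations together, a production-leaning install might look like the following sketch (the image tag and database secret contents are placeholders):

```bash
helm install hyperfleet-api ./charts/ \
  --namespace hyperfleet-system \
  --set image.tag=v1.0.0 \
  --set replicaCount=3 \
  --set auth.enableJwt=true \
  --set database.postgresql.enabled=false \
  --set database.external.enabled=true \
  --set database.external.secretName=hyperfleet-db-external \
  --set podDisruptionBudget.enabled=true \
  --set serviceMonitor.enabled=true \
  --set adapters.cluster='{validation,dns,pullsecret,hypershift}' \
  --set adapters.nodepool='{validation,hypershift}'
```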
A complete development walkthrough, end to end (this example targets GKE):

```bash
# 1. Build and push image
export QUAY_USER=myuser
podman login quay.io
make image-dev

# 2. Get GKE credentials
gcloud container clusters get-credentials my-cluster \
  --zone=us-central1-a \
  --project=my-project

# 3. Create namespace
kubectl create namespace hyperfleet-system
kubectl config set-context --current --namespace=hyperfleet-system

# 4. Create database secret (for production)
kubectl create secret generic hyperfleet-db-external \
  --from-literal=db.host=10.10.10.10 \
  --from-literal=db.port=5432 \
  --from-literal=db.name=hyperfleet \
  --from-literal=db.user=hyperfleet \
  --from-literal=db.password=secretpassword

# 5. Deploy with Helm
helm install hyperfleet-api ./charts/ \
  --set image.registry=quay.io \
  --set image.repository=myuser/hyperfleet-api \
  --set image.tag=dev-abc123 \
  --set auth.enableJwt=false \
  --set database.postgresql.enabled=false \
  --set database.external.enabled=true \
  --set adapters.cluster='{validation,dns,pullsecret,hypershift}' \
  --set adapters.nodepool='{validation,hypershift}'

# 6. Verify deployment
kubectl get pods
kubectl logs -f deployment/hyperfleet-api

# 7. Access API (port-forward for testing)
kubectl port-forward svc/hyperfleet-api 8000:8000
curl http://localhost:8000/api/hyperfleet/v1/clusters
```

- Development Guide - Local development setup
- Authentication - Authentication configuration