# Managed PostgreSQL

## Overview
The Nextcloud operator can automatically provision and manage PostgreSQL databases using Percona's PG Operator. This integration allows you to:
- Automatically provision HA PostgreSQL clusters when creating a Nextcloud instance
- Manage database lifecycle alongside Nextcloud
- Configure backups with S3/pgbackrest integration
- Enable connection pooling via pgBouncer
## Architecture

```
NextcloudInstance CR
        ↓
Operator detects spec.database.managed: true
        ↓
Creates PerconaPGCluster CR
        ↓
pg-operator provisions PostgreSQL
        ↓
Operator reads generated credentials secret
        ↓
Creates Nextcloud with database config
```
## Prerequisites
- Percona PG Operator installed in the cluster
- Sufficient resources for PostgreSQL pods
- Storage class available
- (Optional) S3 credentials for backups
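Before enabling managed databases, you can confirm the Percona PG Operator is actually installed by checking for its CRD. The CRD group `pgv2.percona.com` below matches current Percona operator v2 releases; adjust it if your installed version differs.

```shell
# Verify the PerconaPGCluster CRD exists (group name assumes
# Percona PG Operator v2; adjust for your release)
kubectl get crd perconapgclusters.pgv2.percona.com
```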
## Install Percona PG Operator

```bash
helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update

helm install pgo percona/pg-operator \
  --namespace pgo \
  --create-namespace

# Verify installation
kubectl get pods -n pgo
```
## Configuration

### Managed PostgreSQL (Automatic Provisioning)
```yaml
apiVersion: k8s.bnerd.com/v1alpha1
kind: NextcloudInstance
metadata:
  name: my-nextcloud
spec:
  profile: production
  database:
    managed: true
    type: postgresql
    postgres:
      replicas: 3
      version: "16"
      resources:
        requests: {cpu: "500m", memory: "1Gi"}
        limits: {cpu: "2000m", memory: "4Gi"}
      storage:
        size: "20Gi"
        storageClass: "fast-ssd"
      backup:
        enabled: true
        s3:
          bucket: my-nextcloud-db-backups
          endpoint: s3.amazonaws.com
          region: us-east-1
          credentialsSecret: pgbackrest-s3-credentials
        schedule:
          full: "0 1 * * 0"           # Weekly full backup
          incremental: "0 */6 * * *"  # Every 6 hours
```
### External PostgreSQL (Bring Your Own)
```yaml
apiVersion: k8s.bnerd.com/v1alpha1
kind: NextcloudInstance
metadata:
  name: my-nextcloud
spec:
  profile: production
  database:
    type: postgresql
    credentialsSecret: my-db-credentials
```
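The referenced secret must carry the connection details for the external database. The exact key names are defined by the operator's CRD; the keys used below (`host`, `port`, `dbname`, `user`, `password`) are illustrative assumptions only.

```shell
# Illustrative sketch: the key names are assumptions; consult the
# operator's CRD reference for the keys it actually expects.
kubectl create secret generic my-db-credentials \
  --from-literal=host=postgres.example.com \
  --from-literal=port=5432 \
  --from-literal=dbname=nextcloud \
  --from-literal=user=nextcloud \
  --from-literal=password='change-me'
```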
## Examples

### Minimal Managed Database
Uses defaults: 1 replica, 10Gi storage, PostgreSQL 16, no backups.
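A minimal spec needs only the two flags below; everything else falls back to the defaults just listed (this sketch assumes no other `postgres` fields are required):

```yaml
# Minimal managed database; defaults apply:
# 1 replica, 10Gi storage, PostgreSQL 16, no backups
database:
  managed: true
  type: postgresql
```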
### Production with HA and Backups
```yaml
database:
  managed: true
  type: postgresql
  postgres:
    replicas: 3
    version: "16"
    resources:
      requests: {cpu: "1000m", memory: "2Gi"}
      limits: {cpu: "4000m", memory: "8Gi"}
    storage:
      size: "100Gi"
      storageClass: "ssd"
    backup:
      enabled: true
      s3:
        bucket: prod-nc-db-backups
        endpoint: s3.amazonaws.com
        region: us-east-1
        credentialsSecret: s3-backup-creds
      schedule:
        full: "0 2 * * 0"
        incremental: "0 */4 * * *"
```
### With Connection Pooling
```yaml
database:
  managed: true
  type: postgresql
  postgres:
    replicas: 2
    proxy:
      enabled: true
      replicas: 2
      poolMode: transaction
      poolSize: 25
```
## Status Tracking
The Nextcloud CR status includes database provisioning progress:
```yaml
status:
  phase: Creating
  conditions:
    - type: DatabaseReady
      status: "False"
      reason: Provisioning
      message: "Waiting for PerconaPGCluster to be ready"
```
Once ready:
```yaml
status:
  phase: Ready
  conditions:
    - type: DatabaseReady
      status: "True"
      reason: Ready
      message: "PostgreSQL cluster is ready"
  database:
    managed: true
    clusterName: my-nextcloud-pg
    host: my-nextcloud-pg-pgbouncer.default.svc.cluster.local
    port: 5432
    name: nextcloud
```
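Once the status is populated, the connection endpoint can be read straight from the CR, or you can block until the condition flips, for example:

```shell
# Read the provisioned database host from the instance status
kubectl get nci my-nextcloud -o jsonpath='{.status.database.host}'

# Wait for the DatabaseReady condition before proceeding
kubectl wait nci/my-nextcloud --for=condition=DatabaseReady --timeout=10m
```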
## Cleanup Behavior
When you delete a Nextcloud instance with a managed database:
- The operator deletes the HelmRelease and Nextcloud secrets
- The PerconaPGCluster is kept by default (data safety)
To also delete the database, remove the PerconaPGCluster as well, either through the operator's cleanup options or manually.
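As a sketch, manual cleanup could look like the following. The `<name>-pg` suffix matches the `clusterName` shown in the status example; verify the name before deleting, since this is irreversible.

```shell
# DANGER: permanently deletes the database cluster and its data.
# <name>-pg follows the clusterName pattern shown in the status section.
kubectl delete perconapgcluster <name>-pg

# Depending on the storage class reclaim policy, PVCs may need
# separate removal; list them first:
kubectl get pvc -l postgres-operator.crunchydata.com/cluster=<name>-pg
```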
## Best Practices
- Always enable backups for production
- Use at least 3 replicas for HA
- Provision adequate resources (PostgreSQL is memory-hungry)
- Use fast storage (SSD recommended)
- Test restores regularly
- Monitor database metrics
- Keep pg-operator updated
## Troubleshooting

```bash
# Check PerconaPGCluster status
kubectl get perconapgcluster
kubectl describe perconapgcluster <name>-pg

# Check PostgreSQL pods
kubectl get pods -l postgres-operator.crunchydata.com/cluster=<name>-pg

# Check pg-operator logs
kubectl logs -n pgo -l app.kubernetes.io/name=percona-postgresql-operator

# Check database credentials secret
kubectl get secret <name>-pguser-nextcloud -o jsonpath='{.data}' | jq

# Check backup pods
kubectl get pods -l postgres-operator.crunchydata.com/pgbackrest=<name>-pg

# Trigger manual reconcile after database change
kubectl annotate nci <name> k8s.bnerd.com/reconcile=$(date +%s) --overwrite
```