DevOps Guide
Overview
For a general overview of common devops tasks for Synapse services, see the Synapse Devops Guide - Overview.
Common DevOps Tasks
Restoring a Backup
Backups are stored as gzipped tar files, with the naming scheme <svctype>-<celliden>-<timestamp>.tar.gz. When restoring a backup, ensure that the service type in the filename matches the service you are deploying.
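For illustration, the components of a backup file name can be pulled apart programmatically. A minimal Python sketch, using the example backup file from the steps below:

# Minimal sketch: split a backup file name of the form
# <svctype>-<celliden>-<timestamp>.tar.gz into its components.
fname = 'nettools-6064e4123577687c5ef742aba8d7e59a-1614784633494.tar.gz'

base = fname[:-len('.tar.gz')]
svctype, celliden, timestamp = base.rsplit('-', 2)

print(svctype)    # nettools
print(celliden)   # 6064e4123577687c5ef742aba8d7e59a
print(timestamp)  # 1614784633494 (epoch time, in milliseconds)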
To restore from this backup, retrieve the backup file and extract it:
cp /srv/syn/00.backup/storage/backups/nettools-6064e4123577687c5ef742aba8d7e59a-1614784633494.tar.gz .
tar -xvf nettools-6064e4123577687c5ef742aba8d7e59a-1614784633494.tar.gz
The backup service directory will be a timestamp (20210303101900 in this case), and the service can be started up using:
python -m synapse.servers.cell synmods.nettools.service.NetTools 20210303101900
Or deployed using your devops tool of choice.
Restoring a Backup via URL
Backups stored in cloud backed storage can be directly downloaded and extracted by Synapse services which are built using Synapse v2.110.0 or greater. To deploy a service from a backup via URL, you can perform the following steps:
Warning
This process permanently deletes any files present in the Cell service directory (/vertex/storage) prior to downloading and extracting the gzipped tar file. Make sure this process is executed in a service that is being deployed to an empty volume, otherwise data loss can occur.
Identify the backup to restore in the cloud storage bucket.
Generate a URL for accessing that backup.
AWS S3 Presigned URLs can be used to generate a URL to access the tarball stored in AWS S3; see the boto3 sketch after this list.
Google Cloud Storage signed URLs can be used to generate a URL to access the tarball stored in GCS.
Azure Storage shared access signatures can be used to generate a URL to access the tarball stored in Azure Blob Storage.
If you use a different storage provider that is API-compatible with S3, GCS, or Azure, refer to their documentation for generating URLs that can be used to access blobs.
If none of the supported cloud services is used to store the backups, you can self-host local backups with a web service that exposes them over an HTTPS interface, and use a URL pointing to that service.
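For the AWS case above, the presigned URL can also be generated programmatically. A minimal sketch using the boto3 SDK; the bucket and object names are taken from the examples later in this guide, and credentials are assumed to be resolvable from the environment:

import boto3

# Minimal sketch: generate a presigned GET URL for a backup tarball.
# Bucket and key names are from the examples in this guide; substitute
# your own values. Credentials are resolved from the environment.
s3 = boto3.client('s3')

url = s3.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': 'backup.storage.corp',
        'Key': 'ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz',
    },
    ExpiresIn=3600,  # the URL will be valid for one hour
)
print(url)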
Set the SYN_RESTORE_HTTPS_URL environment variable to the URL generated in the previous step for the Synapse service, using your devops tool of choice.

Optional: Use AHA to generate new provisioning information for the service being deployed. This may be needed in an environment where AHA provisioning is used.
If you are using the restore feature to deploy a mirror from a backup, make sure you provide a new AHA provisioning URL for the service. This will be processed by the service after the restore process has completed. This is currently only supported for services that are already configured and deployed in a mirrored fashion, whose backups were made after the mirrors were first deployed.
During a disaster recovery scenario, where a service is being restored from a backup and any original instances of that service will not be brought back online, you do not need to provide new provisioning information for the backup.
Restart the service with the updated configuration. This will perform the following actions:
- The service will identify the URL from the SYN_RESTORE_HTTPS_URL environment variable.
- The existing service directory will be deleted.
- The URL will be downloaded to the service temporary directory (/vertex/storage/tmp).
- The contents of the tarball will be extracted to the service storage directory.
- The tarball will be deleted.
- The service will record the hash of the URL to a restore.done file.
This may take a while; the service logs the restore progress at the WARNING level.
After the service has been started and confirmed to work as expected, remove the SYN_RESTORE_HTTPS_URL environment variable from the Synapse service configuration. While Synapse does protect against restarts of services with the SYN_RESTORE_HTTPS_URL value still present, if the value of that variable changes or the service's restore.done file is removed, the continued presence of that environment variable will trigger the restore process again.
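The exact contents of the restore.done file are an implementation detail, but the guard behaves conceptually like the following minimal sketch; the use of SHA-256 and the file layout here are illustrative assumptions, not the documented on-disk format:

import os
import hashlib

# Conceptual sketch of the restore guard. SHA-256 and the file format
# are assumptions for illustration; the real on-disk format may differ.
def should_restore(dirn: str, url: str) -> bool:
    donepath = os.path.join(dirn, 'restore.done')
    urlhash = hashlib.sha256(url.encode()).hexdigest()
    if os.path.isfile(donepath):
        with open(donepath, 'r') as fd:
            if fd.read().strip() == urlhash:
                return False  # this URL was already restored; skip
    return True  # new URL or missing restore.done; restore runs again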
Note
Synapse services assume that TLS certificate verification is used. If you are using a self-signed certificate or an endpoint that is potentially untrusted by the default Synapse service containers, you may use a URL with the scheme https+insecure://... to disable TLS verification on the download.
Restore Aha via URL
In this example, an Aha service is restored from a backup stored in S3. This example assumes that Aha is not available, and uses the s3cmd tool to interact with S3. Your S3 storage environment may use different tools, such as the AWS CLI or a web-based user interface.
Get a list of S3 backups:
$ s3cmd ls s3://backup.storage.corp
2022-10-24 13:13        25021   s3://backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz
2022-10-24 13:13        29381   s3://backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz
2022-10-24 13:13        12279   s3://backup.storage.corp/maxmind-873abf0cbef27f65a35293d5170bd799-1666617212853.tar.gz
We will want to use the file ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz.

Use the cloud storage provider to get a URL for retrieving that file from S3:
# Example of using s3cmd to get a presigned URL
$ s3cmd signurl s3://backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz +3600
https://your.s3.storageprovider.com/backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
Note
Your cloud storage provider may generate URLs which look different from the URL above. As long as the URL resolves to the backup file, that is okay.
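Before handing the URL to the service, you can optionally sanity-check that it resolves. A minimal sketch using only the Python standard library; it fetches a single byte, since presigned URLs are typically signed for GET requests only (the URL below is the placeholder from the example above):

import urllib.request

# Minimal sketch: fetch the first byte of the backup to confirm the
# presigned URL resolves. Substitute your real presigned URL.
url = 'https://your.s3.storageprovider.com/backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx'

req = urllib.request.Request(url, headers={'Range': 'bytes=0-0'})
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get('Content-Range'))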
If the DNS entry for your AHA service is managed outside of your devops tools, the DNS entry would need to be updated if the service is being deployed to a host which has a new IP address.
Create the deployment configuration for the Aha service, with the SYN_RESTORE_HTTPS_URL variable set to the presigned URL. This guide assumes that you are preparing the host and service directory according to the deployment guide for Synapse, including preparing the host, creating the storage directories, etc.:

# Docker compose example, showing the SYN_RESTORE_HTTPS_URL
version: "3.3"
services:
  aha:
    user: "999"
    image: vertexproject/synapse-aha:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_RESTORE_HTTPS_URL=https://your.s3.storageprovider.com/backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
      - SYN_LOG_LEVEL=DEBUG
      - SYN_AHA_HTTPS_PORT=null
      - SYN_AHA_AHA_NAME=aha
      - SYN_AHA_AHA_NETWORK=<yournetwork>
      - SYN_AHA_DMON_LISTEN=ssl://aha.<yournetwork>?ca=<yournetwork>
      - SYN_AHA_PROVISION_LISTEN=ssl://aha.<yournetwork>:27272
Start up the service:
docker-compose -f /srv/syn/aha/docker-compose.yaml pull
docker-compose -f /srv/syn/aha/docker-compose.yaml up -d
The container logs should show the service pulling down the backup, starting up, and existing services registering with the newly restored Aha cell:
$ docker-compose logs -f
aha_1 | 2022-10-24 18:10:09,390 [INFO] log level set to DEBUG [common.py:setlogging:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,391 [WARNING] Restoring ahacell from SYN_RESTORE_HTTPS_URL=https://..../ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,413 [WARNING] Downloading 0.025 MB of data. [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,413 [WARNING] Downloaded 0.025 MB, 100.000% [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,414 [WARNING] Extracting /vertex/storage/tmp/restore_53a387010c87dc8fa63dc93cdd865163.tgz to /vertex/storage [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,418 [WARNING] Extracting cell.guid [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,418 [WARNING] Extracting certs [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,419 [WARNING] Extracting certs/cas [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,419 [WARNING] Extracting certs/cas/synapse.corp.crt [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,419 [WARNING] Extracting certs/cas/synapse.corp.key [cell.py:_initBootRestore:MainThread:MainProcess]
... omitted several lines of extracting files ...
aha_1 | 2022-10-24 18:10:09,425 [WARNING] Restored service from URL [cell.py:_initBootRestore:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,426 [DEBUG] Set config valu from envar: [SYN_AHA_PROVISION_LISTEN] [config.py:setConfFromEnvs:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,426 [DEBUG] Set config valu from envar: [SYN_AHA_DMON_LISTEN] [config.py:setConfFromEnvs:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,426 [DEBUG] Set config valu from envar: [SYN_AHA_HTTPS_PORT] [config.py:setConfFromEnvs:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,426 [DEBUG] Set config valu from envar: [SYN_AHA_AHA_NAME] [config.py:setConfFromEnvs:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,427 [DEBUG] Set config valu from envar: [SYN_AHA_AHA_NETWORK] [config.py:setConfFromEnvs:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,468 [DEBUG] AhaCell activecoro tearing down [01.cortex.synapse.corp] [aha.py:_clearInactiveSessions:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,468 [DEBUG] Set [00.cortex.synapse.corp] offline. [aha.py:_setAhaSvcDown:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,468 [DEBUG] AhaCell activecoro tearing down [cortex.synapse.corp] [aha.py:_clearInactiveSessions:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,469 [DEBUG] Set [cortex.synapse.corp] offline. [aha.py:_setAhaSvcDown:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,808 [INFO] dmon listening: ssl://aha.synapse.corp?ca=synapse.corp [cell.py:initServiceNetwork:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,811 [INFO] ...ahacell API (telepath): ssl://aha.synapse.corp?ca=synapse.corp [cell.py:initFromArgv:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,811 [INFO] ...ahacell API (https): disabled [cell.py:initFromArgv:MainThread:MainProcess]
aha_1 | 2022-10-24 18:10:09,865 [DEBUG] Adding service [00.cortex.synapse.corp] from [ssl://10.0.0.129:46319] [aha.py:addAhaSvc:MainThread:MainProcess]
...
Remove the SYN_RESTORE_HTTPS_URL environment variable from the service configuration, since it is no longer needed. The previous docker compose example would look like the following:

version: "3.3"
services:
  aha:
    user: "999"
    image: vertexproject/synapse-aha:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_LOG_LEVEL=DEBUG
      - SYN_AHA_HTTPS_PORT=null
      - SYN_AHA_AHA_NAME=aha
      - SYN_AHA_AHA_NETWORK=<yournetwork>
      - SYN_AHA_DMON_LISTEN=ssl://aha.<yournetwork>?ca=<yournetwork>
      - SYN_AHA_PROVISION_LISTEN=ssl://aha.<yournetwork>:27272
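Optionally, you can confirm the restored Aha cell is reachable over telepath. A minimal sketch using the synapse Python library; it assumes it is run from a host with the synapse package installed and valid client certificates, and uses the aha.synapse.corp URL from this example:

import asyncio
import synapse.telepath as s_telepath

# Minimal sketch: connect to the restored Aha cell and print its version.
async def main():
    async with await s_telepath.openurl('ssl://aha.synapse.corp?ca=synapse.corp') as proxy:
        info = await proxy.getCellInfo()
        print(info['cell']['version'])

asyncio.run(main())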
Restore a Cortex via URL
In this example, a backup of the Cortex is restored to an environment from S3 storage. This example assumes that the Cortex is unavailable. There is no need to do any provisioning with this restoration.
Get a list of S3 backups:
$ s3cmd ls s3://backup.storage.corp
2022-10-24 13:13        25021   s3://backup.storage.corp/ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz
2022-10-24 13:13        29381   s3://backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz
2022-10-24 13:13        12279   s3://backup.storage.corp/maxmind-873abf0cbef27f65a35293d5170bd799-1666617212853.tar.gz
We will want to use the file cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz.

Use the cloud storage provider to get a URL for retrieving that file from S3:
# Example of using s3cmd to get a presigned URL
$ s3cmd signurl s3://backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz +3600
https://your.s3.storageprovider.com/backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
Note
Your cloud storage provider may generate URLs which look different from the URL above. As long as the URL resolves to the backup file, that is okay.
Create the deployment configuration for the Cortex service, with the SYN_RESTORE_HTTPS_URL variable set to the presigned URL. This guide assumes that you are preparing the host and service directory according to the deployment guide for Synapse, including preparing the host, creating the storage directories, etc.:

version: "3.3"
services:
  00.cortex:
    user: "999"
    image: vertexproject/synapse-cortex:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_RESTORE_HTTPS_URL=https://your.s3.storageprovider.com/backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
      - SYN_LOG_LEVEL=DEBUG
      - SYN_CORTEX_AXON=aha://axon...
      - SYN_CORTEX_JSONSTOR=aha://jsonstor...
Start up the service:
docker-compose -f /srv/syn/00.cortex/docker-compose.yaml pull
docker-compose -f /srv/syn/00.cortex/docker-compose.yaml up -d
The container logs should show the Cortex pulling down the backup and starting up, similar to the Aha cell in the previous example:
$ docker-compose logs -f
Remove the SYN_RESTORE_HTTPS_URL environment variable from the service configuration, since it is no longer needed. The previous docker compose example would look like the following:

version: "3.3"
services:
  00.cortex:
    user: "999"
    image: vertexproject/synapse-cortex:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_LOG_LEVEL=DEBUG
      - SYN_CORTEX_AXON=aha://axon...
      - SYN_CORTEX_JSONSTOR=aha://jsonstor...
Deploy a Cortex Mirror via URL
In this example, a mirror of a Cortex is deployed from a backup in S3 backed storage.
Get a list of S3 backups from the backup service:
storm> backup.list --s3
ahacell-a2b3456848262e1c573fb15d7cbc267c-1666617201651.tar.gz
cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz
maxmind-873abf0cbef27f65a35293d5170bd799-1666617212853.tar.gz
In this example, we will want to use the file cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz.
Note
If the Cortex is not available to perform the backup.list command for you, you would need to identify the backup from the storage provider.

Use the cloud storage provider to get a URL for retrieving that file from S3:
# Example of using s3cmd to get a presigned URL
$ s3cmd signurl s3://backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz +3600
https://your.s3.storageprovider.com/backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
Note
Your cloud storage provider may generate URLs which look different from the URL above. As long as the URL resolves to the backup file, that is okay.
Since we are also deploying a mirror with this backup, we want to generate a mirror provisioning configuration from Aha:
# Executing this from the Aha container
python -m synapse.tools.aha.provision.service 02.cortex --mirror cortex
one-time use URL: ssl://aha.synapse.corp:27272/c30e9e433514f32390f3401678702eee?certhash=6e08d8c9867dd9be60b00fd055c29ee8b8548c8e39a270315ae8cd0d0e47daf0
Create the deployment configuration for the Mirror. This guide assumes that you are preparing the host and service directory according to the Devops guide for the Cortex.
version: "3.3"
services:
  02.cortex:
    user: "999"
    image: vertexproject/synapse-cortex:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_RESTORE_HTTPS_URL=https://your.s3.storageprovider.com/backup.storage.corp/cortex-27888f01eb935a3d7bcc4b53f919c55d-1666617224015.tar.gz?AWSAccessKeyId=xxxx&Expires=xxx&Signature=xxxx
      - SYN_LOG_LEVEL=DEBUG
      - SYN_CORTEX_AXON=aha://axon...
      - SYN_CORTEX_JSONSTOR=aha://jsonstor...
      - SYN_CORTEX_AHA_PROVISION=ssl://aha.synapse.corp:27272/c30e9e433514f32390f3401678702eee?certhash=6e08d8c9867dd9be60b00fd055c29ee8b8548c8e39a270315ae8cd0d0e47daf0
Start up the service:
docker-compose -f /srv/syn/02.cortex/docker-compose.yaml pull
docker-compose -f /srv/syn/02.cortex/docker-compose.yaml up -d
The container logs should show the Cortex pulling down the backup, starting up, and then contacting the provisioning service for new service information, similar to the previous examples:
$ docker-compose logs -f
Remove the SYN_RESTORE_HTTPS_URL and SYN_CORTEX_AHA_PROVISION environment variables from the service configuration, since they are no longer needed. The previous docker compose example would look like the following:

version: "3.3"
services:
  02.cortex:
    user: "999"
    image: vertexproject/synapse-cortex:v2.x.x
    network_mode: host
    restart: unless-stopped
    volumes:
      - ./storage:/vertex/storage
    environment:
      - SYN_LOG_LEVEL=DEBUG
      - SYN_CORTEX_AXON=aha://axon...
      - SYN_CORTEX_JSONSTOR=aha://jsonstor...
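Once the mirror is online, you can optionally confirm that it has caught up with the leader by comparing nexus log offsets. A minimal sketch using the synapse Python library; the aha://00.cortex... and aha://02.cortex... URLs follow the naming used in this example, and valid credentials are assumed:

import asyncio
import synapse.telepath as s_telepath

# Minimal sketch: compare nexus log offsets between leader and mirror.
# When the offsets match, the mirror has caught up with the leader.
async def main():
    async with await s_telepath.openurl('aha://00.cortex...') as leader:
        async with await s_telepath.openurl('aha://02.cortex...') as mirror:
            leadindx = await leader.getNexsIndx()
            mirrindx = await mirror.getNexsIndx()
            print(f'leader={leadindx} mirror={mirrindx} insync={mirrindx >= leadindx}')

asyncio.run(main())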
Use a Custom Location for the Local Filesystem Backend
By default, backups will be stored locally in ./backups in the service storage directory. The backup:dir configuration option can be used to specify an alternate location. For example, the following additions to the docker-compose.yaml file would save backups to a different location:
volumes:
- ./storage:/vertex/storage
- /path/to/alt/location:/vertex/backups
environment:
# ...
- SYN_BACKUP_BACKUP_DIR=/vertex/backups
Use S3 as the Storage Backend
To enable the storage of backups in S3, the s3:enable configuration option must be set to true.
To configure the S3 options, you can set the following configuration values:
s3:bucket
Name of the S3 bucket to use. The Synapse-Backup service will attempt to create the bucket if it does not exist. The default bucket name is cell_backups.

s3:boto3
This is a dictionary containing configuration options for the Boto3 SDK, which is used to access S3. For example, if you are using an access key ID and a secret access key to interact with S3, those values should be set in the aws_access_key_id and aws_secret_access_key values of the dictionary.

If this configuration value is not provided, Boto3 will attempt to resolve its configuration information via its documented methods, including AWS IAM based roles. This allows Synapse-Backup to be deployed in AWS environments (EC2, ECS, EKS) which may be configured to allow hosts or containers to access specific S3 buckets without the need to specify access key IDs and secret access keys to the service directly.
More information about the Boto3 options can be found at boto3 configuration options and boto3 credentials options.
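To check which credentials Boto3 would resolve in a given environment (for example, to verify that an IAM role is being picked up), a short diagnostic sketch can help; this is an illustration aid, not part of the Synapse-Backup service:

import boto3

# Minimal diagnostic sketch: show which credentials Boto3 resolves
# (environment variables, shared config files, IAM role, etc.).
session = boto3.session.Session()
creds = session.get_credentials()
if creds is None:
    print('no credentials resolved')
else:
    print('method:', creds.method)
    print('access key id:', creds.get_frozen_credentials().access_key)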
s3:transfer:upload
The S3 TransferConfig for file uploads. By default, the service uses a configuration that sets the multipart_chunksize value to 100MB. This allows uploads of up to 1 terabyte in size. User provided options will override this setting.

Information about available options for s3:transfer:upload can be found at TransferConfig options.
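The 1 terabyte figure follows from S3's limit of 10,000 parts per multipart upload. A quick check of the arithmetic:

# S3 multipart uploads allow at most 10,000 parts; with the default
# multipart_chunksize of 100MB (104857600 bytes) the maximum upload is:
chunksize = 104857600        # 100MB default chunk size
maxparts = 10000             # S3 per-upload part limit
print(chunksize * maxparts)  # 1048576000000 bytes, roughly 1 terabyte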
An example docker-compose.yaml file can be seen below. This example shows embedding access keys into the configuration:
version: "3.3"
services:
00.backup:
user: "999"
image: vertexproject/synapse-backup:v2.x.x
network_mode: host
restart: unless-stopped
volumes:
- ./storage:/vertex/storage
environment:
- SYN_BACKUP_HTTPS_PORT=null
- SYN_BACKUP_S3_ENABLE=true
- SYN_BACKUP_S3_BOTO3={"aws_access_key_id":"<your_access_key>", "aws_secret_access_key":"<your_secret_key>"}
- SYN_BACKUP_AHA_PROVISION=ssl://aha.<yournetwork>:27272/<guid>?certhash=<sha256>
AWS S3 Permissions
When used in AWS, the provided credentials must have the following permissions for the backup service to operate:
s3:ListBucket
This is used to ensure that the S3 bucket used to store backups exists, and to enumerate backups stored in the bucket.
s3:PutObject
This is used to put backups in the S3 bucket.
s3:DeleteObject
This is used to delete backups uploaded to the S3 bucket.
s3:CreateBucket
This permission is only needed to make the bucket if the bucket does not exist. It is best practice when integrating with AWS S3 to create the bucket separately, since the Synapse-Backup service does not specify any sort of policies when it creates the bucket.
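As an illustration, an IAM policy granting these permissions might look like the following sketch. The bucket name is a placeholder taken from the examples in this guide; review and scope any policy against your own requirements:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:CreateBucket"],
            "Resource": "arn:aws:s3:::backup.storage.corp"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::backup.storage.corp/*"
        }
    ]
}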
Use GCS as the Storage Backend
To enable the storage of backups in GCS, the gcs:enable configuration option must be set to true. In addition, the gcs:bucket option must be set to a globally unique name to specify the bucket where backups will be stored.
To configure the GCS options, you can set the following configuration values:
gcs:project
The name of the project to create the bucket in if it does not exist.

gcs:credentials
This is a JSON string containing service account or authorized user credentials. The gcs:credentials:path option may also be used to specify a path to the JSON file containing credentials instead.

If credentials are not provided, the service will attempt to use service-accounts information in Google Compute Engine metadata to generate authorization tokens. For more information about GCE metadata, see the Google Cloud metadata docs.
An example docker-compose.yaml file can be seen below. This example shows embedding service account credentials into the configuration:
version: "3.3"
services:
00.backup:
user: "999"
image: vertexproject/synapse-backup:v2.x.x
network_mode: host
restart: unless-stopped
volumes:
- ./storage:/vertex/storage
environment:
- SYN_BACKUP_HTTPS_PORT=null
- SYN_BACKUP_GCS_ENABLE=true
- SYN_BACKUP_GCS_BUCKET=uniquebucket-123456
- SYN_BACKUP_GCS_PROJECT=myproject-123456
- SYN_BACKUP_GCS_CREDENTIALS="{
\"type\": \"service_account\",
\"project_id\": \"<projectid>\",
\"private_key_id\": \"<keyid>\",
\"private_key\": \"<key>\",
\"client_email\": \"<clientemail>\",
\"client_id\": \"<clientid>\",
\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",
\"token_uri\": \"https://oauth2.googleapis.com/token\",
\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",
\"client_x509_cert_url\": \"<url>\"
}"
- SYN_BACKUP_AHA_PROVISION=ssl://aha.<yournetwork>:27272/<guid>?certhash=<sha256>
GCS Permissions
The provided credentials must have the following permissions for the backup service to operate:
storage.buckets.get
This is used to ensure that the GCS bucket used to store backups exists.
storage.objects.list
This is used to enumerate backups stored in the GCS bucket.
storage.objects.create
This is used to put backups in the GCS bucket.
storage.objects.delete
This is used to delete backups uploaded to the GCS bucket.
storage.buckets.create
This permission is only needed to make the bucket if the bucket does not exist. It is best practice when integrating with GCS to create the bucket separately, since the Synapse-Backup service does not specify any sort of access policies when it creates the bucket.
Use Azure Blob Storage as the Storage Backend
To enable backups to Azure Blob Storage, the azure:enable configuration option must be set to true.
To configure the Azure Blob Storage options, you can set the following configuration values:
azure:container
Name of the Azure container to use. The Synapse-Backup service will attempt to create the container if it does not exist. The default container name is cellbackups.

azure:connstr
This is an Azure Storage connection string to use for connecting to Azure Blob Storage.

azure:url
If no connection string is provided, the service will attempt to resolve credentials from the environment in the order listed in the DefaultAzureCredential documentation. Those credentials will be used with the URL provided in the azure:url configuration value to connect to Azure Storage.
An example docker-compose.yaml file can be seen below. This example shows using a connection string in the configuration:
version: "3.3"
services:
00.backup:
user: "999"
image: vertexproject/synapse-backup:v2.x.x
network_mode: host
restart: unless-stopped
volumes:
- ./storage:/vertex/storage
environment:
- SYN_BACKUP_HTTPS_PORT=null
- SYN_BACKUP_AZURE_ENABLE=true
- SYN_BACKUP_AZURE_CONTAINER=mycontainer
- SYN_BACKUP_AZURE_CONNSTR=<Azure Storage account connection string>
- SYN_BACKUP_AHA_PROVISION=ssl://aha.<yournetwork>:27272/<guid>?certhash=<sha256>
Azure Blob Storage Permissions
The provided credentials must have the following permissions for the backup service to operate:
Microsoft.Storage/storageAccounts/blobServices/containers/read
This is used to ensure that the Azure container used to store backups exists.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
This is used to enumerate backups stored in the Azure container.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
This is used to add backups in the Azure container.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
This is used to write backups in the Azure container.
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete
This is used to delete backups uploaded to the Azure container.
Microsoft.Storage/storageAccounts/blobServices/containers/write
This permission is only needed to make the container if the container does not exist. It is best practice when integrating with Azure to create the container separately, since the Synapse-Backup service does not specify any sort of access policies when it creates the container.
Trigger a Backup from the Command Line
This service includes an additional tool for triggering backups using the Synapse Backup service from the command line.
Syntax
backupsvc is executed from an operating system command shell. The command usage is as follows:
usage: synmods.backup.tools.backupsvc [-h] [--backup-service BACKUP_SERVICE] [--target TARGET] [--name NAME] [--s3] [--gcs] [--azure] [--wait]
Where:
BACKUP_SERVICE is a telepath URL to the Synapse Backup service. This can also be set with the environment variable SYN_BACKUP_SERVICE_URL.
TARGET is a telepath URL to the service to be backed up. This can also be set with the environment variable SYN_BACKUP_TARGET_URL.
NAME is an optional filename to save the backup as.
--s3 specifies that the backup should be saved to S3. This can also be set with the environment variable SYN_BACKUP_S3.
--gcs specifies that the backup should be saved to GCS. This can also be set with the environment variable SYN_BACKUP_GCS.
--azure specifies that the backup should be saved to Azure Blob Storage. This can also be set with the environment variable SYN_BACKUP_AZURE.
--wait specifies whether to wait for the backup to complete before returning.
Example
To take a backup of a service and wait for the result:
python -m synmods.backup.tools.backupsvc --backup-service aha://[email protected]... --target aha://[email protected]... --wait
The command supports being configured to use structured logging with the SYN_LOG_STRUCT flag, as well as any other Synapse logging configurations.
2023-04-05 10:51:33,359 [INFO] log level set to DEBUG [common.py:setlogging:MainThread:MainProcess]
2023-04-05 10:51:33,360 [INFO] Connecting to backup service at [aha://[email protected]...] [backupsvc.py:main:MainThread:MainProcess]
2023-04-05 10:51:33,420 [INFO] Starting backup of cell at [aha://[email protected]...] with opts: {'wait': True, 's3': False, 'gcs': False, 'keep_on_fail': False} [backupsvc.py:main:MainThread:MainProcess]
2023-04-05 10:51:34,891 [INFO] Backup successful, saved as nettools-4c3d694525f62454b4e2a4470b027e24-1680706293450.tar.gz [backupsvc.py:main:MainThread:MainProcess]
Deploy a Mirror
Note
When a local filesystem backend is used the locally stored backup files are not mirrored.
Inside the AHA container
Generate a one-time use URL for provisioning from inside the AHA container:
python -m synapse.tools.aha.provision.service 01.backup --mirror backup
You should see output that looks similar to this:
one-time use URL: ssl://aha.<yournetwork>:27272/<guid>?certhash=<sha256>
On the Host
Create the container storage directory:
mkdir -p /srv/syn/01.backup/storage
chown -R 999 /srv/syn/01.backup/storage
Create the /srv/syn/01.backup/docker-compose.yaml file with contents:
version: "3.3"
services:
01.backup:
user: "999"
image: vertexproject/synapse-backup:v2.x.x
network_mode: host
restart: unless-stopped
volumes:
- ./storage:/vertex/storage
environment:
# disable HTTPS API for now to prevent port collisions
- SYN_BACKUP_HTTPS_PORT=null
- SYN_BACKUP_AHA_PROVISION=ssl://aha.<yournetwork>:27272/<guid>?certhash=<sha256>
Note
Don’t forget to replace your one-time use provisioning URL!
Start the container:
docker-compose --file /srv/syn/01.backup/docker-compose.yaml pull
docker-compose --file /srv/syn/01.backup/docker-compose.yaml up -d
Devops Details
Docker Images
The Synapse-Backup service is available as a Docker container from Docker Hub. The repository can be found at https://hub.docker.com/r/vertexproject/synapse-backup.
Note
There are tagged images available on Docker Hub which correspond to software releases seen in the changelog. The docker tag master is the latest development release. A generic major version tag is available, representing the latest release on a given major version. For example, the v2.x.x tag represents the most current release for the v2.x.x release line. You can utilize specific tagged versions, or a major version specifier, depending on your chosen deployment strategy.
Configuration Options
The following is a list of available configuration options.
aha:admin
An AHA client certificate CN to register as a local admin user.
- Type
string
- Environment Variable
SYN_BACKUP_AHA_ADMIN
aha:leader
The AHA service name to claim as the active instance of a storm service.
- Type
string
- Environment Variable
SYN_BACKUP_AHA_LEADER
aha:name
The name of the cell service in the aha service registry.
- Type
string
- Environment Variable
SYN_BACKUP_AHA_NAME
aha:network
The AHA service network.
- Type
string
- Environment Variable
SYN_BACKUP_AHA_NETWORK
aha:provision
The telepath URL of the aha provisioning service.
- Type
['string', 'array']
- Environment Variable
SYN_BACKUP_AHA_PROVISION
aha:registry
The telepath URL of the aha service registry.
- Type
['string', 'array']
- Environment Variable
SYN_BACKUP_AHA_REGISTRY
aha:user
The username of this service when connecting to others.
- Type
string
- Environment Variable
SYN_BACKUP_AHA_USER
auth:anon
Allow anonymous telepath access by mapping to the given user name.
- Type
string
- Environment Variable
SYN_BACKUP_AUTH_ANON
auth:passwd
Set to <passwd> (local only) to bootstrap the root user password.
- Type
string
- Environment Variable
SYN_BACKUP_AUTH_PASSWD
auth:passwd:policy
Specify password policy/complexity requirements.
- Type
object
- Environment Variable
SYN_BACKUP_AUTH_PASSWD_POLICY
azure:connstr
A connection string for accessing Azure storage.
- Type
string
- Environment Variable
SYN_BACKUP_AZURE_CONNSTR
azure:container
The name of the container to use.
- Type
string
- Default Value
'cellbackups'
- Environment Variable
SYN_BACKUP_AZURE_CONTAINER
azure:enable
Enable backing up to Azure Blob Storage.
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_AZURE_ENABLE
azure:url
The storage account’s blob service account URL (https://<my-storage-account-name>.blob.core.windows.net/).
- Type
string
- Environment Variable
SYN_BACKUP_AZURE_URL
backup:dir
A directory outside the service directory where backups will be saved. Defaults to ./backups in the service storage directory.
- Type
string
- Environment Variable
SYN_BACKUP_BACKUP_DIR
dmon:listen
A config-driven way to specify the telepath bind URL.
- Type
['string', 'null']
- Environment Variable
SYN_BACKUP_DMON_LISTEN
gcs:bucket
The name of the GCS bucket to use (must be a globally unique name).
- Type
string
- Environment Variable
SYN_BACKUP_GCS_BUCKET
gcs:credentials
Service account or authorized user credentials blob.
- Type
object
- Environment Variable
SYN_BACKUP_GCS_CREDENTIALS
gcs:credentials:path
Path to a file containing service account or authorized user credentials.
- Type
string
- Environment Variable
SYN_BACKUP_GCS_CREDENTIALS_PATH
gcs:enable
Enable backing up to Google Cloud Storage.
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_GCS_ENABLE
gcs:nocredentials
Disable authorization tokens (used for connecting to an emulated GCS server).
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_GCS_NOCREDENTIALS
gcs:project
The name of the project to create the bucket in if it does not exist.
- Type
string
- Environment Variable
SYN_BACKUP_GCS_PROJECT
gcs:retry
Maximum number of retries for failed connection attempts to GCS.
- Type
integer
- Default Value
5
- Environment Variable
SYN_BACKUP_GCS_RETRY
gcs:ssl
Set to false to disable ssl verification when connecting to GCS.
- Type
boolean
- Default Value
True
- Environment Variable
SYN_BACKUP_GCS_SSL
gcs:url
The base url to use when connecting to GCS.
- Type
string
- Default Value
'https://www.googleapis.com'
- Environment Variable
SYN_BACKUP_GCS_URL
health:sysctl:checks
Enable sysctl parameter checks and warn if values are not optimal.
- Type
boolean
- Default Value
True
- Environment Variable
SYN_BACKUP_HEALTH_SYSCTL_CHECKS
https:headers
Headers to add to all HTTPS server responses.
- Type
object
- Environment Variable
SYN_BACKUP_HTTPS_HEADERS
https:parse:proxy:remoteip
Enable the HTTPS server to parse X-Forwarded-For and X-Real-IP headers to determine requester IP addresses.
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_HTTPS_PARSE_PROXY_REMOTEIP
https:port
A config-driven way to specify the HTTPS port.
- Type
['integer', 'null']
- Environment Variable
SYN_BACKUP_HTTPS_PORT
limit:disk:free
Minimum disk free space percentage before setting the cell read-only.
- Type
['integer', 'null']
- Default Value
5
- Environment Variable
SYN_BACKUP_LIMIT_DISK_FREE
max:users
Maximum number of users allowed on system, not including root or locked/archived users (0 is no limit).
- Type
integer
- Default Value
0
- Environment Variable
SYN_BACKUP_MAX_USERS
nexslog:en
Record all changes to a stream file on disk. Required for mirroring (on both sides).
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_NEXSLOG_EN
onboot:optimize
Delay startup to optimize LMDB databases during boot to recover free space and increase performance. This may take a while.
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_ONBOOT_OPTIMIZE
s3:boto3
Boto3 configuration options.
- Type
object
- Default Value
None
- Environment Variable
SYN_BACKUP_S3_BOTO3
s3:bucket
The name of the S3 bucket to use.
- Type
string
- Default Value
'cell_backups'
- Environment Variable
SYN_BACKUP_S3_BUCKET
s3:enable
Enable backing up to S3.
- Type
boolean
- Default Value
False
- Environment Variable
SYN_BACKUP_S3_ENABLE
s3:log:level
Log level for S3 related logging. Enabling this at the DEBUG level may emit sensitive information such as private key materials, depending on the deployment configuration.
- Type
string
- Default Value
'INFO'
- Environment Variable
SYN_BACKUP_S3_LOG_LEVEL
s3:transfer:upload
S3 TransferConfig for file uploads.
- Type
object
- Default Value
{'multipart_chunksize': 104857600}
- Environment Variable
SYN_BACKUP_S3_TRANSFER_UPLOAD
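As with the other object-typed options, s3:transfer:upload can be provided through its environment variable as a JSON value. A hypothetical docker-compose environment entry that raises the chunk size to 200MB:

environment:
  # Hypothetical example: raise multipart_chunksize to 200MB (209715200 bytes)
  - SYN_BACKUP_S3_TRANSFER_UPLOAD={"multipart_chunksize": 209715200}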