Private Cloud Copy

To create a private cloud copy job, go to the Data Resilience > Copy page, select the Private Cloud module, and create a new copy job.

The New Copy Job wizard for private cloud opens. Please complete the wizard by following the steps below.

Step 1: Select the Copy Source

For the copy source type, you can choose from Backup Job, Backup Data, Copy Job and Copy Data, and then make your selection based on the storage categories you've added. The storage where the backup/copy data is located can also be used as a filter.

  • You can directly select an off-site copy job or its restore points as the source to create a new copy job for migrating off-site copy data.

  • If the backup or copy job has been deleted, or it was a once-off backup job, you can locate the backup data by selecting Restore Points.

Select the copy source in whichever way is convenient. Once the copy source is selected, click the Next button to continue.

Step 2: Copy Destination

For the copy destination, the target storage can be On-site Storage, Off-site Storage or Cloud Object Storage.

  • An on-site backup copy storage is a storage that has been added to the local Vinchin Backup Server or a local Vinchin Backup Node.

  • An off-site backup copy storage is a backup storage added to a remote-site Vinchin Backup Server deployed in another location.

  • A cloud object storage is a storage added on the Vinchin Backup Server web console for Copy & Archive usage; it is not directly mounted on any Vinchin node.

When you select Off-site Storage or Cloud Object Storage as the target storage, a Compute Node needs to be specified before the copy job can proceed.

Please select the corresponding storage destination as per your actual deployment and requirements.

Step 3: Copy Strategy

Under the General Strategy tab, you can set up the Copy Type, Schedule, Throttling Policy and Retention Policy.

In the Copy Type dropdown, you can select either Mirror Copy or Synthetic Copy.

Mirror Copy: the entire backup/copy chain is replicated as-is, without data merging.

Synthetic Copy: the latest backup/copy chain will be merged into a full restore point to generate a new copy chain. The synthetic copy function is compatible with Instances, VMs and Server backup data.

Copy Type

If Synthetic Copy is selected, you need to configure the Backup Chain Length as either Number of Restore Points or Unlimited. When Number of Restore Points is set, it limits the number of incremental copy restore points to ensure copy data availability: once the number of restore points in the chain equals the set chain length, the next copy run will be converted to a synthetic copy (a full restore point will be generated).
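The chain-length rule above can be sketched as follows (an illustrative sketch, not Vinchin's actual implementation; the function name is hypothetical):

```python
# Sketch: with a fixed backup-chain length, the next copy becomes a synthetic
# full once the current chain has reached that length.
from typing import Optional

def next_copy_type(points_in_chain: int, chain_length: Optional[int]) -> str:
    """Decide the type of the next copy restore point.

    points_in_chain: restore points already in the current chain
    chain_length:    configured Number of Restore Points, or None for Unlimited
    """
    if chain_length is None:              # Unlimited: keep appending increments
        return "incremental"
    if points_in_chain >= chain_length:   # chain is full: synthesize a new full
        return "synthetic full"
    return "incremental"

print(next_copy_type(2, 5))     # chain not full yet -> incremental
print(next_copy_type(5, 5))     # chain reached its limit -> synthetic full
print(next_copy_type(9, None))  # Unlimited -> incremental
```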

In the Schedule dropdown list, you can choose to set the job up as Copy as Scheduled or as a Once-off Copy.

If you want the backup copy to run regularly along with the backup job, select Copy as Scheduled; otherwise, select Once-off Copy to run the copy job only once.

As for the schedule of the copy job, it is recommended to run the copy job right after the associated backup job finishes. For example, if the backup job runs at 11 PM each day and takes approximately 2 hours to complete, you can set the copy job to start 3 or 4 hours after the backup job starts.
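The scheduling advice above amounts to a small calculation: start the copy job after the backup's expected duration plus a safety buffer (the helper name is hypothetical):

```python
# Worked example of picking a copy-job start time: backup start time plus its
# expected duration plus a safety buffer, so the copy never overlaps the backup.
from datetime import datetime, timedelta

def copy_start(backup_start: datetime, expected_duration_h: float,
               buffer_h: float) -> datetime:
    """Copy job start = backup start + expected duration + safety buffer."""
    return backup_start + timedelta(hours=expected_duration_h + buffer_h)

backup = datetime(2024, 1, 1, 23, 0)                  # backup runs at 11 PM
start = copy_start(backup, expected_duration_h=2, buffer_h=2)
print(start)                                          # 3 AM the next day
```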

The Throttling Policy is optional: configure it only if the copy jobs would cause network or I/O overload in your production environment. The throttling policy can be configured as a Customized Policy, or via Select Global Policy.

For the Retention Type, you can select By Restore Points or By Backup Chain. However, when the copy type is Mirror Copy or the target storage is Cloud Object Storage, By Backup Chain is the only available option.

The Retention Method defines how long the copy data is kept in the copy storage; Number of Restore Points mode is selected by default. When the copy data retention type is Retain by Backup Chain, one backup chain corresponds to one restore point, and the data will not be merged.
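The two retention modes can be contrasted in a short sketch (simplified: a chain is modeled as a list of restore points, ordered oldest first; function names are hypothetical):

```python
# Sketch of the two retention modes. Retaining by restore points may merge
# away old increments; retaining by backup chain drops whole chains intact.

def retain_by_restore_points(points, keep):
    """Keep only the newest `keep` restore points; older ones age out."""
    return points[-keep:]

def retain_by_backup_chain(chains, keep):
    """Keep only the newest `keep` whole chains; no merging within a chain."""
    return chains[-keep:]

print(retain_by_restore_points(["p1", "p2", "p3", "p4"], keep=2))
print(retain_by_backup_chain([["full1", "inc1"], ["full2", "inc2"]], keep=1))
```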

Notice

  1. If Cloud Object Storage is selected as the target storage for the copy job, the copy data retention type cannot be set to Retain by Copy Point, because this storage type does not support data merging.

  2. Synthetic copy is compatible with VM, Server and Public Cloud data assets.

  3. Only VM, Server and Public Cloud data assets can be retained by copy points; other types are retained by copy chain.

Transmission Strategy

The Transmission Strategy provides the transmission options for the copy job.

Encrypted Transfer: uses the RSA encryption algorithm by default to transfer the copy data securely.

Source Incremental: reduces the amount of data transmitted by comparing hash values on the copy source, lowering network bandwidth usage.
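Conceptually, source-side incremental works like the following sketch: the data is split into fixed-size blocks, and only blocks whose hashes differ from the destination's are sent. The block size and hashing scheme here are illustrative assumptions, not Vinchin's actual on-disk format:

```python
# Sketch of hash-based incremental transfer: hash fixed-size blocks on both
# ends and transmit only the blocks whose hashes differ.
import hashlib

def block_hashes(data: bytes, block_size: int = 4) -> list:
    """SHA-256 hash of each fixed-size block of the data."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(src: bytes, dst_hashes: list, block_size: int = 4) -> list:
    """Indices of source blocks whose hash differs from the destination's."""
    src_hashes = block_hashes(src, block_size)
    return [i for i, h in enumerate(src_hashes)
            if i >= len(dst_hashes) or h != dst_hashes[i]]

old = b"AAAABBBBCCCC"
new = b"AAAAXXXXCCCC"                           # only the middle block changed
print(changed_blocks(new, block_hashes(old)))   # -> [1]
```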

Compression Level: compressed transfer can be configured at different levels: Quick Compression, Standard Compression, Maximum Compression and Ultimate Compression. Different compression levels give different compression efficiency; the higher the level, the more system resources are needed.

For Transfer Threads, you can enable multithreaded transmission to improve the processing speed of the copy job. The default is 3 threads; you can set a value from 1 to 8, but 3 threads is usually sufficient.
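The effect of Transfer Threads can be pictured with a toy worker pool (a sketch only; the chunking and the send function are stand-ins for the real transport):

```python
# Toy illustration of multithreaded transfer: split the copy data into chunks
# and send them with a small worker pool (3 threads by default, as in the UI).
from concurrent.futures import ThreadPoolExecutor

def transfer(chunks, threads: int = 3) -> int:
    """Send all chunks using `threads` workers; return total bytes sent."""
    def send(chunk: bytes) -> int:      # stand-in for the real network send
        return len(chunk)
    with ThreadPoolExecutor(max_workers=threads) as pool:
        sent = list(pool.map(send, chunks))
    return sum(sent)

print(transfer([b"abc", b"defg", b"hi"], threads=3))   # -> 9
```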

Security Policy

WORM Protection can only be enabled when the selected storage device has the WORM protection feature enabled. Backup points with WORM Protection enabled cannot be modified or deleted until they expire, and their retention period can only be extended. The default protection period is 7 days; the supported range is 1 to 9999 days.
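The extend-only WORM semantics can be sketched as follows (a simplified model; the class and method names are hypothetical):

```python
# Sketch of WORM semantics: a protected point's retention can only be
# extended, never shortened, and the point cannot change until it expires.
from datetime import datetime, timedelta

class WormPoint:
    def __init__(self, created: datetime, protect_days: int = 7):
        # Default 7-day protection period, supported range 1-9999 days.
        if not 1 <= protect_days <= 9999:
            raise ValueError("protection period must be 1-9999 days")
        self.expires = created + timedelta(days=protect_days)

    def extend(self, new_expiry: datetime) -> None:
        """Retention may only move later, never earlier."""
        if new_expiry < self.expires:
            raise ValueError("WORM retention can only be extended")
        self.expires = new_expiry

p = WormPoint(datetime(2024, 1, 1))        # protected until 2024-01-08
p.extend(datetime(2024, 1, 30))            # OK: later than current expiry
print(p.expires)
```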

Advanced Strategy

Retry Attempts and Retry Interval keep the copy job running, within the configured number of attempts and interval, when the network connection to the copy storage is unstable.

Retry Attempts: the number of retry attempts for reconnecting when the network connection is lost. The default is 60, with a maximum of 999 and a minimum of 0, where 0 means unlimited retries.

Retry Interval: the interval between reconnection attempts when the network is disconnected during copy job execution, to avoid copy job failure. The default is 30 seconds, with a maximum of 60 seconds and a minimum of 5 seconds.
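Together, the two settings behave like this retry loop (an illustrative sketch; function names are hypothetical):

```python
# Sketch of the retry behavior: attempt the transfer, and on a network error
# wait `interval` seconds and retry up to `attempts` times (0 = retry forever).
import time

def run_with_retries(send, attempts: int = 60, interval: int = 30):
    tries = 0
    while True:
        try:
            return send()
        except ConnectionError:
            tries += 1
            if attempts != 0 and tries >= attempts:
                raise                   # retries exhausted: the copy job fails
            time.sleep(interval)

# Demo: a sender that fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("link down")
    return "ok"

print(run_with_retries(flaky, attempts=5, interval=0))   # -> ok
```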

Overload Protection: if resource limits are set on the backup node, jobs running on that node will be restricted. Backup jobs are subject to the resource limits by default. For jobs with a higher running priority, you can enable this setting to ignore the node resource limits.

In the Storage module, the Data File Shard Size can be set from 1 GB to 4 GB; this value is used to manage the creation and deletion of storage data files. During a merge, the system calculates the proportion of redundant data between the two restore points involved. The Merge Redundant Data Proportion can be set to 10%, 30%, 50%, 70% or 90%; the default is 50%.
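The merge threshold can be sketched as a simple predicate (illustrative only; how the redundant proportion is actually measured is internal to Vinchin):

```python
# Sketch: merge two adjacent restore points only when the proportion of
# redundant (overlapping) data between them reaches the configured threshold.

def should_merge(redundant_bytes: int, total_bytes: int,
                 threshold: float = 0.5) -> bool:
    """threshold maps to the Merge Redundant Data Proportion (default 50%)."""
    return total_bytes > 0 and redundant_bytes / total_bytes >= threshold

print(should_merge(600, 1000))                  # 60% redundant -> merge
print(should_merge(200, 1000, threshold=0.3))   # 20% < 30% -> keep separate
```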

In the Overload Protection module, you can enable the Ignore Node Resource Limits option to ignore the resource limits of nodes, especially for high-priority backup jobs.

Step 4: Review & Confirm

After completing the above settings, you can review and confirm them on one screen. A job name can be specified to identify the copy job; click the Submit button to confirm the settings and create the copy job.
