Large files to cloud feature request

keithl
Enthusiastic
Posts: 10
Joined: Sun Jan 17, 2016 9:08 pm

Large files to cloud feature request

Post by keithl »

I have a large library of large (15-30GB) files that I want to keep in Amazon. Most have made it up, but it has been painful. Even with the most reliable option set, which keeps a database of hashes, it takes days just to do the initial scan and then over two months to work through and upload a handful of files. I would like to see an option for a fast/blind upload that does no compares at all: it simply uploads any file it does not find with the exact same name. That puts the burden on me to delete the cloud copy whenever I change a file locally, so the new copy gets uploaded, but it should speed up the initial scan and let the profile focus only on the files it does not find. It is hard to get a 2-3 month window of uninterrupted connectivity, even with my 150/20 Comcast connection, as Amazon's interface chokes on occasion.
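
Roughly, the behaviour I'm after would be something like this (just a sketch of the logic with made-up listing/upload helpers, not anything SyncBackPro or Amazon actually provides):

[code]
import os

def blind_upload(local_dir, remote_names, upload):
    """Upload only files whose names are not already present in the cloud folder.

    remote_names is a set of file names already in the cloud folder and
    upload is whatever function actually sends a file -- both are
    placeholders for this sketch.
    """
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        if os.path.isfile(path) and name not in remote_names:
            # No hash, size or date comparison: a matching name means "skip".
            upload(path)
[/code]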

I have tried using Amazon's own uploader, but it is far worse, timing out all the time; at least SyncBackPro was more successful. I'm looking for alternatives, or for a blind-copy option to be added to SyncBackPro.

Thanks!
Swapna
2BrightSparks Staff
Posts: 992
Joined: Mon Apr 13, 2015 6:22 am

Re: Large files to cloud feature request

Post by Swapna »

Hi,

On each run, SyncBackPro has to scan both the Source and Destination locations to detect the file differences (using last-modification date/time stamps, sizes or, if enabled, hash values) in order to copy only new/changed files from Source to Destination.
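
As a rough illustration of that per-file comparison (this is not SyncBackPro's actual code, just the general approach; the dst_meta fields are placeholders):

[code]
import hashlib
import os

def needs_copy(src_path, dst_meta, compare_hashes=False):
    """Decide whether a Source file should be copied to the Destination.

    dst_meta is assumed to hold the Destination file's size, modification
    time and (optionally) hash, or None if the file is missing there.
    """
    if dst_meta is None:
        return True                                   # new file: copy it
    st = os.stat(src_path)
    if st.st_size != dst_meta["size"] or int(st.st_mtime) != dst_meta["mtime"]:
        return True                                   # size or date/time differs
    if compare_hashes:
        h = hashlib.md5()
        with open(src_path, "rb") as f:               # hashing a 15-30 GB file is slow
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest() != dst_meta["hash"]
    return False
[/code]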

We do not support blind copying, because of the way the program is implemented and works. In other words, there is no way to run a profile without scanning the Source and Destination.

You can read this forum to understand why a blind copy method is not implemented in SyncBack:

http://www.2brightsparks.com/bb/viewtop ... 65&p=39466

Also, you haven't stated whether you are using Amazon S3 or Amazon Drive.

If you are using Amazon Drive:

You can try disabling the option "Retrieve a list of all the files and folders then filter" under:

Modify > Expert > Cloud > Advanced settings page

and retry the profile run to see if it helps reduce your profile's scan time. Please read the help file for more details about this option: with the Cloud > Advanced settings page open, press F1 to open the contextual help.
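
Conceptually, the difference between the two scan strategies is something like this (purely an illustration; the cloud helper calls are placeholders, not a real API):

[code]
def scan_list_all(cloud, wanted_dirs):
    # Option enabled: retrieve one listing of everything on the drive,
    # then filter it down to the folders the profile actually uses.
    everything = cloud.list_all_files()          # can be very large
    return [f for f in everything if f.folder in wanted_dirs]

def scan_per_folder(cloud, wanted_dirs):
    # Option disabled: ask the cloud service for each profile folder
    # individually, so unrelated parts of the drive are never listed.
    files = []
    for d in wanted_dirs:
        files.extend(cloud.list_folder(d))
    return files
[/code]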

For Amazon S3:

You can try enabling Fast Backup mode in your profile configuration which is available in SyncBackPro under:

Modify > Expert > Fast Backup > enable the option "Perform a Fast Backup"

But please note that Fast Backup can only be enabled for Backup/Mirror profile types. An Intelligent Synchronization profile cannot use Fast Backup.

This mode builds a database of your Source (during the first run with the Fast Backup option enabled) as it is at that time, which should match what was backed up to the Destination. On the next run it can then compare that database (the Source as of the last run) against the Source as it is now, work out the differences, and back up only the changed/new files. Fast Backup can therefore significantly reduce the scan time and the overall backup time of a profile.

Note that the first run of the profile with Fast Backup enabled (or a Rescan run) will take the same amount of time as a run without Fast Backup, because the Source still has to be scanned against the Destination to identify the file differences on both sides. For the second and subsequent (non-rescan) runs, however, the scan time will be much faster, as SyncBackPro does not need to scan the Destination at all: it remembers what it did the last time the profile was run.
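
As a rough sketch of the idea (illustrative only, not SyncBackPro's actual implementation; the upload parameter is a placeholder for whatever copies the file):

[code]
import json
import os

def fast_backup(source_dir, db_path, upload):
    """Copy only files that changed since the last run, using a saved
    snapshot of the Source instead of re-scanning the Destination."""
    # Load the snapshot written by the previous run (empty on the first run).
    last = {}
    if os.path.exists(db_path):
        with open(db_path) as f:
            last = json.load(f)

    now = {}
    for root, _, names in os.walk(source_dir):
        for name in names:
            path = os.path.join(root, name)
            st = os.stat(path)
            now[path] = {"size": st.st_size, "mtime": int(st.st_mtime)}
            # Copy only files that are new or differ from the last snapshot.
            if last.get(path) != now[path]:
                upload(path)

    # Save the new snapshot so the next run can again skip the Destination scan.
    with open(db_path, "w") as f:
        json.dump(now, f)
[/code]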

The SyncBackPro help file explains the various settings available in Fast Backup mode in detail, with examples and important points to consider when enabling it, so I would suggest reading it. With the Fast Backup settings page open, press F1 to open the contextual help.

You can read these KB articles for additional details about Fast Backup:

http://support.2brightsparks.com/knowle ... st-backups

http://support.2brightsparks.com/knowle ... file-types

Thank you
keithl
Enthusiastic
Posts: 10
Joined: Sun Jan 17, 2016 9:08 pm

Re: Large files to cloud feature request

Post by keithl »

Thanks. I understand what you are saying, but making an option available to skip the hash compares, at the user's risk, would be very helpful. I am using Amazon Drive and already have that option set. It still takes days to scan a few directories that have hundreds of files in them totaling almost 10TB, and then it takes months for the program to work through every file on Amazon to check it. I do think there is a use case where the user accepts the risk of a blind copy: that would allow it to scan quickly and just upload what it does not find, on the assumption that if a file with the same name already exists I do not want to upload it again.

It has taken me 8 months to get the initial pass of most of the files up to Amazon with 20Mb uploads. The problem is that every time I have to restart, it turns into a 60+ day process to work its way through, even though only 14 files (~500GB) are missing on Amazon at this point. I know there are other folks struggling to get big files up to Amazon Drive, so anything that helps the process, even if it requires the user to manage the data, would be helpful. For now I think I have to move the 14 files to a temp directory and just run a job to grab those 14, then move them to the proper location in Amazon Drive later on. Or I will try building a single job and just selecting those 14 files.