I recently signed up with Rackspace to host some database servers. I've got two MySQL servers set up, and have a method to create backups (using the Percona XtraBackup and innobackupex tools).

I've been trying to use duplicity to copy these backups to S3 and Cloud Files storage, and it is taking forever! I would expect the S3 backup to not be very fast, but the Cloud Files backup has taken 15 hours to back up 9 GB. That's horrendously slow, and unacceptable for me.

I've looked through the duplicity source code, and by default it does not utilize the Rackspace ServiceNet to transfer to Cloud Files. I then looked at the source of the cloudfiles library that duplicity uses for its CF backend, and saw that there is an environment variable for enabling ServiceNet (RACKSPACE_SERVICENET). As long as that is set to something, the cloudfiles library should connect to Cloud Files via the Rackspace ServiceNet, which SHOULD make for fast transfers. I'm not sure whether the speed limitation is some limitation of Cloud Files itself, or whether the cloudfiles Python library isn't actually connecting via the Rackspace ServiceNet.

Do any of y'all have any other suggestions for how I should/could go about getting these backups off the server and onto a third-party or remote backup service?

Answer:

We use Rackspace Server Backup (a.k.a. JungleDisk Server Backup), which, like duplicity, does local dedupe and compression and then uploads "chunks" via HTTP to a cloud provider. We saw some performance issues, and the underlying reason was our provisioning points for Cloud Files vs. Cloud Servers. Our cloud servers were being created in the DFW datacenter, but all Cloud Files buckets for JungleDisk are in the ORD datacenter. Rackspace does not currently give people a choice of which datacenter they are going to use, because the DFW facility is near capacity, so everything for "newer" accounts is being provisioned in ORD. You have to open a support ticket to get your provisioning point changed. Also, you can't use ServiceNet between Rackspace datacenters (yet).

That said, we do see 40+ Mbps during backups even crossing Rackspace datacenters using Rackspace Cloud Backup, so I suspect you have some form of configuration issue with duplicity, or you are disk- or CPU-bound during the backup. Have you tried running the backup to the same target from outside Cloud Files? How does a simple HTTP PUT of a large file perform?
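For reference, enabling the ServiceNet path described above can be sketched as a shell snippet. The container name, username, and API key here are hypothetical placeholders; the `CLOUDFILES_*` variable names and the `cf+http://` URL scheme are, as I recall, what duplicity's Cloud Files backend documents, but check your duplicity manpage before relying on them:

```shell
# Credentials for duplicity's python-cloudfiles backend
# (hypothetical values -- substitute your own account details)
export CLOUDFILES_USERNAME='myrackspaceuser'
export CLOUDFILES_APIKEY='0123456789abcdef'

# python-cloudfiles switches to the internal ServiceNet endpoint
# when this variable is set to a non-empty value
export RACKSPACE_SERVICENET=True

# 'mysql-backups' is a hypothetical Cloud Files container name
duplicity /var/backups/mysql cf+http://mysql-backups
```

Note that ServiceNet only works when the server and the Cloud Files container are in the same datacenter, which ties into the provisioning-point issue discussed below.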
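To isolate whether the bottleneck is duplicity or the network path, a raw HTTP PUT of a large file can be timed with curl, taking duplicity out of the loop entirely. This is a sketch against Rackspace's legacy v1.0 auth endpoint; the username, API key, and container name are placeholders, and the token and storage URL must be copied from the auth response by hand:

```shell
# Fetch an auth token and storage URL (credentials are hypothetical)
curl -s -i https://auth.api.rackspacecloud.com/v1.0 \
     -H 'X-Auth-User: myrackspaceuser' \
     -H 'X-Auth-Key: 0123456789abcdef' | grep -i 'x-auth-token\|x-storage-url'

# Create a 1 GB test file and time an uncompressed PUT of it
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
time curl -s -T /tmp/testfile \
     -H 'X-Auth-Token: <token from the response above>' \
     '<storage-url from the response above>/mysql-backups/testfile'
```

If the raw PUT is fast but duplicity is slow, the problem is on the duplicity/cloudfiles side (or the machine is disk- or CPU-bound during dedupe and compression); if the raw PUT is equally slow, the transfer is not going over ServiceNet.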