Delta Dumpa

The engineer's dilemma - Cheap, Fast, Reliable - pick any two...
Now, with Delta Dumpa, you can finally pick all three!



Delta Dumpa (DD) servers are typically white-box machines built from mass-produced components. Redundant arrays (RAID) of high-capacity SATA drives provide excellent price per terabyte without compromising throughput or reliability. Resources (space, CPU, memory, network) can be spread over multiple servers, and extensive use of open-source software keeps licensing costs down. DD handles all backups and restores, freeing your DBA staff for other important tasks such as addressing security and performance concerns. De-duplication and high compression ensure DR sites can be updated with minimal WAN data costs.


Reliability is attained through several mechanisms. During the backup process, failed dumps are retried intelligently, giving them an excellent chance of succeeding, with various reporting mechanisms highlighting any persistent errors. At the hardware level, RAID disks and the syncing of dumps between load-balanced servers ensure that individual failures neither disrupt the system nor lead to data or service loss. Getting production data off site (to a DR system) protects data integrity under catastrophic conditions, and 'load upon arrival' ensures that the remote SQL Servers are properly tested, that applicable space is available, and so on.
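DD's retry logic is not published, but the idea of intelligently retrying failed dumps before reporting persistent errors can be sketched as a retry loop with exponential backoff. Everything here (the `backup_with_retry` name, the `dump_fn` callable, the delays) is illustrative, not DD's actual API:

```python
import time

def backup_with_retry(dump_fn, max_attempts=3, base_delay=1.0):
    """Retry a failed dump with exponential backoff.

    dump_fn is a hypothetical callable performing one dump attempt;
    it raises on failure. Only a persistent failure (all attempts
    exhausted) is surfaced for reporting to the DBA.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return dump_fn()
        except Exception:
            if attempt == max_attempts:
                raise  # persistent error: report it
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage: a dump that fails twice (e.g. network hiccups), then succeeds.
attempts = {"n": 0}

def flaky_dump():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("network hiccup")
    return "dump-ok"

result = backup_with_retry(flaky_dump, base_delay=0.01)
```

Transient failures never reach the report; only errors that survive every retry are escalated.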


By compressing data at source, dump speeds of over 350 megabytes per second are achieved. Large disk arrays with deep caching sustain high throughput. Combinations of differential and transaction log dumps keep delivery speeds consistently high. When necessary, bonded networks are used to facilitate parallelism and even higher throughput.
On many databases, de-duplication reduces the dump size to less than 1% of the original, enabling extremely fast syncs to remote DR sites.
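DD's actual chunking scheme and codec are not public, but the effect described above can be illustrated with the simplest form of the technique: fixed-size block de-duplication plus compression. On repetitive data (such as database pages sharing content), only unique blocks need to be stored or shipped to the DR site:

```python
import hashlib
import zlib

def dedup_compress(data: bytes, block_size: int = 4096):
    """Fixed-size block de-duplication followed by compression.

    Illustrative sketch only: returns a store of unique compressed
    blocks plus an ordered 'recipe' of hashes to rebuild the data.
    """
    store = {}   # block hash -> compressed block (kept once)
    recipe = []  # ordered hashes to reconstruct the original
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return store, recipe

# A dump full of repeated pages shrinks to a tiny fraction of its size.
data = b"page-0001" * 500_000  # ~4.5 MB of repetitive data
store, recipe = dedup_compress(data)
stored_bytes = sum(len(b) for b in store.values())
ratio = stored_bytes / len(data)

# Round trip: the recipe rebuilds the original data exactly.
rebuilt = b"".join(zlib.decompress(store[h]) for h in recipe)
```

Only `stored_bytes` (well under 1% of the original here) would need to cross the WAN for this dump; the recipe is just a list of hashes.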

But we're not all engineers; some of us are business managers. Let's look at why a business manager would appreciate DD.
Data Safety
Many businesses would damage their reputations, or even fail, if they lost critical data. Typically data is protected by various means: good-quality hardware (SAN, RAID), clustered servers, secure server rooms, access control, Active Directory permissions, regular backups and, hopefully, DR exercises. But is that really enough? In our experience, it isn't. A typical backup scenario might be nightly backups, sometimes staged to local disk and then on to tape, with the tapes picked up by a courier service the next morning. A catastrophic failure inside your building (fire, flood, theft) could destroy all onsite data, including those tapes, forcing a fall-back to the previous backup, by then two days old. And have those tapes been tested? Can they definitely be read? All too often, tapes have a surprisingly poor recovery record. But data loss needn't be caused by catastrophic forces; human fingers are more often to blame. A stressed DBA using standard SQL commands to repair a data set can accidentally 'fix' too much, or drop production tables instead of test-server tables. These events are ugly, as the damage is replicated by any mirroring or replication mechanisms.

DD protects data using the following methods: backing up directly over the network, syncing backups to load-balanced servers, efficiently syncing data to a remote site, restoring that data at the DR site (for site-readiness and data-integrity checking), and using intra-day differential and transaction log backups to minimise actual loss and support point-in-time recovery. Failed dumps are retried before any persistent errors are reported to the DBA. For data restoration, a simple GUI tool facilitates a calm and safe restore.
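The combination of full, differential and transaction log backups follows standard SQL Server practice: restore the full backup, then the most recent differential at or before the target time, then every log taken after that differential. A minimal sketch of that sequencing (the function name and integer timestamps are illustrative, not DD's interface):

```python
def restore_sequence(full_time, diffs, logs, target_time):
    """Compute the restore order for a point-in-time restore.

    full_time: when the full backup was taken.
    diffs/logs: times of differential and transaction log backups.
    Timestamps are illustrative integers (e.g. hours since the full).
    """
    # Latest differential at or before the target, if any.
    base_diff = max((d for d in diffs if full_time < d <= target_time),
                    default=None)
    start = base_diff if base_diff is not None else full_time
    # Every log after the chosen base, up to the target time.
    tail = [t for t in logs if start < t <= target_time]

    plan = [("FULL", full_time)]
    if base_diff is not None:
        plan.append(("DIFF", base_diff))
    plan += [("LOG", t) for t in tail]
    return plan

# Nightly full at hour 0, diffs at 6 and 12, hourly logs; restore to hour 15.
plan = restore_sequence(0, diffs=[6, 12], logs=list(range(1, 16)),
                        target_time=15)
```

Because the differential at hour 12 supersedes everything before it, only three log restores are needed on top of it; logs 1 through 12 are skipped entirely.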
Efficient use of the DBA
Good DBAs are expensive, and cheap DBAs are even more expensive! Repetitively checking backups is time-consuming, boring and menial. Once configured properly, DD handles all backups efficiently, ensuring every database meets its recovery objectives. Only problem areas are highlighted, and resolving them quickly leaves the DBA more time to address performance and security issues. Similarly, manual and ad-hoc restores can be time hogs; sometimes they require pulling tapes from the off-site service, a process taking one or two days, which annoys and holds up the requesters. As DD usually holds two to three weeks' worth of data, it can begin a restore within seconds of the request. If the request is regular, it can be easily automated.
Data Security
Accidental exposure of personal data, such as salaries or purchasing patterns, is clearly damaging. Many precautions are taken to protect data: user permissions, secure facilities, access control. But backups are frequently left unencrypted, with obvious risks. DD encrypts all dumps leaving the building using commercial-strength methods and genuinely random keys. If vendors require data copies for testing or problem solving, DD provides a simple mechanism for supplying the vendor with the appropriate keys and decryption methods.
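DD's key management is not documented publicly, but the "genuinely random keys" point is worth illustrating: keys must come from a cryptographically secure source, not a time-seeded generator. A stdlib sketch of per-dump key generation, with a fingerprint that can be safely quoted to a vendor to confirm which key unlocks which dump (the function names here are hypothetical, and the actual cipher step is out of scope):

```python
import hashlib
import secrets

def new_dump_key(bits: int = 256) -> bytes:
    """Generate a genuinely random per-dump key.

    secrets draws from the OS CSPRNG; unlike random.random(), its
    output is not predictable from a seed or a timestamp.
    """
    return secrets.token_bytes(bits // 8)

def key_fingerprint(key: bytes) -> str:
    """Short SHA-256 fingerprint of a key.

    Safe to log or send to a vendor to identify a key; the key
    itself travels only over the separate, secure channel.
    """
    return hashlib.sha256(key).hexdigest()[:16]

key = new_dump_key()
fp = key_fingerprint(key)
```

Generating a fresh key per dump limits the blast radius of any single key leak to one dump, which is what makes handing a vendor one specific key tolerable.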
Business Continuity
Lies, damned lies, statistics and benchmarks. To this list should be added 'DR exercises'. In our experience, few companies have truly successful DR exercises. There is one good way to be confident that backups are usable and DR-capable: actually load each backup into the corresponding database at the DR site as soon as the dump arrives there. This ensures: a) successful backups, b) a clear view of the differences between production and DR, c) properly tested DR servers and, lastly, d) DR-site disk-space management. A clean switch-over to DR can then be as simple as stopping the loads and bringing the databases online; everything should already be loaded by the time an emergency is declared.
DD can be spread over any number of primary and remote servers; it is truly scalable, allowing optimum use of CPU and disk space as well as additional redundancy. This scalability also allows DD servers to be placed physically close to their SQL Servers. DD servers range from small PCs to 24-core machines with massive attached disk arrays.
Contact Us Directly