SQL Backtrack is a robust application from the company Datatools (since bought by BMC) which supersedes the capabilities of Sybase's backupserver. SQL Backtrack can perform many backup operations not available in the standard Sybase backup system, including incremental dumps, object-level dumps, automatic dump file encryption and compression, cross-platform and cross-Sybase-version database dumps, reloading to a smaller sized database, etc.
NOTE: this review was written for SQL Backtrack v3.11 over Solaris 2.5 and Sybase 11.0.x. It does not contain any information on future releases, the inline DBCC feature (Verify, or DBV), or any changes in binary names in later releases. I have not worked with the product since late 1997.
SQL Backtrack's home directory is defined as $DTBASE. Several other environment variables are required for its use; see the $SYBASE/.cshrc file or $SYBASE/.profile for syntax. In each $DTBASE directory, we keep a link named "0" pointing at the newest version of the binary directory (since it's named something long like sbacktrack-3.1.1). This link allows for quick testing of and recovery from old installs, and keeps scripting tidy.
In typical shops, transaction logs are dumped hourly and database dumps are done nightly through the SQL Backtrack product. These jobs are set up by generating scripts via the main SQL Backtrack program (sbacktrack), saving the scripts, and calling them from cron. Each script performs the requested action on a control directory, which contains a control file for each database that will be affected by the script. All databases are dumped compressed and, if you desire encryption, encrypted with the sa password.
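As a sketch, the cron entries driving that schedule might look like the following. The script names, paths, and times here are illustrative assumptions, not defaults created by the product:

```
# crontab fragment for the sybase user (illustrative names/paths only)
# hourly transaction log dumps, on the hour
0 * * * * /export/datatools/scripts/prod11_trans.sh > /tmp/trans_dump.log 2>&1
# full physical database dumps nightly at 2am
0 2 * * * /export/datatools/scripts/prod11_full.sh > /tmp/full_dump.log 2>&1
```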
Each SQL backtrack installation has several control directories; I try to keep the naming conventions similar across servers. This is a typical list of the control directories and what purpose they serve:
All SQL Backtrack operations are managed through shell scripts generated
through the sbacktrack program. This is an overview of some typical scripts
generated. All scripts are located in the $dtscripts directory,
or $DTBASE/scripts (note: this is NOT a standard directory created upon
install; it's a directory I create and use to organize the code).
A DBA gets many requests to perform ad-hoc database and table recoveries, one-time backups of objects, and restores after database or data corruption. I typically save all the various ad-hoc backup and restore commands in a script, $DTBASE/scripts/one_time.sh, on each machine where Datatools is installed.
I've also tried to document what each command does and the logic behind it; for the most part, the coding examples, including the actual commands, are documented as FAQs in the SQL Backtrack Overview section of this WWW documentation system. However, here's an example of a full script with all required variables:
#!/bin/sh
# SQL-BackTrack for Sybase - Copyright (c) 1995
#%DT%SQL-BackTrack
DTBASE=/export/datatools; export DTBASE
DT_SBACKTRACK_HOME=/export/datatools/0; export DT_SBACKTRACK_HOME
SYBASE=/u/sybase11; export SYBASE
DTTEMPDIR=/export/datatools/tmp; export DTTEMPDIR
#########
$DTBASE/0/bin/dtsload -physical -server prod11 -database prrc_old \
    -from /news/prod11/hold/prrc.sqlbacktrack.6597
This command loads a new database (prrc_old) with the contents of a dump from another database, stored in a saved SQL Backtrack file (prrc.sqlbacktrack.6597). The four environment variables and the tag lines at the top are required for correct operation of SQL Backtrack. See $DTBASE/scripts/one_time.sh or the FAQ for more examples.
Q: "How do I run sbacktrack?"
A: Switch user to sybase, then just type sbacktrack at the prompt.
% su - sybase
Password: [type in sybase password]
sybase% sbacktrack

sbacktrack cannot be run if your current working directory is within the $DTBASE directory structure. Thus if you're already user sybase, do this to guarantee you're in a safe directory location:
sybase% cd
sybase% sbacktrack
Q: "I just created a new database. How do I add it to the normal database
dump and transaction log dump process? "
- start sbacktrack
- select option "1:Create a control directory..."
- select the control directory. To add a database to the full nightly database dump process, select the "full_phys" control directory. To add the database to the hourly transaction log dumping process select the "trans_dump" control directory.
- select option "1: Create new control files for backups"
- select your server, log into server
- select option "1: Create new control files for physical backups." All databases are physically dumped each night. Select the database(s) you want added
- Populate backup options: "y" for user sa for backup, "y" for sa for recovery, "y" to compress data, "y" to encrypt the data (use the sa password for encryption)
- select "1" for number of backup generations if you're doing a database backup. If adding a database to the trans_dump pool, select "unlimited"
- select just the "1: physical_disk" backup pool for full dumps, or the "1. trans_dump_pool" for transaction log dumps.
Q: "I just created a new server on a machine w/ SQL Backtrack. What do I
need to do to get SQL Backtrack to do nightly database backups?"
A: After creating the new server and updating the $SYBASE/interfaces file, follow these steps:
- start sbacktrack
- option 1. Create a control directory...
- Pick the full_phys existing Control Directory
- select 1. Create new control files...
- Pick your new server, log into it and then select databases. From this point, the process is the same as above for including new databases...the system will automatically create a new server control directory underneath the $DTBASE/full_phys directory.
- After completing the above steps, you'll need to create the backup script that runs out of cron. You can create this script one of two ways:
x copy an existing full dump script to [newserver]_full.sh and just change the server name in the script
x have sbacktrack generate it for you. To do this, follow all the steps for manually backing up a database (in the next question) via sbacktrack, and then at the end instead of actually executing it select the "Generate backup script" option and save it to a file.
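If you copy an existing script instead, a hedged sketch of what a [newserver]_full.sh wrapper might look like is below. It follows the one_time.sh conventions shown earlier; the server name and control directory path are assumptions, and the dtsbackup command is echoed for safety (drop the echo to run it for real):

```shell
#!/bin/sh
# Nightly full physical dump wrapper -- illustrative sketch only.
# The same four environment variables required by one_time.sh:
DTBASE=/export/datatools; export DTBASE
DT_SBACKTRACK_HOME=$DTBASE/0; export DT_SBACKTRACK_HOME
SYBASE=/u/sybase11; export SYBASE
DTTEMPDIR=$DTBASE/tmp; export DTTEMPDIR

SERVER=newserver                    # <-- change this per server
CTLDIR=$DTBASE/full_phys/$SERVER    # control directory for full physical dumps

# Echoed for safe testing; remove the echo to actually dump:
echo $DT_SBACKTRACK_HOME/bin/dtsbackup $CTLDIR
```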
Q: "One of my database dumps is getting too large; how do I stripe it
across multiple devices?"
A: This is a complicated effort. Here are the steps you need to take:
1. Create a new backup pool to stripe on. You can't stripe on the same primary backup pool slice.
- start sbacktrack
- select option "1: Create a control directory..."
- select your Control Directory
- select option "2: Define/Modify backup pool information"
- select option "3: Define a new backup pool"
- ...and follow the steps. We suggest making the stripe name specific to the database in question. Make the pool type "disk," and make the "directory where the backed up data resides" the same directory as the primary pool location
2. You'll need to recreate the control file for the database in question.
- Delete the existing control file from unix.
- From sbacktrack, select option "1. Create a control directory..."
- select your control directory. (same one as before)
- select option "1: Create new control files for backups"
- Select your server, log into server as user sa
- select option "1: Create new control files for physical backups"
- pick your database from the list, hit "a" when done
- Populate backup options: "y" for user sa for backup, "y" for sa for recovery, "y" to compress data, "y" to encrypt the data (use the sa password for encryption), select "1" for number of backup generations
- you now stand at the backup pool menu. Select your primary pool (usually special_disk), then when it prompts you if you want to stripe, answer yes and then select the striping backup pool you just created.
Theoretically, you can skip step 1 and create the striped backup pool during the last part of step 2.
Q: How do I recover a database that crashed?
A: To recover a database through the SQL Backtrack conventional method (i.e., starting sbacktrack and then recovering with the automated aid of the program) you'll do the following:
- start sbacktrack,
- select "4. Recover"
- select your control directory (depending on which type of dump you're restoring), select your option from the recover menu (typically "4. Recover database or set of databases")
- select your database, accept, and start recovery.
Note: all dumps are encrypted with the sa password for the server in question.
Q: How do I manually issue an incremental dump on a database?
A: Because we don't have any control directories specifically designed to issue incremental dumps, we have to issue the command manually:
$DTBASE/0/bin/dtsbackup -level 9 [full_phys control directory]
Q: I want to perform an object level dump of some tables out of my database. How?
A: You have to perform a logical dump for object-level dumps. After ensuring there exists a control file for your database in the full_logical control directory, issue this command:
$DTBASE/0/bin/dtsbackup [full_logical control directory] -options '-object hss_itest'
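If you need several tables, you can loop over them. This sketch echoes each dtsbackup invocation rather than running it; the table names and the control directory path are placeholders:

```shell
#!/bin/sh
# Object-level logical dump of several tables -- illustrative sketch only.
DTBASE=/export/datatools; export DTBASE
CTLDIR=$DTBASE/full_logical/prod11   # control dir holding the database's control file

for TABLE in hss_itest hss_jtest; do
    # echoed for safety; drop the echo to actually dump
    echo $DTBASE/0/bin/dtsbackup $CTLDIR -options "-object $TABLE"
done
```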
Q: I need to recover just one table from last night's dump. How?
A: issue a command like this:
$DTBASE/0/bin/dtsrecover [control file for database, NOT the dump file] -object [object to recover].
Q: I want to recover a table from a dumpfile I've been saving for a while,
but sbacktrack only sees last night's dump file. Help!?"
A: Another method of recovery is to specify the dump file you want, instead of letting SQL Backtrack automatically pick last night's dump. Issue this command:
$DTBASE/0/bin/dtsload -from [your dump file] -objects [your tables]
Q: I want to port some databases from one server to another. However,
they're on different operating systems.
A: A complex operation. Here are the steps you need to take:
- Ensure SQL backtrack is loaded on both servers.
- Ensure that the server you're porting FROM has the license server dtslicense running.
- Ensure the target server is in the source server's interfaces file.
- Create logical control directories for all the databases in the target server.
- Dump all the databases logically through the source server's SQL backtrack installation.
- Once you have the logical dump files local, load them into your server by issuing this command:
$DTBASE/0/bin/dtsrecover [logical control dir] -copyover -server [source svr]
If your servers run the SAME operating system, you can port the databases more simply:
- backup databases on source server
- ftp backup files to target server
- issue this command on target server:
$DTBASE/0/bin/dtsload -physical -server [servername] -database [dbname] -copyover -from [ftp'd dumpfile name]
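The same-OS steps can be sketched as one script. The server names, database name, dump file path, and target hostname below are placeholders, the copy step is shown with scp rather than an interactive ftp session, and every command is echoed for safety:

```shell
#!/bin/sh
# Port a database between same-OS servers -- illustrative sketch only.
DTBASE=/export/datatools; export DTBASE
SRC_CTLDIR=$DTBASE/full_phys/prod11          # source server's control directory
DUMPFILE=/news/prod11/hold/mydb.sqlbacktrack # dump file to transfer
TARGET=dev11                                 # target server name
DBNAME=mydb

# 1. back up the database on the source server
echo $DTBASE/0/bin/dtsbackup $SRC_CTLDIR
# 2. copy the dump file to the target machine (scp shown; the text uses ftp)
echo scp $DUMPFILE sybase@targethost:$DUMPFILE
# 3. load it on the target server
echo $DTBASE/0/bin/dtsload -physical -server $TARGET -database $DBNAME \
    -copyover -from $DUMPFILE
```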
Q: I want to load database A's dump file into Database B. How?
A: issue this command:
$DTBASE/0/bin/dtsload -physical -server [server] -database [database B] -from [database A's dumpfile]
Q: I need to dump a file to a location different from existing dump
directories available to backup pools. How?
A: You need to make a temporary backup pool location, then backup to that pool. Follow these steps:
- start sbacktrack
- select option 1: Create a control directory...
- select the option to "Specify other control directory"
- When prompted for a control directory name, type $DTBASE/[temporary name]
- Hit option 1. Create new control files... (option 2 will be filled in after you select your control files, i.e. databases)
- Select your server, log into server
- Select whether you want physical or logical backups
- Select your databases from the list, hit "a" when you're done
- Set up the database backup options you want; see the first question above
- when done, hit option 1. Backup pool info. Give the backup pool a name, tell it it's a "disk" type, and then set its location to your desired temporary location.
- exit sbacktrack
- Update sybase's .cshrc file, adding your new control directory to the variable $DT_SBACKTRACK_PATH. To make the change take effect, either log out and back in, or type "source .cshrc".
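In sybase's .cshrc, the change is one setenv line; the control directory name here is a placeholder:

```
# in $SYBASE/.cshrc -- append the temporary control directory to the search path
setenv DT_SBACKTRACK_PATH ${DT_SBACKTRACK_PATH}:/export/datatools/my_temp_pool
```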
- NOW, log back into sbacktrack, and follow the directions given above in the second question for manually dumping a database, ensuring you select your new backup location
Q: I need to change an aspect of a database control file (e.g., I changed
the sa password of the server, I want to change the encryption password,
or I want to change the number of generations we keep for a particular db). How?
A: You'll need to delete the control file from the $DTBASE directory structure and then recreate it through sbacktrack. It is theoretically possible to manually edit the control files (as we did to give the master database 2 generations of dumps instead of the conventional 1), but it's faster and safer to just recreate the control files.
- cd $DTBASE on your server, then cd to the top-level directory that contains the control file (either full_phys, full_logical, or trans_dump).
- cd [server] in question
- rm [dbname] where the dbname is the filename of the database control file you wish to manipulate
- cd $DTBASE (you have to get out of that directory before running sbacktrack)
- Follow the directions in the first FAQ for "Adding a new database to the normal database dump process"
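The removal steps above can be sketched as follows. The server, database, and control directory names are placeholders, and the rm is echoed for safety (drop the echo to delete for real):

```shell
#!/bin/sh
# Remove a database control file so it can be recreated -- illustrative sketch.
DTBASE=/export/datatools
SERVER=prod11        # server whose control file you're replacing
DBNAME=mydb          # filename of the database control file
CTLTYPE=full_phys    # or full_logical, trans_dump

CTLFILE=$DTBASE/$CTLTYPE/$SERVER/$DBNAME
echo rm $CTLFILE     # echoed for safety; drop the echo to delete for real
# remember to cd back to $DTBASE before starting sbacktrack
```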