Forum OpenACS Q&A: Oracle hotbackup scripts?

Posted by Andrew Piskorski on
What do folks here running Oracle use to do hot and cold physical backups?

I've written a nice little script to do nightly Oracle exports using named pipes, gzip, and split (as suggested in Expert One-on-One Oracle by Thomas Kyte), which I can upload here if anyone's interested, but I don't have a similar solution set up yet for doing periodic hot and cold physical backups of Oracle (using archivelog mode for hot backups, etc.)
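(For anyone curious about the mechanics before the full script: here's a stripped-down sketch of the named pipe + gzip + split plumbing, with a plain file copy standing in for Oracle's exp. In the real script, exp writes its dump to file=$pipe instead.)

```shell
#!/bin/sh
# Stripped-down sketch of the named pipe + gzip + split technique.
# A plain file copy stands in for Oracle's exp here.

workdir=`mktemp -d` || exit 1
cd $workdir

seq 1 2000 > original.dat        # stand-in for the data exp would write

pipe="./export-pipe"
mkfifo $pipe || exit 1

# Reader side, in the background: compress the stream coming out of
# the pipe and split it into fixed-size chunks (1 kb here, hundreds
# of MB in the real script):
gzip < $pipe | split -b 1k - dump.dmp.gz. &

# Writer side (exp's role): write the data into the pipe:
cat original.dat > $pipe
wait

# "Import" side: recombine the chunks in order and uncompress:
cat `ls dump.dmp.gz.* | sort` | gunzip > recovered.dat

cmp -s original.dat recovered.dat && echo "round trip ok"
```

The nice property is that the full uncompressed dump never hits the disk; only the compressed, pre-split chunks do.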

Posted by Bruno Mattarollo on

There is a good example in a book from Oracle Press called Oracle 8i Backup and Recovery. The scripts from the book need some syntax checking, but they are quite useful.

I don't think I'm entitled to post the scripts anywhere, since I don't know about all the copyright issues involved.

Posted by Jade Rubick on
Andrew, if you wouldn't mind posting your scripts here, that would be great. My backup script looks like this at the moment:
telinit 3
restart-aolserver intranet
/etc/rc.d/init.d/oracle8i stop
nice tar cfz /tmp/ora8.tar.gz /ora8
/etc/rc.d/init.d/oracle8i start
telinit 4
AOLserver runs at runlevel 4, so the telinit 3 turns AOLserver off. I have a separate script that puts the backed-up file somewhere safe.
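(One small suggestion on a script like that: if tar ever fails partway, you overwrite your only backup with a broken one. A sketch of a guard, using throwaway paths standing in for /ora8 and /tmp/ora8.tar.gz:)

```shell
#!/bin/sh
# Sketch: only replace the previous cold backup if tar succeeds.
# $src and $dest are demo stand-ins for /ora8 and /tmp/ora8.tar.gz.

src=`mktemp -d`                  # stand-in for /ora8
echo "demo datafile" > $src/system01.dbf
dest=`mktemp -d`/ora8.tar.gz     # stand-in for /tmp/ora8.tar.gz

# (In the real script: telinit 3, restart-aolserver, oracle8i stop here.)

nice tar cfz ${dest}.new -C `dirname $src` `basename $src`
tar_status=$?

# (In the real script: oracle8i start, telinit 4 here, so the database
# comes back up regardless of whether tar worked.)

if [ $tar_status -eq 0 ]
then
  mv -f ${dest}.new $dest        # replace the old backup only on success
else
  echo "tar failed ($tar_status); keeping previous $dest" 1>&2
  rm -f ${dest}.new
fi
```

That way the worst case after a failed run is a stale backup rather than none.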
Posted by Andrew Piskorski on
Ok, here's my export script. Note that it's a little more complicated than necessary in order to be useful from the command line as well as from crontab. I just run it from crontab every night with the default values, but you'll probably want to either change the defaults, or add some command-line switches in your crontab to override the defaults.

Just run "ora-export.sh -h" to get help on the command line options.

#!/bin/sh
#
# ora-export.sh
# $Header: /home/cvsroot/unix/usr/local/adm/bin/ora-export.sh,v 1.3 2002/06/17 17:24:02 andy Exp $
#
# by atp@piskorski.com, 2002/05/29
#
# Script to do an Oracle export using a named pipe, gzip, and split,
# as described in _Expert One-on-One Oracle_ by Thomas Kyte,
# pg. 327-328, "Export to an OS Pipe".  This lets us avoid exp's
# possible 2 gig file size limit, use less work space on disk by not
# first writing out an un-compressed export file, etc.


# We assume that /etc/profile sets and exports ORACLE_SID,
# ORACLE_HOME, etc.:

. /etc/profile


# Various default settings, most of which may be overridden on the
# command line:

# This PASSWORD_FILE must be a Bourne shell script which sets these
# variables like this:
#   ORA_USERNAME="username"
#   ORA_PASSWORD="password"

# TODO: Putting the password in a file like this keeps us from having
# to hard-code it into multiple script files, but the password STILL
# DOES show up in ps -ef output - not good.

PASSWORD_FILE="/usr/local/adm/bin/ora-system-password.sh"
PUT_IN_DIR="/web/oracle-backups"
MAX_SIZE=600
FULL_P=1
LOG_DATE_FORMAT='+%a %Y-%m-%d %H:%M:%S %Z'

# TODO: For use on multiple machines, may want to move the above
# default settings into a separate configuration file, and source that
# file here.  Perhaps just use the same $PASSWORD_FILE.


CMDNAME=`basename $0`

USAGE="Do nightly Oracle export, using exp, gzip, and split.

Usage:  ${CMDNAME} [-p FILE] [-d DIRECTORY] [-m SIZE] [-f {0|1}]
        [-o USER] [-r LOG_FILE]

Where:

  -p  : Username/password file to source to set the ORA_USERNAME
        and ORA_PASSWORD variables.
        Default:  $PASSWORD_FILE

  -d  : Directory where we will put the Oracle export file.
        Default:  $PUT_IN_DIR

  -m  : Max size of each dump file chunk, in megabytes.
        Default:  $MAX_SIZE

  -f  : Full instance export, 1 for yes, 0 for no.
        Default:  $FULL_P

  -o  : Oracle schema owner user name to export.  Only needed if -f is no.

  -r  : Redirect stdout and stderr to file.  Set to empty string to
        turn off default redirection to the log file.
"

password_file=$PASSWORD_FILE
put_in_dir=$PUT_IN_DIR
max_size=$MAX_SIZE
full_p=$FULL_P
owner=""
redirect_p=0

direct_p=1
compress_p=1
consistent_p=1

while getopts hp:d:m:f:o:r: OPT
do
  case $OPT in
    h)  
        echo "$USAGE"
        exit
        ;;
    p)  password_file=$OPTARG
        ;;
    d)  put_in_dir=$OPTARG
        ;;
    m)  max_size=$OPTARG
        ;;
    f)  full_p=$OPTARG
        ;;
    o)  owner=$OPTARG
        ;;
    r)  STDOUT=$OPTARG
        STDERR=$OPTARG
        redirect_p=1
        ;;
    ?)
        echo 1>&2
        echo "ERROR:  One or more bad command line options." 1>&2
        echo 1>&2
        echo "$USAGE" 1>&2
        exit 1
        ;;
  esac
done
shift `expr $OPTIND - 1`

ORA_USERNAME=""
ORA_PASSWORD=""
. $password_file

weekday=`date "+%a"`

# TODO: We only have one Oracle instance on this machine.  If we add
# another, we'll need to change things to optionally take the
# ORACLE_SID as a command line option, or something like that:
# --atp@piskorski.com, 2002/05/29 11:03 EDT

if [ "$full_p" -eq 1 ]
then
  full_opt="full=y"
  owner_opt=""
  dump_file="${ORACLE_SID}-${ORA_USERNAME}-full-${weekday}.dmp.gz"
  log_file="${ORACLE_SID}-${ORA_USERNAME}-full-${weekday}.log"
else
  full_opt="full=n"
  if [ "$owner" = "" ]
  then
    owner="$ORA_USERNAME"
  fi
  owner_opt="owner=$owner"
  dump_file="${ORACLE_SID}-${owner}-${weekday}.dmp.gz"
  log_file="${ORACLE_SID}-${owner}-${weekday}.log"
fi


# Redirect stdout and stderr to where?  Setting STDOUT and STDERR to
# the empty string will cause no redirection to occur at all:

if [ "$redirect_p" -ne 1 ]
then
  STDOUT=$log_file
  STDERR=$log_file
fi

# Truncate the previous log file:
cat -s '' > $log_file

if [ "$ORA_USERNAME" = "" ]
then 
  echo "Error: ORA_USERNAME not set." 1>&2  1>>$STDOUT 2>>$STDERR
  exit 1
fi

if [ "$ORA_PASSWORD" = "" ]
then 
  echo "Error: ORA_PASSWORD not set." 1>&2  1>>$STDOUT 2>>$STDERR
  exit 1
fi


if [ "$direct_p" -eq 1 ]
then
  direct_opt="direct=y"
else
  direct_opt="direct=n"
fi

if [ "$compress_p" -eq 1 ]
then
  compress_opt="compress=y"
else
  compress_opt="compress=n"
fi

if [ "$consistent_p" -eq 1 ]
then
  consistent_opt="consistent=y"
else
  consistent_opt="consistent=n"
fi

# The process id of this running script:
pid=$$

ora_exp_maxsize="${max_size}m"
pipe="/tmp/${dump_file}-${pid}"
cd $put_in_dir

if [ -p $pipe ]
then
  # We've included the PID in the pipe name, so this is pretty
  # unlikely:

  echo "WARNING: Named pipe $pipe already existed!" 1>&2  1>>$STDOUT 2>>$STDERR
  echo   `ls -l $pipe`  1>&2  1>>$STDOUT 2>>$STDERR
fi


rm -f $pipe
mknod $pipe p

if [ $? -ne 0 ]
then
  echo "ERROR: Failed to create named pipe: $pipe" 1>&2  1>>$STDOUT 2>>$STDERR
  exit 1
fi

echo "$CMDNAME export STARTING:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR

# Note: If this week's export is smaller than last week's, then simply
# overwriting the files might leave one extra split file from LAST
# week, which would mess things up when we re-combine the split files
# in order to do an import.  So go and delete the old dump files
# first, rather than simply overwriting them:

rm -f ${dump_file}.*

gzip < $pipe | split -b $ora_exp_maxsize - ${dump_file}. &
exp $ORA_USERNAME/$ORA_PASSWORD file=$pipe $full_opt $owner_opt $consistent_opt $direct_opt $compress_opt  1>>$STDOUT 2>>$STDERR

echo "$CMDNAME export DONE:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR

# To import the dump files we just created, something like this will work:
#
#   cat `ls -1 ${dump_file}.* | sort` | gunzip > $pipe &
#   imp $ORA_USERNAME/$ORA_PASSWORD file=$pipe show=y

rm -f $pipe
Posted by Rich Graves on
We use http://www.backupcentral.com/oraback.html

It's a real monster, but every bit of bloat was added for a reason.

I went to Curtis's "Oracle backup and recovery for sysadmins" tutorial at the last USENIX LISA. Highly recommended if you have the means.

Posted by Andrew Piskorski on
There's a small bug in my ora-export.sh script above:

You should move the cd $put_in_dir line up so that it comes before the if [ "$redirect_p" -ne 1 ] line, not where it is now. If the cd doesn't happen before the cat -s '' > $log_file truncation of the old log file, the log file never gets truncated at all, and it just keeps growing week after week.
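I.e., the fixed section should read like this (a standalone demo with throwaway values filled in for the real script's variables):

```shell
#!/bin/sh
# Demo of the corrected ordering: cd into the target directory BEFORE
# truncating the relative $log_file path.  The variable values here
# are throwaway stand-ins for the real script's.

put_in_dir=`mktemp -d`           # stands in for /web/oracle-backups
log_file="demo-full-Mon.log"
redirect_p=0

cd $put_in_dir                   # <-- moved UP, before the redirect check

if [ "$redirect_p" -ne 1 ]
then
  STDOUT=$log_file
  STDERR=$log_file
fi

# Truncate the previous log file -- now this hits $put_in_dir/$log_file.
# (cat complains about the '' argument, but the redirect still
# truncates the file; silence the complaint here for the demo:)
cat -s '' > $log_file 2>/dev/null || true
```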

Posted by Andrew Piskorski on
Well, I started this thread asking about Oracle hot backup scripts, so here's mine. I've been using it for several weeks now, and it seems to work fine:

#!/bin/sh
#
# ora-rman-hb.sh
# $Header: /home/cvsroot/unix/usr/local/adm/bin/ora-rman-hb.sh,v 1.4 2002/06/27 22:12:43 andy Exp $
#
# by atp@piskorski.com, 2002/06/19
# originally based on a similar script by Xuequn "Robert" Xu
# <xux@arsdigita.com>, c. 2000/10/03.

# Oracle physical hot backup using RMAN.  Run as user oracle from
# crontab something like so, to run every night at 0:45 am:
#
# 45 0 * * * /usr/local/adm/bin/ora-rman-hb.sh


# We assume that /etc/profile sets and exports ORACLE_SID,
# ORACLE_HOME, etc.:

. /etc/profile

BACKUP_ROOT="/web/oracle-backups"
backup_root=$BACKUP_ROOT
backup_dir="$backup_root/rman"
backup_dir_old_1="$backup_root/rman-old-1"
backup_dir_old_7="$backup_root/rman-old-7"

ARCHIVED_LOGS_DIR="$ORACLE_BASE/admin/ora8/arch"
PASSWORD_FILE="/usr/local/adm/bin/ora-system-password.sh"
MAX_SIZE_KBYTES=614400
LOG_DATE_FORMAT='+%a %Y-%m-%d %H:%M:%S %Z'

ORA_USERNAME=""
ORA_PASSWORD=""
. $PASSWORD_FILE

CMDNAME=`basename $0`
log_file=`basename $CMDNAME`.log

USAGE="Do nightly Oracle 8.1.7 physical hot backup using RMAN.

Usage:  ${CMDNAME} [-h] [-t] [-r LOG_FILE] [-d DIRECTORY]

Where:

  -h  : Show this help and exit.

  -t  : Don't use the 'backup database' command; use
        'backup tablespace' instead.

  -r  : Redirect stdout and stderr to file.  Set to empty string to
        turn off default redirection to the log file.
        Default:  $backup_dir/$log_file

  -d  : Directory root where we will place our directories of backup files.
        This must be an ABSOLUTE pathname, and the oracle unix user must be
        able to write to it.
        Default:  $BACKUP_ROOT
        So by default, the backup files will be placed in:
                  $backup_dir/
                  $backup_dir_old_1/
                  $backup_dir_old_7/
"

tablespace_p=0
redirect_p=0

while getopts htr:d: OPT
do
  case $OPT in
    h)  
        echo
        echo "$USAGE"
        echo
        exit
        ;;
    t)  tablespace_p=1
        ;;
    r)  STDOUT=$OPTARG
        STDERR=$OPTARG
        redirect_p=1
        ;;
    d)  backup_root=$OPTARG
        backup_dir="$backup_root/rman"
        backup_dir_old_1="$backup_root/rman-old-1"
        backup_dir_old_7="$backup_root/rman-old-7"
        ;;
    ?)
        echo 1>&2
        echo "ERROR:  One or more bad command line options." 1>&2
        echo 1>&2
        echo "$USAGE" 1>&2
        exit 1
        ;;
  esac
done
shift `expr $OPTIND - 1`

# Redirect stdout and stderr to where?  Setting STDOUT and STDERR to
# the empty string will cause no redirection to occur at all:

if [ "$redirect_p" -ne 1 ]
then
  STDOUT=$backup_dir/$log_file
  STDERR=$backup_dir/$log_file
fi


create_dir() {
  # Takes the path name of directory to create.  Allows recursive
  # creation.  If the file already exists, returns an error if it is a
  # normal file, does nothing if the file is a directory.

  return_code=0
  if [ -f $1 ]
  then
    return_code=1
    echo "Error:  A file '$1' already exists.  Cannot create directory '$1'."
  elif [ ! -d $1 ]
  then
    mkdir -p $1
    return_code=$?
  fi

  return $return_code
}   


# Delete the 2 day old backup, and move yesterday's backup from
# the current directory to the 1 day old directory.  On Monday, instead of
# deleting the old Saturday 12:45 am backup, move it into the week-old
# directory:

create_dir $backup_dir
create_dir $backup_dir_old_1
create_dir $backup_dir_old_7

# 0 is Sunday:
weekday_num=`date "+%w"`

if [ $weekday_num -eq 1 ]
then
  /bin/rm -f $backup_dir_old_7/*
  /bin/mv -f $backup_dir_old_1/* $backup_dir_old_7/
fi

/bin/rm -f $backup_dir_old_1/*
/bin/mv -f $backup_dir/* $backup_dir_old_1/


if [ "$tablespace_p" -eq 1 ]
then
  # Using 'backup database' also includes the temp tablespace, which
  # we don't need, and which on our machine is currently ludicrously
  # large.  So for now, we want to use 'backup tablespace' instead.
  # For some info on fixing the temp tablespace problem, see:
  #   https://openacs.org/bboard/q-and-a-fetch-msg.tcl?msg_id=000542
  # Therefore, generate the list of tablespaces we want to back up,
  # excluding the temp tablespace:
  # --atp@piskorski.com, 2002/06/20 09:36 EDT

  # (Do NOT redirect stdout here: that would send the sqlplus output
  # to the log file instead of into the command substitution, leaving
  # tablespace_list empty.)
  tablespace_list=`$ORACLE_HOME/bin/sqlplus -S $ORA_USERNAME/$ORA_PASSWORD <<EOS  2>>$STDERR
set feed off linesize 500 pages 0
select tablespace_name
from dba_tablespaces
where tablespace_name not like 'TEMP%';
exit
EOS`

  tablespace_list=`echo $tablespace_list | /bin/sed -e 's/ /, /g'`
  backup_what="tablespace  $tablespace_list  include current controlfile ;"

else
  # Use the 'backup database' rman command, NOT 'backup tablespace':
  backup_what="database ;"

fi

echo "$CMDNAME: We will backup:  $backup_what"  1>>$STDOUT 2>>$STDERR


# Run rman, executing the rman script lines up to the "EOS" sign:
#
# We set the max size on the channel of a single backup piece file to
# MAX_SIZE_KBYTES.  We pick a default value of 600 MB so that we're
# certain we could fit each file onto a 650 MB CD-R disk if for some
# reason we want to.  (If 1 kb = 1023 bytes, 600 MB = 614400 kb.  If 1
# kb = 1000 bytes, 600 MB = 629145.6 kb.)
#
# Note that while the Oracle docs are vague on the distinction,
# filesperset is the maximum number of Oracle data files to read into
# the backup set, NOT the max number of "backup piece" files to write
# out!  Presumably, as many backup pieces will be created as
# necessary, and the max size of each backup piece is defined by the
# 'set limit channel' command.
#
# We could also allocate additional channels in order to spread the
# backup across multiple disks.  Each single backup set goes to only
# one channel, and each channel can have 1 or more backup sets.

echo "$CMDNAME: rman backup STARTING:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR

$ORACLE_HOME/bin/rman target / nocatalog <<EOS  1>>$STDOUT 2>>$STDERR
run {

  allocate channel m01 type disk format '$backup_dir/%d-t%t-s%s-p%p.bak';
  allocate channel m02 type disk format '$backup_dir/%d-t%t-s%s-p%p.bak';
  allocate channel m03 type disk format '$backup_dir/%d-t%t-s%s-p%p.bak';

  set limit channel m01 kbytes $MAX_SIZE_KBYTES;
  set limit channel m02 kbytes $MAX_SIZE_KBYTES;
  set limit channel m03 kbytes $MAX_SIZE_KBYTES;

  backup 
    filesperset = 6
    skip inaccessible
    skip offline
  $backup_what

  # Archives the current redo log plus any other non-archived redo
  # logs.  The Oracle docs say you should do this immediately after
  # backing up the database:
  #   http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/server.817/a76990/troubler.htm#446214 
  # --atp@piskorski.com, 2002/06/19 21:11 EDT

  sql 'alter system archive log current';

  # Creates a "trace file" script which when run will re-create the
  # control file, but all RMAN info in the control file will be lost.
  # We've already made a physical backup of the control file above, so
  # also doing this here is belt-and-suspenders:
  # --atp@piskorski.com, 2002/06/19 21:11 EDT

  sql 'alter database backup controlfile to trace';
}
exit
EOS

echo "$CMDNAME: rman backup DONE:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR

# Note that the 'backup database' command does NOT back up the archived
# redo logs.  Since our nightly file-system backup copies them directly
# from where Oracle puts them, this is ok; we don't need to do a
# 'backup archivelog' command in RMAN.

# Note that since we have put them under CVS control, here we are NOT
# backing up the init.ora file, listener.ora, and other config files
# located in:
#   $ORACLE_BASE/admin/ora8/pfile/
#   $ORACLE_HOME/network/admin/

# TODO: The 'backup database' command backs up the control file, but
# it seems unclear WHICH version of the control file it backs up - the
# control file as it existed BEFORE RMAN started, or the version AFTER
# the RMAN backup of the database files completes, when it will
# include recovery catalog information for the database backup just
# completed.  If RMAN backs up the before version, perhaps we would
# also want to do 'backup current controlfile' or 'include current
# controlfile' after the 'backup database'.  I recall seeing at least
# one example in the Oracle docs that implies that this might be a
# good thing to do, but none of the rest of the examples do so, so it's
# probably not necessary.
# --atp@piskorski.com, 2002/06/19 19:56 EDT


# It would be nice to have RMAN write to a named pipe and have gzip
# read from the named pipe at the same time in the background, like we
# do in the ora-export.sh script, but I'm not even really certain
# whether that's feasible.  So to keep things simple, just gzip
# separately after RMAN is done:  --atp@piskorski.com, 2002/06/25 09:19 EDT

echo "$CMDNAME: gzip STARTING:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR
/bin/gzip $backup_dir/*.bak  1>>$STDOUT 2>>$STDERR
echo "$CMDNAME: gzip DONE:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR

# Remove archived redo logs more than 12 days old: 
#
# Since on Monday we move the previous Saturday backup into the week
# old directory (2 days old), a minimum of ** 10 ** days of archived
# redo logs (7 days + 2 + 1 extra to account for different times
# during the day) should ensure that we always have enough archived
# logs on disk to take us all the way back to the oldest full backup
# which we have on disk.

/bin/find $ARCHIVED_LOGS_DIR \( -ctime +12 \) -exec /bin/rm {} \;  1>>$STDOUT 2>>$STDERR

echo "$CMDNAME: everything DONE:  `date "$LOG_DATE_FORMAT"`"  1>>$STDOUT 2>>$STDERR
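And since this thread may be what someone finds mid-disaster: the matching restore for backups taken this way would look roughly like the following. This is an UNTESTED sketch based on the 8.1.7 RMAN docs, not something the script above does; remember to gunzip the .bak.gz pieces back to .bak first, since RMAN knows nothing about the gzip step.

```
# With the pieces gunzipped and the instance started in mount mode
# (startup mount):

rman target / nocatalog <<EOS
run {
  allocate channel m01 type disk;
  restore database;
  recover database;
  sql 'alter database open';
}
exit
EOS
```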