
RMAN : Recovery of online redo log file


oracle venu> ls
oracle venu> rm *.log
oracle venu> ls
RMAN> validate database;
RMAN> shutdown abort
RMAN> startup mount
RMAN> list failure;
RMAN> restore database;
RMAN> recover database;
RMAN> sql 'alter database open';
RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure;
RMAN> list failure;

RMAN : Recovery of Control files


RMAN> report schema;
Note : we cannot see the controlfile location in the report schema output, so we query it from SQL*Plus.
SQL> select name from v$controlfile;
oracle venu> ls
oracle venu> rm control01.ctl
oracle venu> ls
RMAN> validate database;
RMAN> shutdown abort
SQL> shut abort
oracle> rman target /
RMAN> startup nomount
RMAN> restore controlfile from '/u03/rmanbkp/control01.ctl';
RMAN> alter database mount;
RMAN> recover database;
RMAN> sql 'alter database open resetlogs';

RMAN : Recovery of System datafile


RMAN> report schema;
oracle venu> ls
oracle venu> rm system01.dbf
oracle venu> ls
RMAN> validate database;
RMAN> shutdown abort
RMAN> startup mount
RMAN> restore datafile <file_id>;
RMAN> recover datafile <file_id>;
RMAN> sql 'alter database open';

RMAN : Recovery of non system datafile


First method :
RMAN> report schema;
oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> rm users01.dbf
oracle venu> ls
RMAN> validate database;
RMAN> sql 'alter database datafile <file_id> offline';
RMAN> restore datafile <file_id>;
RMAN> recover datafile <file_id>;
RMAN> sql 'alter database datafile <file_id> online';

Second method :
RMAN> startup mount
RMAN> restore datafile <file_id>;
RMAN> recover datafile <file_id>;
RMAN> sql 'alter database open';

Third method :
oracle venu> rm users01.dbf
oracle venu> ls
RMAN> validate database;
RMAN> list failure;
RMAN> advise failure;
Note : copy and paste the given repair script and run it with @, or
RMAN> repair failure;

Note : cat <path of the script> to see its contents

RMAN : Recovery of Full database


RMAN> report schema;
oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> rm *
oracle venu> ls
RMAN> validate database;
RMAN> shutdown abort
RMAN> startup nomount
RMAN> restore controlfile from '/u03/rmanbkp/control01.ctl';
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> sql 'alter database open resetlogs';
Note : To run any SQL command at the RMAN prompt, prefix it with 'sql' and enclose the command in single quotes, as above.

RMAN : Loss of online redo log files steps


1.shutdown abort
2.startup mount
3.restore database;
4.recover database;
5.alter database open resetlogs;

RMAN : Loss of Control files steps


1.shutdown abort
2.startup nomount
3.restore controlfile
4.alter database mount;
5.recover database;
6.alter database open resetlogs;

RMAN : Loss of System datafile steps


1.shutdown abort
2.startup mount
3.restore datafile <file_id>;
4.recover datafile <file_id>;
5.alter database open;

RMAN : Loss of non system datafile steps


1.offline the datafile
2.restore the datafile
3.recover the datafile
4.online the datafile

RMAN : Loss of full database steps


1.shutdown abort
2.startup nomount
3.restore controlfile;
4.alter database mount;
5.restore database;
6.recover database;
7.alter database open resetlogs;

RMAN Full Database Backup


root> su - oracle
oracle> export ORACLE_SID=venu
oracle> sqlplus / as sysdba
SQL> startup
SQL> exit
oracle> rman target /
RMAN> show all;
RMAN> host 'clear';
oracle> cd /u03/rmanbkp
oracle rmanbkp> ls
RMAN> backup database plus archivelog;
oracle rmanbkp> ls

Note : the datafiles will be divided among the channels.

RMAN> list backup; --> lists available backups

RMAN in nocatalog mode


root> su - oracle
oracle> export ORACLE_SID=venu
oracle> sqlplus / as sysdba
SQL> startup
SQL> select open_mode,name,log_mode from v$database;
SQL> select instance_name,status from v$instance;
SQL> select name from v$tablespace;
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;

First way :
oracle> export ORACLE_SID=venu
oracle> rman target /
RMAN> exit

Second way :
oracle> rman nocatalog target /
RMAN> exit

Third way :
oracle> rman
RMAN> connect target /

RMAN> shut immediate
RMAN> shutdown immediate
RMAN> exit

oracle> rman target /
RMAN> startup
RMAN> host 'clear'; --> to clear the screen
RMAN> shutdown immediate

RMAN> startup nomount
RMAN> alter database mount;
RMAN> alter database open;
RMAN> show all; --> shows all the RMAN configuration parameters.
Note : An obsolete backup can still be used (it exists but is no longer needed by the retention policy), but an expired backup cannot be used since crosscheck found the backup piece missing.
RMAN performs restore and recovery based on the snapshot controlfile.
SQL> select * from v$rman_configuration;
Lists the RMAN configuration parameters that we have set.
RMAN> configure retention policy to redundancy 2;
RMAN> show all;
SQL> select * from v$rman_configuration;
RMAN> spool log to rman.log
RMAN> show all;
RMAN> spool off;
RMAN> host 'pwd';
RMAN> exit;
oracle> pwd
oracle> ls

By default, backup sets are written into the FRA (Flash Recovery Area) and the controlfile autobackup into $ORACLE_HOME/dbs.

oracle> vi rman.log

2.configure backup optimization on;
4.configure controlfile autobackup on;
5.configure controlfile autobackup format for device type disk to '/u03/rmanbkp/%f';
6.configure device type disk parallelism 2 backup type to backupset;
15.configure channel 1 device type disk format '/u03/app/rmanbkp/%u';
16.configure channel 2 device type disk format '/u03/app/rmanbkp/%u';

%u --> a system-generated, guaranteed-unique backup piece name.
%c --> the copy number of the backup piece (for duplexed backups).

root> mkdir -p /u03/rmanbkp
root> chown -R oracle:oinstall /u03/rmanbkp
root> chmod -R 775 /u03/rmanbkp
root> su - oracle
oracle> export ORACLE_SID=venu
oracle> rman target /
RMAN> @rman.log
RMAN> show all;

SQL> select * from v$rman_configuration;
RMAN> host 'clear';
RMAN> list backup; --> to see the existing backups
RMAN> report schema; --> lists tablespaces, datafiles and so on.
RMAN> report need backup;
oracle> cd /u03/rmanbkp
oracle rmanbkp> ls
RMAN> backup datafile 4;
oracle rmanbkp> ls
Oracle generates backup sets in the default directory (FRA). If we specify another backup destination, the backup pieces are created there instead.
RMAN> report need backup;
RMAN> backup datafile 4;

oracle rmanbkp> ls
RMAN> report need backup;

Once the configured redundancy is reached, the datafile no longer appears in the report need backup list.

RMAN> report obsolete;
RMAN> backup datafile 4;
oracle rmanbkp> ls
RMAN> report obsolete;
RMAN> show retention policy;
RMAN> configure retention policy to redundancy 3;
RMAN> report obsolete;
RMAN> configure retention policy to redundancy 2;
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> report obsolete;

RMAN> backup tablespace users,undots1; --> here it uses 2 channels.
RMAN> backup datafile 4,3;
RMAN> backup current controlfile;
RMAN> backup spfile;

RMAN> list backup;
RMAN> host 'clear';
oracle rmanbkp> ls
RMAN> delete backupset 16; --> backupset key
RMAN> list backup;
oracle rmanbkp> ls
RMAN> backup datafile 4 tag='USERS';
RMAN> list backup;
By tag name we can identify the backup.
RMAN> list archivelog all; --> lists archive logs

SQL> alter system switch logfile;

RMAN> list archivelog all;
RMAN> backup archivelog all;
RMAN> backup archivelog sequence 2;
RMAN> delete archivelog sequence 2;
RMAN> list archivelog all;
RMAN> restore archivelog sequence 2;
RMAN> list archivelog all;
RMAN> list backup;

oracle rmanbkp> ls
oracle rmanbkp> rm -rf *
oracle rmanbkp> ls

RMAN> list backup;
RMAN> crosscheck backup;
RMAN> list backup;
RMAN> delete noprompt expired backup;
noprompt means it does not ask for confirmation.
RMAN> list backup;

RMAN Incremental/Differential Backups


In case of incremental backups, RMAN backs up only the modified blocks.
The possible incremental levels are :
1.level 0(complete backup)
2.level 1(incremental)
3.level 2(cumulative)
Level 0 is the base backup for the next incrementals.
If we issue level 1 without a level 0, RMAN takes a level 0 backup by default.
From 10.2.0.4, levels greater than 1 are obsolete; cumulative backups are taken as level 1 cumulative instead.

RMAN> backup incremental level 0 database;
RMAN> backup incremental level 1 database;
RMAN> backup incremental level 2 database; ---> earlier
RMAN> backup incremental level 1 cumulative database; ---> 10.2.0.4
RMAN> backup as compressed backupset incremental level 0 database;

Prerequisites to configure RMAN in catalog mode


Note : It is not only important to have a proper backup policy for all target databases, it is equally important to have a backup policy for the catalog database.
Example : an everyday schema level logical backup, plus a weekly cold backup (it is recommended to run the catalog database in archive log mode). A sketch of the catalog setup follows the list below.
1.Identify a low end server or least utilised server and install the same or higher version of oracle binaries.
2.create a catalog database
3.create a separate tablespace to hold RMAN metadata tables.
4.create a user and grant connect,resource,recovery_catalog_owner.
5.create catalog
6.configure tnsnames.ora so that it points to the target database listeners.
7.register target database into recovery catalog.
8.configure RMAN parameters
9.create backup,restore and recovery scripts.
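
Example : a minimal sketch of steps 2 to 7, assuming a catalog database with service name catdb, a tablespace rman_ts and a catalog owner rcat_owner (all of these names are illustrative) :

SQL> create tablespace rman_ts datafile '/u02/app/oracle/catdb/rman_ts01.dbf' size 500m;
SQL> create user rcat_owner identified by rcat_owner default tablespace rman_ts quota unlimited on rman_ts;
SQL> grant connect,resource,recovery_catalog_owner to rcat_owner;

oracle> rman catalog rcat_owner/rcat_owner@catdb
RMAN> create catalog;
RMAN> exit

oracle> rman target / catalog rcat_owner/rcat_owner@catdb
RMAN> register database;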

RMAN(Recovery MANager) Backup 2

If we configure FRA(Flash Recovery Area) by default RMAN backup sets will be created in the FRA.
RMAN is capable of taking the backup to disk on its own, but not to a tape library. In order to take the backup to a tape library, we need to integrate RMAN with third party MML (Media Management Layer) software like Tivoli Data Protector for Oracle, HP Data Protector for Oracle, Veritas, Legato or OSB (Oracle Secure Backup).
OSB is proprietary to Oracle and is a separately licensed product.
In case of tape backups, a third party administrator installs the third party software on all the database servers.
Once the third party software is installed, we need to create RMAN backup, restore and recovery scripts by specifying third party specific environment variables. We can get all third party specific environment variables from support.oracle.com.
In oracle 11g, oracle has introduced multisection backups, where we can split the backup of a huge datafile into multiple sections.
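Example (the datafile number and section size are illustrative) :
RMAN> backup section size 10g datafile 7;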
In oracle 10g, oracle has introduced block change tracking to minimise the incremental backup time.
Once we enable block change tracking, a new background process CTWR (Change Tracking Writer) starts; it keeps track of the modified blocks and records them in a separate file created at the operating system level.
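A minimal sketch of enabling it (the tracking file path is illustrative) :
SQL> alter database enable block change tracking using file '/u01/app/oracle/venu/bct.chg';
SQL> select status,filename from v$block_change_tracking;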
Note : If RMAN is the backup policy, it is recommended to configure the Large Pool (LP); otherwise RMAN uses some space in the Shared Pool (SP).
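Example of checking and sizing the large pool (the value is illustrative) :
SQL> show parameter large_pool_size
SQL> alter system set large_pool_size=256m scope=both;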

RMAN(Recovery MANager) Backup


RMAN backup is a block level backup(oracle block).
RMAN is a centralised backup and restore and recovery utility.
We can configure RMAN in 2 modes :
1.no catalog mode
2.catalog mode
It is recommended to configure RMAN in catalog mode. In no catalog mode, RMAN stores the configuration parameters and backup information in the target database controlfile (reusable section).
The retention of the backup information depends on the parameter "control_file_record_keep_time" (default value 7 days). Even if we specify a bigger value, the retention may not be guaranteed since the controlfile can only grow up to operating system limits.
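The parameter can be checked and changed online, for example (the value is illustrative) :
SQL> show parameter control_file_record_keep_time
SQL> alter system set control_file_record_keep_time=30 scope=both;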

In no catalog mode, we cannot create stored RMAN scripts which can be used across the Enterprise databases.
In no catalog mode, we cannot use "keep until time" clause to retain a backup for a specific time interval.

RMAN backup retention is based on 2 factors
1.retention based on number of backups.
parameter : configure retention policy to redundancy <value>;
2.retention based on number of days.
parameter : configure retention policy to recovery window of <x days>;
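Example (the window length is illustrative) :
RMAN> configure retention policy to recovery window of 7 days;
RMAN> show retention policy;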

In case of RMAN, excessive redo information will not be generated.
Execution of every RMAN backup command produces backup sets in the backup destination.
A backup set is nothing but collection of backup pieces.
Separate backup pieces will be created for datafiles and archive logs.
If the instance is started with Spfile, by default RMAN takes the backup of Spfile.
RMAN uses APIs like DBMS_RCVMAN and DBMS_RCVCAT.
For catalog mode configuration, we create a separate database called catalog database.
Catalog is a repository for all target databases and it holds the RMAN configuration parameters information and backup information.
In order to minimise the backup, restore and recovery time, it is recommended to configure multiple channels.
A channel is a communication session from catalog database to target database.
Number of channels generally depends on the size of the target database as well as number of processors.
In a uniprocessor system, even if we configure multiple channels, we don't gain much performance benefit.
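A minimal sketch of allocating two channels manually for one backup, using the /u03/rmanbkp destination from these notes :
RMAN> run {
  allocate channel c1 device type disk format '/u03/rmanbkp/%U';
  allocate channel c2 device type disk format '/u03/rmanbkp/%U';
  backup database plus archivelog;
}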

Hot backup : Loss of Online Redo Logs


oracle> cd /u01/app/oracle/venu
oracle venu> rm *.log
SQL> shut abort
oracle venu> cd /u01/hotbkp
oracle hotbkp> cp *.dbf /u01/app/oracle/venu
SQL> startup mount
SQL> recover database until cancel;
specify log : press enter one time
specify log : auto
SQL> recover cancel;
SQL> select open_resetlogs from v$database;
SQL> alter database open resetlogs;

Hot backup : Loss of Full database


oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> rm *.*

SQL> conn scott/tiger
scott> insert into salgrade select * from salgrade;
scott> select count(*) from salgrade;
scott> conn / as sysdba
SQL> shut abort

oracle> cd /u01/hotbkp
oracle hotbkp> cp *.dbf /u01/app/oracle/venu
oracle hotbkp> cp control01.ctl /u01/app/oracle/venu

SQL> startup mount
SQL> alter database recover automatic using backup controlfile until cancel;
SQL> recover cancel;
SQL> select open_resetlogs from v$database;
SQL> alter database open resetlogs;

oracle> cd /u01/app/oracle/venu
oracle venu> ls
The online redo logs will be created automatically.

SQL> conn scott/tiger
scott> select count(*) from salgrade;
scott> conn / as sysdba
SQL> alter database begin backup;
SQL> select * from v$backup;

oracle venu> ls
oracle venu> cp *.dbf /u01/hotbkp


SQL> alter database end backup;
SQL> select * from v$backup;
SQL> alter database backup controlfile to '/u01/hotbkp/control01.ctl';
SQL> alter system switch logfile;

Steps to take Hot backup


root> su - oracle
oracle> export ORACLE_SID=venu
oracle> !sq
SQL> startup

Steps :
1.create the backup directory structure.
2.alter database begin backup;
3.Take the backup of datafiles at Operating System level.
4.alter database end backup;
5.Take the backup of control file at Oracle level.
6.alter system switch logfile;

SQL> select name,open_mode,log_mode from v$database;
SQL> select instance_name,status from v$instance;
SQL> select name from v$datafile;
SQL> select member from v$logfile;

root> mkdir -p /u01/hotbkp
root> chown -R oracle:oinstall /u01/hotbkp
root> chmod -R 775 /u01/hotbkp

SQL> select * from v$backup;
SQL> alter database begin backup;
SQL> select * from v$backup;

oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> cp *.dbf /u01/hotbkp

SQL> alter database end backup;
SQL> select * from v$backup;
SQL> alter database backup controlfile to '/u01/hotbkp/control01.ctl';
SQL> alter system switch logfile;
SQL> conn scott/tiger
scott> insert into salgrade select * from salgrade;
scott> /
scott> commit;
scott> select count(*) from salgrade;
scott> conn / as sysdba
SQL> alter system switch logfile;

Steps to take Hot Backup


1.alter tablespace system begin backup;
2.copy datafiles of system tablespace to backup location.
3.alter tablespace system end backup;
4.repeat steps 1 to 3 for all the tablespaces
5.alter database backup controlfile to '<path/control01.ctl>';
6.alter system switch logfile;

Hot Backup


Hot backup is also called online backup or inconsistent backup.
This is the type of backup strategy that we implement for 24/7 customers.
The prerequisite for the hot backup strategy is that the database must run in archive log mode.
In case of hot backup there is no concept of simple restore (we must perform recovery).
In order to take a hot backup we depend on the operating system command "cp".
In case of hot backup, we back up only datafiles and control files but not online redo log files, since they are continuously modified.
In case of hot backup, we back up the datafiles at the operating system level using the "cp" command and the control files at the oracle level by executing the following command :
SQL> alter database backup controlfile to '<path/control01.ctl>';
We take the hot backup tablespace by tablespace, by keeping each tablespace in a special mode called 'begin backup'.
SQL> alter tablespace system begin backup;
Once the backup of the datafiles of a specific tablespace is completed, we execute :
SQL> alter tablespace system end backup;
Once we keep a tablespace in begin backup mode, oracle freezes the headers of its datafiles and does not update the SCN in them.
Once we execute end backup for the tablespace, oracle updates the SCN again.
Hot backup is not a consistent backup since there are chances of fractured/split blocks. For this reason we cannot perform a simple restore.
It is recommended to keep a tablespace in begin backup mode for only a short duration, otherwise excessive redo information is generated, which results in more archives in the archive destination.
If the archive destination is 100% filled, the database goes into a hung state.
During hot backup, the first time a block is modified oracle logs the entire block image, which leads to a greater number of log switches.
At the end of every hot backup it is recommended to switch the logfile manually by executing the following command :
SQL> alter system switch logfile;
The above causes a checkpoint, and the checkpoint information is updated in the control file and the datafile headers.

In oracle 10g, oracle has introduced a single command to keep the entire database in begin backup mode.
SQL> alter database begin backup;
copy *.dbf <backup destination>
SQL> alter database end backup;
It is recommended to use the above command during off-peak (non-business) hours, otherwise excessive redo information will be generated.
With respect to cold/hot, the way we take the backup differs, but once it comes to restore and recovery the process is the same.

Loss of Temp.dbf file

Note : It is not necessary to perform restore and recovery for temp.dbf since the controlfile does not carry recovery information for the temp file.
Instead, create a new temporary tablespace and make it the default for the database, as sketched below.
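A minimal sketch, assuming a new temporary tablespace temp2 (the name, path and size are illustrative) :
SQL> create temporary tablespace temp2 tempfile '/u01/app/oracle/venu/temp02.dbf' size 500m autoextend on;
SQL> alter database default temporary tablespace temp2;
SQL> drop tablespace temp including contents and datafiles;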

Loss of one control file

1.shut abort
2.copy a surviving controlfile copy to the lost file's location and rename it (see the sketch below)
3.startup
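A minimal sketch, assuming the controlfiles are multiplexed as control01.ctl and control02.ctl and control01.ctl is the lost copy :
SQL> shut abort
oracle> cd /u01/app/oracle/venu
oracle venu> cp control02.ctl control01.ctl
SQL> startup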

Recovery of non system datafile which is not in the backup


SQL> create tablespace raj
datafile '/u01/app/oracle/venu/raj01.dbf' size 2m;
SQL> create user u2 identified by u2 default tablespace raj;
SQL> grant connect,resource to u2;
SQL> conn u2/u2
u2> create table a(a number);
u2> insert into a values(1);
u2> insert into a select * from a;
u2> /
u2> conn / as sysdba
SQL> select name from v$datafile;
SQL> select file#,name from v$datafile;

oracle venu> ls
oracle venu> rm raj01.dbf
oracle venu> ls

SQL> alter database datafile 5 resize 10m; --> fails, exposing the lost datafile
SQL> alter database datafile 5 offline;
SQL> alter database create
datafile '/u01/app/oracle/venu/raj01.dbf';
SQL> recover datafile 5;
SQL> alter database datafile 5 online;
SQL> conn u2/u2
u2> select count(*) from a;

Point in time recovery


SQL> select * from all_users;
SQL> conn scott/tiger
scott> set time on;
time scott> select sysdate from dual;
time scott> conn / as sysdba
time SQL> drop user scott cascade;
time SQL> shut immediate/abort

oracle coldbkp> ls
oracle coldbkp> cp *.* /u01/app/oracle/venu

time SQL> startup mount
time SQL> set time off;
SQL> alter database recover automatic using backup controlfile until time '<date time>';
SQL> recover cancel;
SQL> alter database open read only;
After recovery we need to open the database in read only mode to check whether scott has been recovered or not.
If we open the database in read write mode and scott is not recovered, then we cannot recover scott with the existing backup.
SQL> select name,open_mode from v$database;
SQL> conn scott/tiger
scott> conn / as sysdba
SQL> shut immediate
SQL> startup mount
SQL> select open_resetlogs from v$database;
SQL> alter database open resetlogs;

Recovery of the Control file


root> su - oracle
oracle> export ORACLE_SID=venu
oracle> sqlplus / as sysdba
SQL> startup

oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> cp *.* /u01/coldbkp
SQL> select name from v$controlfile;
oracle venu> rm control01.ctl
SQL> conn scott/tiger
SQL> shut abort

oracle coldbkp> ls
oracle coldbkp> cp control01.ctl /u01/app/oracle/venu

SQL> startup mount
SQL> recover database using backup controlfile until cancel;
specify log : auto
SQL> recover cancel;
SQL> select member from v$logfile;
SQL> select group#,status from v$log;
SQL> recover database using backup controlfile until cancel;
specify log : /u01/app/oracle/venu/redo02.log
We need to apply the current redo log first; if that fails, then apply the active log.
SQL> select open_resetlogs from v$database;
This shows whether the database needs to be opened with resetlogs or not.
SQL> alter database open resetlogs;
SQL> shut immediate
Don't forget to take a backup of the database after a recovery in which the database was opened with resetlogs.

oracle venu> ls
oracle venu> cp *.* /u01/coldbkp

Steps to perform "point in time" recovery


1.shut immediate/shut abort
2.restore CRD files from the backup destination
3.startup mount
4.alter database recover automatic using backup controlfile until time '<date time>';
5.alter database open resetlogs;

Steps to recover the Loss of non system datafile which is not in the backup


1.offline the datafile
2.alter database create datafile '<path>';
3.recover datafile
4.alter database datafile online

Tablespace importing using datapump


oracle> impdp dumpfile=ts.dmp logfile=ts.log tablespaces=ssss
username : / as sysdba

Tablespace exporting using datapump


oracle> expdp dumpfile=ts.dmp logfile=ts.log tablespaces=ssss
username : / as sysdba

Loss of sysaux datafile

The procedure is identical to the loss of a non-system datafile (see the "Loss of Non- system datafile" section below) : offline the sysaux datafile, restore it from the backup location, recover it, and bring it online while the database is up and running.

Loss of Redo log files

1.shut abort
2.restore datafiles from backup location
3.startup mount
4.recover database until cancel;
5.alter database open resetlogs;

Loss of All Control Files

1.shut abort
2.restore control file from the backup location
3.startup mount
4.recover database using backup controlfile until cancel;
5.alter database open resetlogs;

Note : The only scenario where we open the database with resetlogs but it is still a complete recovery is the loss of the current control file.

Loss of System datafile

SQL> select name from v$datafile;
oracle> cd /u01/app/oracle/venu
oracle venu> rm system01.dbf
oracle venu> ls

SQL> conn scott/tiger
scott> insert into salgrade select * from salgrade;
scott> conn / as sysdba
SQL> shut immediate
The above throws an error since the system datafile is missing.
SQL> shut abort

oracle> cd /u01/coldbkp
oracle coldbkp> ls
oracle coldbkp> cp system01.dbf /u01/app/oracle/venu

SQL> startup mount
SQL> select file#,checkpoint_change# from v$datafile;
Picks the data from the control file.
save df.sql
SQL> select file#,checkpoint_change# from v$datafile_header;
Picks the data from the datafile headers.
save dfh.sql
SQL> recover datafile 1;
Here we can specify the file number or the complete path of the datafile.
SQL> @df.sql
SQL> @dfh.sql
SQL> alter database open;

Loss of Non- system datafile

root> su - oracle
oracle> export ORACLE_SID=venu
oracle> sqlplus / as sysdba
SQL> startup
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;

oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> cp *.* /u01/coldbkp
oracle venu> rm nonsystemdatafile.dbf
We can restore the non system data file while the database is up and running.
SQL> conn scott/tiger
scott> insert into salgrade select * from salgrade;
scott> conn / as sysdba

SQL> select file#,error,status from v$datafile_header;
SQL> select * from v$recover_file;
Shows the missing datafile.
SQL> alter database datafile 4 offline;

oracle> cd /u01/coldbkp
oracle coldbkp> ls
oracle coldbkp> cp nonsystemdatafile.dbf /u01/app/oracle/venu

SQL> select file#,checkpoint_change# from v$datafile;
Picks the data from the control file.
save df.sql
SQL> select file#,checkpoint_change# from v$datafile_header;
Picks the data from the datafile headers.
save dfh.sql

SQL> recover datafile 4;
If the database is in noarchivelog mode, no need to perform recovery after restoring the database.
SQL>@df.sql
SQL>@dfh.sql
SQL> alter database datafile 4 online;
SQL> select * from v$recover_file;
Now it shows no records, as the missing datafile has been restored and recovered.

Scenario : Loss of Full Database

Steps to recovery :
1.shut abort
2.startup nomount
3.restore the CRD files
4.alter database mount
5.alter database recover automatic using backup controlfile until cancel;
6.recover cancel;
7.alter database open resetlogs;

**** SQL> create pfile from memory; *****
Execute the above command if we have lost the pfile while the database is still up and running (available from 11g).

SQL> archive log list;
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;
Go to the CRD files location and remove all the files to crash the database; then connect as scott and try to insert some records to check.
SQL> conn scott/tiger
scott> insert into salgrade select * from salgrade;
scott> /
scott> conn / as sysdba
SQL> shut immediate
The above command throws an error as shut immediate is a graceful command.
SQL> shut abort
SQL> startup nomount

oracle> cd /u03/coldbkp
oracle coldbkp> cp *.* /u01/app/oracle/venu

SQL> alter database mount;
SQL> select file#,checkpoint_change# from v$datafile;
save df.sql
SQL> select file#,checkpoint_change# from v$datafile_header;
save dfh.sql
SQL> alter database recover automatic using backup controlfile until cancel;
SQL> recover cancel;
SQL> @df.sql
SQL> @dfh.sql
SQL> select open_resetlogs from v$database;
To know whether resetlogs is required for our database or not.
SQL> alter database open resetlogs;

Note : In case of a full database crash recovery, after applying the archive logs a log sequence mismatch occurs. To overcome this problem we open the database with resetlogs. Resetlogs synchronises the log sequence by creating brand new online redo logs starting with log sequence one.

Checking after recovery :
SQL> startup
SQL> conn scott/tiger
scott> select count(*) from salgrade;
scott> conn / as sysdba
SQL> shut immediate
When the database is opened with resetlogs, we need to take the cold backup again as the checkpoint change numbers have changed.

Steps to take Cold Backup

1.shutdown the database
2.create backup directory structure
3.copy CRD files to backup destination
4.start the database

root> su - oracle
oracle> cat /etc/oratab
To see the existing databases.
oracle> export ORACLE_SID=venu
oracle> sqlplus / as sysdba
SQL> startup
SQL> select name,open_mode,log_mode from v$database;
SQL> select instance_name,status from v$instance;
SQL> select name from v$datafile;
SQL> select name from v$controlfile;
SQL> select member from v$logfile;
SQL> shut immediate

root> mkdir -p /u03/coldbkp
root> chown -R oracle:oinstall /u03/coldbkp
root> chmod -R 775 /u03/coldbkp

root> su - oracle
oracle> cd /u01/app/oracle/venu
oracle venu> ls
oracle venu> cp *.* /u03/coldbkp

SQL> startup
SQL> archive log list;
SQL> conn scott/tiger
scott> select count(*) from salgrade;
scott> insert into salgrade select * from salgrade;
scott> /
scott> commit;
scott> select count(*) from salgrade;
scott> conn / as sysdba
SQL> alter system switch logfile;
When we switch the logfile manually, the status of the redo log group changes from current to active and an archive log is generated.

Scenario-3 : Loss of sysaux datafile(sysaux.dbf)

1.Offline sysaux datafile
2.Restore sysaux datafile
3.Recover sysaux datafile
4.Online sysaux datafile

Scenario-2 : Loss of System datafile(system.dbf)

1.shut abort
2.Restore system.dbf from the latest backup
3.startup mount
4.Recover database
5.Open the database

Scenario-1 : Loss of non system data file

Method-1(not advisable)
1.offline the tablespace.
2.Restore the lost datafile from the latest backup.
3.Recover the lost datafile.
4.Online the tablespace.

Method-2(Advisable)
1.Offline the lost datafile.
2.Restore the lost datafile from the latest backup.
3.Recover the lost datafile.
4.Online the datafile.

Steps to take cold backup

1.shut immediate
2.Copy CRD files to the backup destination.
3.startup

User managed physical backup : recovery

Recovery is of 2 types :
                          incomplete recovery
                          complete recovery
Complete recovery means there will be no loss of data.
In case of incomplete recovery there will be loss of data.
Loss of current online redo log files leads to incomplete recovery.

In case of incomplete recovery we open the database with resetlogs.
Whenever we open the database with resetlogs, oracle creates brand new online redo logs starting with log sequence one and generates a new incarnation number for the database.
The only scenario where we open the database with resetlogs but it is still a complete recovery is the loss of the current control file.
Along with the database backup, it is also recommended to back up the pfile/spfile and archive logs.
During the process of recovery, oracle applies the generated archive logs in the same sequence.
Prior to 10g, whenever we opened the database with resetlogs, it was recommended to take a backup of the database or at least the current control file.
Prior to 10g, the child incarnation archive logs could not be applied to the parent incarnation (for this reason it was recommended to take a backup of the database).
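The incarnations of the database can be checked with a quick query :
SQL> select incarnation#,resetlogs_change#,status from v$database_incarnation;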

User Managed Physical Backups

Cold Backup
In case of physical backups we are going to back up the physical files of the database (CRD files : Controlfiles, Redo logs, Datafiles).
In order to take cold backup we need to shutdown the database gracefully by executing any one of the following commands:
                     shutdown normal
                     shutdown transactional
                     shutdown immediate
We cannot implement a cold backup strategy for 24/7 customers.
In case of cold backup we are going to copy CRD files to the backup destination by using "cp" command.
In case of cold backup the redo logs are not mandatory to back up, but if we want to perform a simple restore the redo logs are mandatory.

Data pump meta data importing

u1> drop table emp purge;

u1> select * from tab;

oracle> impdp dumpfile=meta.dmp remap_schema=scott:u1 directory=dp

u1> select * from tab;

u1> desc emp;

Data pump row level importing

u1> drop table emp purge;

u1> select * from tab;

oracle> impdp dumpfile=row.dmp remap_schema=scott:u1 directory=dp

                username : / as sysdba

u1> select * from emp;

Data pump table level importing

u1> drop table emp purge;

u1> drop table dept purge;

                the target user should already exist for a table level import.

oracle> impdp dumpfile=table.dmp remap_schema=scott:u1 directory=dp

                username : / as sysdba

u1> select * from tab;

Data pump user level importing

SQL> drop user scott cascade;

oracle> impdp dumpfile=user.dmp remap_schema=scott:scott directory=dp

                username : / as sysdba

                Unlike traditional logical importing, we do not need to create the skeleton user before importing the data.

SQL> alter user scott account unlock identified by tiger;

SQL> conn scott/tiger

scott> select * from tab;

 

oracle> impdp dumpfile=user.dmp remap_schema=scott:u1 directory=dp

                If the u1 user does not exist, impdp creates it.

                u1 inherits scott's password (tiger).

Data pump exporting : Meta data backup

oracle> expdp dumpfile=meta.dmp logfile=meta.log tables=emp content=metadata_only directory=dp job_name=metadata

                username : scott/tiger

scott> select count(*) from tab;

Data pump exporting : Row level backup

oracle> expdp dumpfile=row.dmp logfile=row.log tables=emp query=\"where deptno=20\" directory=dp

                username : scott/tiger

scott> select count(*) from tab;

Data pump exporting : Table level backup

SQL> grant read,write on directory dp to scott;

SQL> conn scott/tiger;

scott> select count(*) from tab;

oracle> expdp dumpfile=table.dmp logfile=table.log tables=emp,dept directory=dp

                username : scott/tiger

scott> select count(*) from tab;

Data pump exporting : User level backup

oracle> expdp dumpfile=user.dmp logfile=user.log schemas=scott directory=dp

                username : / as sysdba

SQL> select count(*) from tab;

                Check for the tables before starting the job and after ending the job.

Data pump exporting : Full database backup

oracle> expdp dumpfile=fulldb.dmp logfile=fulldb.log full=y directory=dp job_name=fulldatabase

                username : / as sysdba

SQL> select count(*) from tab;

                Check for the tables before starting the job and after ending the job.

 

                stopping a job :

                press Ctrl+C to switch to the interactive prompt, then stop the job.

                export> stop_job=immediate

                SQL> select * from dba_datapump_jobs;

                SQL> select * from tab;

 

                attaching a job(reinitiating a job):

                scott> conn / as sysdba

                SQL> select * from dba_datapump_jobs;

                oracle> expdp attach=fulldatabase

                                username : / as sysdba

                export> continue_client

                SQL> select * from dba_datapump_jobs;

Data pump exporting

root> su - oracle

oracle> export ORACLE_SID=venu

oracle> sqlplus / as sysdba

SQL> startup

 

root> su - oracle

oracle> export ORACLE_SID=venu

oracle> expdp dumpfile=sys.dmp logfile=sys.log

                username : / as sysdba

 

SQL> select count(*) from tab;

                Check for the tables before starting the job and after ending the job.

               

                default location :

                /u01/app/oracle/venu/dpdump/sys.dmp

                SQL> select * from dba_directories;

                oracle> cd /u01/app/oracle/venu/dpdump/

                oracle dpdump> ls

 

                different location :

                SQL> create directory dp as '/u03/dpdump';

                root> mkdir -p /u03/dpdump

                root> chown -R oracle:oinstall /u03/dpdump/

                root> chmod -R 775 /u03/dpdump/

                root> su - oracle

                oracle> export ORACLE_SID=venu

Logical Incremental Backup : Importing

SQL> drop user scott cascade;

 

oracle logical> imp file=thursday.dmp log=thursday.log inctype=system ignore=y full=y

                username : / as sysdba

SQL> conn scott/tiger

 

oracle logical> imp file=sunday.dmp log=sunday.log inctype=restore ignore=y full=y

                username : / as sysdba

 

root> su - oracle

oracle> export ORACLE_SID=venu

 

oracle logical> imp file=wedday.dmp log=wedday.log inctype=restore ignore=y full=y

                username : / as sysdba

 

scott> select * from tab;

 

oracle logical> imp file=thursday.dmp log=thursday.log inctype=restore ignore=y full=y

                username : / as sysdba

 

scott> select * from tab;

Logical Incremental Backup : Exporting

root> su - oracle

oracle> export ORACLE_SID=venu

oracle> sqlplus / as sysdba

SQL> startup

 

oracle logical> rm *

oracle logical> export ORACLE_SID=venu

oracle logical> exp file=sunday.dmp log=sunday.log inctype=complete direct=y statistics=none buffer=200000

                username : / as sysdba

                direct=y - bypass the SGA layer

                inctype=complete - full database backup with timestamp

                buffer=200000 - the size in bytes of the buffer used to fetch rows

 

SQL> conn scott/tiger

SQL> alter user scott account unlock identified by tiger;

scott> select * from tab;

scott> insert into emp select * from emp;

                error : unique constraint violated

scott> alter table emp disable primary key cascade;

scott> insert into emp select * from emp;

scott> commit;

scott> select count(*) from emp;

 

oracle logical> exp file=monday.dmp log=monday.log inctype=incremental direct=y statistics=none buffer=200000

                username : / as sysdba

                Here only the emp table is backed up, as only its data was changed.

                If any problem occurs, drop the scott schema and recreate the schema again.

SQL> conn scott/tiger

scott> alter table dept disable primary key cascade;

scott> insert into dept select * from dept;

scott> insert into salgrade select * from salgrade;

scott> commit;

scott> select count(*) from dept;

scott> select count(*) from salgrade;

 

oracle logical> exp file=tuesday.dmp log=tuesday.log inctype=incremental direct=y statistics=none buffer=200000

                username : / as sysdba

                Here the emp, dept and salgrade tables will be backed up, as there is a relation between emp and dept.

 

scott> insert into salgrade select * from salgrade;

scott> commit;

scott> select count(*) from salgrade;

oracle logical> exp file=wedday.dmp log=wedday.log inctype=cumulative direct=y statistics=none buffer=200000

                username : / as sysdba

                Here the emp, dept and salgrade tables will be backed up.

scott> insert into emp select * from emp;

scott> commit;

scott> select count(*) from emp;

oracle logical> exp file=thursday.dmp log=thursday.log inctype=incremental direct=y statistics=none buffer=200000

                username : / as sysdba

                Here only the emp table will be backed up.

 

scott> conn / as sysdba

SQL> select * from dba_exp_version;

SQL> select * from dba_exp_files;

SQL> select * from dba_exp_objects;

Data Pump

This is one type of logical backup introduced in oracle 10g version.

Utilities of data pump are expdp and impdp.

Data pump is much faster compared to the traditional logical backup since it uses APIs like dbms_datapump and dbms_metadata.

Data pump is a server side utility which means it generates the dump file at the server side irrespective of the location from where we initiated the job.

The pre-requisite for data pump is we need to create a directory at oracle level as well as at operating system level.

By default, a user can take the backup of his own objects; in the case of data pump, the user should also have read/write privilege on the directory.
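
Example, using the directory dp that is created later in these notes :

root> mkdir -p /u03/dpdump
root> chown -R oracle:oinstall /u03/dpdump
SQL> create directory dp as '/u03/dpdump';
SQL> grant read,write on directory dp to scott;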

The biggest advantage of data pump is we can stop/detach the job whenever there is a performance issue.

Once we reinitiate the job, it starts from the point where it was stopped earlier(which means we can detach and attach a job).

Once we initiate the data pump job, a master control process gets invoked and it keeps track of the status of the job in a table created by the job name.

It is recommended to give a name to every job that we initiate with data pump.

If we don't specify the job name, oracle creates a table with its own naming convention.

Once the job is completed, table gets dropped automatically.

We can also take the backup in parallel with the help of the "parallel" parameter.

We can compress the metadata as well as the business data in the dump file with the help of the "compression" parameter.

We can also estimate the space required in the backup destination without actually taking the backup (estimate_only=y).
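
A minimal sketch combining these options (file names and values are illustrative; compression=all is available from 11g and may need extra licensing) :

oracle> expdp dumpfile=full_%U.dmp logfile=full.log full=y directory=dp parallel=4 compression=all job_name=fulljob
oracle> expdp full=y directory=dp estimate_only=y logfile=estimate.log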

Data pump enables faster movement of data between databases.
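
For example, data can be pulled straight from a remote database without an intermediate dump file, assuming a database link called srcdb (illustrative) :

oracle> impdp directory=dp network_link=srcdb schemas=scott logfile=net_imp.log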

Useful in case of database migrations and upgrades.

Also useful in case of schema level refreshes.

 

Note : If we give the same name for the dump file, exp overwrites it without any warning or error messages, whereas expdp fails with the error message "file already exists".

 

Note : We cannot take incremental backups using data pump.

Restore Process

                1.inctype=system file=thursday.dmp
                2.inctype=restore file=sunday.dmp
                3.inctype=restore file=wedday.dmp
                4.inctype=restore file=thursday.dmp
inctype=system restores only system objects (metadata objects); it does not restore user objects or user data.
inctype=restore restores only user objects and user data, not system objects.
                SQL> select * from dba_exp_version;
                SQL> select * from dba_exp_objects;
                SQL> select * from dba_exp_files;

Logical Incremental Backups

In case of incremental backups, oracle takes the backup of only modified objects.

 

Note : The biggest disadvantage is that even if a single record is modified, the entire segment is backed up.

 

"inctype" is the parameter that we use to take the incremental backups. The possible options for inctype are :

                1.complete

                2.incremental

                3.cumulative

inctype=complete/incremental/cumulative

 

inctype=complete is the base backup for incremental backups. It takes the backup of entire database and it also maintains some information in the data dictionary tables which is required for next incrementals.

inctype=incremental takes the backup of all objects modified since the last incremental, cumulative or complete backup, whichever is most recent.

inctype=cumulative takes the backup of all objects modified since the last cumulative or complete backup, whichever is most recent.

Tablespace level Importing

SQL> select name from v$tablespace;

SQL> select name from v$datafile;

SQL> drop tablespace ssss including contents and datafiles;

SQL> create tablespace ssss

                datafile '/u02/app/oracle/ssss/ssss01.dbf'

                size 2m autoextend on;

SQL> alter user u1 default tablespace ssss;

SQL> conn u1/u1

u1> select * from tab;

 

oracle logical> imp file=ts.dmp full=y

                username : / as sysdba

Meta data level Importing

u1> drop table emp purge;

oracle logical> imp file=meta.dmp fromuser=scott touser=u1

                username : / as sysdba

u1> select * from tab;

u1> desc emp;

Row level Importing

u1> drop table emp purge;

 

oracle logical> imp file=row.dmp fromuser=scott touser=u1

                username : / as sysdba

u1> select * from tab;

u1> select count(*) from emp;

Table level Importing

oracle logical> ls

SQL> conn u1/u1

u1> select * from tab;

u1> drop table emp purge;

u1> drop table dept purge;

u1> select * from tab;

 

oracle logical> imp file=table.dmp fromuser=scott touser=u1

                username : / as sysdba

u1> select * from tab;

u1> drop table emp purge;

 

oracle logical> imp file=table.dmp tables=emp fromuser=scott touser=u1

                username : / as sysdba

u1> select * from tab;

User level Importing

root> su - oracle

oracle> export ORACLE_SID=ssss

oracle> sqlplus / as sysdba

SQL> startup

root> su - oracle

oracle> export ORACLE_SID=ssss

oracle> ls

oracle> cd logical

 

SQL> conn scott/tiger

scott> select * from tab;

scott> conn / as sysdba

SQL> drop user scott cascade;

SQL> grant connect,resource to scott identified by tiger;

SQL> conn scott/tiger

scott> select * from tab;

 

oracle logical> imp file=user.dmp log=user.log fromuser=scott touser=scott ignore=y

                username : / as sysdba

scott> select * from tab;

Logical Backup : Tablespace level backup

SQL> select name from V$tablespace;

SQL> select name from V$datafile;

SQL> create tablespace ssss

                datafile '/u02/app/oracle/ssss/ssss01.dbf'

                size 10m autoextend on;

SQL> grant connect,resource to u1 identified by u1;

SQL> alter user u1 default tablespace ssss;

SQL> conn u1/u1;

SQL> create table a(a number(3));

SQL> insert into a values(1);

SQL> /

                repeat several times.

SQL> commit;

SQL> select count(*) from a;

oracle logical> exp file=ts.dmp log=ts.log tablespaces=ssss

                username : / as sysdba

Logical Backup : Meta data backup

oracle logical> exp file=meta.dmp log=meta.log tables=emp rows=n

                username : scott/tiger

                rows=n means back up only the metadata, not the business data.

Logical Backup : Row level backup

oracle logical> exp file=row.dmp log=row.log tables=emp query=\'where deptno=30\'

                username : scott/tiger

Logical Backup : Table level backup

oracle logical> exp file=table.dmp log=table.log tables=emp,dept

                username : scott/tiger

oracle> export ORACLE_SID=ssss

oracle> sqlplus / as sysdba

SQL> alter user scott account unlock;

Logical Backup : User level backup

oracle logical> exp file=user.dmp log=user.log owner=scott

                username : / as sysdba

Logical Bkp : Full Backup


oracle> which exp
oracle> which imp
oracle> exp help=y
oracle> export ORACLE_SID=ssss
oracle> sqlplus / as sysdba
SQL> startup
SQL> exit
oracle> pwd
oracle> mkdir logical
oracle> ls
oracle> cd logical
oracle> export ORACLE_SID=ssss
oracle logical> exp file=full.dmp log=full.log full=y
                username : / as sysdba

Scopes when running with Pfile

SQL> create pfile from spfile;

SQL> shut immediate

oracle> cd $ORACLE_HOME/dbs

oracle> mv spfilessss.ora spfilessss.ora.bkp

oracle> export ORACLE_SID=ssss

oracle> sqlplus / as sysdba

SQL> startup

SQL> show parameter pfile;

SQL> show parameter undo_

SQL> alter system set undo_retention=900 scope=memory;

                If we don't specify the scope for a dynamic parameter when running with a pfile, it takes "memory" by default.

                "memory" means it is applicable for current.

SQL> show parameter undo_

Scopes when running with spfile

oracle> export ORACLE_SID=ssss

oracle> sqlplus / as sysdba

SQL> startup

SQL> select name,open_mode,log_mode from V$database;

SQL> select instance_name,status from V$instance;

SQL> show parameter undo_

SQL> desc V$parameter

SQL> select name,value,issys_modifiable from V$parameter where name='undo_retention';

                If issys_modifiable is TRUE or IMMEDIATE, it is a dynamic parameter.

                If issys_modifiable is FALSE, it is a static parameter.

SQL> show parameter spfile

SQL> show parameter pfile

SQL> alter system set undo_retention=1000 scope=both;

                If we don't specify the scope for a dynamic parameter when running with an spfile, it takes "both" by default.

                "both" means it is applicable for current and future instance.

SQL> show parameter undo_

SQL> show parameter memory_

SQL> select name,value,issys_modifiable from V$parameter where name='memory_max_target';

                If issys_mod is false, it means that its a static parameter.

SQL> alter system set memory_max_target=900m scope=spfile;

SQL> show parameter memory_

SQL> shut immediate

SQL> startup

                We need to shut down and start the database to see the effect of a changed static parameter if the database is running with an spfile.

SQL> show parameter memory_

Scopes of init parameters


If the database is running with pfile, we can change the dynamic parameters while the database is up and running but we cannot change the static parameters. If we want to change the static parameters we need to shutdown the database.

If the database is running with spfile, we can change the dynamic parameters while the database is up and running and we can also change the static parameters while the database is up and running but we need to restart the database to take effect.

SQL> alter system set someparameter=1000 scope=memory;

"memory" scope is used only when the database is running with pfile.
If the scope is "memory" it means it is applcable for the current instance and if we restarts the database it gets the old value.
If the database is running with pfile and if we are changing the dynamic parameter, by default it takes "memory" for the scope.

"both" and "spfile" are used when the database is running with spfile.
If the scope is "both" it means it is applicable for the current instance as well as the future instances.
If the scope is "spfile' it means that it is applicable for the future instance but not the current instance. The changes will be reflected in the future instance.
If the database is running with spfile and if we are changing the dynamic parameter, by default it takes "both" for the scope.

Logical Backup : Traditional Logical Backup


Logical backup cannot be considered as main backup option.
Logical backups are useful in case of database migrations (Windows to Linux and vice versa).
Logical backups are more useful in case of database upgrades (9i to 10g, etc.).
Logical backups are platform independent.
Logical backups are useful in case of Schema level refreshes.
Logical backups are useful in case of database re org, schema re org as well as table re org.

Note : If database is down we cannot take logical backup.

In order to use exp,expdp,imp and impdp, database should be up and running.

Note : Dump files generated by exp can only be imported by imp and not by impdp.

Files generated by logical backup are :
                1.dump file
                2.log file
Dump file contains meta data as well as business data.
Dump file is partial binary and partial text in nature.
It is recommended not to edit the dump file, either with the vi editor or with notepad.
Logical backup takes the backup of only logical objects of the database(not going to backup CRD files).
By default, logical backup is conventional (it goes through the SGA processing layer).
We can improve the performance (minimise the backup time) by specifying direct=y.
direct=y bypasses the SGA processing layer and writes into an operating system buffer, and from that buffer into the dump file.
The default buffer size is 64K; it is recommended to specify buffer=200000.
If the dump file size is very huge and there are many indexes, import by specifying indexes=n and later recreate the indexes.
By default, a user can take the backup of his/her own objects.
If a user wants to take the backup of the full database, the user should have the "exp_full_database" role. In the same way, in order to import, the user should have the "imp_full_database" role.
While importing we sometimes get undo related errors. To overcome this, specify commit=y.
commit=y commits after each array of records rather than importing all the records and issuing a single commit.

Note : To suppress DDL related errors, specify ignore=y.
If we specify consistent=y, oracle will not consider objects or data inserted after the job was initiated.
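
A minimal sketch pulling these options together (file names are illustrative) :
oracle> exp file=full.dmp log=full.log full=y direct=y statistics=none buffer=200000
oracle> imp file=full.dmp log=full_imp.log full=y ignore=y commit=y indexes=n buffer=200000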

Backup and Recovery



Before designing a proper backup strategy for the customer we need to consider the following.
1.The size of the customer database
2.The way the customer conducts business (9am-9pm, 24/7, 24/5, etc.)

Logical backup cannot be considered as a main backup option since we do not have the concept of recovery with the logical backup.
Recovery is possible only in case of user managed physical backups as well as RMAN backup.
In order to perform recovery, the database must run in archive log mode.
If the customer conducts his business in 24/7 environment, we cannot take cold backup/offline backup.
For 24/7 customers, we need to implement online backup strategy.
In order to take user managed physical backup we depend on "cp" command.
User managed physical backup does not support Incremental backups.
If the database size is very huge, everyday full backup is not possible. In this case we need to depend on incremental backups.
Incremental backup means taking the backup of only modified blocks.
RMAN supports Incremental backups.