Clone Database from one server to another
I think it is time to write another blog post. I will talk about an old but very useful way to clone a database from one server to another. In my particular case it was the best solution, one that perfectly fit all the requirements put on the table. Let me first explain what the task was and then share how it was achieved.
OK, there is a primary Oracle RAC 10.2 database (in 11g we have more power and flexibility with clonedb, but anyway) and a physical standby database. RMAN backups are taken on both sides. The database size is 800 GB.
On a monthly basis there is a business need for the primary database to be copied to a reporting server, with the data exactly as of the 30th/31st of the month. Up to now the database was cloned with a technique that is quite simple, but it required a lot of manual interaction from the DBA and, last but not least, it was not very effective in terms of how quickly the database became available for use: a full export was taken, copied to the target server, and then imported. The whole procedure usually took 2 days and always more than 1, as the dump alone is 120 GB. I started to think about improvements.
I tried to optimize the expdp/impdp approach by importing only the data and then creating the indexes in parallel with a script produced by Data Pump. The improvement was around 5 hours, but it still was not enough. This is the place to mention the next requirement: I had to create the new database on the very same server where the standby database resides, but with a different database name. Cool!
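For reference, the Data Pump variant looked roughly like this; the directory object, dump file name, and credentials are made up for illustration, not the exact commands used:

```
impdp system/password DIRECTORY=dump_dir DUMPFILE=full.dmp CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND

-- extract the index DDL into a script instead of executing it
impdp system/password DIRECTORY=dump_dir DUMPFILE=full.dmp INCLUDE=INDEX SQLFILE=create_indexes.sql
```

The generated create_indexes.sql can then be edited to raise the PARALLEL degree on each CREATE INDEX statement before running it, which is where the roughly 5-hour saving came from.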
The next direction was a database clone. Cloning a database is easy even if you want to rename it, so I came up with two choices:
1. duplicate the primary database using RMAN to the machine where the physical standby resides; this would be a different instance with a different database name from the source
2. duplicate the physical standby database using RMAN to the same machine, again as a different instance
The first option was successful but took more than 8 hours. In this case I used DUPLICATE TARGET DATABASE and converted the datafile and logfile paths (using *.db_file_name_convert and *.log_file_name_convert). One disadvantage was that I had to copy 350 GB of backups plus archivelogs. The reason was that they are located on OCFS, and on Windows these could not be shared through NFS. I also tried using BCV to take a snapshot of the LUN holding the OCFS volume and present it to the target server, but it was not accessible.
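The first option boiled down to something like the following sketch; the source name proddb and all paths are illustrative assumptions, not the exact values used:

```
-- in the auxiliary instance's pfile: translate ASM paths to local filesystem paths
db_file_name_convert=('+DATA/proddb/datafile/','H:\oracle\testdb\datafiles\')
log_file_name_convert=('+DATA/proddb/onlinelog/','H:\oracle\testdb\onlinelog\')

-- then, with the auxiliary instance started in NOMOUNT:
rman TARGET sys/password@proddb AUXILIARY /
RMAN> DUPLICATE TARGET DATABASE TO testdb;
```

RMAN restores the backup into the converted locations and opens the clone with the new name, which is why the 350 GB of backups and archivelogs had to be reachable from the auxiliary machine.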
I hit a wall with the second option. I was not able to restore from the standby database because of the standby controlfile, and if you try to restore a controlfile from the primary backup instead, that does not work either.
I wanted to find another, faster way to do this task. I chose the classical approach: stop the source database, take a cold backup of the datafiles, and recreate the database with a different name on the reporting server. Here I want to say thanks to Tim Hall and Joze Senegacnik. These guys threw me this idea during the Q&A session of Tim Hall's presentation at BGOUG, where he spoke about clonedb in Oracle 11g.
So I went a little bit further, because I could not afford to stop either the production primary database or the physical standby. I used a storage BCV to get a copy of the primary database on the test environment. There I can shut the database down and copy the database files, no matter how long it takes. Here are my successful actions, from start to finish:
1. Use this query to build an RMAN script that copies the datafiles from ASM to the filesystem
SELECT 'COPY DATAFILE ''' || name || ''' TO ''' ||
       REPLACE(name, '+DATA/datafile/', 'H:\oracle\testdb\datafiles\') || ''';'
  FROM v$datafile
 WHERE name LIKE '%DATA%'
 ORDER BY name;
2. Shut down the database cleanly, bring it back to MOUNT mode (RMAN needs the instance at least mounted to copy datafiles, and after a clean shutdown the files are consistent), and copy all datafiles out of ASM with RMAN using the script generated above

sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup mount;
SQL> exit

rman target /
RMAN> COPY DATAFILE '+DATA/datafile/test.ora' TO 'H:\oracle\testdb\datafiles\test.ora';
3. Get a pfile from the source database
create pfile='H:\oracle\testdb\pfiletest.ora' from spfile;
4. Modify this file with the new file locations, the new database name, adjusted memory parameters, and cluster_database=false, since the clone is a single-instance database.
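The original listing of the modified pfile did not survive, so here is a minimal illustrative sketch; every parameter value and path below is an assumption based on the rest of the post and must be adapted to the actual environment:

```
# illustrative pfile -- values are assumptions, adjust to your environment
db_name=TESTDB
control_files='H:\oracle\testdb\control01.ctl'
db_block_size=8192             # must match the source database
sga_target=2G                  # sized for the reporting server, not the RAC nodes
pga_aggregate_target=512M
cluster_database=false         # the clone is single-instance
undo_management=AUTO
undo_tablespace=UNDOTBS1
compatible=10.2.0.1.0
```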
5. On the source server, produce a create controlfile script
alter database backup controlfile to trace as 'H:\oracle\testdb\createcontrolfile.sql' resetlogs;
6. Modify this file heavily so that it corresponds to the new database: get rid of all empty and commented lines, edit the datafile paths to point to the new location, and so on. Its contents should look like this:
CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS FORCE LOGGING NOARCHIVELOG
LOGFILE
  GROUP 1 'H:\oracle\testdb\onlinelog\group_1.302.653686569' SIZE 300M,
  GROUP 2 'H:\oracle\testdb\onlinelog\group_2.303.653686633' SIZE 300M,
  GROUP 3 'H:\oracle\testdb\onlinelog\group_3.308.653686849' SIZE 300M
DATAFILE
  'H:\oracle\testdb\datafiles\test.ora'  -- list every copied datafile here
CHARACTER SET CL8MSWIN1251;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE 'H:\oracle\testdb\tempfile\temp.ora'
  SIZE 32767M REUSE AUTOEXTEND ON NEXT 104857600 MAXSIZE 32767M;
7. Unpresent the disk holding the copied datafiles from the test server's storage and present it to the target server (this avoids physically copying the files yet again)
8. The target machine runs Windows Server, so create the Oracle service with the ORADIM Windows utility
oradim -new -sid TESTDB -INTPWD password -STARTMODE AUTO
9. Final step: create the spfile, start the instance in NOMOUNT, and run the create controlfile script

sqlplus / as sysdba
SQL> create spfile from pfile='H:\oracle\testdb\pfiletest.ora';
SQL> startup nomount;
SQL> @H:\oracle\testdb\createcontrolfile.sql
Overall statistics showed that 620 GB of datafiles were copied in 5 hours, which is 5 times better than the previous method and requires far less interaction. The tasks are automated. Simple and quick.