Archive for the ‘DBA’ Category

BGOUG 2015 Autumn Conference

November 25, 2015

The autumn BGOUG conference is now over. I planned to write a post immediately after the end of the conference, while the enthusiasm and spirit were still fresh and buzzing, but luckily that feeling stays around for quite a while 🙂

This year it was the whole family attending the conference and my daughter Alex had an early Oracle start listening to Jonathan Lewis talking about execution plans.

This time we had about 370 people attending BGOUG. My presentation on Three Sides of the Coin with Oracle Data Integration – ODI, OGG, EDQ went very well and I had a full room of people. I also gave a live 30-minute demo, which went smoothly, and I managed to fit it all into the 60 minutes. I attended Jonathan Lewis's, Joze Senegacnik's and Gurcan Orhan's sessions; not as many as I initially intended. I planned to listen to Heli, Julian Dontcheff and Martin Widlake at least, but I couldn't make it. I am sure I will see Martin at the LOB events in London anyway.

It was really nice to catch up with old friends and colleagues, and also to make new ones. I am glad I met Osama Mustafa and Joel Goodman in person. I hadn't seen Julian for quite a while and it was a real pleasure to see him again and meet his Accenture colleagues; same for Joze and Lili. Obviously BGOUG is like a big family (without the fights), and quite an international one.

Big thanks to Milena for organizing this great event. The whole team has been doing a great job for all these years now. It was a really nice idea to start printing the number of visits and tracking it for each attendee. I now have 16 and Joze has 14, and I have to be very careful with that 🙂

 

All in all, it was a great event: great presentations, great people, a very nice venue, hotel, food, and a party till midnight. I would say keep up the good work!

 

Cheers,

Maria

 

 

Categories: DBA

ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated

November 5, 2012

Just a quick note for when you get the error below while trying to create a trigger, for example:

 

CREATE OR REPLACE TRIGGER AGE_GROUP_TRG
BEFORE INSERT ON AGE_GROUP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
  SELECT AGE_GROUP_SEQ.nextval INTO :NEW.AGE_GROUP_ID FROM dual;
END;
/

 

ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kqlidchg1], [], [], [], [], [], [], [], [], [], [], []
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated
00603. 00000 - "ORACLE server session terminated by fatal error"
*Cause:    An ORACLE server session is in an unrecoverable state.
*Action:   Login to ORACLE again so a new server session will be created

 

 

The root cause is in the PL/Scope settings. If you use SQL Developer, try changing PLSCOPE_SETTINGS to NONE from the menu Tools->Preferences->Database->PL/SQL Compiler->PLScope Identifiers.
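
Outside SQL Developer, the same can be done at session level; a minimal sketch using the documented PLSCOPE_SETTINGS parameter and the USER_PLSQL_OBJECT_SETTINGS view:

ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:NONE';

-- check which setting existing objects were compiled with
SELECT name, type, plscope_settings
FROM   user_plsql_object_settings
WHERE  name = 'AGE_GROUP_TRG';

After changing the setting, recompiling the trigger in the same session should no longer hit the internal error.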

There are quite a lot of discussions about this, like this thread.

 

 

Regards,

Maria

 

Categories: DBA

Oracle OSWatcher tool and yast for EM Grid Control

February 16, 2012

 

These days I configured the Oracle OSWatcher tool and the yast package for EM Grid Control, which are used to manage and monitor Oracle servers. I want to share how to quickly configure and start using them on Linux.

 

 

OSWatcher invokes system utilities like ps, top, iostat, vmstat and netstat, and collects data according to the specified parameters. You can download it from Metalink.

Unzip it to the OSWatcher directory you will use ($OSWATCHER_HOME) and extract the tarball:

tar -xvf oswbb4.0.tar

 

 

OSWatcher has been renamed to OSWatcher Black Box to avoid confusion with the many other tools of the same name. OSWatcher Black Box Analyzer (OSWbba) is a graphing and analysis utility which comes bundled with OSWbb v4.0.0 and higher. For OSWbba you need Java version 1.4.2 or higher.

Add the following lines to the profile used by OSWatcher:

export JAVA_HOME=$ORACLE_HOME/jdk/jre
export PATH=$JAVA_HOME/bin:$PATH
alias oswatch='java -jar $OSWATCHER_HOME/oswbba.jar -i $OSWATCHER_HOME/archive'

 

 

Let's start it with nohup in the background and configure it to take snapshots with the system utilities every 5 minutes (300 seconds), keeping the last 24 hours of data:

nohup ./startOSWbb.sh 300 24 &
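
To stop the collection later, the OSWbb bundle ships a stop script alongside the start script (as delivered in the tarball):

./stopOSWbb.sh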

 

What it does is create subdirectories in $OSWATCHER_HOME/archive, such as oswiostat, oswmeminfo, oswmpstat, oswnetstat, oswprvtnet, oswps, oswslabinfo, oswtop and oswvmstat, where the resulting .dat files with the data collected for each hour reside. OSWbba parses all the archive files, and you invoke it via the alias created in the profile:

$ oswatch

Starting OSW Black Box Analyzer V4.0
OSWatcher Black Box Analyzer Written by Oracle Center of Expertise
Copyright (c)  2012 by Oracle Corporation

Parsing Data. Please Wait…

Parsing file …iostat_12.02.16.0100.dat …
Parsing file …vmstat_12.02.16.0100.dat …

Parsing Completed.

Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs

Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files

Enter L to Specify Alternate Location of Gif Directory
Enter T to Specify Different Time Scale
Enter D to Return to Default Time Scale
Enter R to Remove Currently Displayed Graphs
Enter P to Generate A Profile
Enter A to Analyze Data
Enter Q to Quit Program

Please Select an Option:

 

 

Yast (Yet Another Setup Tool) is needed if you want to administer a Linux host through Enterprise Manager Grid Control. Download yast from here.

tar -xvf yast_el5_x86_64.tar
cd yast_el5_x86_64
./install.sh

 

Start it from the command line:

/sbin/yast

Here is how it looks in EM Grid Control.

Regards,
Maria

 

 

Categories: DBA, Unix/Linux

Hung Auto SQL Tuning Task Messages

February 10, 2012

 

In Oracle 11.2.0.3 an alert related to the metric "Generic Operational Error" may still appear in EM. Occasionally, when the Automatic SQL Tuning task runs, the following messages may appear in the alert log:

Process 0x%p appears to be hung in Auto SQL Tuning task
 Current time = %u, process death time = %u
 Attempting to kill process 0x%p with OS pid = %s
 OSD kill succeeded for process 0x%p

You can have a look at "How to Avoid or Prevent Hung Auto Tuning Task Messages [ID 1344499.1]" in Metalink, and at this post.

The explanation is that the AUTO SQL TUNING TASK has been over-running, and as a protective measure it is automatically killed. Thus there is no fix for this; the solution is to disable the job and manually execute it when needed. Here is how to do that:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
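
If you later want to run the tuning task by hand, or to switch the automated task back on, something along these lines should work (a sketch based on the documented DBMS_AUTO_SQLTUNE and DBMS_AUTO_TASK_ADMIN packages):

-- one ad-hoc execution of the automatic tuning task
DECLARE
  l_exec_name VARCHAR2(128);
BEGIN
  l_exec_name := DBMS_AUTO_SQLTUNE.EXECUTE_AUTO_TUNING_TASK;
END;
/

-- re-enable the automated task
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/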

 

The Automatic SQL Tuning task:

SELECT task_name, description, status, last_modified
FROM   dba_advisor_tasks
WHERE  task_name LIKE '%AUTO_SQL_TUNING_TASK%';

 

is part of the Automated Maintenance Tasks, together with Optimizer Statistics Gathering and the Segment Advisor. You can also see this in Oracle EM Grid Control->Server->Automated Maintenance Tasks.

I have a similar post, but there the error in the alert log is different.

 

Cheers,
Maria

 

Categories: DBA

Clone Database from one server to another

December 13, 2011

 

I think it is time to write another blog post. I will talk about an old but very useful way to clone a database from one server to another. In my particular case it was the best solution that perfectly fit all the requirements put on the table. Let me explain what the task is and then share how it was achieved.

OK, there is a primary Oracle RAC 10.2 database (in 11g we have more power and flexibility with clonedb, but anyway) and a standby database. On both sides there is an RMAN backup. The database size is 800GB.

On a monthly basis there is a business need for the primary database to be copied to a reporting server. The data there should be exactly as of day 30/31 of the month. Up to now the way this database was cloned was quite simple as a technique, but unfortunately not very easy to do, as it required a lot of manual interaction from the DBA and, last but not least, was not very effective in terms of the time the database is available for use. A full export was taken, then copied to the target server and imported. The whole procedure sometimes took more than one day, usually two, as the dump is 120GB. I started to think about improvements.

I tried to optimize the expdp/impdp: import only the data, then create the indexes in parallel with a script produced by Data Pump. The improvement was around 5 hours, but still not enough. Here it is time to mention the next requirement: I had to create this new database on the very same server where the standby database is, but with a different database name. Cool!

The next direction was clone database. Cloning a database is easy even if you want to rename it, so I came up with two choices:

  1. duplicate the primary database using RMAN to the machine where the physical standby resides; this would be a different instance with a different database name from the source

  2. duplicate the standby database (physical) using RMAN to the same machine, again a different instance

The first one was successful and took more than 8 hours. In this case I used duplicate target database and converted the datafiles and logfiles (using *.db_file_name_convert and *.log_file_name_convert). One disadvantage was that I had to copy 350GB of backup + archivelogs. The reason was that they are located on OCFS, and on Windows these could not be shared through NFS. I also tried using BCV to take a snapshot of the LUN where the OCFS is and present it to the target server, but it wasn't accessible.

I hit a wall with the second option. I was not able to restore from the standby database because of the standby controlfile. If you try to restore a controlfile from the primary backup, again it does not work.

I wanted to find an even faster way to do this task. I chose the classical approach: stop the source database, take a cold backup of the datafiles and recreate the database with a different name on the reporting server. Here I want to say thanks to Tim Hall and Joze Senegacnik. These guys threw me this idea during the Q&A section of Tim Hall's presentation at BGOUG, where he spoke about clonedb in Oracle 11g.

I went a little bit further, because I cannot afford to stop either the production primary database or the physical standby. I used storage BCV to get a copy of the primary database on the test environment. There I can shut down the database and copy the database files for as long as it takes. Here are my successful actions to the end:

1. Use this query to build an RMAN script that copies the datafiles from ASM to the filesystem:

   select 'COPY DATAFILE ''' || name || '''' || ' TO ''' || REPLACE(name, '+DATA/datafile/', 'H:\oracle\testdb\datafiles\') || '''' || ';'
   from v$datafile where name like '%DATA%' order by name;

 

2. Shut down the database and get a copy of all datafiles from ASM with RMAN, using the script generated above:

   sqlplus /nolog
   conn / as sysdba
   startup mount
   rman target /
   COPY DATAFILE '+DATA/datafile/test.ora' TO 'H:\oracle\testdb\datafiles\test.ora';
   ......

 

3. Get a pfile from the source database:

create pfile='H:\oracle\testdb\pfiletest.ora' from spfile;

 

4. Modify this file with the new locations, the new database name, memory parameters, and non-cluster settings. Its contents should be as follows:

       ....
      *.audit_file_dest='H:\oracle\testdb\adump'
      *.background_dump_dest='H:\oracle\testdb\bdump'
      *.cluster_database=false
      *.control_files='H:\oracle\testdb\controlfile\controlfile1.ctl'
      *.core_dump_dest='H:\oracle\testdb\cdump'
      *.db_block_size=8192
      *.db_create_file_dest='H:\oracle\testdb\datafiles'
      *.db_create_online_log_dest_1='H:\oracle\testdb\onlinelog'
      *.db_name='testdb'
      *.db_recovery_file_dest='H:\oracle\testdb\flashback'
      *.db_unique_name='testdb'
      *.user_dump_dest='H:\oracle\testdb\udump'
      ....

 

5. On the source server, produce a create-controlfile script:

alter database backup controlfile to trace as 'H:\oracle\testdb\createcontrolfile.sql' resetlogs;

 

6. Modify this file heavily to correspond to the new database.

Get rid of all empty lines and all commented lines, edit the datafiles' paths to point to the new location, etc. Its contents should look like this:

      STARTUP NOMOUNT
      CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS FORCE LOGGING NOARCHIVELOG
      MAXLOGFILES 192
      MAXLOGMEMBERS 3
      MAXDATAFILES 1024
      MAXINSTANCES 32
      MAXLOGHISTORY 5840
      LOGFILE
       GROUP 1 'H:\oracle\testdb\onlinelog\group_1.302.653686569'  SIZE 300M,
       GROUP 2 'H:\oracle\testdb\onlinelog\group_2.303.653686633'  SIZE 300M,
       GROUP 3 'H:\oracle\testdb\onlinelog\group_3.308.653686849'  SIZE 300M
      DATAFILE
       'H:\oracle\testdb\datafiles\system.279.653621391',
       ...
       CHARACTER SET CL8MSWIN1251
      ;

      ALTER DATABASE OPEN RESETLOGS;

      ALTER TABLESPACE TEMP ADD TEMPFILE 'H:\oracle\testdb\tempfile\temp.ora'
       SIZE 32767M REUSE AUTOEXTEND ON NEXT 104857600 MAXSIZE 32767M;

 

7. Unpresent the disk from the storage on the test server where the datafiles were copied, and present it to the target server (this eliminates the extra copy of the files).

 

8. The target machine is a Windows server. Create the Oracle service in Windows with the ORADIM utility:

oradim -new -sid TESTDB -INTPWD password -STARTMODE AUTO

 

9. Final step

    sqlplus /nolog
    conn / as sysdba
    create spfile from pfile='H:\oracle\testdb\pfiletest.ora';
    @'H:\oracle\testdb\createcontrolfile.sql'
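
Once the controlfile script completes, a quick sanity check (a minimal verification query) confirms the new name and open mode:

    select name, open_mode from v$database;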

 

Overall statistics showed that the 620GB of datafiles were copied in 5 hours, which is about 5 times better than the previous method, and the approach requires less interaction. The tasks are automated. Simple and quick.

 

Regards,
Maria

Categories: DBA

Visiting SIOUG

September 30, 2011

 

It has been a busy period for me these last days, both at work and in life. I am glad I spent a week in Slovenia and around. I gave a presentation about Oracle GoldenGate at SIOUG; it was short on slides, with a 30-minute demo. I uploaded it to the whitepapers section. Thanks to Joze, Janez and the whole SIOUG team for keeping the door open all the time.

 

I met old friends there and also had the chance to meet Debra Lilley and Doug Burns in person. I like Debra's accent and Doug's sense of humour very much. I would say SIOUG, BGOUG, UKOUG and HROUG spent a wonderful time together. Real collaboration between the Oracle user groups 🙂

 

Regards,

Maria

 

Categories: DBA

Data Masking with Oracle Data Pump

August 5, 2011

If you expose production data to a test, QA or UAT environment, most probably you'll need to hide sensitive data. You can do this in different ways. One of them is to use Oracle Data Pump to mask the data. You may choose to do that on the very export step and then on the import step, or just mask the sensitive data when you do the import in the target schema(s).

Masking should protect your data but not stop the testing process. Ensure you do not lose realistic lookup data for testing; mask only what you need to mask and hide only sensitive data. If you can expose primary and foreign key columns, then don't mask them. If not, ensure you keep the integrity of the relations: for example, when you mask the primary key column, you should use the same mask for the foreign key column.
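
For such key columns the mask has to be deterministic, so the same input always yields the same output on both sides of the relation. A hypothetical sketch (the function name and the ORA_HASH approach are my own illustration, not part of the original setup):

CREATE OR REPLACE FUNCTION mask_key (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  v_hash NUMBER;
BEGIN
  -- ORA_HASH is deterministic for a given input and bucket count, so the
  -- masked PK and FK values stay consistent across related tables
  SELECT ORA_HASH(p_value, POWER(2, 30)) INTO v_hash FROM dual;
  RETURN 'K' || TO_CHAR(v_hash);
END;
/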

Oracle Data Pump provides a way to achieve this. The REMAP_DATA parameter was introduced in Oracle Database 11g. It uses a user-defined remapping function to rewrite data. If you want to mask multiple columns in the same run, the REMAP_DATA parameter is specified several times. I chose to mask only the ADDRESS column in my table. If you have a dump file, you can choose to mask the data only when doing the import. I will show this below:

Syntax

REMAP_DATA=[(schema.tablename.column_name:schema.pkg.function)]

impdp user/pass
TABLES=SCHEMA_NAME.TABLE_NAME
TABLE_EXISTS_ACTION=replace
DUMPFILE=EXP.DPUMP
DIRECTORY=IMPORT
REMAP_TABLESPACE=(SOURCE_TABLESPACE:TARGET_TABLESPACE)
REMAP_DATA=SCHEMA_NAME.TABLE_NAME.ADDRESS:SCHEMA_NAME.REMAP_UTILS.MASKVARCHAR
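
As mentioned above, the same parameter also works at export time; a sketch using the same (placeholder) schema and table names:

expdp user/pass
TABLES=SCHEMA_NAME.TABLE_NAME
DUMPFILE=EXP.DPUMP
DIRECTORY=IMPORT
REMAP_DATA=SCHEMA_NAME.TABLE_NAME.ADDRESS:SCHEMA_NAME.REMAP_UTILS.MASKVARCHAR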

This is my remapping function:

CREATE OR REPLACE PACKAGE REMAP_UTILS AS

FUNCTION maskvarchar(ADDRESS VARCHAR2) RETURN VARCHAR2;

END;
/

CREATE OR REPLACE PACKAGE BODY REMAP_UTILS AS

FUNCTION maskvarchar(ADDRESS VARCHAR2) RETURN VARCHAR2
IS
v_string VARCHAR2(120 BYTE) := '';
BEGIN

v_string := dbms_random.string('A', 10);
RETURN v_string;
END;

END REMAP_UTILS;
/

Let’s see the result after the import.

SQL> conn / as sysdba
Connected.
SQL> Spool on
SQL> Spool c:\spooltext.txt
SQL> select ADDRESS from TABLE_NAME
2  where rownum <=10;

ADDRESS
————————————

htkphShybP
bNBNwCIsdY
PhfDFAnZbO
wrTyPtxPjC
spfuDBPhRJ
JUIrqmuXPJ
hequOqHydf
EAiITTjvkX
JwujGneNVe
CeuZwGgmsh

10 rows selected.

SQL> Spool off
SQL>

The address column is masked successfully. As a conclusion, I would say it is a great feature.

Cheers,
Maria

Categories: DBA

Oracle GoldenGate 11g direction

June 15, 2011

Oracle GoldenGate is great for heterogeneous environments, replicating and integrating data across different systems.

This post is about Oracle GoldenGate 11g and implementing a simple replication setup. Installing and configuring GoldenGate is quite straightforward, so I will not cover it here; I will just mention that the source database should be in archivelog mode and have supplemental logging enabled, and that a GoldenGate user gg_owner has to be created. This post shows a simple example of replicating a source schema to a target schema, both located in the same Oracle Database 11.2.0.2. The database-side prerequisites are sketched right below.
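
For reference, the prerequisites map to statements along these lines (a minimal sketch; the exact privileges for gg_owner depend on your setup and the GoldenGate documentation):

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> SELECT log_mode, supplemental_log_data_min FROM v$database;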

1. Users

SQL> CREATE USER SOURCEDB
IDENTIFIED BY …
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

User created.

SQL> GRANT CONNECT, RESOURCE TO SOURCEDB;

Grant succeeded.

SQL> CREATE USER TARGETDB
IDENTIFIED BY 12345678
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

User created.

SQL> GRANT CONNECT, RESOURCE TO TARGETDB;

Grant succeeded.

The purpose is to create replication between the two schemas. GoldenGate is installed in /oracle/gg11.

2. Check existing configuration

[oracle@orclbox ~]$ cd /oracle/gg11
[oracle@orclbox gg11]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
Linux, x64, 64bit (optimized), Oracle 11g on Apr 21 2011 22:42:14

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

GGSCI (orclbox) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING

GGSCI (orclbox) 2> stop manager
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y

Sending STOP request to MANAGER …
Request processed.
Manager stopped.

GGSCI (orclbox) 3> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     STOPPED

GGSCI (orclbox) 4> start manager

Manager started.

GGSCI (orclbox) 5> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING

GGSCI (orclbox) 6> edit params mgr
PORT 7810
~
“/oracle/gg11/dirprm/mgr.prm” 1L, 10C

The manager is configured on default port 7809. In this case I prefer not to use the default one and configure the manager on port 7810.

3. Create extract s_extr on the source

GGSCI (orclbox) 7> add extract s_extr, tranlog, begin now

2011-06-13 WARNING OGG-01752  Cannot register EXTRACT S_EXTR with database because no database login was provided. You can manually register this group later with the REGISTER EXTRACT <GROUP NAME> LOGRETENTION command. Issue DBLOGIN first.
EXTRACT added.

GGSCI (orclbox) 8> add exttrail /oracle/gg11/dirdat/lt, extract s_extr
EXTTRAIL added.

GGSCI (orclbox) 9> edit params s_extr

extract s_extr
--connection to database
userid gg_owner, password
TRANLOGOPTIONS DBLOGREADER
rmthost orclbox, mgrport 7810
rmttrail /oracle/gg11/dirdat/lt
ddl include mapped objname sourcedb.*;
table sourcedb.*;
~
“/oracle/gg11/dirprm/s_extr.prm”


GGSCI (orclbox) 10> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     S_EXTR      00:00:00      00:10:38

4. First add a checkpoint table on the destination and then create the replicat on the destination (in this case in the same database; Oracle GoldenGate supports this)

GGSCI (orclbox) 11> edit params ./GLOBAL

GGSCHEMA gg_owner
CHECKPOINTTABLE gg_owner.checkpoint

GGSCI (orclbox) 12>dblogin userid gg_owner

Password:
Successfully logged into database.

GGSCI (orclbox) 13>add checkpointtable gg_owner.checkpoint

GGSCI (orclbox) 14> add replicat t_rep, exttrail /oracle/gg11/dirdat/lt,checkpointtable gg_owner.checkpoint
REPLICAT added.

GGSCI (orclbox) 15>edit params t_rep

replicat t_rep
ASSUMETARGETDEFS
--database login
userid gg_owner, password …
discardfile /oracle/gg11/discard/t_rep_disc.dat, append, megabytes 10
--ddl support
DDL
--table mapping
map sourcedb.*, target targetdb.*;

5. Time to test extraction and replication

GGSCI (orclbox) 16> info all

Program     Status      Group       Lag

MANAGER     RUNNING
EXTRACT     STOPPED     S_EXTR      00:00:00
REPLICAT    STOPPED     T_REP       00:00:00

In case you need to investigate why something is not working, you can use 'info <process> <group>, detail' (for example 'info extract s_extr, detail'), where the process can be manager, extract or replicat.

GGSCI (orclbox) 17> start extract s_extr

Sending START request to MANAGER …
EXTRACT S_EXTR starting

GGSCI (orclbox) 18> start replicat t_rep

Sending START request to MANAGER …
REPLICAT T_REP starting

GGSCI (orclbox) 19> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     S_EXTR      00:01:15      00:00:07
REPLICAT    RUNNING     T_REP       00:00:00      00:00:04

6. Now let’s start some activity on the source and see the replication in action

In the source schema

SQL> create table sourcedb.sync_table( sync_id number, sync_desc varchar2(50));

Table created.

-- check the target schema and see it is already there

SQL> select * from targetdb.sync_table;

no rows selected

SQL> desc targetdb.sync_table;
Name                                      Null?    Type
—————————————– ——– —————————-
SYNC_ID                                            NUMBER
SYNC_DESC                                          VARCHAR2(50)

Insert a test row in the source table and ensure it populates to the target

SQL> insert into sourcedb.sync_table( sync_id , sync_desc ) values ( 1, 'Description1');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from targetdb.sync_table;

SYNC_ID SYNC_DESC
———- ————————————————–
1 Description1

-- add new clob column to the source table and check the replicat

SQL> alter table sourcedb.sync_table add ( sync_detail clob);

Table altered.

SQL> desc targetdb.sync_table;
Name                                          Null?    Type
———————————————————————
SYNC_ID                                                NUMBER
SYNC_DESC                                              VARCHAR2(50)
SYNC_DETAIL                                            CLOB

SQL>

-- update value for clob sync_detail column

SQL> update sourcedb.sync_table set sync_detail = ' DETAIL1' where sync_id = 1;

1 row updated.

SQL> commit;

Commit complete.

SQL> select * from targetdb.sync_table;

SYNC_ID SYNC_DESC                                          SYNC_DETAIL
———- ————————————————– ————-
1 Description1                                        DETAIL1

SQL>

Great, the synchronization is successful and the CLOB value is replicated.

Now let's go on, add a primary key on the source table, and see what happens:

SQL> alter table sourcedb.sync_table add constraint SYNC_TABLE_PK primary key (SYNC_ID);

Table altered.

SQL> desc targetdb.sync_table;
Name                                                                             Null?    Type
——————————————————————————————————–
SYNC_ID                                                                          NOT NULL NUMBER
SYNC_DESC                                                                                 VARCHAR2(50)
SYNC_DETAIL                                                                               CLOB

SQL> select index_name from all_indexes where owner = 'TARGETDB' and table_name = 'SYNC_TABLE';

INDEX_NAME
——————————
SYNC_TABLE_PK

We can see the primary key is created in the target table as well. We see both the DDL and DML effects of replication with Oracle GoldenGate.

Cheers,

Maria

Categories: DBA

Using Flashback Database

June 9, 2011

 

I have played again with Oracle Flashback, this time on Oracle Database 11.2; last time I used it on Oracle Database 10g. It is good to add extra protection to your production database before huge application-level upgrades. Though we use storage snapshots, we can also use Oracle Flashback Database before big upgrades. You know, safety first.

As the Oracle documentation says (Oracle keeps this section in both 10.2 and 11.2):

“In practice, guaranteed restore points provide a useful alternative to storage snapshots. Storage snapshots are often used to protect a database before risky operations such as large-scale database updates or application patches or upgrades. Rather than creating a snapshot or duplicate database to test the operation, you can create a guaranteed restore point on a primary or physical standby database. You can then perform the risky operation with the certainty that the required flashback logs are retained.”

 

So here is what we can do step by step:
1. As a prerequisite, the Oracle database should be in archivelog mode

[oracle@orclbox ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.2.0 Production

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> alter system set log_archive_dest_1='location=/oracle/test_flash';

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1653518336 bytes
Fixed Size                  2227032 bytes
Variable Size            1275069608 bytes
Database Buffers          369098752 bytes
Redo Buffers                7122944 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/test_flash
Oldest online log sequence     2
Next log sequence to archive   4
Current log sequence           4
SQL>

 

1.1 Configure FRA

SQL> alter system set db_recovery_file_dest_size=4g;

System altered.

SQL> alter system set db_recovery_file_dest='/oracle/test_flash';

System altered.

 

Note: If it is a RAC database, the FRA should point to shared storage. Ensure you estimate db_recovery_file_dest_size appropriately and have enough space; the STORAGE_SIZE column of V$RESTORE_POINT can help you with that.

 

2. Flashback configuration

SQL> shutdown immediate
SQL> alter system set db_flashback_retention_target=1440; /*1 day retention*/
SQL> alter database flashback on;
SQL> alter database open;

 

3. Flashback time – as in the Calvin Harris song, "this is like a flashback" 🙂

3.1 Create guaranteed restore point

Before we start the upgrade, create a guaranteed restore point

SQL> create restore point before_upg guarantee flashback database;

Restore point created.

-- to view the restore points; you can filter them by type - guaranteed or normal
SQL> select NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE
2  from V$RESTORE_POINT
3  where GUARANTEE_FLASHBACK_DATABASE='YES';

 

3.2 Monitor flashback via v$flashback_database_log

SQL> select oldest_flashback_scn from v$flashback_database_log;

OLDEST_FLASHBACK_SCN
——————–
2882481
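
The same view also shows the current flashback log size and an estimate for the configured retention target (a minimal sketch; the columns come from v$flashback_database_log):

SQL> select retention_target, flashback_size, estimated_flashback_size
  2  from v$flashback_database_log;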

 

4. Flashback database example

SQL> create table flashback_test ( col1 number);

Table created.

SQL> insert into flashback_test values(1);

1 row created.

SQL> insert into flashback_test values(2);

1 row created.

SQL> insert into flashback_test values(3);

1 row created.

SQL> commit;

Commit complete.

SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
———– ————————————————————-
2955736 2011-06-08 13:05:53.000000000

SQL> truncate table flashback_test;

Table truncated.

SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
———– —————————————————————-
2955809 2011-06-08 13:07:41.000000000

-- table flashback_test is dropped by mistake
SQL> drop table flashback_test;

Table dropped.

SQL> select * from flashback_test;
select * from flashback_test
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
———– —————————————————————–
2955891 2011-06-08 13:09:35.000000000

 

If you want to revert the database to SCN=2955736, where the flashback_test table had 3 rows:

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 2955736;
SQL> alter database open resetlogs;

SQL> select * from flashback_test;

COL1
———-
1
2
3

 

Done, the reverse operation is successful. You could also use FLASHBACK TABLE in this case; I will show that later in this post.
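
Note that for the upgrade scenario itself, the flashback target can be the guaranteed restore point created in step 3.1 rather than a raw SCN; the mechanics are the same (a minimal sketch):

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_upg;
SQL> alter database open resetlogs;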

Now let's try to revert the database to the moment right after the truncate was executed, SCN=2955809:

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 2955809;
SQL> alter database open resetlogs;

SQL> select * from flashback_test;

COL1
———-
1
2
3

 

Note: Rows are still here.

Now let’s try with delete:

SQL> delete from flashback_test;

3 rows deleted.

SQL> commit;

Commit complete.

SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
———– —————————————————————–
2956216 2011-06-08 13:24:39.000000000

SQL> insert into flashback_test values(4);

1 row created.
SQL> commit;

SQL> select * from flashback_test;

COL1
———-
4

-- Now let's try and reverse up to the moment after the delete
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 2956216;
SQL> alter database open resetlogs;

SQL> select * from flashback_test;

no rows selected

SQL>

 

Note: Correct!

 

When you are finished and all went well with the upgrade, drop the guaranteed restore point:

SQL> drop restore point before_upg;

Restore point dropped.

Note: Don't forget to switch flashback database off when finished, so that flashback logs don't fill up your FRA.
SQL> select flashback_on from v$database;

FLASHBACK_ON
——————
YES

SQL> alter database flashback off;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
——————
NO

 

5. You can also use flashback table; let's use the user test and create the same table

SQL> create table test.flashback_test ( col1 number);

Table created.

-- in order not to get ORA-08189: cannot flashback the table because row movement is not enabled
SQL> alter table test.flashback_test enable row movement;

Table altered.

SQL> insert into test.flashback_test values(1);

1 row created.

SQL> insert into test.flashback_test values(2);

1 row created.

SQL> insert into test.flashback_test values(3);

1 row created.

SQL> commit;

SQL> select current_scn, scn_to_timestamp(current_scn) from v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)
———– ——————————————————————–
2957732 2011-06-08 13:58:15.000000000

SQL> delete from test.flashback_test;

3 rows deleted.

SQL> commit;

Commit complete.

SQL> select * from test.flashback_test;

no rows selected

SQL> flashback table test.flashback_test to scn 2957732;

Flashback complete.

SQL> select * from test.flashback_test;

COL1
———-
1
2
3

 

You don’t need to enable row movement for flashback drop as the objects are recovered from the recycle bin:

SQL> alter table test.flashback_test disable row movement;

Table altered.

SQL> select table_name, row_movement FROM all_tables where owner='TEST' and table_name='FLASHBACK_TEST';

TABLE_NAME                     ROW_MOVE
—————————— ——–
FLASHBACK_TEST                 DISABLED

SQL> FLASHBACK TABLE test.flashback_test TO BEFORE DROP;

Flashback complete.

SQL> select * from test.flashback_test;

COL1
———-
1
2
3

 

Enjoy 🙂

 

 

Regards,

Maria

 

Categories: DBA

Modify the attribute AUTO_START to “always”

With Oracle 11.2, the database auto-start policy in the clusterware is 'restore', which means that the clusterware will remember the last state of the database. If the database was stopped normally, then on the next restart of the clusterware it won't be started. Otherwise, if the server crashes or the OS is rebooted for some reason, the clusterware will start the database, because the last state was ONLINE (running).

If you are running on Linux, there is no need to write your own scripts for automatic startup/shutdown anymore. But you do need to configure something.

Here is what it is: by default with Oracle 11.2, several important resources come in the profile with the attribute AUTO_START=restore. Such resources are the Oracle database resource, the Oracle ASM resource and the CRS resource type for the listener. This means that if the Oracle database server is restarted for some reason, they will keep and restore their last state.

It is a good practice to change this default behaviour, and it is the first thing I usually do after a new installation: I change AUTO_START=always for the resources listed above.

Check the current state:

$ crs_stat -p

Find the resource you'd like to change and modify the attribute AUTO_START to "always":

$ crsctl modify resource resource_name -attr AUTO_START=always

$ crsctl modify resource ora.asm -attr AUTO_START=always

$ crsctl modify resource ora.<SID>.db -attr AUTO_START=always

$ crsctl modify resource ora.LISTENER.lsnr -attr AUTO_START=always
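
To verify the change took effect, the resource profile can be checked again (a quick sketch using the same tools as above):

$ crsctl status resource ora.asm -p | grep AUTO_START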

You are done.

Cheers,

Maria

Categories: DBA