Oracle OSWatcher tool and yast for EM Grid Control

February 16, 2012 5 comments

 

These days I configured the Oracle OSWatcher tool and the yast package for EM Grid Control, which we use to manage and monitor Oracle servers. I want to share how to quickly configure and start using them on Linux.


OSWatcher invokes system utilities like ps, top, iostat, vmstat and netstat, and collects data according to the specified parameters. You can download it from Metalink.

Extract it into the OSWatcher directory you will use ($OSWATCHER_HOME):

tar -xvf oswbb4.0.tar


OSWatcher has been renamed to OSWatcher Black Box to avoid confusion with the many other tools carrying the same name. OSWatcher Black Box Analyzer (OSWbba) is a graphing and analysis utility which comes bundled with OSWbb v4.0.0 and higher. OSWbba requires Java version 1.4.2 or higher.
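If you want to check the Java version first, the JRE bundled with the database (the same one the profile below reuses) reports it like any other:

$ORACLE_HOME/jdk/jre/bin/java -version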

Put the following lines in the profile used by OSWatcher:

export JAVA_HOME=$ORACLE_HOME/jdk/jre
export PATH=$JAVA_HOME/bin:$PATH
alias oswatch='java -jar $OSWATCHER_HOME/oswbba.jar -i $OSWATCHER_HOME/archive'


Let’s start it with nohup in the background, configured to take snapshots with the system utilities every 5 minutes (300 seconds) and to keep the last 24 hours of data:

nohup ./startOSWbb.sh 300 24 &
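When you want to stop the collectors later, OSWbb provides a stop script next to the start script (it kills the OSWatcher background processes):

./stopOSWbb.sh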

 

This creates subdirectories under $OSWATCHER_HOME/archive such as oswiostat, oswmeminfo, oswmpstat, oswnetstat, oswprvtnet, oswps, oswslabinfo, oswtop and oswvmstat, where the .dat files with the data collected for each hour reside. OSWbba parses all the archive files; you invoke it through the alias created in the profile:

$ oswatch

Starting OSW Black Box Analyzer V4.0
OSWatcher Black Box Analyzer Written by Oracle Center of Expertise
Copyright (c)  2012 by Oracle Corporation

Parsing Data. Please Wait…

Parsing file …iostat_12.02.16.0100.dat …
Parsing file …vmstat_12.02.16.0100.dat …

Parsing Completed.

Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs

Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files

Enter L to Specify Alternate Location of Gif Directory
Enter T to Specify Different Time Scale
Enter D to Return to Default Time Scale
Enter R to Remove Currently Displayed Graphs
Enter P to Generate A Profile
Enter A to Analyze Data
Enter Q to Quit Program

Please Select an Option:


YaST (Yet Another Setup Tool) is needed if you want to administer a Linux host through Enterprise Manager Grid Control. Download yast from here.

tar -xvf yast_el5_x86_64.tar
cd yast_el5_x86_64
./install.sh

 

Start it from the command line:

/sbin/yast


Here is how it looks in EM Grid Control:


Regards,
Maria


Categories: DBA, Unix/Linux

Hung Auto SQL Tuning Task Messages

February 10, 2012 4 comments

 

Even in Oracle 11.2.0.3, an alert related to the metric “Generic Operational Error” may appear in EM. Occasionally, when Automatic SQL Tuning runs, the following messages may appear in the alert log:

Process 0x%p appears to be hung in Auto SQL Tuning task
 Current time = %u, process death time = %u
 Attempting to kill process 0x%p with OS pid = %s
 OSD kill succeeded for process 0x%p

You can have a look at “How to Avoid or Prevent Hung Auto Tuning Task Messages [ID 1344499.1]” in Metalink and at this post.

The explanation is that the AUTO SQL TUNING TASK has been over-running and, as a protective measure, it is automatically killed. Thus there is no fix for this; the solution is to disable the job and manually execute it when needed. Here is how to do that:

BEGIN
   DBMS_AUTO_TASK_ADMIN.DISABLE(
   client_name => 'sql tuning advisor',
   operation => NULL,
   window_name => NULL);
END;
/
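When you do need a run, you can still launch the task manually. A minimal sketch, assuming you are connected as a DBA (DBMS_AUTO_SQLTUNE.EXECUTE_AUTO_TUNING_TASK is the 11.2 interface for this; the defaults are fine for an ad-hoc run):

EXEC DBMS_AUTO_SQLTUNE.EXECUTE_AUTO_TUNING_TASK;

Should you later want the automatic runs back, DBMS_AUTO_TASK_ADMIN.ENABLE takes the same three arguments as DISABLE above.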

 

The Automatic SQL Tuning task:

SELECT TASK_NAME, DESCRIPTION, STATUS, LAST_MODIFIED FROM DBA_ADVISOR_TASKS
WHERE task_name LIKE '%AUTO_SQL_TUNING_TASK%';

 

is part of the Automated Maintenance Tasks, together with Optimizer Statistics Gathering and the Segment Advisor. You can also see this in Oracle EM Grid Control->Server->Automated Maintenance Tasks.
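You can check the state of all three clients from the data dictionary as well; DBA_AUTOTASK_CLIENT shows whether each one is ENABLED or DISABLED:

SELECT client_name, status FROM dba_autotask_client;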


I have a similar post but the error in the alert log is different.

 

Cheers,
Maria

 

Categories: DBA

ORA-31011: XML parsing failed LPX-00217: invalid character error can be a bug

January 27, 2012 1 comment

 

If you get the error below:

ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character

from code running on Oracle 11.2.0.2 or 11.2.0.3, and the code used to work on previous versions, then stop and check MOS note 1391688.1. It can save you a lot of time. The reason you get this error can be bug 11877267.

 

I hit this bug on Oracle 11.2.0.3 while investigating why a piece of PL/SQL code that parses XMLTYPE was not working. The cause is the new XML parser introduced with 11.2.0.2.

You can either apply patch 11877267 or use the workaround and switch back to the old XML parser used up to version 11.2.0.1:

alter system set event='31156 trace name context forever, level 0x400' scope=spfile;
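Because the event is set with scope=spfile, it takes effect only after an instance restart. Once the instance is back up, you can confirm it from SQL*Plus:

show parameter event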

 

Cheers,
Maria

 

Categories: Development

Clone Database from one server to another

December 13, 2011 Leave a comment

 

I think it is time to write another blog post. I will talk about an old but very useful way to clone a database from one server to another. In my particular case it was the solution that best fit all the requirements put on the table. Let me explain what the task was and then share how it was achieved.

OK, there is a primary Oracle RAC 10.2 database (in 11g we have more power and flexibility with clonedb, but anyway) and a standby database. On both sides there is an RMAN backup. The database size is 800GB.

On a monthly basis there is a business need for the primary database to be copied to a reporting server. The data there should be exactly as of the 30th/31st of the month. Up to now the way this database was cloned was quite simple as a technique but unfortunately not very easy to carry out, as it required a lot of manual interaction from the DBA and, last but not least, was not very effective in terms of how soon the database is available for use. A full export was taken, copied to the target server and then imported. The whole procedure sometimes took more than 1 day, usually 2 days, as the dump is 120GB. I started to think about improvements.

I tried to optimize the expdp/impdp: import only the data, then create the indexes in parallel with a script produced by Data Pump. The improvement was around 5 hours, but still it was not enough. Here it is time to mention the next requirement: I had to create this new database on the very same server where the standby database is, but with a different database name. Cool!

The next direction was clone database. Cloning is easy even if you want to rename the database. So I came up with two choices:

  1. duplicate the primary database using RMAN to the machine where the physical standby resides. This would be a different instance with a different database name from the source.

  2. duplicate the standby database (physical) using RMAN to the same machine, again as a different instance.

The first one was successful and took more than 8 hours. In this case I used duplicate target database and converted the datafiles and logfiles (use *.db_file_name_convert and *.log_file_name_convert). One disadvantage was that I had to copy 350GB of backup + archivelogs. The reason was that they are located on OCFS, and on Windows these could not be shared through NFS. I also tried using BCV to take a snapshot of the LUN where the OCFS is and then present it to the target server, but it wasn’t accessible.

I hit a wall with the second option. I was not able to restore from the standby database because of the standby controlfile. If you try to restore a controlfile from the primary backup, again it does not work.

I wanted to find another, faster way to do this task. I chose the classical approach: stop the source database, take a cold backup of the datafiles and recreate the database with a different name on the reporting server. Here I want to say thanks to Tim Hall and Joze Senegacnik. These guys threw me this idea during the Q&A section of Tim Hall’s presentation at BGOUG, where he spoke about clonedb in Oracle 11g.

So I went a little bit further, because I could not afford to stop either the production primary database or the physical standby. I used storage BCV to get a copy of the primary database on the test environment. There I can shut down the database and copy the database files for as long as it takes. Here are my successful actions to the end:

1. Use this to build an RMAN script that copies the datafiles from ASM to the filesystem:

   select 'COPY DATAFILE ''' || name || '''' || ' TO ''' || REPLACE(name,'+DATA/datafile/','H:\oracle\testdb\datafiles\') || '''' || ';'
   from v$datafile where name like '%DATA%' order by name;

 

2. Shut down the database and copy all the datafiles out of ASM with RMAN, using the script generated above:

   sqlplus /nolog
   conn / as sysdba
   startup mount
   rman target /
   COPY DATAFILE '+DATA/datafile/test.ora' TO 'H:\oracle\testdb\datafiles\test.ora';
   ……

 

3. Get a pfile from the source database:

create pfile='H:\oracle\testdb\pfiletest.ora' from spfile;

 

4. Modify this file with the new locations, the new database name, memory parameters and non-cluster settings. Its contents should be as follows:

      ...
      *.audit_file_dest='H:\oracle\testdb\adump'
      *.background_dump_dest='H:\oracle\testdb\bdump'
      *.cluster_database=false
      *.control_files='H:\oracle\testdb\controlfile\controlfile1.ctl'
      *.core_dump_dest='H:\oracle\testdb\cdump'
      *.db_block_size=8192
      *.db_create_file_dest='H:\oracle\testdb\datafiles'
      *.db_create_online_log_dest_1='H:\oracle\testdb\onlinelog'
      *.db_name='testdb'
      *.db_recovery_file_dest='H:\oracle\testdb\flashback'
      *.db_unique_name='testdb'
      *.user_dump_dest='H:\oracle\testdb\udump'
      ...

 

5. On the source server, produce a create-controlfile script:

alter database backup controlfile to trace as 'H:\oracle\testdb\createcontrolfile.sql' resetlogs;

 

6. Modify this file heavily to correspond to the new database.

Get rid of all empty lines and all commented lines, edit the datafiles’ paths to point to the new location, etc. Its contents should look like this:

      STARTUP NOMOUNT
      CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS FORCE LOGGING NOARCHIVELOG
      MAXLOGFILES 192
      MAXLOGMEMBERS 3
      MAXDATAFILES 1024
      MAXINSTANCES 32
      MAXLOGHISTORY 5840
      LOGFILE
       GROUP 1 'H:\oracle\testdb\onlinelog\group_1.302.653686569' SIZE 300M,
       GROUP 2 'H:\oracle\testdb\onlinelog\group_2.303.653686633' SIZE 300M,
       GROUP 3 'H:\oracle\testdb\onlinelog\group_3.308.653686849' SIZE 300M
      DATAFILE
       'H:\oracle\testdb\datafiles\system.279.653621391',
       ...
      CHARACTER SET CL8MSWIN1251
      ;

      ALTER DATABASE OPEN RESETLOGS;

      ALTER TABLESPACE TEMP ADD TEMPFILE 'H:\oracle\testdb\tempfile\temp.ora'
       SIZE 32767M REUSE AUTOEXTEND ON NEXT 104857600 MAXSIZE 32767M;

 

7. Unpresent the disk onto which the datafiles were copied from the storage on the test server, and present it to the target server (this avoids copying the files once more).

 

8. The target machine is a Windows server, so create the Oracle service in Windows with the ORADIM utility:

oradim -new -sid TESTDB -INTPWD password -STARTMODE AUTO

 

9. Final step:

    sqlplus /nolog
    conn / as sysdba
    create spfile from pfile='H:\oracle\testdb\pfiletest.ora';
    @'H:\oracle\testdb\createcontrolfile.sql'
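As a quick check after the script completes, you can confirm the clone is open under its new name (the names here match my testdb example):

    select name, open_mode from v$database;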

 

Overall statistics showed that 620GB of datafiles were copied in 5 hours, which is 5 times better than the previous method and requires less interaction. The tasks are automated. Simple and quick.

 

Regards,
Maria


Categories: DBA

Exalytics Intelligence Machine unveiled

October 3, 2011 Leave a comment

 

The Oracle Exalytics machine was announced at OOW2011.

You can check Mark Rittman’s blog, Roel’s blog, as well as Capgemini’s.

 

Regards,

Maria

 

Categories: ETL

Visiting SIOUG

September 30, 2011 2 comments

 

The last days have been a busy period for me, both at work and in life. I am glad I spent a week in Slovenia and around. I gave a presentation about Oracle GoldenGate at SIOUG. My presentation was short on slides and came with a 30-minute demo. I uploaded it in the whitepapers section. Thanks to Joze, Janez and the whole SIOUG team for keeping the door open all the time.

 

I met old friends there and also had the chance to meet Debra Lilley and Doug Burns in person. I like Debra’s accent and Doug’s sense of humour very much. I would say SIOUG, BGOUG, UKOUG and HROUG spent a wonderful time together. Real collaboration between the Oracle user groups 🙂

 

Regards,

Maria

 

Categories: DBA

Data Masking with Oracle Data Pump

August 5, 2011 6 comments

If you expose production data to a test, QA or UAT environment, most probably you’ll need to hide sensitive data. You can do this in different ways. One of them is to use Oracle Data Pump to mask the data. You may choose to do that at the very export step and then on the import step, or just mask the sensitive data when you do the import into the target schema(s).

Masking should protect your data but not stop the testing process. Ensure you do not lose realistic lookup data for the testing. Mask only what you need to mask: hide only sensitive data. If you can expose primary and foreign key columns, then don’t mask them. If not, ensure you keep the integrity of the relations. For example, when you mask a primary key column, you should use the same mask for the corresponding foreign key column.

Oracle Data Pump provides a way to achieve this. The REMAP_DATA parameter was introduced in Oracle Database 11g. It uses a user-defined remapping function to rewrite data. If you want to mask multiple columns in the same run, the REMAP_DATA parameter is simply repeated. I chose to mask only the ADDRESS column in my table. If you already have a dump file, you can choose to mask the data only when doing the import. I will show this below:

Syntax:

REMAP_DATA=[(schema.tablename.column_name:schema.pkg.function)]

impdp user/pass
TABLES=SCHEMA_NAME.TABLE_NAME
TABLE_EXISTS_ACTION=replace
DUMPFILE=EXP.DPUMP
DIRECTORY=IMPORT
REMAP_TABLESPACE=(SOURCE_TABLESPACE:TARGET_TABLESPACE)
REMAP_DATA=SCHEMA_NAME.TABLE_NAME.ADDRESS:SCHEMA_NAME.REMAP_UTILS.MASKVARCHAR
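If you prefer to mask at the export step instead, expdp accepts REMAP_DATA with the same syntax. A sketch reusing the same placeholder names:

expdp user/pass
TABLES=SCHEMA_NAME.TABLE_NAME
DUMPFILE=EXP.DPUMP
DIRECTORY=IMPORT
REMAP_DATA=SCHEMA_NAME.TABLE_NAME.ADDRESS:SCHEMA_NAME.REMAP_UTILS.MASKVARCHAR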

This is my remapping function:

CREATE OR REPLACE PACKAGE REMAP_UTILS AS

FUNCTION maskvarchar(ADDRESS VARCHAR2) RETURN VARCHAR2;

END;
/

CREATE OR REPLACE PACKAGE BODY REMAP_UTILS AS

FUNCTION maskvarchar(ADDRESS VARCHAR2) RETURN VARCHAR2
IS
v_string VARCHAR2(120 BYTE) := '';
BEGIN
-- the incoming value is ignored; a random 10-character string of mixed-case letters is returned
v_string := dbms_random.string('A', 10);
RETURN v_string;
END;

END REMAP_UTILS;
/
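Before running the import, you can sanity-check the remapping function on its own from SQL*Plus (any literal will do, since the input value is ignored):

SELECT REMAP_UTILS.MASKVARCHAR('123 Main Street') FROM dual;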

Let’s see the result after the import.

SQL> conn / as sysdba
Connected.
SQL> Spool on
SQL> Spool c:\spooltext.txt
SQL> select ADDRESS from TABLE_NAME
2  where rownum <=10;

ADDRESS
------------------------------------

htkphShybP
bNBNwCIsdY
PhfDFAnZbO
wrTyPtxPjC
spfuDBPhRJ
JUIrqmuXPJ
hequOqHydf
EAiITTjvkX
JwujGneNVe
CeuZwGgmsh

10 rows selected.

SQL> Spool off
SQL>

The ADDRESS column is masked successfully. In conclusion, I would say this is a great feature.

Cheers,
Maria

Categories: DBA