I think I am a little bit late, but on the other hand it is never too late to comment and share an opinion about some of the 500+ new features introduced with Oracle Database 12c.
The first feature which really caught my attention was Multiple Indexes on the Same Set of Columns, allowed as long as the indexes differ in some way and only one of them is visible. Before getting into an example, there are some exceptions (Oracle Doc):
- You cannot create a B-tree index and a B-tree cluster index on the same set of columns.
- You cannot create a B-tree index and an index-organized table on the same set of columns.
Prior to 12c, when you tried to create more than one index on the same list of columns you got the error ORA-01408: such column list already indexed.
If you think about it, that sounds quite logical. So what made this feature available in Oracle Database 12c now? Testing performance and, more importantly, index availability. Index maintenance, and consequently application maintenance, is now much easier.
Imagine you have to change an existing index (even more so a large index) from a non-partitioned index to a partitioned one, or convert a B-tree index to a bitmap index. Instead of having 'downtime' while the existing index is dropped and the set of columns is left uncovered, we can create the new index in advance, test it, and when satisfied make the existing one invisible (or drop it) and make the newly created index visible. Flexible, isn't it?!
SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 29 11:09:01 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.

SQL> create table test_idx ( col1 integer, col2 integer, col3 integer, col4 varchar2(100) );

Table created.

SQL> create index idx1 on test_idx(col1,col2,col3);

Index created.

SQL> create bitmap index idx2 on test_idx(col1,col2,col3);
create bitmap index idx2 on test_idx(col1,col2,col3)
*
ERROR at line 1:
ORA-01408: such column list already indexed

SQL> create bitmap index idx2 on test_idx(col1,col2,col3) invisible;

Index created.
We can now test the idx2 index and, if it performs well, drop idx1 and make idx2 visible. There are already a number of articles about this new feature; have a look at Tom Kyte's blog and Richard Foote's blog.
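The switch-over described above can be sketched as follows (index names from the example; the session parameter is just a way to test the invisible index before exposing it to everyone):

```sql
-- Make the optimizer consider invisible indexes in this session only,
-- so idx2 can be tested without affecting other sessions.
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- ... run the test workload here ...

-- When satisfied, swap visibility (or drop idx1 altogether).
ALTER INDEX idx1 INVISIBLE;
ALTER INDEX idx2 VISIBLE;
```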
Next, I’d like to mention DEFAULT values using sequences. It is quite handy to have CURRVAL and NEXTVAL as the default values for a column:
SQL> create sequence s_test_seq;

Sequence created.

SQL> create table test_seq (col1 NUMBER DEFAULT s_test_seq.NEXTVAL,
  2  col2 varchar2(100) );

Table created.

SQL> insert into test_seq(col2) values('Test Default value');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from test_seq;

      COL1 COL2
========== =====================================
         1 Test Default value
This means that the trigger-based functionality, writing PL/SQL to populate number id columns from a sequence, is a thing of the past with Oracle 12c.
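One caveat worth adding as an aside (the table name below is just an illustration, not from the original example): a plain DEFAULT is bypassed when the application explicitly inserts NULL into the column. 12c also offers DEFAULT ON NULL to cover that case:

```sql
-- DEFAULT ON NULL fires even when the insert explicitly supplies NULL
create table test_seq2 (
  col1 NUMBER DEFAULT ON NULL s_test_seq.NEXTVAL,
  col2 varchar2(100)
);

insert into test_seq2(col1, col2) values (NULL, 'Still gets a sequence value');
```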
You can also check Tim Hall’s blog for more details.
Other features which impress me at first glance are:
- Oracle Data Redaction
- Online Statistics Gathering for Bulk Loads
- Dynamic Statistics
- Concurrent Statistics Gathering
- Concurrent Execution of UNION and UNION ALL Branches
- Automatic Data Optimization
- Move a Data File Online
- ONLINE Move Partition
- Table-Level Recovery From Backups
- Oracle Data Pump Export View As a Table
- Partial Indexes for Partitioned Tables
- SYSBACKUP Administration Privilege
- Last Login Time Information
- Real-Time Database Operations Monitoring
- Enhanced Online DDL Capabilities (no longer requiring blocking locks)
- Row Limiting Clause for Top-N Queries
- Synchronous Materialized View Refresh
Just a quick mention of the error below, which you may get while trying to create a trigger, for example:
CREATE OR REPLACE TRIGGER AGE_GROUP_TRG
BEFORE INSERT ON AGE_GROUP
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
  SELECT AGE_GROUP_SEQ.nextval INTO :NEW.AGE_GROUP_ID FROM dual;
END;
/
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kqlidchg1], , , , , , , , , , , 
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated
00603. 00000 – “ORACLE server session terminated by fatal error”
*Cause: An ORACLE server session is in an unrecoverable state.
*Action: Login to ORACLE again so a new server session will be created
The root cause is in the PLSCOPE_SETTINGS. If you use SQL Developer, try changing PLSCOPE_SETTINGS to NONE from the menu Tools->Preferences->Database->PL/SQL Compiler->PLScope Identifiers.
There are quite a lot of discussions about this, like this thread.
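If you are not using SQL Developer, the same setting can be changed directly before recompiling; a minimal sketch using the standard parameter syntax:

```sql
-- Turn off PL/Scope identifier collection for this session,
-- then recreate the failing trigger.
ALTER SESSION SET PLSCOPE_SETTINGS = 'IDENTIFIERS:NONE';
```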
This post quickly covers one of the most common tricks you must know when you install ODI 11g on Windows Server 2008 64-bit.
A 64-bit JDK is just fine, but once you start ODI 11g Studio from the same machine you will get "Oracle ODI 11g Unable to launch the Java Virtual Machine Located at path …":
This is because the ODI 11g Studio client tool works only with a 32-bit JDK. The workaround is to edit $ODI_HOME\oracledi\client\odi\bin\odi.conf and point SetJavaHome to the 32-bit JDK path.
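For illustration only (the JDK path below is a hypothetical example; point it at your own 32-bit JDK install), the edited line in odi.conf would look like:

```
SetJavaHome C:\Program Files (x86)\Java\jdk1.6.0_45
```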
You also can have a look here.
These days I configured the Oracle OSWatcher tool and the yast package for EM Grid Control, used to manage and monitor Oracle servers. I want to share how to quickly configure and start using them on Linux.
OSWatcher invokes system utilities like ps, top, iostat, vmstat, netstat and collects data according to the specified parameters. You can download it from Metalink.
Unzip it to the OSWatcher directory you want to use ($OSWATCHER_HOME):
tar -xvf oswbb4.0.tar
OSWatcher has been renamed to OSWatcher Black Box to avoid confusion with the many other tools with this name. OSWatcher Black Box Analyzer (OSWbba) is a graphing and analysis utility which comes bundled with OSWbb v4.0.0 and higher. OSWbba requires Java version 1.4.2 or higher.
Put the next line in the profile used by OSWatcher:
alias oswatch='java -jar $OSWATCHER_HOME/oswbba.jar -i $OSWATCHER_HOME/archive'
Let's start it with nohup in the background and configure it to take snapshots with the system utilities every 5 minutes, keeping the last 24 hours:
nohup ./startOSWbb.sh 300 24 &
What it does is create subdirectories in $OSWATCHER_HOME/archive such as oswiostat, oswmeminfo, oswmpstat, oswnetstat, oswprvtnet, oswps, oswslabinfo, oswtop and oswvmstat, where the resulting .dat files with the data collected for each hour reside. OSWbba parses all the archive files; you invoke it via the alias created in the profile:
Starting OSW Black Box Analyzer V4.0
OSWatcher Black Box Analyzer Written by Oracle Center of Expertise
Copyright (c) 2012 by Oracle Corporation
Parsing Data. Please Wait…
Parsing file …iostat_12.02.16.0100.dat …
Parsing file …vmstat_12.02.16.0100.dat …
Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs
Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files
Enter L to Specify Alternate Location of Gif Directory
Enter T to Specify Different Time Scale
Enter D to Return to Default Time Scale
Enter R to Remove Currently Displayed Graphs
Enter P to Generate A Profile
Enter A to Analyze Data
Enter Q to Quit Program
Please Select an Option:
Yast (Yet Another Setup Tool) is needed if you want to administer a Linux host through Enterprise Manager Grid Control. Download yast from here.
tar -xvf yast_el5_x86_64.tar
Start it from the command line:
Here is how it looks in EM Grid Control:
Still, in Oracle 11.2 an alert related to the metric "Generic Operational Error" may appear in EM. Occasionally, when Automatic SQL Tuning runs, the following messages may appear in the alert log:
Process 0x%p appears to be hung in Auto SQL Tuning task
Current time = %u, process death time = %u
Attempting to kill process 0x%p with OS pid = %s
OSD kill succeeded for process 0x%p
You can have a look at "How to Avoid or Prevent Hung Auto Tuning Task Messages [ID 1344499.1]" in Metalink and this post.
The explanation is that the AUTO SQL TUNING TASK has been over-running and, as a protective measure, it is automatically killed. Thus there is no fix for this; the solution is to disable the job and, eventually, execute it manually when needed. Here is how to do that:
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
The Automatic SQL Tuning task:
SELECT task_name, description, status, last_modified FROM dba_advisor_tasks
WHERE task_name LIKE '%AUTO_SQL_TUNING_TASK%';
is part of the Automated Maintenance Tasks together with Optimizer Statistics Gathering and Segment Advisor. You can see this also in Oracle EM Grid Control->Server->Automated Maintenance Tasks.
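With the automatic job disabled, the task can still be run on demand. A minimal sketch using the standard DBMS_SQLTUNE API (the task name below is the default one for the automatic task):

```sql
-- Run the Automatic SQL Tuning task manually
EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'SYS_AUTO_SQL_TUNING_TASK');

-- Read the report from the latest execution
SELECT DBMS_SQLTUNE.REPORT_AUTO_TUNING_TASK FROM dual;
```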
I have a similar post but the error in the alert log is different.
If you get the error below:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00217: invalid character
from code running on Oracle 11.2.0.3 or later, and it used to work on previous versions, then stop and check MOS note 1391688.1. It can save you a lot of time. The reason you get this error can be bug 11877267.
I hit this bug on Oracle 11.2.0.3 while investigating why a piece of PL/SQL code that parses XMLTYPE was not working. The reason is the new XML parser introduced with 11.2.0.3.
You can either apply patch 11877267 or use the workaround to switch back to the old XML parser used in earlier releases:
alter system set event='31156 trace name context forever, level 0x400' scope=spfile;
I think it is time to write another blog post. I will talk about an old but very useful way to clone a database from one server to another. In my particular case it was the best solution, perfectly fitting all the requirements put on the table. Let me explain what the task was and then share how it was achieved.
OK, there is a primary Oracle RAC 10.2 database (in 11g we have more power and flexibility with clonedb, but anyway) and a standby database. On both sides there is an RMAN backup. The database size is 800GB.
On a monthly basis there is a business need for the primary database to be copied to a reporting server, with the data exactly as of day 30/31 of the month. Up to now the way this database was cloned was quite simple as a technique but unfortunately not very easy to do: it required a lot of manual interaction from the DBA and, last but not least, was not very effective in terms of the time the database is available for use. A full export was taken, copied to the target server, and then imported. The whole procedure usually took 2 days, sometimes more than 1 day, as the dump is 120GB. I started to think about improvements.
I tried to optimize the expdp/impdp: import only the data, then create the indexes in parallel with a script produced by Data Pump. The improvement was around 5 hours, but still not enough. Here is the time to mention the next requirement: I had to create this new database on the very same server where the standby database is, but with a different database name. Cool!
The next direction was clone database. Cloning a database is easy even if you want to rename it, so I came up with two choices:
1. Duplicate the primary database using RMAN to the machine where the physical standby resides. This would be a different instance with a different database name from the source.
2. Duplicate the (physical) standby database using RMAN to the same machine, again as a different instance.
The first one was successful and took more than 8 hours. In this case I used duplicate target database with conversion of datafiles and logfiles (using *.db_file_name_convert and *.log_file_name_convert). One disadvantage was that I had to copy 350GB of backup + archivelogs, because they are located on OCFS and, on Windows, these could not be shared through NFS. I also tried using BCV to take a snapshot of the LUN holding the OCFS and present it to the target server, but it wasn't accessible.
I hit a wall with the second option. I was not able to restore from the standby database because of the standby controlfile. And if you try to restore a controlfile from the primary backup, again it does not work.
I wanted to find another, faster way to do this task. I chose the classical approach: stop the source database, take a cold backup of the datafiles, and recreate the database with a different name on the reporting server. Here I want to say thanks to Tim Hall and Joze Senegacnik. These guys gave me this idea during the Q&A session of Tim Hall's presentation at BGOUG, where he spoke about clonedb in Oracle 11g.
So I went a little bit further, because I cannot afford to stop either the production primary database or the physical standby. I used storage BCV to get a copy of the primary database in the test environment. There I can shut down the database and copy the database files for as long as it takes. Here are my successful actions, start to end:
1. Use this query to build an RMAN script that copies the datafiles from ASM to the filesystem:
select 'COPY DATAFILE ''' || name || '''' || ' TO ''' || REPLACE(name,'+DATA/datafile/','H:\oracle\testdb\datafiles\') || '''' || ';'
from v$datafile where name like '%DATA%' order by name;
2. Shut down the database and get a copy of all datafiles from ASM with RMAN, using the script above:
conn / as sysdba
rman target /
COPY DATAFILE '+DATA/datafile/test.ora' TO 'H:\oracle\testdb\datafiles\test.ora';
3. Get a pfile from the source database:
create pfile='H:\oracle\testdb\pfiletest.ora' from spfile;
4. Modify this file with the new file locations, the new database name, memory parameters, and a non-cluster setup. Its contents should be as follows:
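The original post showed the edited pfile here; as a hedged sketch only (parameter values below are placeholders to adapt, not the actual file):

```
# Hypothetical minimal pfile for the clone -- adjust every value to your environment
db_name=TESTDB
control_files='H:\oracle\testdb\control01.ctl'
db_block_size=8192
sga_target=2G
pga_aggregate_target=1G
cluster_database=false   # the source is RAC; the clone is a single instance
compatible=10.2.0.1.0
```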
5. On the source server side, produce a create controlfile script:
alter database backup controlfile to trace as 'H:\oracle\testdb\createcontrolfile.sql' resetlogs;
6. Modify this file heavily to correspond to the new database. Get rid of all empty lines and all commented lines, edit the datafiles' paths to point to the new location, etc. Its contents should look like this:
CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS FORCE LOGGING NOARCHIVELOG
LOGFILE
  GROUP 1 'H:\oracle\testdb\onlinelog\group_1.302.653686569' SIZE 300M,
  GROUP 2 'H:\oracle\testdb\onlinelog\group_2.303.653686633' SIZE 300M,
  GROUP 3 'H:\oracle\testdb\onlinelog\group_3.308.653686849' SIZE 300M
DATAFILE
  'H:\oracle\testdb\datafiles\test.ora'
  -- ... list every copied datafile here
CHARACTER SET CL8MSWIN1251
;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE 'H:\oracle\testdb\tempfile\temp.ora'
SIZE 32767M REUSE AUTOEXTEND ON NEXT 104857600 MAXSIZE 32767M;
7. Unpresent the disk where the datafiles were copied from the test server and present it to the target server (so the copy itself is moved rather than copied a second time).
8. The target machine is a Windows server, so create the Oracle service with the ORADIM Windows utility:
oradim -new -sid TESTDB -INTPWD password -STARTMODE AUTO
9. Final step
conn / as sysdba
create spfile from pfile='H:\oracle\testdb\pfiletest.ora';
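The closing commands were not shown in full. Assuming the control-file script from step 6 was saved as createcontrolfile.sql, the finish would look roughly like this (a sketch, not the exact transcript):

```sql
-- Start the instance without mounting, then rebuild the controlfile
-- and open the clone; the script also opens with RESETLOGS and adds TEMP.
startup nomount
@H:\oracle\testdb\createcontrolfile.sql
```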
Overall statistics showed that 620GB of datafiles were copied in 5 hours, which is about 5 times better than the previous method and requires less manual interaction. The tasks are automated. Simple and quick.