Monday, November 12, 2007

Resync RAC standby database using ASM

If a RAC standby database using ASM (and, in our case, standby logfiles) stays down for a while, it will likely fall behind the primary, and under ASM on the standby cluster you'll see these "gaps" in the sequence numbers --
ASMCMD [+DATA/PROD/ARCHIVELOG/2007_11_12] > ls
thread_1_seq_652.618.638445679
thread_1_seq_654.619.638466159
thread_1_seq_656.620.638466307
thread_1_seq_658.623.638467267
thread_1_seq_659.624.638467413
thread_1_seq_660.625.638467431
thread_2_seq_262.616.638416837
thread_2_seq_264.617.638445669
thread_2_seq_265.621.638467091
thread_2_seq_266.622.638467229
ASMCMD [+DATA/PROD/ARCHIVELOG/2007_11_12]


Notice that thread 1 (node 1) is missing sequences 653, 655 and 657. Similarly, thread 2 is missing sequence 263.
The question is how to bring these missing archivelogs over from ASM on the primary cluster (they are still sitting on disk there).
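Before copying anything over, you can also confirm the gap from the standby side itself. A minimal check (just a sketch; v$archive_gap reports only the currently detected gap for each thread) --

PROD1_SQL > select thread#, low_sequence#, high_sequence# from v$archive_gap;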

Solution: I use a three-step process:
step 1: Use the RMAN "copy" command to copy the missing archivelogs to a cooked filesystem on the primary cluster.
step 2: scp the copied archivelogs to the standby cluster.
step 3: Recover (automatic) the standby database, supplying the scp'ed archivelogs from the cooked filesystem.

First, query ASM on the primary to get the exact names/locations of these archives --
ASMCMD [+FRA/PROD/ARCHIVELOG/2007_11_12] > ls
thread_1_seq_650.768.638416833
thread_1_seq_651.771.638416841
thread_1_seq_652.781.638445663
thread_1_seq_653.796.638465819
thread_1_seq_654.797.638466159
thread_1_seq_655.798.638466189
thread_1_seq_656.800.638466307
thread_1_seq_657.801.638466455
thread_2_seq_262.769.638416835
thread_2_seq_263.770.638416841
thread_2_seq_264.782.638445665
thread_2_seq_265.799.638466191


step 1: Execute the RMAN copy commands (primary cluster). Set the environment to the database first.
RMAN> copy archivelog '+FRA/prod/archivelog/2007_11_12/thread_2_seq_263.770.638416841' to '/u02/backup/PROD/thread_2_seq_263.770.638416841';

Starting backup at 12-NOV-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1055 instance=PROD1 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=1057 instance=PROD1 devtype=DISK
channel ORA_DISK_1: starting archive copy
input archive log thread=2 sequence=263 recid=1412 stamp=638416840
output filename=/u02/backup/PROD/thread_2_seq_263.770.638416841 recid=1426 stamp=638465672
channel ORA_DISK_1: archivelog copy complete, elapsed time: 00:00:02
Finished backup at 12-NOV-07

Starting Control File and SPFILE Autobackup at 12-NOV-07
piece handle=+FRA/prod/autobackup/2007_11_12/s_638465673.795.638465673 comment=NONE
Finished Control File and SPFILE Autobackup at 12-NOV-07

RMAN> copy archivelog '+FRA/PROD/ARCHIVELOG/2007_11_12/thread_1_seq_653.796.638465819' to '/u02/backup/PROD/thread_1_seq_653.796.638465819';

Starting backup at 12-NOV-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=1055 instance=PROD1 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=1081 instance=PROD1 devtype=DISK
channel ORA_DISK_1: starting archive copy
input archive log thread=1 sequence=653 recid=1427 stamp=638465819
output filename=/u02/backup/PROD/thread_1_seq_653.796.638465819 recid=1435 stamp=638466707
channel ORA_DISK_1: archivelog copy complete, elapsed time: 00:00:02
Finished backup at 12-NOV-07

Starting Control File and SPFILE Autobackup at 12-NOV-07
piece handle=+FRA/prod/autobackup/2007_11_12/s_638466708.802.638466709 comment=NONE
Finished Control File and SPFILE Autobackup at 12-NOV-07

RMAN> copy archivelog '+FRA/PROD/ARCHIVELOG/2007_11_12/thread_1_seq_655.798.638466189' to '/u02/backup/PROD/thread_1_seq_655.798.638466189';

Starting backup at 12-NOV-07
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archive copy
input archive log thread=1 sequence=655 recid=1430 stamp=638466189
output filename=/u02/backup/PROD/thread_1_seq_655.798.638466189 recid=1436 stamp=638466745
channel ORA_DISK_1: archivelog copy complete, elapsed time: 00:00:02
Finished backup at 12-NOV-07

Starting Control File and SPFILE Autobackup at 12-NOV-07
piece handle=+FRA/prod/autobackup/2007_11_12/s_638466746.803.638466747 comment=NONE
Finished Control File and SPFILE Autobackup at 12-NOV-07

RMAN> copy archivelog '+FRA/PROD/ARCHIVELOG/2007_11_12/thread_1_seq_657.801.638466455' to '/u02/backup/PROD/thread_1_seq_657.801.638466455';

Starting backup at 12-NOV-07
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archive copy
input archive log thread=1 sequence=657 recid=1434 stamp=638466454
output filename=/u02/backup/PROD/thread_1_seq_657.801.638466455 recid=1437 stamp=638466777
channel ORA_DISK_1: archivelog copy complete, elapsed time: 00:00:02
Finished backup at 12-NOV-07

Starting Control File and SPFILE Autobackup at 12-NOV-07
piece handle=+FRA/prod/autobackup/2007_11_12/s_638466778.804.638466779 comment=NONE
Finished Control File and SPFILE Autobackup at 12-NOV-07

RMAN> quit


Recovery Manager complete.

/u02/backup/PROD: ls -lt|more
total 47463696
-rw-r----- 1 oracle oinstall 32768 Nov 12 15:52 thread_1_seq_657.801.638466455
-rw-r----- 1 oracle oinstall 17920 Nov 12 15:52 thread_1_seq_655.798.638466189
-rw-r----- 1 oracle oinstall 15824384 Nov 12 15:51 thread_1_seq_653.796.638465819
-rw-r----- 1 oracle oinstall 1024 Nov 12 15:50 thread_2_seq_263.770.638416841
...
...


step 2: scp the copied archivelogs to the standby server (bcp1 is node 1 of our standby cluster).
/u02/backup/PROD: scp thread_2_seq_263.770.638416841 bcp1:/u02/backup/PROD/.
thread_2_seq_263.770.638416841
/u02/backup/PROD: scp thread_1_seq_653.796.638465819 bcp1:/u02/backup/PROD/.
thread_1_seq_653.796.638465819 100% 15MB 15.1MB/s 00:01
/u02/backup/PROD: scp thread_1_seq_655.798.638466189 bcp1:/u02/backup/PROD/.
thread_1_seq_655.798.638466189 100% 18KB 17.5KB/s 00:00
/u02/backup/PROD: scp thread_1_seq_657.801.638466455 bcp1:/u02/backup/PROD/.
thread_1_seq_657.801.638466455

step 3: Recover the standby database, supplying the scp'ed archivelogs.
PROD1_SQL > recover automatic standby database;
ORA-00279: change 98980021976 generated at 11/12/2007 02:00:34 needed for thread 2
ORA-00289: suggestion : +DATA/prod/2_263_631792716.dbf
ORA-00280: change 98980021976 for thread 2 is in sequence #263
ORA-00278: log file '+DATA/prod/2_262_631792716.dbf' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_2_seq_263.770.638416841

ORA-00279: change 98980621191 generated at 11/12/2007 10:01:02 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_653_631792716.dbf
ORA-00280: change 98980621191 for thread 1 is in sequence #653
ORA-00278: log file '+DATA/prod/1_653_631792716.dbf' no longer needed for this recovery
ORA-00308: cannot open archived log '+DATA/prod/1_653_631792716.dbf'
ORA-17503: ksfdopn:2 Failed to open file +DATA/prod/1_653_631792716.dbf
ORA-15173: entry '1_653_631792716.dbf' does not exist in directory 'prod'


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_1_seq_653.796.638465819
ORA-00279: change 98980621191 generated at 11/12/2007 02:00:40 needed for thread 2
ORA-00289: suggestion : +DATA/prod/2_264_631792716.dbf
ORA-00280: change 98980621191 for thread 2 is in sequence #264


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00279: change 98980621195 generated at 11/12/2007 10:01:04 needed for thread 2
ORA-00289: suggestion : +DATA/prod/2_265_631792716.dbf
ORA-00280: change 98980621195 for thread 2 is in sequence #265
ORA-00278: log file '+DATA/prod/2_264_631792716.dbf' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_2_seq_265.799.638466191
ORA-00279: change 98981521197 generated at 11/12/2007 15:36:59 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_654_631792716.dbf
ORA-00280: change 98981521197 for thread 1 is in sequence #654
ORA-00278: log file '/u02/backup/PROD/thread_1_seq_653.796.638465819' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00279: change 98981522036 generated at 11/12/2007 15:42:38 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_655_631792716.dbf
ORA-00280: change 98981522036 for thread 1 is in sequence #655
ORA-00278: log file '+DATA/prod/1_654_631792716.dbf' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_1_seq_655.798.638466189
ORA-00279: change 98981522129 generated at 11/12/2007 15:43:09 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_656_631792716.dbf
ORA-00280: change 98981522129 for thread 1 is in sequence #656
ORA-00278: log file '/u02/backup/PROD/thread_1_seq_655.798.638466189' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00279: change 98981522138 generated at 11/12/2007 15:43:11 needed for thread 2
ORA-00289: suggestion : +DATA/prod/2_266_631792716.dbf
ORA-00280: change 98981522138 for thread 2 is in sequence #266
ORA-00278: log file '/u02/backup/PROD/thread_2_seq_265.799.638466191' no longer needed for this recovery


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_2_seq_266.806.638467227
ORA-00279: change 98981577931 generated at 11/12/2007 15:45:07 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_657_631792716.dbf
ORA-00280: change 98981577931 for thread 1 is in sequence #657
ORA-00278: log file '+DATA/prod/1_656_631792716.dbf' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/backup/PROD/thread_1_seq_657.801.638466455
ORA-00279: change 98981578237 generated at 11/12/2007 15:47:34 needed for thread 1
ORA-00289: suggestion : +DATA/prod/1_658_631792716.dbf
ORA-00280: change 98981578237 for thread 1 is in sequence #658
ORA-00278: log file '/u02/backup/PROD/thread_1_seq_657.801.638466455' no longer needed for this recovery

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
PROD1_SQL > recover managed standby database using current logfile disconnect from session;
Media recovery complete.


This completes the recovery process for the standby database and fills in the gaps.
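As a final sanity check (a sketch, not from the original session), you can confirm on the standby that the previously missing sequences have been applied --

-- run on the standby; each thread's applied high-water mark should
-- now be at or past the sequences that were missing
PROD1_SQL > select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;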

Thursday, November 1, 2007

Switchover from Primary to Standby in RAC

Recently we were asked to perform a switchover in a RAC environment, and although it seemed difficult initially, it turned out to be rather quick and simple (provided, of course, you do things in the right order!).

Following the documentation at http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/role_management.htm#i1030646
in preparation for a switchover, I shut down instance 2 of the standby (the one that was not involved in media recovery).

Per the documentation:
Verify that all but one RAC instance on the standby databases in a Real Application Clusters configuration are shut down.
For a Real Application Clusters database, only one RAC instance on the standby database can be online during the role transition. Shut down all other instances before starting the role transition. Then, after the role transition completes, bring these instances back online.
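To verify that the two sites were in sync before the role change, I compared the highest archived sequence per thread on the primary with the highest applied sequence on the standby. A rough sketch (standard v$archived_log columns) --

-- on the primary
GCPROD1_SQL > select thread#, max(sequence#) from v$archived_log group by thread#;

-- on the standby
select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;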
Once the primary and standby were confirmed in sync, I went ahead with the ALTER DATABASE COMMIT TO SWITCHOVER.. command on one of the two primary instances --

GCPROD1_SQL > ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY with session shutdown;
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY with session shutdown
*
ERROR at line 1:
ORA-01105: mount is incompatible with mounts by other instances


After doing some research, I realized that you need to shut down ALL but one instance on the primary side as well (in other words, before running the switchover command in a RAC environment, keep only one instance running at the primary site and one at the DR site). One quick way to stop the extra instances is sketched below.
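A minimal sketch, assuming 10g srvctl syntax and our instance naming (GCPROD2 being the second instance of the primary) --

$ srvctl stop instance -d GCPROD -i GCPROD2 -o immediate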

On primary node 1
After shutting down instance 2 at the primary site, I was able to run the SWITCHOVER successfully on instance 1 of the primary.
[Note: After the switchover, the database is left dismounted. So I did a clean shutdown and mounted it again.]

GCPROD1_SQL > ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;


Database altered.

GCPROD1_SQL > shutdown immediate
ORA-01507: database not mounted

ORACLE instance shut down.

GCPROD1_SQL > startup mount
ORACLE instance started.

Total System Global Area 1543503872 bytes
Fixed Size 2072960 bytes
Variable Size 369102464 bytes
Database Buffers 1157627904 bytes
Redo Buffers 14700544 bytes
Database mounted.

GCPROD1_SQL > SELECT SWITCHOVER_STATUS FROM V$DATABASE;


SWITCHOVER_STATUS
--------------------
SESSIONS ACTIVE

Notice that I used the "..WITH SESSION SHUTDOWN" option, as I was unable to bring down a few active internal sessions.

Now that our old primary has become the new standby, we need to fix a few INIT parameters related to log shipping.

a) LOG_ARCHIVE_DEST_n setting

Here are our old settings --
GCPROD1_SQL > show parameter log_archive_dest

NAME                                 TYPE     VALUE
------------------------------------ -------- ------------------------------
log_archive_dest_1                   string   location="+FLASHDG", valid_for=(ONLINE_LOGFILES, PRIMARY_ROLE)
log_archive_dest_10                  string
log_archive_dest_2                   string   service="GCPROD_STBY", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=10 max_connections=4 reopen=15 db_unique_name="GCPROD" register net_timeout=180 valid_for=(online_logfile,primary_role)
...
...
[removed for brevity]

We can remove log_archive_dest_2, as it's not needed on a standby database. log_archive_dest_1 also needs to be changed.

GCPROD1_SQL > ALTER SYSTEM SET log_archive_dest_1='LOCATION=+FLASHDG' SCOPE=BOTH;
GCPROD1_SQL > ALTER SYSTEM SET log_archive_dest_2='';

[Note: For log_archive_dest_1 we could also use 'LOCATION=+FLASHDG valid_for=(STANDBY_LOGFILES,STANDBY_ROLE)'. That should also work fine.]


b) FAL settings
FAL_SERVER should point to the primary database (a TNS alias) and FAL_CLIENT should point to the standby database (self).

GCPROD1_SQL > ALTER SYSTEM SET fal_client='GCPROD_STBY' SCOPE=BOTH;
GCPROD1_SQL > ALTER SYSTEM SET fal_server='GCPROD_PRIM' SCOPE=BOTH;


Make sure to update the TNSNAMES.ora file for GCPROD_STBY and GCPROD_PRIM: GCPROD_STBY should resolve to the standby database and GCPROD_PRIM to the primary (i.e., the erstwhile standby, which we haven't switched to the primary role yet -- we'll do that shortly).
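Here's roughly what the entries would look like after the role change (the host names and port are made up -- substitute your own) --

GCPROD_PRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = newprim-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = GCPROD))
  )

GCPROD_STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = newstby-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = GCPROD))
  )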

After making these changes, we put the new standby database in recovery mode --

GCPROD1_SQL > alter database recover managed standby database disconnect from session;

Database altered.




On standby node 1
Next, we perform a switchover of the old standby database to make it the new PRIMARY.
[Note: I had to perform media recovery as shown below because I had accidentally cancelled the recovery earlier -- this would not be needed if the (old) standby were already in recovery mode.]

GCPROD1_SQL > ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY
*
ERROR at line 1:
ORA-16139: media recovery required

GCPROD1_SQL > alter database recover managed standby database disconnect from session;

Database altered.

GCPROD1_SQL > ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
Database altered.

GCPROD1_SQL > alter database open;
Database altered.


At this point our new primary database is open and available.

Now we have to set up the log-shipping INIT parameters on our new primary.

a) LOG_ARCHIVE_DEST_n setting

Here are our old settings --
GCPROD1_SQL > show parameter log_archive_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest string
log_archive_dest_1 string LOCATION=+FLASHDG
log_archive_dest_10 string
log_archive_dest_2 string
log_archive_dest_3 string

...
...
[removed for brevity]

GCPROD1_SQL > ALTER SYSTEM SET log_archive_dest_1='location="+FLASHDG", valid_for=(ALL_LOGFILES, ALL_ROLES)' SCOPE=BOTH;

GCPROD1_SQL > ALTER SYSTEM SET log_archive_dest_2='service="GCPROD_STBY", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=10 max_connections=4 reopen=15 db_unique_name="GCPROD" register net_timeout=180 valid_for=(online_logfile,primary_role)' SCOPE=BOTH;

Also, don't forget to enable the destination (or the logs won't ship automatically).

GCPROD1_SQL > ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=BOTH;


b) FAL settings
The primary database doesn't need any FAL settings, so we can remove those.

GCPROD1_SQL > ALTER SYSTEM SET fal_client='' SCOPE=BOTH;
GCPROD1_SQL > ALTER SYSTEM SET fal_server='' SCOPE=BOTH;

This pretty much completes the SWITCHOVER activity.
If the DB_UNIQUE_NAMEs are different for some reason on the primary and DR sides, we also need to update the dg_config setting of the LOG_ARCHIVE_CONFIG parameter; otherwise we might see the following errors in the alert log of the primary database --

Errors in file /u01/app/oracle/admin/GCPROD/bdump/gcprod1_arc1_1316.trc:
ORA-16047: DGID mismatch between destination setting and standby
...
PING[ARC1]: Heartbeat failed to connect to standby 'GCPROD_STBY'. Error is 16047.


The solution is to set the LOG_ARCHIVE_CONFIG parameter to list the db_unique_names (if they differ from db_name for any reason) --

GCPROD1_SQL > ALTER SYSTEM SET log_archive_config='dg_config=(GCPROD, GCPRODDR)' SCOPE=BOTH;

[where GCPROD is the db_unique_name on the primary side and GCPRODDR is the db_unique_name on the DR side]

All is set now. We can verify everything by performing a few log switches on the primary side; the only remaining item is to make sure the TNSNAMES file is updated on both sides.
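A quick sanity test along these lines (a sketch) -- force an archive on the new primary and watch it arrive and get applied on the new standby --

-- on the new primary
GCPROD1_SQL > alter system archive log current;

-- on the new standby
select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;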

Tuesday, October 23, 2007

Oracle Streams (Downstream) with Target on RAC

Since we have done this here several times, I thought I would share it with others who might be looking for examples of how to set up Oracle Streams (the downstream approach) with the target database in a RAC environment.

Overview
Our setup (as shown in the picture) has the source database sitting on Unix, and the downstream database on a different server (also Solaris). Archivelogs generated on SOURCE_DB are available on the DOWNSTREAM_SERVER because we have a hot standby database sitting there (you could instead move the logs manually via scp or ftp). These archivelogs are then registered with the downstream database, where the CAPTURE and PROPAGATION processes run. Propagation sends the changes to TARGET_DB via queue-to-queue communication.

Detailed Steps
This is broken down into four sections, as explained below.


A. Preparing the SOURCE environment
1. Create the STRMDBA user with the required privileges in the SOURCE_DB database. This will be the Oracle Streams administrator account there.
connect / as sysdba
create user STRMDBA identified by strmdba default tablespace USERS temporary tablespace TEMP;
grant CREATE SESSION, SELECT ANY DICTIONARY to STRMDBA;
grant EXECUTE on DBMS_CAPTURE_ADM to STRMDBA;
grant EXECUTE on DBMS_STREAMS_RPC to STRMDBA;
grant EXEMPT ACCESS POLICY to STRMDBA;

2. Prepare the tables for instantiation. This ensures that all the information required for generating an LCR (Logical Change Record) gets added to the redo logs. This step also turns on supplemental logging for Primary and Unique Keys (the default) or for all columns (with the SUPPLEMENTAL_LOGGING=>'ALL' option). Supplemental logging of ALL columns is required if the source table does not have a primary or a unique key.
exec DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION (table_name => 'SCOTT.EMP');

exec DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'SCOTT.T_EMP', SUPPLEMENTAL_LOGGING=>'ALL');
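To confirm the tables got prepared (and at what supplemental-logging level), you can query DBA_CAPTURE_PREPARED_TABLES in the source database -- just a sketch --

select table_owner, table_name, supplemental_log_data_pk, supplemental_log_data_all
from dba_capture_prepared_tables;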




B. Preparing the DOWNSTREAM environment
1. In the downstream database, create a new tablespace called LOGMINER for use by the logminer process (by default, logminer uses the SYSAUX tablespace). The steps below include a drop tablespace statement, and the default tablespace is set back to SYSTEM prior to dropping the tablespace.

[Note: Not setting SYSTEM as the default for the logminer process before dropping the LOGMINER tablespace might corrupt the entire dictionary.]

connect / as sysdba
alter user SYSTEM default tablespace SYSTEM;
exec DBMS_LOGMNR_D.SET_TABLESPACE('SYSTEM');

drop tablespace LOGMINER including contents and datafiles;
create tablespace LOGMINER datafile '/u16/oradata/$ORACLE_SID/lgm01$ORACLE_SID.dbf' size 50M autoextend on next 20M maxsize 500M extent management local uniform size 64K segment space management auto;

alter user system default tablespace LOGMINER;
exec DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');
alter user system default tablespace SYSTEM;


2. Create a STREAMS_Q tablespace in the downstream database to store the database objects for Oracle Streams.
drop tablespace STREAMS_Q including contents and datafiles;

create tablespace STREAMS_Q datafile '/u16/oradata/$ORACLE_SID/str01$ORACLE_SID.dbf' size 50M autoextend on next 20M maxsize 200M extent management local uniform size 64K segment space management auto;

3. Create the Streams Administrator account in the downstream database.
connect / as sysdba
create user STRMDBA identified by &passwd default tablespace STREAMS_Q quota unlimited on STREAMS_Q temporary tablespace TEMP;

grant create session, dba to strmdba;
exec dbms_streams_auth.grant_admin_privilege('STRMDBA');


4. Create a capture queue (notice, I did this under the STRMDBA schema). This step also creates a queue table to store the LCRs.
connect STRMDBA
begin
dbms_streams_adm.set_up_queue(
queue_table => 'T_STRM_Q',
storage_clause => 'TABLESPACE STREAMS_Q',
queue_name => 'STRM_CAPTURE_Q',
queue_user => 'STRMDBA');
end;
/


5. Create database links to the SOURCE and TARGET databases. Again, these are private links owned by the STRMDBA user.
connect STRMDBA
drop database link SOURCE_DB;
create database link SOURCE_DB connect to STRMDBA identified by &passwd using 'SOURCE_DB';

drop database link TARGET_DB;
create database link TARGET_DB connect to STRMDBA identified by &passwd using 'TARGET_DB';


6. Create a capture process called STRM_CAPTURE to capture changes into the Capture Queue called STRM_CAPTURE_Q (created in step 4)
begin
dbms_capture_adm.create_capture(
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
capture_name => 'STRM_CAPTURE',
rule_set_name => NULL,
start_scn => NULL,
source_database => 'SOURCE_DB',
use_database_link => TRUE,
first_scn => NULL,
logfile_assignment => 'explicit');
end;
/


7. Similarly, create a propagation process called STRM_PROPAGATE to propagate changes from the capture queue to the apply queue (notice, in a RAC environment this is queue-to-queue propagation, since APPLY runs on only a single node).
connect STRMDBA
BEGIN
DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
propagation_name => 'STRM_PROPAGATE',
source_queue => 'STRMDBA.STRM_CAPTURE_Q',
destination_queue => 'STRMDBA.STRM_APPLY_Q',
destination_dblink => 'TARGET_DB',
queue_to_queue => TRUE);
END;
/

8. Add table CAPTURE RULES for all the tables you want to set up streams on (look up the documentation for the syntax to do this for an entire schema). Notice, I'm including both DML and DDL changes to be captured here.
connect STRMDBA
begin
dbms_streams_adm.add_table_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_type => 'CAPTURE',
streams_name => 'STRM_CAPTURE',
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'SOURCE_DB',
inclusion_rule => TRUE);
end;
/


9. Next, add table PROPAGATION RULES. Notice the use of target database link for queue-to-queue propagation. The destination Queue (STRM_APPLY_Q) will be created in the next section.
begin
dbms_streams_adm.add_table_propagation_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_name => 'STRM_PROPAGATE',
source_queue_name => 'STRMDBA.STRM_CAPTURE_Q',
destination_queue_name=> 'STRMDBA.STRM_APPLY_Q@TARGET_DB',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'SOURCE_DB',
inclusion_rule => TRUE,
queue_to_queue => TRUE);
end;
/


This sets up the DOWNSTREAM environment. We have not yet started the CAPTURE process; we will do that at the end.



C. Preparing the TARGET environment

1. In the Target database, create the STREAMS_Q and LOGMINER tablespaces, just like we did in the downstream database. Since this is RAC and we use ASM diskgroups, both tablespaces are created in the +DATA diskgroup.
Notice, I include the drop statements in case we are redoing this effort. The STREAMS_Q tablespace cannot be dropped unless we first drop the queue table in it, which is why the queue table is dropped first below.
connect / as sysdba
alter user system default tablespace SYSTEM;
exec DBMS_LOGMNR_D.SET_TABLESPACE('SYSTEM');

drop tablespace LOGMINER including contents and datafiles;
create tablespace LOGMINER datafile '+DATA' size 100M autoextend on next 20M maxsize 4000M extent management local uniform size 64K segment space management auto;

alter user system default tablespace LOGMINER;
exec DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');
alter user system default tablespace SYSTEM;

exec DBMS_AQADM.DROP_QUEUE_TABLE('STRMDBA.T_STRM_Q',TRUE);
drop tablespace STREAMS_Q including contents and datafiles;
create tablespace STREAMS_Q datafile '+DATA' size 50M autoextend on next 20M maxsize 1000M extent management local uniform size 64K segment space management auto;


2. Create the Streams administrator account with the right privileges in the Target database. I remember hitting issues when I tried without the DBA privs, but you might want to test that yourself.
connect / as sysdba
create user STRMDBA identified by &passwd default tablespace STREAMS_Q quota unlimited on STREAMS_Q temporary tablespace TEMP;

grant create session, dba to STRMDBA;
exec dbms_streams_auth.grant_admin_privilege('STRMDBA');


3. Create the Apply Queue (in the STRMDBA schema) in the Target database. This also creates a queue table to store the propagated LCRs.
connect STRMDBA
begin
dbms_streams_adm.set_up_queue(
queue_table => 'T_STRM_Q',
storage_clause => 'TABLESPACE STREAMS_Q',
queue_name => 'STRM_APPLY_Q',
queue_user => 'STRMDBA');
end;
/


4. Add table APPLY RULES.
connect STRMDBA
begin
dbms_streams_adm.add_table_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_type => 'APPLY',
streams_name => 'STRM_APPLY',
queue_name => 'STRMDBA.STRM_APPLY_Q',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'SOURCE_DB');
end;
/


[Note: You might have a requirement to add a filter to the APPLY process. In that case, include_dml should be set to FALSE and a subset rule added instead. For example, to apply changes only for records where DEPT_CODE is 3 or 4, set include_dml=FALSE above and then add the following subset rule.]

connect strmdba
begin
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
table_name => 'SCOTT.DEPARTMENT',
dml_condition => ' DEPT_CODE in (3,4) ',
streams_type => 'APPLY',
streams_name => 'STRM_APPLY',
queue_name => 'STRM_APPLY_Q');
end;
/


5. Set the apply user, and alter the APPLY process to set DISABLE_ON_ERROR to 'N'. The default is 'Y', which disables the APPLY on the very first unresolved error.
connect / as sysdba
begin
dbms_apply_adm.alter_apply(
apply_name => 'STRM_APPLY',
apply_user => 'STRMDBA');
end;
/

connect STRMDBA
begin
dbms_apply_adm.set_parameter(
apply_name => 'STRM_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N');
end;
/




D. Starting the CAPTURE/APPLY processes and register Archivelogs
1. Our Source, Downstream and Target environments are all set for capturing changes to the SCOTT.DEPARTMENT table. Start the CAPTURE process in the downstream database (check the STATUS column of DBA_CAPTURE) and the APPLY process in the Target database (the STATUS column of DBA_APPLY).
In the Target database --
connect STRMDBA
exec dbms_apply_adm.start_apply(apply_name => 'STRM_APPLY');
connect / as sysdba


In Downstream database --
connect STRMDBA
exec dbms_capture_adm.start_capture (capture_name => 'STRM_CAPTURE');


2. The archivelogs generated on the source database should be scp'ed to the downstream server (I get them by virtue of the hot standby there but, as I mentioned earlier, you might have to set up a job to do that unless you want to add a new archive destination on the source itself). Once the archivelogs are on the downstream server, go ahead and register them manually in the downstream database (I use a perl script to do that, so it is all automated) --
alter database register logical logfile '/u02/oradata/SOURCE_DB/log_SOURCE_DB_625925510_1_3023.log' for 'STRM_CAPTURE';
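To confirm a log got registered, you can check DBA_REGISTERED_ARCHIVED_LOG in the downstream database (a sketch) --

select thread#, sequence#, name from dba_registered_archived_log order by thread#, sequence#;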


Bonus item..
Adding a Global Rule to exclude certain DDLs like Grants, Truncate Table etc

There could also be a need to exclude certain DDL statements, like Truncate Table, Grant or Alter Trigger, from being applied to the Target database. This can be achieved by adding a Global Rule on the downstream database where CAPTURE is running. [Be aware that this will turn on supplemental logging of Primary Key, Unique Key and Foreign Key columns at the database level in SOURCE_DB, i.e., implicitly execute "alter database add supplemental log data(PRIMARY KEY,UNIQUE INDEX,FOREIGN KEY) columns" in the source database.]

connect STRMDBA
begin
dbms_streams_adm.add_global_rules(
streams_type => 'CAPTURE',
streams_name => 'STRM_CAPTURE',
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
include_dml => FALSE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'SOURCE_DB',
inclusion_rule => FALSE,
and_condition =>'(:ddl.get_command_type()=''CREATE INDEX'''||' or :ddl.get_command_type()=''GRANT''' || ' or :ddl.get_command_type()=''TRUNCATE TABLE'''||' or :ddl.get_command_type()=''ALTER TRIGGER'''||')');
end;
/


Ensure that changes to SCOTT.DEPARTMENT are streaming into the Target by generating a log in the Source database and then manually registering it in the Downstream database as shown above. You should be all set now!
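Here are the checks I lean on while testing -- capture progress on the downstream side and apply errors on the target (a sketch using the standard dictionary views) --

-- downstream database: capture should be ENABLED and its SCNs moving
select capture_name, status, captured_scn, applied_scn from dba_capture;

-- target database: any rows here mean the APPLY hit errors
select apply_name, error_number, error_message from dba_apply_error;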

Monday, October 22, 2007

ORA-12514 in Oracle Streams

I recently got stuck setting up streams (the downstream approach) with the target database in a RAC environment (I'll detail the steps in a later blog entry).

Anyway, here's the error that was showing up on the downstream database --

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Propagation Schedule for (STRMDBA.STRM_CAPTURE_Q, "STRMDBA"."STRM_APPLY_Q"@PERG) disabled due to 16 consecutive errors.

Here's what I did to fix the error (note that my global_name was not set correctly earlier either, so it is quite likely that the service name was initially picked up from the wrong global_name entry) --

PERG1_SQL > show parameter service

NAME TYPE VALUE
------------------------------------ -------- ------------------------------
service_names string SYS$STRMDBA.STRM_APPLY_Q.PERG.REGRESS.RDBMS.DEV.US.ORACLE.COM, PERG

PERG1_SQL > alter system set service_names='SYS$STRMDBA.STRM_APPLY_Q.PERG, PERG';

System altered.

PERG1_SQL > show parameter service

NAME TYPE VALUE
------------------------------------ -------- ------------------------------
service_names string SYS$STRMDBA.STRM_APPLY_Q.PERG, PERG

The propagation was still throwing errors (it had probably recorded the wrong service_names entry in the dictionary), so I had to drop and recreate the propagation rules. After that, all went fine.
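For reference, the drop was along these lines (a sketch -- the recreate is just the same CREATE_PROPAGATION and ADD_TABLE_PROPAGATION_RULES sequence shown in the setup post) --

connect STRMDBA
exec dbms_propagation_adm.drop_propagation(propagation_name => 'STRM_PROPAGATE', drop_unused_rule_sets => TRUE);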

Wednesday, March 21, 2007

I've no art on blogging

I've been working with Oracle databases for over 14 years now. I know blogging is an art, and one that I do not possess at all. However, there is something new I learn about Oracle almost every day, so after talking to several other "gurus" in the art of blogging, I thought I would put some of my experiences and learnings here.. well, I will attempt to. These are solely to help myself in the future (should I forget) or someone else facing similar issues.

For the past several years, I've been administering databases for the Banner application from SCT/SHE (SunGard Higher Ed) at the University of Illinois, Urbana-Champaign (a nice little community, though not much happens outside the University). I also wrote an article for the SELECT journal covering Oracle VPD in detail and got some good feedback on it (Quarter 3, 2006: http://www.ioug.org/selectjournal/issues/06q3.cfm). I'm trying to expand my learning horizon and socialize with other experts in order to learn and share.

Recently we've been doing some RAC installations here at the University to support the online education initiative, with help from Oracle and Dell. I'll be posting some of the things I've learnt (and will learn) in the near future.

Update: In Nov 2008, I switched jobs and moved to Texas (the Dallas area). I now work for a company called Ameripath (a Quest Diagnostics company). We manage a bigger RAC architecture here, together with a logical standby (for reporting) and tighter rules. New challenges! I enjoy working in a challenging environment.