Tuesday, October 23, 2007

Oracle Streams (Downstream) with Target on RAC

Since we have done this here several times, I thought I would share it with others who might be looking for an example of how to set up Oracle Streams (downstream capture) with the target database in a RAC environment.

Overview
Our setup (as shown in the picture) has the source database on a Unix server and the downstream database on a different server (also Solaris). Archivelogs generated on SOURCE_DB are available on the downstream server because we have a hot standby database sitting there (you could also move the logs manually via scp or ftp). These archivelogs are then registered with the downstream database, where the CAPTURE and PROPAGATION processes run. The propagation then sends the changes to TARGET_DB via queue-to-queue communication.
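If you do not have a standby database to piggyback on and do not want to script scp/ftp jobs, one alternative (alluded to again in section D) is to have the source ship its archived logs to the downstream server through a redo transport destination. Treat the following as a sketch under assumptions: the destination number (log_archive_dest_3), the TNS alias DOWNSTREAM_DB and the TEMPLATE path are illustrative, not part of our setup.

-- Illustrative sketch, run on the SOURCE database.
-- NOREGISTER because we register the logs explicitly with the capture process (section D, step 2).
connect / as sysdba
alter system set log_archive_dest_3='SERVICE=DOWNSTREAM_DB ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) TEMPLATE=/u02/oradata/SOURCE_DB/log_%t_%s_%r.log' scope=both;
alter system set log_archive_dest_state_3=enable scope=both;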

Detailed Steps
This is broken down into 4 sections as explained below.


A. Preparing the SOURCE environment
1. Create the STRMDBA id with the required privileges in the SOURCE_DB database. This will be the administrator account for Oracle Streams there.
connect / as sysdba
create user STRMDBA identified by strmdba default tablespace USERS temporary tablespace TEMP;
grant CREATE SESSION, SELECT ANY DICTIONARY to STRMDBA;
grant EXECUTE on DBMS_CAPTURE_ADM to STRMDBA;
grant EXECUTE on DBMS_STREAMS_RPC to STRMDBA;
grant EXEMPT ACCESS POLICY to STRMDBA;

2. Prepare tables for instantiation. This ensures that all the information required to generate an LCR (Logical Change Record) gets added to the redo logs. This step also turns on supplemental logging for Primary and Unique Keys (the default) or for all columns (with the SUPPLEMENTAL_LOGGING=>'ALL' option). Supplemental logging of ALL columns is required if the source table does not have a primary or unique key.
exec DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION (table_name => 'SCOTT.EMP');

exec DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(table_name => 'SCOTT.T_EMP', SUPPLEMENTAL_LOGGING=>'ALL');




B. Preparing the DOWNSTREAM environment
1. In the downstream database, create a new tablespace called LOGMINER for the LogMiner tables (by default, LogMiner uses the SYSAUX tablespace). The steps below include a drop tablespace statement in case you are redoing this effort, and the LogMiner tables are pointed back to SYSTEM before the drop.

[Note: Not pointing the LogMiner tables back to SYSTEM before dropping the LOGMINER tablespace can corrupt the entire LogMiner dictionary.]

connect / as sysdba
alter user SYSTEM default tablespace SYSTEM;
exec DBMS_LOGMNR_D.SET_TABLESPACE('SYSTEM');

drop tablespace LOGMINER including contents and datafiles;
create tablespace LOGMINER datafile '/u16/oradata/$ORACLE_SID/lgm01$ORACLE_SID.dbf' size 50M autoextend on next 20M maxsize 500M extent management local uniform size 64K segment space management auto;

alter user system default tablespace LOGMINER;
exec DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');
alter user system default tablespace SYSTEM;


2. Create a STREAMS_Q tablespace in the downstream database to store the database objects used by Oracle Streams.
drop tablespace STREAMS_Q including contents and datafiles;

create tablespace STREAMS_Q datafile '/u16/oradata/$ORACLE_SID/str01$ORACLE_SID.dbf' size 50M autoextend on next 20M maxsize 200M extent management local uniform size 64K segment space management auto;

3. Create the Streams Administrator account in the downstream database.
connect / as sysdba
create user STRMDBA identified by &passwd default tablespace STREAMS_Q quota unlimited on STREAMS_Q temporary tablespace TEMP;

grant create session, dba to strmdba;
exec dbms_streams_auth.grant_admin_privilege('STRMDBA');


4. Create a capture queue (notice that I did this under the STRMDBA schema). This step also creates a queue table to store the LCRs.
connect STRMDBA
begin
dbms_streams_adm.set_up_queue(
queue_table => 'T_STRM_Q',
storage_clause => 'TABLESPACE STREAMS_Q',
queue_name => 'STRM_CAPTURE_Q',
queue_user => 'STRMDBA');
end;
/


5. Create database links to the SOURCE and TARGET databases. Again, these are private links owned by the STRMDBA id.
connect STRMDBA
drop database link SOURCE_DB;
create database link SOURCE_DB connect to STRMDBA identified by &passwd using 'SOURCE_DB';

drop database link TARGET_DB;
create database link TARGET_DB connect to STRMDBA identified by &passwd using 'TARGET_DB';


6. Create a capture process called STRM_CAPTURE to capture changes into the capture queue STRM_CAPTURE_Q (created in step 4).
begin
dbms_capture_adm.create_capture(
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
capture_name => 'STRM_CAPTURE',
rule_set_name => NULL,
start_scn => NULL,
source_database => 'SOURCE_DB',
use_database_link => TRUE,
first_scn => NULL,
logfile_assignment => 'explicit');
end;
/


7. Similarly, create a propagation process called STRM_PROPAGATE to propagate changes from the capture queue to the apply queue (notice that in a RAC environment this is a queue-to-queue propagation, since the APPLY runs on only one instance).
connect STRMDBA
BEGIN
DBMS_PROPAGATION_ADM.CREATE_PROPAGATION(
propagation_name => 'STRM_PROPAGATE',
source_queue => 'STRMDBA.STRM_CAPTURE_Q',
destination_queue => 'STRMDBA.STRM_APPLY_Q',
destination_dblink => 'TARGET_DB',
queue_to_queue => TRUE);
END;
/

8. Add table CAPTURE RULES for all the tables you want to set up Streams on (a schema-level sketch follows this block; look up the documentation for the exact syntax if you want to cover an entire schema). Notice that I'm including both DML and DDL changes to be captured here.
connect STRMDBA
begin
dbms_streams_adm.add_table_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_type => 'CAPTURE',
streams_name => 'STRM_CAPTURE',
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'SOURCE_DB',
inclusion_rule => TRUE);
end;
/

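For reference, here is roughly what a schema-level capture rule (the alternative mentioned in step 8) would look like. Treat it as a sketch; I did not use this exact block in our setup, so check the DBMS_STREAMS_ADM.ADD_SCHEMA_RULES documentation before running it.

-- Sketch: capture DML and DDL for every table in the SCOTT schema
connect STRMDBA
begin
dbms_streams_adm.add_schema_rules(
schema_name => 'SCOTT',
streams_type => 'CAPTURE',
streams_name => 'STRM_CAPTURE',
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
include_dml => TRUE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'SOURCE_DB',
inclusion_rule => TRUE);
end;
/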

9. Next, add table PROPAGATION RULES. Notice the use of the target database link for queue-to-queue propagation. The destination queue (STRM_APPLY_Q) will be created in the next section.
begin
dbms_streams_adm.add_table_propagation_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_name => 'STRM_PROPAGATE',
source_queue_name => 'STRMDBA.STRM_CAPTURE_Q',
destination_queue_name=> 'STRMDBA.STRM_APPLY_Q@TARGET_DB',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'SOURCE_DB',
inclusion_rule => TRUE,
queue_to_queue => TRUE);
end;
/


This completes the DOWNSTREAM environment setup. We have not yet started the CAPTURE process; we will do that at the end.



C. Preparing the TARGET environment

1. In the Target database, create the STREAMS_Q and LOGMINER tablespaces, just like we did in the downstream database. Since this is RAC and we use ASM diskgroups, both tablespaces are created in the +DATA diskgroup.
Notice that I include the drop statements as well, in case we are redoing this effort. The STREAMS_Q tablespace cannot be dropped unless we first drop the queue table in it.
connect / as sysdba
alter user system default tablespace SYSTEM;
exec DBMS_LOGMNR_D.SET_TABLESPACE('SYSTEM');

drop tablespace LOGMINER including contents and datafiles;
create tablespace LOGMINER datafile '+DATA' size 100M autoextend on next 20M maxsize 4000M extent management local uniform size 64K segment space management auto;

alter user system default tablespace LOGMINER;
exec DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');
alter user system default tablespace SYSTEM;

exec DBMS_AQADM.DROP_QUEUE_TABLE('STRMDBA.T_STRM_Q',TRUE);
drop tablespace STREAMS_Q including contents and datafiles;
create tablespace STREAMS_Q datafile '+DATA' size 50M autoextend on next 20M maxsize 1000M extent management local uniform size 64K segment space management auto;


2. Create the Streams Administrator account with the required privileges in the Target database. I remember hitting issues when I tried without the DBA privilege, but you might want to test that as well.
connect / as sysdba
create user STRMDBA identified by &passwd default tablespace STREAMS_Q quota unlimited on STREAMS_Q temporary tablespace TEMP;

grant create session, dba to STRMDBA;
exec dbms_streams_auth.grant_admin_privilege('STRMDBA');


3. Create the apply queue (again under the STRMDBA schema) in the Target database. This also creates a queue table to store the propagated LCRs.
connect STRMDBA
begin
dbms_streams_adm.set_up_queue(
queue_table => 'T_STRM_Q',
storage_clause => 'TABLESPACE STREAMS_Q',
queue_name => 'STRM_APPLY_Q',
queue_user => 'STRMDBA');
end;
/


4. Add table APPLY RULES.
connect STRMDBA
begin
dbms_streams_adm.add_table_rules(
table_name => 'SCOTT.DEPARTMENT',
streams_type => 'APPLY',
streams_name => 'STRM_APPLY',
queue_name => 'STRMDBA.STRM_APPLY_Q',
include_dml => TRUE,
include_ddl => TRUE,
source_database => 'SOURCE_DB');
end;
/


[Note: You might have a requirement to add a filter to the APPLY process. In that case, set include_dml to FALSE in the call above and add a subset rule instead.
For example, to apply changes only for rows where DEPT_CODE is 3 or 4, set include_dml => FALSE above and then add the following subset rule.]

connect strmdba
begin
DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
table_name => 'SCOTT.DEPARTMENT',
dml_condition => ' DEPT_CODE in (3,4) ',
streams_type => 'APPLY',
streams_name => 'STRM_APPLY',
queue_name => 'STRM_APPLY_Q');
end;
/


5. Set the apply user and alter the APPLY process so that DISABLE_ON_ERROR is 'N'. The default is 'Y', which disables the APPLY process on the very first unresolved error.
connect / as sysdba
begin
dbms_apply_adm.alter_apply(
apply_name => 'STRM_APPLY',
apply_user => 'STRMDBA');
end;
/

connect STRMDBA
begin
dbms_apply_adm.set_parameter(
apply_name => 'STRM_APPLY',
parameter => 'DISABLE_ON_ERROR',
value => 'N');
end;
/




D. Starting the CAPTURE/APPLY processes and registering archivelogs
1. Our Source, Downstream and Target environments are now all set to capture changes for the SCOTT.DEPARTMENT table. Start the APPLY process in the Target database (check the STATUS column of DBA_APPLY) and the CAPTURE process in the Downstream database (check the STATUS column of DBA_CAPTURE); a quick status check is sketched after the start commands below.
In the Target database --
connect STRMDBA
exec dbms_apply_adm.start_apply(apply_name => 'STRM_APPLY');
connect / as sysdba


In the Downstream database --
connect STRMDBA
exec dbms_capture_adm.start_capture (capture_name => 'STRM_CAPTURE');
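To confirm that both processes came up, a quick check of the status columns mentioned above might look like this (run each query on the database where that process lives):

-- In the Downstream database: the capture process should show ENABLED
select capture_name, status from dba_capture;

-- In the Target database: the apply process should show ENABLED
select apply_name, status from dba_apply;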


2. The archivelogs generated on the source database need to reach the downstream server (I get them by virtue of the hot standby there, but as mentioned earlier, you might have to set up a job to scp them unless you add a new archive destination on the source itself). Once the archivelogs are on the downstream server, go ahead and manually register them in the Downstream database (I use a Perl script for this, so it is all automated); a quick verification query follows the register statement below.
alter database register logical logfile '/u02/oradata/SOURCE_DB/log_SOURCE_DB_625925510_1_3023.log' for 'STRM_CAPTURE';
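To double-check that a log really got registered for the capture process, a query along the following lines should help (the column list is from memory, so verify it against your version):

-- In the Downstream database: archivelogs registered for the downstream capture
select consumer_name, source_database, sequence#, name
from dba_registered_archived_log
order by sequence#;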


Bonus item..
Adding a Global Rule to exclude certain DDLs like Grants, Truncate Table, etc.

There could also be a need to exclude certain DDL statements, such as TRUNCATE TABLE, GRANT or ALTER TRIGGER, from being applied to the Target database. This can be achieved by adding a Global Rule on the Downstream database where the CAPTURE is running. [Be aware that this turns on supplemental logging for Primary Key, Unique Key and Foreign Key columns at the database level in SOURCE_DB, i.e. it implicitly executes "alter database add supplemental log data (PRIMARY KEY, UNIQUE INDEX, FOREIGN KEY) columns" in the Source database.]

connect STRMDBA
begin
dbms_streams_adm.add_global_rules(
streams_type => 'CAPTURE',
streams_name => 'STRM_CAPTURE',
queue_name => 'STRMDBA.STRM_CAPTURE_Q',
include_dml => FALSE,
include_ddl => TRUE,
include_tagged_lcr => FALSE,
source_database => 'SOURCE_DB',
inclusion_rule => FALSE,
and_condition => '(:ddl.get_command_type() = ''CREATE INDEX'' ' ||
'or :ddl.get_command_type() = ''GRANT'' ' ||
'or :ddl.get_command_type() = ''TRUNCATE TABLE'' ' ||
'or :ddl.get_command_type() = ''ALTER TRIGGER'')');
end;
/


Ensure that the changes on SCOTT.DEPARTMENT are streaming into the Target by making a change in the Source database, switching a log there, and then manually registering that archivelog in the Downstream database as shown above. You should be all set now!
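If the changes do not show up on the target, these are the first two places I would look; the queries are a sketch, with column names from memory:

-- On the Target database: LCRs that failed to apply land in the error queue
select apply_name, source_database, local_transaction_id, error_message
from dba_apply_error;

-- On the Downstream database: current state of the capture process
select capture_name, state, total_messages_captured
from v$streams_capture;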
