
Tuesday, 27 November 2012




Dear Readers,

      I thank you all for sparing your valuable time to read my posts. For the last few days, I have been writing on my new blog. For the latest posts, please visit the following link:

http://oracleinaction.blogspot.in/

 Do visit and give your valuable comments/feedback.

Regards

Sunday, 30 September 2012

RECOVERY USING STANDBY DATABASE


                                      RECOVERY USING STANDBY DATABASE

In this post, I will demonstrate the use of a standby database to restore a datafile that got corrupted on the primary.

In the following scenario, OP1 is the primary database, OP2 is the standby database and CDB is the recovery catalog database. TNS entries for op1 and op2 exist on the machine hosting the recovery catalog database cdb. The plan is to recover a lost datafile of the primary database from the standby database.

OVERVIEW:

- Create recovery catalog.
- Register primary database with the catalog.
- Connect to catalog and primary. Configure persistent RMAN settings for the primary.
- Connect to catalog and standby. Configure persistent RMAN settings for the standby.
- Create tablespace tbs_sales on the primary and switch logs so that the tablespace is created on the standby as well.
- Corrupt the datafile of tbs_sales on the primary.
- Connect to the standby database as target database using RMAN.
- Connect to the primary database as auxiliary.
- Take a backup of tbs_sales from the standby so that the backup file is created on the primary.
- Connect to catalog and primary.
- Catalog the backup file created earlier.
- Restore/recover the tablespace.
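
Before walking through the implementation, here is a consolidated sketch of the RMAN connections used at each stage (the connect strings below are only illustrative and assume the op1/op2/cdb net service names described above; adjust passwords and service names to your own environment):

-- 1. Create the catalog (catalog-only connection)
$ rman catalog catown/c@cdb

-- 2. Register and configure the primary (primary as target + catalog)
$ rman target sys@op1 catalog catown@cdb

-- 3. Configure the standby (standby as target + catalog)
$ rman target sys@op2 catalog catown@cdb

-- 4. Copy the lost datafile from the standby onto the primary
--    (standby as target, primary as auxiliary; no catalog connection needed)
$ rman target sys@op2 auxiliary sys@op1

-- 5. Catalog the copy and restore/recover on the primary (primary as target + catalog)
$ rman target / catalog catown@cdb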

IMPLEMENTATION:

-- Create Recovery Catalog --

Log in to the catalog database CDB and do the following:

SQL> grant connect, resource to catown identified by c;

Grant succeeded.

SQL> grant recovery_catalog_owner to catown;

Grant succeeded.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb/system01.dbf
/u01/app/oracle/oradata/cdb/sysaux01.dbf
/u01/app/oracle/oradata/cdb/undotbs01.dbf
/u01/app/oracle/oradata/cdb/users01.dbf

SQL> create tablespace cattbs datafile '/u01/app/oracle/oradata/cdb/cat1a.dbf' size 300m;

Tablespace created.

SQL> alter user catown default tablespace cattbs quota unlimited on cattbs;

User altered.

[oracle@node1 ~]$ rman catalog catown/c

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Sep 29 09:28:08 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to recovery catalog database

RMAN> create catalog tablespace cattbs ;

recovery catalog created


#### Register OP1 with catalog CDB (connect to the primary as target and to the recovery catalog, then register)

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> report schema for db_unique_name op1;

Report of database schema for database with db_unique_name OP1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    670      SYSTEM               YES     /u01/app/oracle/oradata/op1/system01.dbf
2    490      SYSAUX               NO      /u01/app/oracle/oradata/op1/sysaux01.dbf
3    45       UNDOTBS1             YES     /u01/app/oracle/oradata/op1/undotbs01.dbf
4    5        USERS                NO      /u01/app/oracle/oradata/op1/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    20       TEMP                 32767       /u01/app/oracle/oradata/op1/temp01.dbf

RMAN> show all for db_unique_name op1;

RMAN configuration parameters for database with db_unique_name OP1 are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_op1.f'; # default

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

-- Configure persistent setting so that archivelogs are not deleted
-- until they have been applied to standby

RMAN> configure archivelog deletion policy to applied on all standby ;

new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> show all for db_unique_name op1;

RMAN configuration parameters for database with db_unique_name OP1 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_op1.f'; # default

RMAN> configure db_unique_name op1 connect identifier 'op1' ;

new RMAN configuration parameters:
CONFIGURE DB_UNIQUE_NAME 'op1' CONNECT IDENTIFIER  'op1';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

####REGISTER SERVICE OF STANDBY WITH RMAN
RMAN> configure db_unique_name op2 connect identifier 'op2' ;

new RMAN configuration parameters:
CONFIGURE DB_UNIQUE_NAME 'op2' CONNECT IDENTIFIER  'op2';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> list db_unique_name of database;

List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------- ------- ----------------- ---------------  ------------------
161     OP1      588960404        PRIMARY          OP1               
161     OP1      588960404        STANDBY          OP2    

RMAN> report schema for db_unique_name op2;

Report of database schema for database with db_unique_name OP2

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    670      SYSTEM               YES   
2    490      SYSAUX               NO    
3    45       UNDOTBS1             YES   
4    5        USERS                NO  

####connect to standby database
[oracle@node1 ~]$ rman target sys@op2

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Sep 29 09:42:40 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

target database Password:
connected to target database: OP1 (DBID=588960404, not open)

RMAN> connect catalog catown@cdb

recovery catalog database Password:
connected to recovery catalog database

RMAN> show all for db_unique_name op2;

RMAN configuration parameters for database with db_unique_name OP2 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE DB_UNIQUE_NAME 'op1' CONNECT IDENTIFIER  'op1';
CONFIGURE DB_UNIQUE_NAME 'op2' CONNECT IDENTIFIER  'op2';
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/snapcf_op2.f'; # default

-- Configure persistent settings for Standby database
-- Controlfile autobackup on
-- backup optimization
-- archivelogs are not deleted unless they are backed up once


RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON ;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE BACKUP OPTIMIZATION ON ;

new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK ;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

#### On primary create the tablespace and create a table in the tablespace

SQL> create tablespace tbs_sales datafile '/u01/app/oracle/oradata/op1/tbsales.dbf' size 50m;

Tablespace created.

SQL> create table scott.sales3 (id number) tablespace tbs_sales;

Table created.

SQL> insert into scott.sales3 values(90);

1 row created.

SQL> commit;

Commit complete.

-- On the primary, corrupt the datafile of the tbs_sales tablespace with the echo command. Before doing so,
   switch the log file so that the tablespace is created on the standby as well.

SYS@OP1>alter system switch logfile;

[oracle@node1 ~]$ echo > /u01/app/oracle/oradata/op1/tbsales.dbf

-- Flush buffer cache and query newly created table. The query will fail as the
   datafile has been corrupted

SQL> alter system flush buffer_cache;

System altered.

SQL> select * from scott.sales3;
select * from scott.sales3
                    *
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '/u01/app/oracle/oradata/op1/tbsales.dbf'


-- Now is the time to recover primary tablespace from standby
[oracle@node1 ~]$ rman target sys@op2

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Sep 29 09:50:30 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

target database Password:
connected to target database: OP1 (DBID=588960404, not open)

-- Connect to the primary database as the auxiliary database

RMAN> connect auxiliary sys@op1

auxiliary database Password:
connected to auxiliary database: OP1 (DBID=588960404)

-- Take an image copy backup of datafile 5 (tbs_sales) from the standby database;
   the AUXILIARY FORMAT clause creates the copy on the primary


RMAN> backup as copy datafile 5 auxiliary format '/u01/app/sales_sby.dbf' ;

Starting backup at 29-SEP-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u01/app/oracle/oradata/op2/tbsales.dbf
output file name=/u01/app/sales_sby.dbf tag=TAG20120929T095124
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 29-SEP-12

####CONNECT TO PRIMARY

[oracle@node1 ~]$ rman target/

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Sep 29 10:07:06 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OP1 (DBID=588960404)

RMAN> connect catalog catown@cdb

recovery catalog database Password:
connected to recovery catalog database

-- Catalog the image copy on the primary, since the backup was taken without
   connecting to the catalog


RMAN> catalog datafilecopy '/u01/app/sales_sby.dbf' ;

starting full resync of recovery catalog
full resync complete
cataloged datafile copy
datafile copy file name=/u01/app/sales_sby.dbf RECID=1 STAMP=795262055

-- Take the tablespace offline (in 11g RMAN, SQL statements are wrapped in the SQL command)
RMAN> sql 'alter tablespace tbs_sales offline immediate';

-- Restore datafile 5 from the cataloged image copy (recovery is performed in the next step)
RMAN> run {
           restore datafile 5 ; }


####CHECK ON PRIMARY

SQL> select * from scott.sales3;
select * from scott.sales3
                    *
ERROR at line 1:
ORA-00376: file 5 cannot be read at this time
ORA-01110: data file 5: '/u01/app/oracle/oradata/op1/tbsales.dbf'

SQL> alter tablespace tbs_sales online;
alter tablespace tbs_sales online
*
ERROR at line 1:
ORA-01113: file 5 needs media recovery
ORA-01110: data file 5: '/u01/app/oracle/oradata/op1/tbsales.dbf'

#### DO THE RECOVERY FROM RMAN on Primary
RMAN> recover tablespace tbs_sales;

Starting recover at 29-SEP-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=43 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 29-SEP-12

#### COME BACK TO PRIMARY

SQL> alter tablespace tbs_sales online;

Tablespace altered.

SQL> select * from scott.sales3;

        ID
----------
        90

#### ENJOY ####
---------------------------------------------------------------------
Related links:


HOME

11G DATA GUARD NEW FEATURES
DATA GUARD SETUP


Thursday, 27 September 2012

MIGRATE OUTLINES TO SQL MANAGEMENT BASE

                      MIGRATE OUTLINES TO SQL MANAGEMENT BASE

In this post, we will learn about stored outlines and, more importantly, how to migrate outlines created in older releases to the SQL management base in 11g Release 2.
_____________________________________________________

Outlines can be created automatically by Oracle or manually for specific statements. The automatic creation of outlines is controlled using the create_stored_outlines parameter, which can be set at session or instance level using the following commands.

ALTER SYSTEM SET create_stored_outlines=TRUE;
ALTER SESSION SET create_stored_outlines=TRUE;

Outlines can be generated for specific statements using the CREATE OUTLINE statement or the DBMS_OUTLN.CREATE_OUTLINE procedure. If the user doing this is not a DBA, he also needs the CREATE ANY OUTLINE privilege and the EXECUTE_CATALOG_ROLE role, as shown below.
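
For example, if scott is going to create the outlines, a DBA would first grant him those privileges. A minimal sketch (the user name scott is just an illustration):

SQL> conn / as sysdba
SQL> grant create any outline to scott;
SQL> grant execute_catalog_role to scott;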

Now let me show you a small demo

SQL> create outline outline_karan for category sales on select * from emp;
SQL> SELECT name, category, sql_text FROM user_outlines WHERE category = 'SALES';


Note that the outline stores its information in the form of hints:
SQL> SELECT node, stage, join_pos, hint FROM user_outline_hints WHERE name = 'OUTLINE_KARAN'

      NODE      STAGE   JOIN_POS HINT
---------- ---------- ---------- --------------------------------------------------------------------------------
         1          1          1 FULL(@"SEL$1" "EMP"@"SEL$1")
         1          1          0 OUTLINE_LEAF(@"SEL$1")
         1          1          0 ALL_ROWS
         1          1          0 DB_VERSION('11.2.0.1')
         1          1          0 OPTIMIZER_FEATURES_ENABLE('11.2.0.1')
         1          1          0 IGNORE_OPTIM_EMBEDDED_HINTS

## Now you see below that the outline is UNUSED because the query associated with the outline has not been executed yet.
SQL>  SELECT name, category, used FROM user_outlines where name='OUTLINE_KARAN';

NAME                           CATEGORY                       USED
------------------------------ ------------------------------ ------
OUTLINE_KARAN                  SALES                          UNUSED

### Let's fire the query and see

SQL> select * from emp;

## I have removed the 14 rows of output to reduce the size of the demo

### Note that the outline is marked as used once the query is executed (USED = USED)

SQL>  SELECT name, category, used FROM user_outlines where name='OUTLINE_KARAN';

NAME                           CATEGORY                       USED
------------------------------ ------------------------------ ------
OUTLINE_KARAN                  SALES                          USED
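
If the outline does not flip to USED in your environment, the most likely reason is that the session is not told to use outlines of that category. The use_stored_outlines setting is not visible in V$PARAMETER; it is set with ALTER SESSION or ALTER SYSTEM. A minimal sketch (the category SALES is the one created above):

SQL> alter session set use_stored_outlines = sales;
SQL> select * from emp;
SQL> select name, category, used from user_outlines where name = 'OUTLINE_KARAN';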

## Now we will migrate the information in the outline to the SQL management base

-- Find out the names of the outlines --



SQL> select name,sql_text from user_outlines ;

-- Migrate the stored outline

SQL> variable v1 clob;

SQL> begin
       :v1 := dbms_spm.migrate_stored_outline(
                attribute_name  => 'outline_name',
                attribute_value => 'OUTLINE_KARAN',
                fixed           => 'NO');
     end;
     /

### NOTE: I had already granted some extra privileges to scott, who is doing this work: CREATE ANY OUTLINE, ALTER ANY OUTLINE and ADMINISTER SQL MANAGEMENT OBJECT.

### Now, how can we determine whether the stored outline was migrated or not? See the following:
SQL> select name,sql_text,migrated FROM user_outlines where name='OUTLINE_KARAN';

NAME                           SQL_TEXT                                                                         MIGRATED
------------------------------ -------------------------------------------------------------------------------- ------------
OUTLINE_KARAN                  select * from emp                                                                MIGRATED

## Log in as SYSDBA and run the dictionary query below to check that the outline has been stored
     as a baseline in the SQL management base
SQL> select SQL_HANDLE,ORIGIN,ENABLED,ACCEPTED from dba_sql_plan_baselines where SQL_TEXT like 'select * from em%';

SQL_HANDLE                     ORIGIN         ENA ACC
------------------------------ -------------- --- ---
SYS_SQL_612b4fadc881121e       STORED-OUTLINE YES YES
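
To double-check that the optimizer really picks up the migrated baseline, you can run the statement again and look at the Note section of its execution plan. A minimal sketch (optimizer_use_sql_plan_baselines is TRUE by default; serveroutput must be off so that DBMS_XPLAN reports on the right cursor):

SQL> conn scott/tiger
SQL> set serveroutput off
SQL> select * from emp;
SQL> select * from table(dbms_xplan.display_cursor);

The Note section of the plan output should report that a SQL plan baseline was used for the statement.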

#### Some more methods to show you how to create and work with outlines. But first, let's drop the previous outline.

SQL> drop outline OUTLINE_KARAN;
drop outline OUTLINE_KARAN
*
ERROR at line 1:
ORA-18006: DROP ANY OUTLINE privilege is required for this operation

SQL> conn / as sysdba
Connected.
SQL> grant DROP ANY OUTLINE to scott;

SQL> conn scott/tiger

SQL> drop outline OUTLINE_KARAN;



-- Create an outline by setting create_stored_outlines=TRUE at instance level --

SQL> conn / as sysdba

SQL> ALTER SYSTEM SET create_stored_outlines=TRUE;

System altered.

SQL> select * from scott.dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> ALTER SYSTEM SET create_stored_outlines=FALSE;

-- Check that outline has been created --

SQL>  select name,owner,SQL_TEXT from  dba_outlines;

NAME                           OWNER                          SQL_TEXT
------------------------------ ------------------------------ --------------------------------------------------------------------------------
SYS_OUTLINE_12092715271822003  SYS                            select * from scott.dept



-- Create an outline by using the DBMS_OUTLN.create_outline procedure --

-- Issue the statement for which the outline is to be stored --

SQL> conn scott/tiger

SQL>select dname,loc from dept where deptno=10;

SQL> conn / as sysdba
Connected.

-- Find out sql_id of the statement --

SQL> select sql_id,HASH_VALUE,CHILD_NUMBER from v$sql where sql_text like 'select dname,loc from dept where dept%';

SQL_ID        HASH_VALUE CHILD_NUMBER
------------- ---------- ------------
6kd9zwrdtur6h 3684523216            0


-- Create outline --

SQL> BEGIN
       DBMS_OUTLN.create_outline(
         hash_value   => 3684523216,
         child_number => 0,
         category     => 'PAYABLES_OUTLINES');
     END;
     /

PL/SQL procedure successfully completed.

-- Check that the outline has been created --

SQL> select name,owner,SQL_TEXT from  dba_outlines;

NAME                           OWNER                          SQL_TEXT
------------------------------ ------------------------------ --------------------------------------------------------------------------------
SYS_OUTLINE_12092715305717804  SCOTT                          select dname,loc from dept where deptno=10
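
Once the outlines have been migrated (or are no longer needed), they can be cleaned up individually with DROP OUTLINE or in bulk with the DBMS_OUTLN package. A minimal sketch (the category name comes from the demo above):

SQL> conn / as sysdba
SQL> exec dbms_outln.drop_by_cat('PAYABLES_OUTLINES');
SQL> exec dbms_outln.drop_unused;

DROP_BY_CAT drops every outline in the given category, while DROP_UNUSED drops outlines that have never been used.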

********THANKS FOR YOUR TIME ******************************


Related links:

Home
 http://koenigocm.blogspot.in/

SQL PLAN MANAGEMENT PART-I
 http://koenigocm.blogspot.in/search/label/SQL%20PLAN%20MANAGEMENT%E2%80%93%20ORACLE%20%2011g%20%3A%20Part-1
 
SQL PLAN MANAGEMENT PART-II
http://koenigocm.blogspot.in/search/label/SQL%20PLAN%20MANAGEMENT%20%E2%80%93%20ORACLE%2011g%20%3A%20Part-2

Friday, 21 September 2012

HOW TO CALL AN EXTERNAL C FUNCTION FROM WITHIN ORACLE



How to call an external C function from within Oracle

-------------------------------------------------------------------------


One very useful feature of Oracle is its ability to call external C procedures from within the database.
It's handy for creating interfaces to legacy systems.
The following procedure will demonstrate how to compile a simple C program, and how to configure the database to call it.
The C program will contain a single procedure for converting strings into uppercase.

Several layers of objects and configuration will be required to make this work:
C Program/library
(andy_lib.so)
   |
Listener
(extproc)
   |
Database library object
(andy_lib)
   |
Database function
(andy_lib_upper)

This might seem a little daunting and overcomplicated, but it's actually quite logical and very simple.
To simplify the process, we will construct the test case in the same order as the diagram above.

To build this example you will need the following:
An Oracle database (9i or later)
A working listener configuration
C compiler (gcc is used in the example)
A database user with 'create procedure' and 'create library' privileges

STEPS
--------


1. Create the C program
Using a text editor, create a file named andy_lib.c and paste the following C code into it:
#include <ctype.h>

/* Copy istr into ostr, converting each character to upper case. */
int andy_upper(char *istr, char *ostr)
{
    int i = 0;
    while (istr[i])
    {
        ostr[i] = toupper(istr[i]);
        i++;
    }
    ostr[i] = '\0';   /* null-terminate the output string */
    return 0;
}

-------------------------------------------------------------

2. Compile the shared library and copy it to the Oracle home
To compile and link the library run these commands:

gcc -c -fPIC andy_lib.c

ld -shared -o andy_lib.so andy_lib.o

The linker will probably create the library with rather excessive permissions.
I'd advise changing them:

chmod 600 andy_lib.so

Now, copy the shared library to the Oracle home:

cp andy_lib.so $ORACLE_HOME/bin/

---------------------------------------------------------------

3. Configure the listener and tnsnames
The listener will need to be configured to handle external procedures (extproc). My listener.ora looks like this:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME =/u01/app/oracle/product/11.2.0)
(PROGRAM = extproc)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = fermat)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)

You will also need to add the following entry to your tnsnames.ora:

EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)

Restart the listener and test the configuration with tnsping:

lsnrctl stop;
lsnrctl start;
tnsping EXTPROC_CONNECTION_DATA

----------------------------------------------------------

4. Create a database library object

The library object is a pointer to the library file that we copied into the Oracle home.
Run the following SQL to create it:

SQL> create or replace library andy_lib
       as '/u01/app/oracle/product/11.2.0/bin/andy_lib.so';

----------------------------------------------------------

5. Create a database function

The C library in this example contains a single function, but in reality your library might contain several.
We need to create function objects that map onto each of the procedures in the C library:

create or replace function andy_lib_upper(p_istr in varchar2, p_ostr out varchar2)
 return binary_integer
 as external
library andy_lib
name "andy_upper"
language c
parameters (p_istr string, p_ostr string);
/

--------------------------------------------------

6. Test it

Cut and paste the following pl/sql into sqlplus.
It will pass a lowercase string into the C function...

set serveroutput on size 9999
declare
res binary_integer;
v_in CHAR(100);
v_out CHAR(100);
begin
v_in := 'hello world';
res := andy_lib_upper(v_in, v_out);
dbms_output.put_line(res);
dbms_output.put_line(v_in);
dbms_output.put_line(v_out);
end;
/

If it works you will see the following:

0
hello world
HELLO WORLD
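
If you also want to call the C routine from plain SQL, you can wrap it in a small PL/SQL function. This is only a minimal sketch that mirrors the test block above (the function name to_upper_c is just an illustration):

create or replace function to_upper_c(p_str in varchar2)
  return varchar2
as
  res   binary_integer;
  v_in  char(100);
  v_out char(100);
begin
  v_in := p_str;
  res  := andy_lib_upper(v_in, v_out);   -- call the external procedure
  return rtrim(v_out);
end;
/

SQL> select to_upper_c('hello world') from dual;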
-------------------------------------------------------------------------
----------------------------*****************------------------------

Tuesday, 21 August 2012

AUTOMATIC IMPORT USING FILEWATCHER


                       DATA IMPORT IN DATABASE USING FILE WATCHER
We want Oracle to import all the objects belonging to the HR schema automatically whenever a .dmp file arrives in a specified folder on the system. To implement this, we will use a file watcher, which is a new scheduler object in Oracle 11g. It enables a new type of event-based job that is triggered by the arrival of a file in a specified location. File watchers can be defined to monitor locations on the local server and on remote servers, provided they have an agent installed on them.
-- OVERVIEW --
- Setup
- Create a file watcher
- Create a scheduler program to access the event raised by the file watcher.
-  Create directory object named FILE_WATCHER which maps to  os path '/tmp/test'
- Copy the .dmp file in /tmp/test location
- Check that all objects belonging to HR schema have been imported into database.
- IMPLEMENTATION -
- SETUP -
HR>CREATE TABLE TEST AS SELECT * FROM EMPLOYEES;

$mkdir /u01/test
$mkdir /tmp/test

SYS>CREATE DIRECTORY ABC AS '/u01/test';
    GRANT READ, WRITE ON DIRECTORY ABC TO HR;
Now export the hr schema to a dump file:
$ expdp hr/hr directory=abc dumpfile=hr_Schema.dmp logfile=abc.log reuse_dumpfiles=true
-- CREATE FILE WATCHER
By default, file watchers only check for the arrival of files every 10 minutes, but this default interval can be changed by issuing the following statement.
CONN / AS SYSDBA

BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/
I've set the interval to 1 minute for these tests, but I would probably not set it that low in a production system.
The file watcher will need access to the OS to check for files, so we must create a credential for it to log on with. I'm going to use the Oracle software owner, but for a real system I would advise using the lowest privileged user possible.
BEGIN
  DBMS_SCHEDULER.create_credential(
    credential_name => 'local_credential',
    username        => 'oracle',
    password        => 'oracle');
END;
/
Next we create the file watcher itself. Using a "?" in the directory_path parameter signifies the ORACLE_HOME. The file_name parameter can reference a specific file name or a wildcard. Setting the destination parameter to NULL indicates the local server. For remote servers set it to a valid external destination, as shown by the ALL_SCHEDULER_EXTERNAL_DESTS view.
BEGIN
  DBMS_SCHEDULER.create_file_watcher(
    file_watcher_name => 'test_file_watcher',
    directory_path    => '/tmp/test',
    file_name         => '*.dmp',
    credential_name   => 'local_credential',
    destination       => NULL,
    enabled           => FALSE);
END;
/
-- CREATE A SCHEDULER PROGRAM TO ACCESS THE EVENT RAISED BY FILE WATCHER
 The program must reference the event_message to retrieve information about the file, such as its name. Through a metadata argument, the Scheduler can pass to an event-based job the message content of the event that started the job. The following rules apply:
  • The job must use a named program of type STORED_PROCEDURE.
  • One of the named program's arguments must be a metadata argument with metadata_attribute set to EVENT_MESSAGE.
  • The stored procedure that implements the program must have an argument at the position corresponding to the named program's metadata argument. The argument type must be the data type of the queue where your application queues the job-start event.
If you use the RUN_JOB procedure to manually run a job that has an EVENT_MESSAGE metadata argument, the value passed to that argument is NULL.
BEGIN
  DBMS_SCHEDULER.create_program(
    program_name        => 'file_watcher_test_prog',
    program_type        => 'stored_procedure',
    program_action      => 'file_watcher_test_proc',
    number_of_arguments => 1,
    enabled             => FALSE);
END;
/

BEGIN
  DBMS_SCHEDULER.define_metadata_argument(
    program_name       => 'file_watcher_test_prog',
    metadata_attribute => 'event_message',
    argument_position  => 1);
END;
/
Next we define the stored procedure that we referenced in the program definition. It must accept an argument of the SYS.SCHEDULER_FILEWATCHER_RESULT type. The procedure created here will import all the HR user objects from the export file into the HR schema.
create or replace procedure file_watcher_test_proc(p_sfwr sys.scheduler_filewatcher_result)
as
  file_name varchar2(98);
  handle    number;
begin
  -- Name of the file that triggered the event
  file_name := p_sfwr.actual_file_name;
  -- Start a schema-mode Data Pump import of the HR schema from that file
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  dbms_datapump.add_file(handle, file_name, 'FILE_WATCHER');
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '= ''HR''');
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  dbms_datapump.detach(handle);
end;
/
Next we create a job that references the objects we've just created. The queue_spec parameter is set to the name of the file watcher, while the program_name parameter is set to the name of the program object we defined, not the procedure name.
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'file_watcher_test_job',
    program_name    => 'file_watcher_test_prog',
    event_condition => NULL,
    queue_spec      => 'test_file_watcher',
    auto_drop       => FALSE,
    enabled         => FALSE);
END;
/
By default, the arrival of new files will be ignored if the job is already running. If you need the job to fire for each new arrival, regardless of whether the job is already running or not, set the PARALLEL_INSTANCES attribute for the job to TRUE. The job will then be run as a lightweight job.
BEGIN
  DBMS_SCHEDULER.set_attribute('file_watcher_test_job','parallel_instances',TRUE);
END;
/
With all the pieces in place, we can now enable all the objects we created.
EXEC DBMS_SCHEDULER.enable('test_file_watcher');
EXEC DBMS_SCHEDULER.enable('file_watcher_test_prog');
EXEC DBMS_SCHEDULER.enable('file_watcher_test_job');
-  CREATE DIRECTORY OBJECT NAMED FILE_WATCHER WHICH MAPS TO OS PATH  '/tmp/test'
create directory file_watcher as '/tmp/test';
Information about file watchers is available from the *_SCHEDULER_FILE_WATCHERS views.
SET LINESIZE 100
COLUMN file_watcher_name FORMAT A20
COLUMN destination FORMAT A15
COLUMN directory_path FORMAT A15
COLUMN file_name FORMAT A10
COLUMN credential_name FORMAT A20

SELECT file_watcher_name, destination, directory_path, file_name, credential_name
FROM   user_scheduler_file_watchers;

FILE_WATCHER_NAME    DESTINATION     DIRECTORY_PATH  FILE_NAME  CREDENTIAL_NAME
-------------------- --------------- --------------- ---------- --------------------
TEST_FILE_WATCHER                    /tmp/test       *.dmp      LOCAL_CREDENTIAL

SQL>
-- TESTING
- Drop test table from hr schema and verify that the table does not exist
HR>DROP TABLE TEST PURGE;
   SELECT TABLE_NAME
    FROM USER_TABLES
   WHERE TABLE_NAME = 'TEST';
no rows selected

Now copy the hr schema dump file to the directory /tmp/test:

$cp  /u01/test/hr_Schema.dmp /tmp/test/hr_sch.dmp
- Verify that the table test has been imported automatically.
HR> SELECT TABLE_NAME
    FROM USER_TABLES
   WHERE TABLE_NAME = 'TEST';
TABLE_NAME
------------------------------
TEST
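
If the table does not show up, it is worth checking whether the file watcher job actually fired. A minimal sketch using the standard scheduler dictionary view:

SYS> select job_name, status, actual_start_date
     from   dba_scheduler_job_run_details
     where  job_name = 'FILE_WATCHER_TEST_JOB'
     order  by actual_start_date;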

---------------------------------------------------------------------------------------------
Related links :

HOME