A simple migration procedure using Data Pump.
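The steps below assume an Oracle directory object named mig already exists. A minimal sketch of creating it, assuming its path should match the dump file location shown in the export log (/home/oracle/pump):

SQL> create or replace directory mig as '/home/oracle/pump';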


1. schema level export 


# expdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp log=scott_export.log schemas=scott


Export: Release 11.2.0.3.0 - Production on Wed Jan 21 18:24:16 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "log=scott_export.log" Location: Command Line, Replaced with: "logfile=scott_export.log"

Legacy Mode has set reuse_dumpfiles=true parameter.

Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_export.log schemas=scott reuse_dumpfiles=true

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 384 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

  /home/oracle/pump/scott.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:24:32



2. extract the DDL


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:27:32 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_SQL_FILE_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 19:27:36


3. run the DDL


SQL> @cr_scott


4. import 


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp logfile=scott_imp.log ignore=y


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:29:22 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "ignore=TRUE" Location: Command Line, Replaced with: "table_exists_action=append"

Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_IMPORT_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_imp.log table_exists_action=append

Processing object type SCHEMA_EXPORT/USER

ORA-31684: Object type USER:"SCOTT" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Table "SCOTT"."SALES_RANGE_HASH" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Table "SCOTT"."TEST_NLS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

. . imported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 19:29:28








Oracle LogMiner is, as the name suggests, a tool that digs the SQL that was executed out of the redo logs or archive log files.

When data goes wrong because of an operational mistake or (possibly) a break-in, it shows which operations happened at which point in time, what the corresponding undo SQL is, and so on.

1. Collect the internal dictionary information of the database that generates the logs.
exec dbms_logmnr_d.build('logmnr_dic.ora','/home');
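Writing the dictionary to a flat file like this requires the target directory to be listed in UTL_FILE_DIR (a sketch matching the path above; the parameter is static, so the instance has to be restarted for it to take effect):

alter system set utl_file_dir='/home' scope=spfile;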

2. Register the archive files or redo log files to be mined.
exec dbms_logmnr.add_logfile('/archive/archive_1_1000.arch',dbms_logmnr.NEW);
exec dbms_logmnr.add_logfile('/archive/archive_1_1001.arch',dbms_logmnr.ADDFILE);

3. Start LogMiner.
exec dbms_logmnr.start_logmnr(dictfilename=>'/home/logmnr_dic.ora');
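If only a specific window is of interest, start_logmnr can also be bounded by time (or SCN). A sketch with purely illustrative timestamps:

begin
  dbms_logmnr.start_logmnr(
    dictfilename => '/home/logmnr_dic.ora',
    starttime    => to_date('2015-01-21 18:00:00','YYYY-MM-DD HH24:MI:SS'),
    endtime      => to_date('2015-01-21 20:00:00','YYYY-MM-DD HH24:MI:SS'));
end;
/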

4. Extract the information.
v$logmnr_contents
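A minimal example of pulling the redo and undo SQL out of the view (the SEG_OWNER filter is only illustrative):

select scn, timestamp, username, operation, sql_redo, sql_undo
  from v$logmnr_contents
 where seg_owner = 'SCOTT';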

The results above are visible only in the session that ran these steps. In other words, the data produced here is not stored inside the Oracle database; it is generated and shown outside the database.
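If the results need to survive beyond the session, they can be copied into an ordinary table before LogMiner is ended (a sketch; the table name logmnr_save is only illustrative):

create table logmnr_save as select * from v$logmnr_contents;
exec dbms_logmnr.end_logmnr;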

For more details, see:

Tips for Using Oracle LogMiner

[Addendum] How to use the online catalog in Oracle 10g.

1. Add the archive files to be analyzed
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/a01b/TGDOM/arch/TGDOM_1_217_630787281.dbf',OPTIONS => DBMS_LOGMNR.NEW); 
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/a01b/TGDOM/arch/TGDOM_1_218_630787281.dbf',OPTIONS => DBMS_LOGMNR.ADDFILE); 
execute DBMS_LOGMNR.ADD_LOGFILE (LOGFILENAME => '/a01b/TGDOM/arch/TGDOM_1_219_630787281.dbf',OPTIONS => DBMS_LOGMNR.ADDFILE);

2. Start LogMiner with the online catalog
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.

3. Query v$logmnr_contents
SQL> select username, operation, SQL_REDO,SQL_UNDO FROM V$LOGMNR_CONTENTS where SEG_OWNER='SCOTT';
no rows selected

4. End LogMiner 
SQL> execute dbms_logmnr.end_logmnr;
PL/SQL procedure successfully completed.


  
