Serial direct scan, introduced in 11g, classifies a target table by size and automatically decides whether to scan it
through the SGA buffer cache (the traditional method) or with direct path reads.
In other words, even a scan that runs serially, without parallel query, can use the direct path mechanism.
One thing to watch out for: a direct path read forces a checkpoint of the segment's dirty buffers, so be careful on OLTP systems.

The _serial_direct_read parameter behaves differently depending on the segment size and the parameter value.

Read method by segment size:

_serial_direct_read = TRUE  : Direct Path Read for all sizes (Small / Medium / Large / Very Large)
_serial_direct_read = FALSE : Buffer Cache Read for all sizes
_serial_direct_read = AUTO  :
    Small      -> Buffer Cache Read
    Medium     -> Buffer Cache Read if object statistics exist, otherwise decided by cost analysis
    Large      -> Decided by cost analysis
    Very Large -> Direct Path Read

The segment size class that determines the read method is derived from the object's block count, the _small_table_threshold and _very_large_object_threshold parameters, and the buffer cache size.

Segment size definitions
    Small      : segment blocks < _small_table_threshold
    Medium     : _small_table_threshold < segment blocks <= 10% of the buffer cache
    Large      : between Medium and Very Large
    Very Large : segment blocks > buffer cache * (_very_large_object_threshold / 100)

Related parameters

_small_table_threshold : lower threshold level of table size for direct reads.
                         Default value: 20 (2% of the buffer cache)

_very_large_object_threshold : upper threshold level of object size for direct reads.
                               Default value: 500 (5x the buffer cache)
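The classification rules above can be sketched as follows. This is purely an illustration of the rules described in this post (the 2% default and the 10% Medium boundary come from the text), not Oracle's actual code:

```python
def classify_segment(blocks, buffer_cache_blocks,
                     small_table_threshold=None,
                     very_large_object_threshold=500):
    """Toy sketch of the segment size classes described above.

    small_table_threshold defaults to 2% of the buffer cache;
    very_large_object_threshold is a percentage (500 => 5x cache).
    Illustration only -- not Oracle's internal logic.
    """
    if small_table_threshold is None:
        small_table_threshold = buffer_cache_blocks * 0.02
    very_large = buffer_cache_blocks * (very_large_object_threshold / 100)
    if blocks < small_table_threshold:
        return "Small"
    if blocks <= buffer_cache_blocks * 0.10:
        return "Medium"
    if blocks > very_large:
        return "Very Large"
    return "Large"

# With a 100,000-block buffer cache:
#   1,000 blocks   -> Small      (< 2,000 = 2% of cache)
#   5,000 blocks   -> Medium     (<= 10,000 = 10% of cache)
#   200,000 blocks -> Large      (between Medium and Very Large)
#   600,000 blocks -> Very Large (> 500,000 = 5x cache)
```

Under AUTO, it is only the Small (always buffered) and Very Large (always direct) classes that are fixed; the middle classes go through cost analysis as shown in the table above.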



Starting with Oracle 12c, raw devices are desupported.

Oracle 12c, Desupport for Raw Storage Devices

 

Even if you somehow manage to upgrade while still on raw devices, using a raw device directly is said to raise an error.

So when upgrading to 12c, a migration to ASM also has to be kept in mind.. OTL


A workaround does flash through my mind: create a diskgroup per raw device and build the tablespace on it.. ㅋ



Starting with Oracle Database version 12.1 (release date TBD), support for storing data files, spfiles, controlfiles, online redo logfiles, OCRs and voting files on raw devices directly will end. This means commands such as the following will report an error while attempting to use raw devices directly in Oracle Database Version 12.1:


SQL> create tablespace ABC DATAFILE '/dev/raw/raw1' size 2GB;


Note that while the direct use of raw devices will be de-supported for Oracle Database 12.1, customers can choose to create Oracle ASM diskgroups on top of raw devices. While it is recommended to store all shared files on ASM diskgroups, storing those files on NFS or certified cluster file systems remains supported.


The following SQL commands will not return an error while attempting to use raw devices. Reason: the raw devices in the example below are used indirectly via Oracle ASM (no direct use of raw devices here):


SQL>alter diskgroup MYDG add disk '/dev/raw/ABC1.dbf';

 OR

SQL> create diskgroup MYDG EXTERNAL REDUNDANCY disk '/dev/raw/ABC1.dbf';


Then use the following command to create the tablespace


SQL> create tablespace ABC DATAFILE '+MYDG' size 2GB;


If raw devices are not being used directly in the current release then no further actions need to be taken. However, if raw devices are being used directly currently, then planning should be performed to migrate respective files off raw devices. There are many choices currently to replace raw devices, including Oracle ASM, NFS, and supported cluster file systems.


Reference: Announcement of De-Support of using RAW devices in Oracle Database Version 12.1 (Doc ID 578455.1)



A simple migration procedure using Data Pump.


1. schema level export 


# expdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp log=scott_export.log schemas=scott


Export: Release 11.2.0.3.0 - Production on Wed Jan 21 18:24:16 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "log=scott_export.log" Location: Command Line, Replaced with: "logfile=scott_export.log"

Legacy Mode has set reuse_dumpfiles=true parameter.

Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_export.log schemas=scott reuse_dumpfiles=true

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 384 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

  /home/oracle/pump/scott.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:24:32



2. Extract the DDL


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:27:32 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_SQL_FILE_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 19:27:36


3. Run the DDL


SQL> @cr_scott


4. import 


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp logfile=scott_imp.log ignore=y


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:29:22 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "ignore=TRUE" Location: Command Line, Replaced with: "table_exists_action=append"

Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_IMPORT_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_imp.log table_exists_action=append

Processing object type SCHEMA_EXPORT/USER

ORA-31684: Object type USER:"SCOTT" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Table "SCOTT"."SALES_RANGE_HASH" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Table "SCOTT"."TEST_NLS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

. . imported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 19:29:28








Starting with Oracle 12c, session-private statistics can be gathered for session-specific GTTs (global temporary tables).

Because of performance issues with GTTs, we often resorted to creating and dropping permanent tables instead;

hopefully 12c's session-private statistics will put those issues to rest.


SQL> exec dbms_stats.gather_table_stats(ownname=>'SH', tabname=>'TEMP_GTT');


But the restrictions on parallel DML (update, delete, merge) remain the same as in 11g..


Session-Private Statistics for Global Temporary Tables

Traditionally, global temporary tables had only one set of statistics that were shared among all sessions even though the table could contain different data in different sessions. In Oracle Database 12c Release 1 (12.1), global temporary tables now have session-private statistics. That is a different set of statistics for each session. Queries issued against the global temporary table use the statistics from their own session.

Session-private statistics for global temporary tables improves the performance and manageability of temporary tables. Users no longer need to manually set statistics for the global temporary table on a per session basis or rely on dynamic sampling. This reduces the possibility of errors in the cardinality estimates for global temporary tables and ensures that the optimizer has the data to identify optimal execution plans.

See Also:

Oracle Database SQL Tuning Guide for details



Oracle 12c adds a system statistics gathering mode for Exadata. Although this already seemed necessary on 11gR2 (Exadata X2)..

Reference: http://kerryosborne.oracle-guy.com/2013/09/system-statistics-exadata-mode/

In any case, gathering system statistics (in workload statistics mode) collects the system performance figures below, and the SQL optimizer consults them when building execution plans, so they should be gathered at least once, and again whenever the system hardware changes..

  • Single and multiblock read times
  • mbrc
  • CPU speed (cpuspeed)
  • Maximum system throughput
  • Average slave throughput
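To give a rough idea of why these figures matter to the optimizer, below is a commonly cited approximation of the full-scan I/O cost under workload statistics. It is a simplification for illustration, not Oracle's exact cost model:

```python
import math

def full_scan_io_cost(blocks, mbrc, mreadtim, sreadtim):
    """Commonly cited approximation of the optimizer's full-scan I/O cost
    under workload system statistics: the number of multiblock reads,
    weighted by how much slower a multiblock read (mreadtim) is than a
    single-block read (sreadtim), which is the optimizer's cost unit.
    Illustration only -- not Oracle's exact model.
    """
    return math.ceil(blocks / mbrc) * (mreadtim / sreadtim)

# e.g. 10,000 blocks, mbrc=16, mreadtim=20ms, sreadtim=5ms
#   -> 625 multiblock reads, each costed at 4 single-block reads
```

Wrong read times or mbrc therefore shift the balance between full scans and index access across every plan, which is why a one-time gather (and a re-gather after hardware changes) matters.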

How to gather system statistics

-- gather over a specific period

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('start') 

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('stop') 


-- gather for a specific interval (in minutes)

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N)


-- gather Exadata-mode statistics

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('exadata')


-- gather noworkload statistics

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS()


Enhancements to System Statistics

System statistics allow the optimizer to account for the hardware on which the database system is running. With the introduction of smart storage, such as Exadata storage, the optimizer needs additional system statistics in order to account for all of the smart storage capabilities.

The introduction of the new system statistics gathering method allows the optimizer to more accurately account for the performance characteristics of smart storage, such as Exadata storage.

See Also:

Oracle Database SQL Tuning Guide for details



Tom Kyte has picked his top 12 features of Oracle Database 12c and put them into a presentation. Here are his picks:



Tom Kyte, Vice President of Oracle, shares the top 12 features of 12c Databases at the Oracle Database 12c Launch 2013.


1. Even better PL/SQL from SQL

2. Improved defaults

3. Increased size limits for some datatypes

4. Easy top-n and pagination queries

5. Row pattern matching

6. Partitioning improvements

7. Adaptive execution plans

8. Enhanced statistics

9. Temporary undo

10. Data optimization capabilities

11. Application Continuity and Transaction Guard

12. Pluggable databases


On Oracle Database 12c, Part 1

On Oracle Database 12c, Part 2





Storage Indexes, a key part of Exadata Smart Scan,

divide the on-disk data into 1MB regions and record in memory, for each region, the min and max values and whether NULLs are present,

so that before reading a region the cell can decide whether it needs to be read at all.

This summary data is kept for at most 8 columns of the data in a region.
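As a toy illustration of the idea (a sketch only, not cellsrv's actual structure), per-region min/max summaries let a scan skip any region whose value range cannot possibly match the predicate:

```python
# Toy simulation of a storage index: per-region (min, max, has_null)
# summaries built from the data, then used to prune regions before I/O.
# Purely illustrative -- the real structure lives inside cellsrv.

def build_region_index(regions):
    """regions: list of row-value lists, one list per 1MB region.
    Returns a (min, max, has_null) tuple per region."""
    index = []
    for rows in regions:
        non_null = [v for v in rows if v is not None]
        index.append((min(non_null) if non_null else None,
                      max(non_null) if non_null else None,
                      any(v is None for v in rows)))
    return index

def regions_to_read(index, value):
    """Regions whose min..max range could contain rows equal to value."""
    return [i for i, (lo, hi, _) in enumerate(index)
            if lo is not None and lo <= value <= hi]

regions = [[1, 5, 9], [20, 25, None], [3, 40, 7]]
idx = build_region_index(regions)
print(regions_to_read(idx, 22))  # regions 1 and 2 qualify; region 0 is skipped
```

The benefit is entirely data-dependent: well-clustered data gives narrow min/max ranges and lots of skipped I/O, while randomly ordered data gives wide ranges and little pruning.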


All of this happens internally, and users cannot (and need not) control which columns get summarized.

When a performance problem does come up, though, you may want to verify that the columns you expect are actually covered,

and there is as yet no way to check this from SQL or cellcli.

The only way to verify it is to generate a trace, as shown below.


Here is a quick look at how to trace Storage Indexes and what the trace file contains.


1. Enable Storage Index tracing

SQL> alter system set "_kcfis_storageidx_diag_mode"=2 scope=memory;

SQL> select ...


2. Disable Storage Index tracing

SQL> alter system set "_kcfis_storageidx_diag_mode"=0 scope=memory;


3. Storage Index trace output

..

2014-11-17 17:18:32.419680*: RIDX (0x7c40f0) for SQLID 6hv5zxwck5dd4 filter 1

--> SQL ID that used this storage index (0x7c40f0)

2014-11-17 17:18:32.419680*: RIDX (0x7c40f0) : st 2 validBitMap 0 tabn 0 id {75884 7 2415342635}  

--> id = {object number, tablespace number, DB id}

2014-11-17 17:18:32.419680*: RIDX: strt 32 end 2048 offset 115359744 size 1032192 rgnIdx 110 RgnOffset 16384 scn: 0x0000.000ddc2e hist: 0x1

2014-11-17 17:18:32.419680*: RIDX validation history: 0:PartialRead 1:Undef 2:Undef 3:Undef 4:Undef 5:Undef 6:Undef 7:Undef 8:Undef 9:Undef

2014-11-17 17:18:32.419780*: Col id [7] numFilt 4 flg 2:

2014-11-17 17:18:32.419780*: lo: 42 3a 20 33 30 30 30 30

2014-11-17 17:18:32.419780*: hi: 42 3a 20 33 30 30 30 30

--> column covered by the storage index

2014-11-17 17:18:32.419811*: RIDX (0x7c4168) for SQLID 6hv5zxwck5dd4 filter 1

2014-11-17 17:18:32.419811*: RIDX (0x7c4168) : st 2 validBitMap 0 tabn 0 id {75884 7 2415342635}

2014-11-17 17:18:32.419811*: RIDX: strt 32 end 2048 offset 116408320 size 1032192 rgnIdx 111 RgnOffset 16384 scn: 0x0000.000ddc30 hist: 0x1

2014-11-17 17:18:32.419811*: RIDX validation history: 0:PartialRead 1:Undef 2:Undef 3:Undef 4:Undef 5:Undef 6:Undef 7:Undef 8:Undef 9:Undef

2014-11-17 17:18:32.419874*: Col id [7] numFilt 4 flg 2:

2014-11-17 17:18:32.419874*: lo: 42 3a 20 33 30 30 30 30

2014-11-17 17:18:32.419874*: hi: 42 3a 20 33 30 30 30 30

...
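The lo/hi lines in the trace are the column's boundary values dumped as hex bytes; decoding them recovers the value tracked for this region (here both bounds are the same string):

```python
# Decode the lo/hi byte dump from the storage index trace above.
raw = "42 3a 20 33 30 30 30 30"
print(bytes.fromhex(raw.replace(" ", "")).decode("ascii"))  # -> B: 30000
```

Since lo and hi are identical, every row in this region holds the same value for that column, which is exactly the kind of region the index can prune with confidence.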


4. Notes


Something I had wondered about for a while:

A storage index keeps min/max/NULL-usage information for at most 8 columns per table..

So what if a query filters on 9 columns?

Or what if some unrelated query runs first and fills all 8 slots?

John Clarke's blog (quoted below) addresses this:

cellsrv continuously updates the storage indexes according to the workload !!

(Whether that means the storage index on an object is rebuilt as the workload changes, or that multiple storage indexes coexist on one object, still needs to be verified..)


Oracle documents that Exadata maintains storage indexes for up to 8 columns in a table, which means that if you have an application that accesses a table using more than 8 columns in your predicates (across the entire workload), each with Smart Scan, not all of your columns/queries will benefit from Storage Indexes.  cellsrv dynamically updates these region indexes based on your workload, and one of the interesting things to note is that each region index on the same object can contain different combinations of columns.   Like the high and low values tracked, it's very data-dependent in design.  

Reference: http://jccoracle.blogspot.kr/2013/01/tracing-storage-indexes-with.html




Exadata cell servers differ so much in concept and architecture from conventional disk servers

that it is hard to get the full picture with the usual system monitoring tools alone.

As anyone who has used Oracle Enterprise Manager knows,

Oracle loves to set thresholds on metrics and send alerts based on them.

The most commonly used EM metric is probably the one that warns when active sessions exceed some count..

Exadata cell servers work much the same way: they expose many metrics on which thresholds can be set,

and based on those thresholds you can receive alerts via preconfigured SMS or e-mail.

The list below shows every metric that can be configured on an Exadata cell server.

Most are performance metrics, but thresholds on file system usage, CPU utilization, and IO errors are well worth setting in advance.


CellCLI> list METRICDEFINITION attributes name, description
         CD_IO_BY_R_LG                   "Number of megabytes read in large blocks from a cell disk"
         CD_IO_BY_R_LG_SEC               "Number of megabytes read in large blocks per second from a cell disk"
         CD_IO_BY_R_SM                   "Number of megabytes read in small blocks from a cell disk"
         CD_IO_BY_R_SM_SEC               "Number of megabytes read in small blocks per second from a cell disk"
         CD_IO_BY_W_LG                   "Number of megabytes written in large blocks to a cell disk"
         CD_IO_BY_W_LG_SEC               "Number of megabytes written in large blocks per second to a cell disk"
         CD_IO_BY_W_SM                   "Number of megabytes written in small blocks to a cell disk"
         CD_IO_BY_W_SM_SEC               "Number of megabytes written in small blocks per second to a cell disk"
         CD_IO_ERRS                      "Number of IO errors on a cell disk"
         CD_IO_ERRS_MIN                  "Number of IO errors on a cell disk per minute"
         CD_IO_RQ_R_LG                   "Number of requests to read large blocks from a cell disk"
         CD_IO_RQ_R_LG_SEC               "Number of requests to read large blocks per second from a cell disk"
         CD_IO_RQ_R_SM                   "Number of requests to read small blocks from a cell disk"
         CD_IO_RQ_R_SM_SEC               "Number of requests to read small blocks per second from a cell disk"
         CD_IO_RQ_W_LG                   "Number of requests to write large blocks to a cell disk"
         CD_IO_RQ_W_LG_SEC               "Number of requests to write large blocks per second to a cell disk"
         CD_IO_RQ_W_SM                   "Number of requests to write small blocks to a cell disk"
         CD_IO_RQ_W_SM_SEC               "Number of requests to write small blocks per second to a cell disk"
         CD_IO_ST_RQ                     "Average service time per request for small IO requests to a cell disk"
         CD_IO_TM_R_LG                   "Cumulative latency of reading large blocks from a cell disk"
         CD_IO_TM_R_LG_RQ                "Average latency of reading large blocks per request from a cell disk"
         CD_IO_TM_R_SM                   "Cumulative latency of reading small blocks from a cell disk"
         CD_IO_TM_R_SM_RQ                "Average latency of reading small blocks per request from a cell disk"
         CD_IO_TM_W_LG                   "Cumulative latency of writing large blocks to a cell disk"
         CD_IO_TM_W_LG_RQ                "Average latency of writing large blocks per request to a cell disk"
         CD_IO_TM_W_SM                   "Cumulative latency of writing small blocks to a cell disk"
         CD_IO_TM_W_SM_RQ                "Average latency of writing small blocks per request to a cell disk"
         CG_FC_IO_BY_SEC                 "Number of megabytes of I/O per second for this consumer group to flash cache"
         CG_FC_IO_RQ                     "Number of IO requests issued by a consumer group to flash cache"
         CG_FC_IO_RQ_SEC                 "Number of IO requests issued by a consumer group to flash cache per second"
         CG_FD_IO_BY_SEC                 "Number of megabytes of I/O per second for this consumer group to flash disks"
         CG_FD_IO_LOAD                   "Average I/O load from this consumer group for flash disks"
         CG_FD_IO_RQ_LG                  "Number of large IO requests issued by a consumer group to flash disks"
         CG_FD_IO_RQ_LG_SEC              "Number of large IO requests issued by a consumer group to flash disks per second"
         CG_FD_IO_RQ_SM                  "Number of small IO requests issued by a consumer group to flash disks"
         CG_FD_IO_RQ_SM_SEC              "Number of small IO requests issued by a consumer group to flash disks per second"
         CG_IO_BY_SEC                    "Number of megabytes of I/O per second for this consumer group to hard disks"
         CG_IO_LOAD                      "Average I/O load from this consumer group for hard disks"
         CG_IO_RQ_LG                     "Number of large IO requests issued by a consumer group to hard disks"
         CG_IO_RQ_LG_SEC                 "Number of large IO requests issued by a consumer group to hard disks per second"
         CG_IO_RQ_SM                     "Number of small IO requests issued by a consumer group to hard disks"
         CG_IO_RQ_SM_SEC                 "Number of small IO requests issued by a consumer group to hard disks per second"
         CG_IO_UTIL_LG                   "Percentage of disk resources utilized by large requests from this consumer group"
         CG_IO_UTIL_SM                   "Percentage of disk resources utilized by small requests from this consumer group"
         CG_IO_WT_LG                     "IORM wait time for large IO requests issued by a consumer group"
         CG_IO_WT_LG_RQ                  "Average IORM wait time per request for large IO requests issued by a consumer group"
         CG_IO_WT_SM                     "IORM wait time for small IO requests issued by a consumer group"
         CG_IO_WT_SM_RQ                  "Average IORM wait time per request for small IO requests issued by a consumer group"
         CL_CPUT                         "Cell CPU Utilization is the percentage of time over the previous minute that the system CPUs were not idle (from /proc/stat)."
         CL_FANS                         "Number of working fans on the cell"
         CL_FSUT                         "Percentage of total space on this file system that is currently used"
         CL_MEMUT                        "Percentage of total physical memory on the cell that is currently used"
         CL_RUNQ                         "Average number (over the preceding minute) of processes in the Linux run queue marked running or uninterruptible (from /proc/loadavg)."
         CL_TEMP                         "Temperature (Celsius) of the server, provided by the BMC"
         CT_FC_IO_BY_SEC                 "Number of megabytes of I/O per second for this category to flash cache"
         CT_FC_IO_RQ                     "Number of IO requests issued by an IORM category to flash cache"
         CT_FC_IO_RQ_SEC                 "Number of IO requests issued by an IORM category to flash cache per second"
         CT_FD_IO_BY_SEC                 "Number of megabytes of I/O per second for this category to flash disks"
         CT_FD_IO_LOAD                   "Average I/O load from this category for flash disks"
         CT_FD_IO_RQ_LG                  "Number of large IO requests issued by an IORM category to flash disks"
         CT_FD_IO_RQ_LG_SEC              "Number of large IO requests issued by an IORM category to flash disks per second"
         CT_FD_IO_RQ_SM                  "Number of small IO requests issued by an IORM category to flash disks"
         CT_FD_IO_RQ_SM_SEC              "Number of small IO requests issued by an IORM category to flash disks per second"
         CT_IO_BY_SEC                    "Number of megabytes of I/O per second for this category to hard disks"
         CT_IO_LOAD                      "Average I/O load from this category for hard disks"
         CT_IO_RQ_LG                     "Number of large IO requests issued by an IORM category to hard disks"
         CT_IO_RQ_LG_SEC                 "Number of large IO requests issued by an IORM category to hard disks per second"
         CT_IO_RQ_SM                     "Number of small IO requests issued by an IORM category to hard disks"
         CT_IO_RQ_SM_SEC                 "Number of small IO requests issued by an IORM category to hard disks per second"
         CT_IO_UTIL_LG                   "Percentage of disk resources utilized by large requests from this category"
         CT_IO_UTIL_SM                   "Percentage of disk resources utilized by small requests from this category"
         CT_IO_WT_LG                     "IORM wait time for large IO requests issued by an IORM category"
         CT_IO_WT_LG_RQ                  "Average IORM wait time per request for large IO requests issued by an IORM category"
         CT_IO_WT_SM                     "IORM wait time for small IO requests issued by an IORM category"
         CT_IO_WT_SM_RQ                  "Average IORM wait time per request for small IO requests issued by an IORM category"
         DB_FC_IO_BY_SEC                 "Number of megabytes of I/O per second for this database to flash cache"
         DB_FC_IO_RQ                     "Number of IO requests issued by a database to flash cache"
         DB_FC_IO_RQ_SEC                 "Number of IO requests issued by a database to flash cache per second"
         DB_FD_IO_BY_SEC                 "Number of megabytes of I/O per second for this database to flash disks"
         DB_FD_IO_LOAD                   "Average I/O load from this database for flash disks"
         DB_FD_IO_RQ_LG                  "Number of large IO requests issued by a database to flash disks"
         DB_FD_IO_RQ_LG_SEC              "Number of large IO requests issued by a database to flash disks per second"
         DB_FD_IO_RQ_SM                  "Number of small IO requests issued by a database to flash disks"
         DB_FD_IO_RQ_SM_SEC              "Number of small IO requests issued by a database to flash disks per second"
         DB_IO_BY_SEC                    "Number of megabytes of I/O per second for this database to hard disks"
         DB_IO_LOAD                      "Average I/O load from this database for hard disks"
         DB_IO_RQ_LG                     "Number of large IO requests issued by a database to hard disks"
         DB_IO_RQ_LG_SEC                 "Number of large IO requests issued by a database to hard disks per second"
         DB_IO_RQ_SM                     "Number of small IO requests issued by a database to hard disks"
         DB_IO_RQ_SM_SEC                 "Number of small IO requests issued by a database to hard disks per second"
         DB_IO_UTIL_LG                   "Percentage of disk resources utilized by large requests from this database"
         DB_IO_UTIL_SM                   "Percentage of disk resources utilized by small requests from this database"
         DB_IO_WT_LG                     "IORM wait time for large IO requests issued by a database"
         DB_IO_WT_LG_RQ                  "Average IORM wait time per request for large IO requests issued by a database"
         DB_IO_WT_SM                     "IORM wait time for small IO requests issued by a database"
         DB_IO_WT_SM_RQ                  "Average IORM wait time per request for small IO requests issued by a database"
         FC_BYKEEP_OVERWR                "Number of megabytes pushed out of the FlashCache because of space limit for 'keep' objects"
         FC_BYKEEP_OVERWR_SEC            "Number of megabytes per second pushed out of the FlashCache because of space limit for 'keep' objects"
         FC_BYKEEP_USED                  "Number of megabytes used for 'keep' objects on FlashCache"
         FC_BY_USED                      "Number of megabytes used on FlashCache"
         FC_IO_BYKEEP_R                  "Number of megabytes read from FlashCache for 'keep' objects"
         FC_IO_BYKEEP_R_SEC              "Number of megabytes read per second from FlashCache for 'keep' objects"
         FC_IO_BYKEEP_W                  "Number of megabytes written to FlashCache for 'keep' objects"
         FC_IO_BYKEEP_W_SEC              "Number of megabytes per second written to FlashCache for 'keep' objects"
         FC_IO_BY_R                      "Number of megabytes read from FlashCache"
         FC_IO_BY_R_MISS                 "Number of megabytes read from disks because not all requested data was in FlashCache"
         FC_IO_BY_R_MISS_SEC             "Number of megabytes read from disks per second because not all requested data was in FlashCache"
         FC_IO_BY_R_SEC                  "Number of megabytes read per second from FlashCache"
         FC_IO_BY_R_SKIP                 "Number of megabytes read from disks for IO requests with a hint to bypass FlashCache"
         FC_IO_BY_R_SKIP_SEC             "Number of megabytes read from disks per second for IO requests with a hint to bypass FlashCache"
         FC_IO_BY_W                      "Number of megabytes written to FlashCache"
         FC_IO_BY_W_SEC                  "Number of megabytes per second written to FlashCache"
         FC_IO_ERRS                      "Number of IO errors on FlashCache"
         FC_IO_RQKEEP_R                  "Number of read IO requests for 'keep' objects satisfied from FlashCache"
         FC_IO_RQKEEP_R_MISS             "Number of read IO requests for 'keep' objects which did not find all data in FlashCache"
         FC_IO_RQKEEP_R_MISS_SEC         "Number of read IO requests per second for 'keep' objects which did not find all data in FlashCache"
         FC_IO_RQKEEP_R_SEC              "Number of read IO requests for 'keep' objects per second satisfied from FlashCache"
         FC_IO_RQKEEP_R_SKIP             "Number of read IO requests for 'keep' objects with a hint to bypass FlashCache"
         FC_IO_RQKEEP_R_SKIP_SEC         "Number of read IO requests per second for 'keep' objects with a hint to bypass FlashCache"
         FC_IO_RQKEEP_W                  "Number of IO requests for 'keep' objects which resulted in FlashCache being populated with data"
         FC_IO_RQKEEP_W_SEC              "Number of IO requests per second for 'keep' objects which resulted in FlashCache being populated with data"
         FC_IO_RQ_R                      "Number of read IO requests satisfied from FlashCache"
         FC_IO_RQ_R_MISS                 "Number of read IO requests which did not find all data in FlashCache"
         FC_IO_RQ_R_MISS_SEC             "Number of read IO requests per second which did not find all data in FlashCache"
         FC_IO_RQ_R_SEC                  "Number of read IO requests satisfied per second from FlashCache"
         FC_IO_RQ_R_SKIP                 "Number of read IO requests with a hint to bypass FlashCache"
         FC_IO_RQ_R_SKIP_SEC             "Number of read IO requests per second with a hint to bypass FlashCache"
         FC_IO_RQ_W                      "Number of IO requests which resulted in FlashCache being populated with data"
         FC_IO_RQ_W_SEC                  "Number of IO requests per second which resulted in FlashCache being populated with data"
         FD_IO_LOAD                      "Average I/O load for flash disks"
         GD_IO_BY_R_LG                   "Number of megabytes read in large blocks from a grid disk"
         GD_IO_BY_R_LG_SEC               "Number of megabytes read in large blocks per second from a grid disk"
         GD_IO_BY_R_SM                   "Number of megabytes read in small blocks from a grid disk"
         GD_IO_BY_R_SM_SEC               "Number of megabytes read in small blocks per second from a grid disk"
         GD_IO_BY_W_LG                   "Number of megabytes written in large blocks to a grid disk"
         GD_IO_BY_W_LG_SEC               "Number of megabytes written in large blocks per second to a grid disk"
         GD_IO_BY_W_SM                   "Number of megabytes written in small blocks to a grid disk"
         GD_IO_BY_W_SM_SEC               "Number of megabytes written in small blocks per second to a grid disk"
         GD_IO_ERRS                      "Number of IO errors on a grid disk"
         GD_IO_ERRS_MIN                  "Number of IO errors on a grid disk per minute"
         GD_IO_RQ_R_LG                   "Number of requests to read large blocks from a grid disk"
         GD_IO_RQ_R_LG_SEC               "Number of requests to read large blocks per second from a grid disk"
         GD_IO_RQ_R_SM                   "Number of requests to read small blocks from a grid disk"
         GD_IO_RQ_R_SM_SEC               "Number of requests to read small blocks per second from a grid disk"
         GD_IO_RQ_W_LG                   "Number of requests to write large blocks to a grid disk"
         GD_IO_RQ_W_LG_SEC               "Number of requests to write large blocks per second to a grid disk"
         GD_IO_RQ_W_SM                   "Number of requests to write small blocks to a grid disk"
         GD_IO_RQ_W_SM_SEC               "Number of requests to write small blocks per second to a grid disk"
         IORM_MODE                       "I/O Resource Manager objective for the cell"
         IO_LOAD                         "Average I/O load for hard disks"
         N_MB_DROP                       "Number of megabytes dropped during transmission to a particular host"
         N_MB_DROP_SEC                   "Number of megabytes dropped during transmission per second to a particular host"
         N_MB_RDMA_DROP                  "Number of megabytes dropped during RDMA transmission to a particular host"
         N_MB_RDMA_DROP_SEC              "Number of megabytes dropped during RDMA transmission per second to a particular host"
         N_MB_RECEIVED                   "Number of megabytes received from a particular host"
         N_MB_RECEIVED_SEC               "Number of megabytes per second received from a particular host"
         N_MB_RESENT                     "Number of megabytes resent to a particular host"
         N_MB_RESENT_SEC                 "Number of megabytes resent per second to a particular host"
         N_MB_SENT                       "Number of megabytes transmitted to a particular host"
         N_MB_SENT_SEC                   "Number of megabytes transmitted per second to a particular host"
         N_NIC_NW                        "Number of non-working interconnects"
         N_NIC_RCV_SEC                   "Total number of IO packets received by interconnects per second"
         N_NIC_TRANS_SEC                 "Total number of IO packets transmitted by interconnects per second"
         N_RDMA_RETRY_TM                 "Latency of the retry actions during RDMA transmission to a particular host"


When a SQL statement run as a Parallel Query has a problem,

here is how to trace both the QC session and all of its PQ slave sessions at once.

No need to go hunting for the individual PQ sessions!


1. Assign a client identifier (PQ1) to the current session

SQL> exec dbms_session.set_identifier(client_id => 'PQ1');

PL/SQL procedure successfully completed.


2. Enable SQL_TRACE for the session identified as PQ1

SQL> exec dbms_monitor.client_id_trace_enable(client_id => 'PQ1', waits => true, binds => false);

PL/SQL procedure successfully completed.


3. Run the parallel query

SQL> select /*+ parallel(a,10) */ count(*) from customers a ..


4. Disable SQL_TRACE

SQL> exec dbms_monitor.client_id_trace_disable(client_id => 'PQ1');

PL/SQL procedure successfully completed. 
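With client-identifier tracing, the QC and every PQ slave write separate trace files. A sketch of consolidating them with the trcsess utility that ships with the database (the directory path and output file names here are illustrative):

```
$ cd $ORACLE_BASE/diag/rdbms/<dbname>/<instname>/trace
$ trcsess output=pq1_all.trc clientid=PQ1 *.trc
$ tkprof pq1_all.trc pq1_all.prf sys=no
```

trcsess merges every trace tagged with client identifier PQ1 into one file, which tkprof can then format as usual.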


For now, something like this..


while true

do

top -f <filename>

sleep 60

done
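The loop above can be bounded and timestamped so each snapshot is delimited in the log. A sketch, with `uptime` standing in for `top -f <filename>` so it runs on any system (the snapshot count of 3 and 1-second interval are arbitrary stand-ins for the 60-second loop):

```shell
#!/bin/sh
# Bounded, timestamped variant of the endless loop above.
# 'uptime' stands in for 'top -f <filename>' so the sketch runs anywhere.
LOG=top_snapshots.log
: > "$LOG"                       # start with an empty log
i=0
while [ "$i" -lt 3 ]; do
    date '+=== %Y-%m-%d %H:%M:%S ===' >> "$LOG"   # snapshot delimiter
    uptime >> "$LOG"                              # the snapshot itself
    i=$((i+1))
    sleep 1
done
grep -c '^===' "$LOG"            # count of snapshots taken
```

Bracketing each snapshot with a timestamp line makes it easy to grep the log for a specific time window later.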


There must surely be an option to make it repeat on its own (-s <interval> -d <count> on HP-UX top, or -b -n <count> on Linux, if memory serves)..

I'll look that up later..


A detailed explanation of the output is at the link below..

Mastering the TOP Command


According to the manual, pre-upgrade checks have been strengthened and post-upgrade tasks automated.

I will have to run it once to confirm, but if all the bits and pieces you have to take care of for an upgrade are automated, accidents during the upgrade phase should definitely go down.

The most welcome part is the parallel upgrade.

Until now, the internal object upgrade step ran serially, which demanded a certain amount of downtime..

Still, even if the upgrade runs in parallel, without products such as ADG or OGG it is not a zero-downtime upgrade either..

Well.. by around Oracle 13c, I expect zero-downtime upgrade might be supported..


Enhanced Upgrade Automation

Database upgrade has been enhanced for better ease-of-use by improving the amount of automation applied to the upgrade process. Additional validation steps have been added to the pre-upgrade phase in both the command-line pre-upgrade script and the Database Upgrade Assistant (DBUA). In addition, the pre-upgrade validation steps have been enhanced with the ability to generate a fix-up script to resolve most issues that may be identified before the upgrade.

Post-upgrade steps have also been enhanced to reduce the amount of manual work required for a database upgrade. The post-upgrade status script gives more explicit guidance about the success of the upgrade on a component-by-component basis. Post-upgrade fix-up scripts are also generated to automate tasks that must be performed after the upgrade.

See Also:

Oracle Database Upgrade Guide for details

2.12.1.2 Parallel Upgrade

The database upgrade scripts can now take advantage of multiple CPU cores by using parallel processing to speed up the upgrade process. This results in less downtime due to a database upgrade, and thus improved database availability.

See Also:

Oracle Database Upgrade Guide for details




Starting with Oracle 12c, the DBMS_QOPATCH package lets you check patch information from within SQL*Plus..

That could be convenient in RAC environments with many database nodes..


Queryable Patch Inventory

Using DBMS_QOPATCH, Oracle Database 12c provides a PL/SQL or SQL interface to view the database patches that are installed. The interface provides all the patch information available as part of the OPatch lsinventory -xml command. The package accesses the Oracle Universal Installer (OUI) patch inventory in real time to provide patch and patch meta information.

Using this feature, users can:

  • Query what patches are installed from SQL*Plus.

  • Write wrapper programs to create reports and do validation checks across multiple environments.

  • Check patches installed on Oracle RAC nodes from a single location instead of having to log onto each one in turn.
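A minimal sketch of what such queries might look like (run as a user with execute privilege on DBMS_QOPATCH, e.g. SYS; GET_OPATCH_XSLT renders the raw XML into a readable form):

```sql
-- OPatch/OUI install information as XML
select dbms_qopatch.get_opatch_install_info from dual;

-- Full lsinventory, styled for reading via the bundled XSLT
select xmltransform(dbms_qopatch.get_opatch_lsinventory,
                    dbms_qopatch.get_opatch_xslt)
from   dual;
```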




You gather statistics over the weekend, come in Monday morning,
and a SQL plan has changed, performance is off, and problems break out..
Probably everyone has had that experience.
Since 10g, whenever statistics are gathered, the previous statistics are kept for a certain retention period,
so once you pinpoint when the statistics went wrong, you can restore the old, well-behaved statistics and settle the problem down first.

Below is how to check the automatically saved statistics history and how to restore statistics.

1. Check the retention period
select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

2. Check the oldest date from which a restore is possible.
select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;

3. Check the statistics history of a specific table
select OWNER,TABLE_NAME,PARTITION_NAME, STATS_UPDATE_TIME from dba_tab_stats_history where table_name = '&TABLE_NAME' order by STATS_UPDATE_TIME;

4. Back up the current statistics
exec dbms_stats.create_stat_table(ownname => 'sys', stattab => 'old_stats3');
exec dbms_stats.export_table_stats(ownname=>'SCOTT',tabname=>'EMP', stattab=>'old_stats3',statown  => 'SYS');

5. Restore the statistics.

-- execute DBMS_STATS.RESTORE_TABLE_STATS ('owner','table',date)
-- execute DBMS_STATS.RESTORE_DATABASE_STATS(date)
-- execute DBMS_STATS.RESTORE_DICTIONARY_STATS(date)
-- execute DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(date)
-- execute DBMS_STATS.RESTORE_SCHEMA_STATS('owner',date)
-- execute DBMS_STATS.RESTORE_SYSTEM_STATS(date)

Example:
execute dbms_stats.restore_table_stats ('SCOTT','EMP','25-JUL-07 12.01.20.766591 PM +02:00');

Reference: Restoring table statistics in 10G onwards (Doc ID 452011.1)
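If the default retention (31 days in recent releases) is too short to cover your change windows, it can be adjusted with DBMS_STATS; a sketch, where 60 is an arbitrary number of days:

```sql
-- Extend the statistics history retention (60 days is a hypothetical value)
exec dbms_stats.alter_stats_history_retention(60)

-- Verify the new setting
select dbms_stats.get_stats_history_retention from dual;
```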


 






Reference: http://youtu.be/6HZewNxDD08



Reference:

http://www.oaktable.net/content/combining-bloom-filter-offloading-and-storage-indexes-exadata





References:

http://oracle-randolf.blogspot.kr/2014/05/12c-hybrid-hash-distribution-with-skew.html

http://www.oaktable.net/content/12c-hybrid-hash-distribution-skew-detection-handling-failing



Reference: http://www.oaktable.net/content/ha-database-control-rac-made-easy



One of the new RMAN features in Oracle 12c: table recovery.

Looking at the table recovery method below, it builds an AUXILIARY database and extracts a Data Pump dump file from it.


The recovery procedure that used to be carried out step by step to restore a dropped table has been packed into a single command. :)


Recover the tables EMP and DEPT using the following clauses in the RECOVER command: DATAPUMP DESTINATION, DUMP FILE, REMAP TABLE, and NOTABLEIMPORT.


The following RECOVER command recovers the EMP and DEPT tables.


RECOVER TABLE SCOTT.EMP, SCOTT.DEPT
    UNTIL TIME 'SYSDATE-1'
    AUXILIARY DESTINATION '/tmp/oracle/recover'
    DATAPUMP DESTINATION '/tmp/recover/dumpfiles'
    DUMP FILE 'emp_dept_exp_dump.dat'
    NOTABLEIMPORT;


Reference: http://docs.oracle.com/cd/E16655_01/backup.121/e17630/rcmresind.htm#BRADV703



Beyond the heavily promoted Oracle 12c features such as Multitenant and ADO, a few small features...

Not sure where I would use them, but..


Invisible Columns

• The new 12c feature allows you to hide columns 

• If a user or developer selects ALL columns from a table (i.e. select *…)  the invisible columns will NOT be displayed. 

• If a user specifically selects the invisible column (i.e. select salary,…) the column WILL be displayed in the output (you have to know it’s there). 

• You can set column(s) to be visible/invisible with an alter table : 

 

SQL> ALTER TABLE EMPLOYEE MODIFY (SSN INVISIBLE); 
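The behavior above can be sketched end to end (the EMPLOYEE table and its columns here are hypothetical):

```sql
create table employee (
  empno number,
  ename varchar2(30),
  ssn   varchar2(11) invisible   -- hidden from SELECT * and DESC
);

insert into employee (empno, ename, ssn)
values (1, 'KING', '123-45-6789');

select * from employee;            -- shows EMPNO, ENAME only
select empno, ssn from employee;   -- SSN appears when named explicitly

alter table employee modify (ssn visible);   -- bring it back
```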


So with the invisible feature we can now hide indexes, rows (the 12c valid time temporal feature), and columns.. :)


Create Views as Tables  

Export a view as a table and then import it: 


SQL> create view emp_dept as 

(select a.empno, a.ename, b.deptno, b.dname, b.loc 

 from emp a, dept b 

 where a.deptno=b.deptno); 


View created. 

 

$ expdp scott2/tiger VIEWS_AS_TABLES=emp_dept 

 

Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE 

. . exported "SCOTT2"."EMP_DEPT"    7.140 KB    14 rows


A feature that lets you export a view as a table.

Not sure how it could be used in real work, but..

if performance-related DB views can be dumped into tables this easily, building a performance history should be simple.



Question: Can DD, TAR or other OS tools be used to back up a database on storage managed by ASM?

Answer: DD, TAR and other OS tools are not supported with ASM. The answer is to use RMAN with ASM. Matrix of supported backup/cloning options:

-----------------------X
|  DD           |  No  |
-----------------------| 
|  Atomic Snaps | Yes  |
-----------------------|
|  TAR          |  No  |
-----------------------|
|  RMAN         | YES! |
-----------------------X


Atomic snaps are split-mirror technologies that support atomic splits across several LUNs as a consistent point-in-time copy, such as TimeFinder or MirrorView. The database must be placed in HOT BACKUP mode for the duration of the split, because the split cannot capture a block-consistent view of all datafiles while the database is open for write. The split typically takes only a few seconds, so the performance impact is minimal.
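The hot-backup bracket around the split can be sketched like this (database-wide BEGIN/END BACKUP is available from 10g on; the split itself happens at the storage layer):

```sql
alter database begin backup;       -- all datafiles enter hot backup mode

-- perform the atomic split on the storage array here (TimeFinder, MirrorView, ...)

alter database end backup;         -- back to normal operation
alter system archive log current;  -- archive the redo covering the split
```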


Error message:

VT-x/AMD-V hardware acceleration has been enabled, but is not operational.
The 64-bit guest OS cannot detect a 64-bit CPU and fails to boot.

Please check whether VT-x/AMD-V is enabled in your machine's BIOS.


The BIOS has a feature called Virtualization Technology.
On my BIOS it was under the CPU item in the overclocking menu..


Error message:

Failed to open the hard disk D:\test_vm.vdi.


Cannot register the hard disk 'D:\test_vm.vdi' {2c780ed3-3642-47db-b2d9-12f01bd959ea} because a hard disk 'D:\test_vm_org.vdi' with UUID {2c780ed3-3642-47db-b2d9-12f01bd959ea} already exists.


Result Code: E_INVALIDARG (0x80070057)

Component: VirtualBox

Interface: IVirtualBox {3b2f08eb-b810-4715-bee0-bb06b9880ad2}

Callee RC: VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)


The fix is to assign the copied disk a new UUID:

C:\>"Program Files\Sun\VirtualBox\VBoxManage" internalcommands sethduuid d:\test_vm.vdi


  •  A file system built on DBFS does not support NFS.
  •  Non-ASCII (e.g. Korean) file names are not officially supported.
  •  Because files are stored internally as LOBs in a database table, commands such as mv work differently from a regular file system (the data is deleted and re-inserted row by row), so there is a performance difference.
  •  Executable files cannot be run from DBFS.

Setting up DBFS on Exadata, it turns out to present LOBs stored in the database built for DBFS in the form of a file system. As the cautions above note, even though DBFS looks like a file system, you cannot run any executables on it, and with an architecture like this the performance is not much to hope for either...


So what is DBFS actually for?

Flat (SAM) files for external tables, maybe?





1. configure the initial load extract parameter file (source)
     GGSCI> edit param eload01

      sourceistable
      userid oggmgr, password oracle
      rmthost oggtest.localdomain, mgrport 5009
      rmtfile ./dirdat/TCUSTMER.dat, purge
      table scott.tcustmer;
      rmtfile ./dirdat/TCUSTORD.dat, purge
      table scott.tcustord;


2. Execute the initial load capture process (source)
     # extract paramfile dirprm/eload01.prm reportfile dirrpt/ELOD01.rpt
     # ggsci
     GGSCI> view report ELOD01

3. configure the initial load delivery parameter file (target)
     # ls -al ./dirdat/TCUST*.dat
     # ggsci
     GGSCI> edit param loadtcustmer

      specialrun
      end runtime
      userid oggmgr, password oracle
      assumetargetdefs
      extfile ./dirdat/TCUSTMER.dat
      map scott.tcustmer, target scott.tcustmer;


      GGSCI> edit param loadtcustord

      specialrun
      end runtime
      userid oggmgr, password oracle
      assumetargetdefs
      extfile ./dirdat/TCUSTORD.dat
      map scott.tcustord, target scott.tcustord;


4. execute the initial load delivery process
     # replicat paramfile dirprm/loadtcustmer.prm reportfile dirrpt/LOADTCUSTMER.rpt
     # replicat paramfile dirprm/loadtcustord.prm reportfile dirrpt/LOADTCUSTORD.rpt

     # ggsci
     GGSCI> view report LOADTCUSTMER
     GGSCI> view report LOADTCUSTORD




1. oracle database 기본 goldengate db user 생성 (source, target)

     SQL> create user oggmgr identified by oracle default tablespace users;
     SQL> grant dba to oggmgr;  -- what about the role_setup script?
     
2. configure oracle database supplemental logging & archive log mode (source)
     SQL> alter database add supplemental log data; 
     SQL> select supplemental_log_data_min from v$database;
     SQL> shutdown immediate;
     SQL> startup mount
     SQL> alter database archivelog;
     SQL> alter database open;
     SQL> alter system switch logfile;
     SQL> archive log list;
     SQL> show parameter log_archive_dest_1

3. create a database test user (source, target)
     SQL> create user app identified by app default tablespace users;
     SQL> grant connect, resource to app;

4. install goldengate (source, target)
     # ggsci
     GGSCI> create subdirs
     GGSCI> edit params ./GLOBALS

     GGSCHEMA OGGMGR
     CHECKPOINTTABLE OGGMGR.GGSCHKPT
     SYSLOG NONE

     GGSCI> edit param mgr

     port 9010 -- use 9020 on the target

     GGSCI> start mgr
     GGSCI> info mgr
     GGSCI> info all

5. create the test tables (source, target)
     # sqlplus app/app @demo_ora_create

6. configure supplemental logging for the test tables (source)
     # ggsci
     GGSCI> dblogin userid oggmgr, password oracle
     GGSCI> info trandata app.*
     GGSCI> add trandata app.tcustmer
     GGSCI> add trandata app.tcustord
     GGSCI> info trandata app.*

7. create the checkpoint table (target)
     #ggsci
     GGSCI> dblogin userid oggmgr, password oracle
     GGSCI> add checkpointtable

8. extract configuration & start (source)
     GGSCI> edit param ext01
     
     extract ext01
     userid oggmgr, password oracle
     discardfile ./dirout/ext01.dec, append, megabytes 50
     discardrollover at 00:01
     reportcount every 1 records
     reportrollover at 00:01

     exttrail ./dirdat/ex
     table app.*;

     GGSCI> dblogin userid oggmgr, password oracle
     GGSCI> add extract ext01, tranlog, begin now
     GGSCI> add exttrail ./dirdat/ex, extract ext01, megabytes 50
     GGSCI> info ext01
     GGSCI> start ext01

9. pump configuration & start (source)
     # ggsci
     GGSCI> edit param pmp01

     extract pmp01
     passthru
     rmthost oggtest.localdomain, mgrport 9020
     rmttrail ./dirdat/ex
     reportcount every 1 records
     reportrollover at 00:01
     table app.*;

     GGSCI> add extract pmp01, exttrailsource ./dirdat/ex
     GGSCI> add rmttrail ./dirdat/ex, extract pmp01, megabytes 50
     GGSCI> start pmp01
     GGSCI> info pmp01

10. replicat configuration & start (target)
     #ggsci
     GGSCI> edit param rep01

     replicat rep01
     userid oggmgr, password oracle
     discardfile ./dirout/rep01.dec, append, megabytes 50
     discardrollover at 00:01
     reportcount every 1 records
     reportrollover at 00:01
     assumetargetdefs
     map app.tcustmer, target app.tcustmer;
     map app.tcustord, target app.tcustord;

     GGSCI> add replicat rep01, exttrail ./dirdat/ex
     GGSCI> start rep01
     GGSCI> info rep01

11. run the test (source)
     # sqlplus app/app @demo_ora_insert

12. goldengate stop (source, target)
     #ggsci
     GGSCI> stop *
     GGSCI> stop mgr




Oracle 12c introduces a new concept called Oracle Multitenant.

The database is split into a Container DB and Pluggable DBs, a shape well suited to cloud environments.


For the details, see the link below..

http://www.oracle.com/technetwork/database/multitenant/overview/index.html


Looking at the 12c Multitenant feature, it seems easy to lose track of which DB you are currently connected to..

If you mean to shut down a PDB (Pluggable DB) but shut down the Container DB by mistake, every PDB in that container can go down with it (yes, I worry too much), which would be no small incident. :)


So I put together a simple way to display which DB you are connected to in the SQL*Plus prompt.

Just add the following to the familiar $ORACLE_HOME/sqlplus/admin/glogin.sql.

define _editor=vi

column sqlprompt_col new_value sqlprompt_value
set termout off
define sqlprompt_value='NOT CONNECTED'

SELECT SYS_CONTEXT('USERENV','CURRENT_USER')||'('||SYS_CONTEXT('USERENV','CON_NAME')||')'
  as sqlprompt_col
from dual;

set termout on
set sqlprompt '&sqlprompt_value >'


Whether you connect to the container DB or to a pluggable DB, the prompt shows where you are, e.g. SYS(CDB$ROOT) > or SCOTT(PDB1) >.

However, connecting while the DB is down still shows 'NOT CONNECTED'; that part needs fixing --;



Ever since Oracle 11g there has been talk back and forth about whether raw devices are supported or not,

and at last, in Oracle 12c, the manual contains the statement that they are not.


8.1.10.1 About Upgrading Oracle Database Release 10.2 or 11.1 and OCFS and RAW Devices


If you are upgrading an Oracle Database release 10.2.0.5 or release 11.1.0.7 environment that stores Oracle Clusterware files on OCFS on Windows or RAW devices, then you cannot directly upgrade to Oracle Database 12c. You must first perform an interim upgrade to Oracle Database release 11.2 and migrate the Oracle Clusterware files to Oracle Automatic Storage Management (Oracle ASM). Then you can upgrade from release 11.2 to Oracle Database 12c.


8.1.12 Desupport for Raw Storage Devices


Starting with Oracle Database 12c, block file storage on raw devices is not supported. You must migrate any data files stored on raw devices to Oracle ASM, a cluster file system, or Network File System (NFS).

This also affects the OCR and voting files for Oracle Clusterware. You cannot store the OCR or voting files on raw devices. Oracle Clusterware files must be moved to Oracle ASM before upgrading.


Source: Oracle® Database Upgrade Guide 12c Release 1 (12.1)

(http://docs.oracle.com/cd/E16655_01/server.121/e17642/deprecated.htm#UPGRD60124)


Now it is time to study ASM ...

Oracle ASM Strategic Best Practices





When a SQL-related problem occurs, the hardest part of the analysis, or of working an Oracle SR, is reproducing the problem.


If the issue has already been analyzed and is widely known, or traces and dumps from the error exist, you may be able to proceed without reproducing it,

but otherwise you may have to apply the various diag patches the SR recommends and wait for the next occurrence.

And even when the problem does reproduce, packaging it up neatly to hand over is work in itself.

When reproduction fails on another server, testing and analysis sometimes end up being done directly on the server where it was reproduced.


In Oracle 11g, a diag tool was slipped in for such SQL problems that collects object state information centered on the SQL statement: the SQL Test Case Builder. SQL Test Case Builder (TCB) runs from EM or from SQL*Plus as an Oracle package and automatically collects the following two kinds of information needed for the analysis.

1. Permanent information

  • SQL text
  • PL/SQL functions, procedures, packages
  • Statistics
  • Bind variables
  • Compilation environment
  • User information (like privileges)
  • SQL profiles, stored outlines, or other SQL Management Objects
  • Meta data on all the objects involved
  • Optimizer statistics
  • The execution plan information
  • The table content (sample or full). This is optional.


2. Transient information

  • dynamic sampling results
  • cached information
  • some run time information (like the actual degree of parallelism used)
  • etc.


Usage is simple: create a directory object, grant the necessary privileges, and then run the package.

declare
  tc_out clob;
begin
  dbms_sqldiag.export_sql_testcase(directory => '&dump_dir',
                                   sql_id    => '&sqlid',
                                   testcase  => tc_out);
end;
/
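On the analysis side, the collected test case can be loaded back with the matching import call; a sketch, where the directory object and file name are illustrative (the actual main .xml file name is reported by the export):

```sql
begin
  dbms_sqldiag.import_sql_testcase(directory => '&dump_dir',
                                   filename  => '&tcb_main_xml');
end;
/
```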


For detailed examples and explanation, read the Oracle blog post below.

Oracle keeps closing my TAR because I cannot provide a testcase, can you help?


