Serial direct scan, introduced in 11g, classifies the target table by size and
automatically decides whether to scan it through the buffer cache in the SGA (the traditional way) or via direct path reads.
In other words, even a serial scan that does not use parallel query can be performed with direct path reads.
One caveat: direct path reads trigger a checkpoint of dirty buffers, so be careful with them in OLTP systems.

The _serial_direct_read parameter behaves differently depending on the segment size and the parameter setting.

_serial_direct_read |       Small       |            Medium            |        Large        |    Very Large
--------------------+-------------------+------------------------------+---------------------+------------------
TRUE                | Direct Path Read  | Direct Path Read             | Direct Path Read    | Direct Path Read
FALSE               | Buffer Cache Read | Buffer Cache Read            | Buffer Cache Read   | Buffer Cache Read
AUTO                | Buffer Cache Read | Buffer Cache Read if object  | Decided by cost     | Direct Path Read
                    |                   | stats exist, otherwise       | analysis            |
                    |                   | decided by cost analysis     |                     |

The segment size that determines the read method is derived from the object's block count, the _small_table_threshold and _very_large_object_threshold parameters, and the buffer cache size.

Segment size definitions:

  Small      : segment blocks < _small_table_threshold
  Medium     : _small_table_threshold <= segment blocks <= 10% of the buffer cache
  Large      : between Medium and Very Large
  Very Large : segment blocks > buffer cache * (_very_large_object_threshold / 100)

Related parameters

_small_table_threshold : lower threshold of table size (in blocks) for direct reads.
                            Default value : 20 (2% of the buffer cache)

_very_large_object_threshold : upper threshold of object size for direct reads,
                               expressed as a percentage of the buffer cache.
                                 Default value : 500 (5x the buffer cache)
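The decision described above can be sketched in code. This is only a simplification built from the thresholds in this post; the real heuristic is version-dependent and also weighs how much of the segment is already cached, object statistics, and other factors, so treat the names and cutoffs below as illustrative assumptions.

```python
# Sketch of the _serial_direct_read decision (a simplification of the
# table above; not the exact Oracle algorithm).
def classify_segment(blocks, cache_blocks,
                     small_table_threshold=None,
                     very_large_pct=500):
    # _small_table_threshold defaults to ~2% of the buffer cache (in blocks)
    if small_table_threshold is None:
        small_table_threshold = cache_blocks * 2 // 100
    very_large = cache_blocks * very_large_pct // 100  # default: 5x the cache
    if blocks < small_table_threshold:
        return "small"
    if blocks <= cache_blocks * 10 // 100:  # up to 10% of the buffer cache
        return "medium"
    if blocks > very_large:
        return "very_large"
    return "large"

def read_mode(setting, blocks, cache_blocks):
    if setting == "TRUE":
        return "direct path read"
    if setting == "FALSE":
        return "buffer cache read"
    # AUTO: small -> buffered, very large -> direct, in between -> cost-based
    size = classify_segment(blocks, cache_blocks)
    if size == "small":
        return "buffer cache read"
    if size == "very_large":
        return "direct path read"
    return "cost-based decision"
```

For example, with a 1000-block cache, a 5-block table stays in the buffer cache under AUTO, while a 6000-block table goes direct.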



Starting with Oracle 12c, raw devices are desupported.

Oracle 12c, Desupport for Raw Storage Devices

 

Even if you somehow manage the upgrade while still on raw devices, using a raw device directly will reportedly raise an error.

So an upgrade to 12c will have to factor in a migration to ASM as well.. OTL


A workaround does flash through my mind: create a diskgroup per raw device and build the tablespaces there.. heh



Starting with Oracle Database version 12.1 (release date TBD), support for storing data files, spfiles, controlfiles, online redo logfiles, OCRs and voting files on raw devices directly will end. This means commands such as the following will report an error while attempting to use raw devices directly in Oracle Database Version 12.1:


SQL> create tablespace ABC DATAFILE '/dev/raw/raw1' size 2GB;


Note that while the direct use of raw devices will be de-supported for Oracle Database 12.1, customers can choose to create Oracle ASM diskgroups on top of raw devices. While it is recommended to store all shared files on ASM diskgroups, storing those files on NFS or certified cluster file systems remains supported.


The following SQL commands will not return an error while attempting to use raw devices. Reason: the raw devices in the example below are used indirectly via Oracle ASM (no direct use of raw devices here):


SQL>alter diskgroup MYDG add disk '/dev/raw/ABC1.dbf';

 OR

SQL>create diskgroup MYDB EXTERNAL REDUNDANCY disk '/dev/raw/ABC1.dbf';


Then use the following command to create the tablespace


SQL> create tablespace ABC DATAFILE '+MYDG' size 2GB;


If raw devices are not being used directly in the current release then no further actions need to be taken. However, if raw devices are being used directly currently, then planning should be performed to migrate respective files off raw devices. There are many choices currently to replace raw devices, including Oracle ASM, NFS, and supported cluster file systems.


Reference: Announcement of De-Support of using RAW devices in Oracle Database Version 12.1 (Doc ID 578455.1)



A simple migration procedure using Data Pump.


1. Schema-level export


# expdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp log=scott_export.log schemas=scott


Export: Release 11.2.0.3.0 - Production on Wed Jan 21 18:24:16 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "log=scott_export.log" Location: Command Line, Replaced with: "logfile=scott_export.log"

Legacy Mode has set reuse_dumpfiles=true parameter.

Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_export.log schemas=scott reuse_dumpfiles=true

Estimate in progress using BLOCKS method...

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 384 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

. . exported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . exported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded

******************************************************************************

Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:

  /home/oracle/pump/scott.dmp

Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:24:32



2. Extract the DDL


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:27:32 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_SQL_FILE_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp sqlfile=cr_scott.sql

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at 19:27:36


3. Run the DDL


SQL> @cr_scott


4. Import


# impdp \'/ as sysdba\' directory=mig dumpfile=scott.dmp logfile=scott_imp.log ignore=y


Import: Release 11.2.0.3.0 - Production on Wed Jan 21 19:29:22 2015


Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Oracle Label Security, OLAP, Data Mining

and Real Application Testing options

Legacy Mode Active due to the following parameters:

Legacy Mode Parameter: "ignore=TRUE" Location: Command Line, Replaced with: "table_exists_action=append"

Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded

Starting "SYS"."SYS_IMPORT_FULL_01":  "/******** AS SYSDBA" directory=mig dumpfile=scott.dmp logfile=scott_imp.log table_exists_action=append

Processing object type SCHEMA_EXPORT/USER

ORA-31684: Object type USER:"SCOTT" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Table "SCOTT"."SALES_RANGE_HASH" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Table "SCOTT"."TEST_NLS" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

. . imported "SCOTT"."TEST_NLS"                          5.320 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP76"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP61"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP62"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP63"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP65"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP66"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP68"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP69"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP70"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP71"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP73"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP74"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q4"."SYS_SUBP75"      0 KB       0 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q1"."SYS_SUBP64"  6.710 KB       1 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q2"."SYS_SUBP67"  6.734 KB       2 rows

. . imported "SCOTT"."SALES_RANGE_HASH":"SALES_Q3"."SYS_SUBP72"  6.710 KB       1 rows

Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYS"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 19:29:28








Starting with Oracle 12c, statistics can be gathered per session for session-specific GTTs (global temporary tables).

Because of GTT performance issues we often resorted to workarounds such as creating and dropping permanent tables;

with 12c's session-private statistics, I expect those issues to be resolved.


SQL> exec dbms_stats.gather_table_stats(ownname=>'SH', tabname=>'TEMP_GTT');


However, the restrictions on parallel DML (update, delete, merge) remain the same as in 11g..


Session-Private Statistics for Global Temporary Tables

Traditionally, global temporary tables had only one set of statistics that were shared among all sessions even though the table could contain different data in different sessions. In Oracle Database 12c Release 1 (12.1), global temporary tables now have session-private statistics. That is a different set of statistics for each session. Queries issued against the global temporary table use the statistics from their own session.

Session-private statistics for global temporary tables improves the performance and manageability of temporary tables. Users no longer need to manually set statistics for the global temporary table on a per session basis or rely on dynamic sampling. This reduces the possibility of errors in the cardinality estimates for global temporary tables and ensures that the optimizer has the data to identify optimal execution plans.

See Also:

Oracle Database SQL Tuning Guide for details



Oracle 12c adds system statistics gathering for Exadata. Though this looks like something we already had to do on 11gR2 (Exadata X2)..

Reference: http://kerryosborne.oracle-guy.com/2013/09/system-statistics-exadata-mode/

In any case, gathering system statistics (workload statistics mode) collects the system performance figures below; since the SQL optimizer consults them when building SQL plans, run it at least once, and again whenever the system hardware changes..

  • Single and multiblock read times
  • mbrc
  • CPU speed (cpuspeed)
  • Maximum system throughput
  • Average slave throughput

How to gather system statistics

-- Gather during a specific window

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('start') 

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('stop') 


-- Gather for a specific interval (in minutes)

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N)


-- Gather Exadata statistics

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('exadata')


-- Gather noworkload statistics

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS()


Enhancements to System Statistics

System statistics allow the optimizer to account for the hardware on which the database system is running. With the introduction of smart storage, such as Exadata storage, the optimizer needs additional system statistics in order to account for all of the smart storage capabilities.

The introduction of the new system statistics gathering method allows the optimizer to more accurately account for the performance characteristics of smart storage, such as Exadata storage.

See Also:

Oracle Database SQL Tuning Guide for details



Tom Kyte has picked his top 12 features of Oracle Database 12c and put them into a presentation. Here are his picks:



Tom Kyte, Vice President of Oracle, shares the top 12 features of 12c Databases at the Oracle Database 12c Launch 2013.


1. Even better PL/SQL from SQL

2. Improved defaults

3. Increased size limits for some datatypes

4. Easy top-n and pagination queries

5. Row pattern matching

6. Partitioning improvements

7. Adaptive execution plans

8. Enhanced statistics

9. Temporary undo

10. Data optimization capabilities

11. Application Continuity and Transaction Guard

12. Pluggable databases


On Oracle Database 12c, Part 1

On Oracle Database 12c, Part 2





When a SQL statement running as a parallel query has a problem,

here is how to trace both the QC session and the PQ sessions.

No more hunting around for the PQ sessions!!


1. Assign an identifier to the current session (PQ1)

SQL> exec dbms_session.set_identifier(client_id => 'PQ1');

PL/SQL procedure successfully completed.


2. Enable SQL_TRACE on the session identified as PQ1

SQL> exec dbms_monitor.client_id_trace_enable(client_id => 'PQ1', waits => true, binds => false);

PL/SQL procedure successfully completed.


3. Run the parallel query

SQL> select /*+ parallel(a,10) */ count(*) from customers a ..


4. Disable SQL_TRACE

SQL> exec dbms_monitor.client_id_trace_disable(client_id => 'PQ1');

PL/SQL procedure successfully completed. 


According to the manual, pre-upgrade checks have been strengthened and post-upgrade tasks automated.

I'd still want to run it once to confirm, but if the many things you have to look after for an upgrade are automated, accidents during the upgrade phase should clearly go down.

The most welcome part is the parallel upgrade.

Until now, the internal object upgrade step ran serially, requiring a certain amount of downtime..

That said, even with a parallel upgrade, without products such as ADG or OGG it still isn't a zero-downtime upgrade..

Well.. by Oracle 13c or so, I expect zero-downtime upgrades to be supported..


Enhanced Upgrade Automation

Database upgrade has been enhanced for better ease-of-use by improving the amount of automation applied to the upgrade process. Additional validation steps have been added to the pre-upgrade phase in both the command-line pre-upgrade script and the Database Upgrade Assistant (DBUA). In addition, the pre-upgrade validation steps have been enhanced with the ability to generate a fix-up script to resolve most issues that may be identified before the upgrade.

Post-upgrade steps have also been enhanced to reduce the amount of manual work required for a database upgrade. The post-upgrade status script gives more explicit guidance about the success of the upgrade on a component-by-component basis. Post-upgrade fix-up scripts are also generated to automate tasks that must be performed after the upgrade.

See Also:

Oracle Database Upgrade Guide for details

2.12.1.2 Parallel Upgrade

The database upgrade scripts can now take advantage of multiple CPU cores by using parallel processing to speed up the upgrade process. This results in less downtime due to a database upgrade, and thus improved database availability.

See Also:

Oracle Database Upgrade Guide for details




Starting with Oracle 12c, the DBMS_QOPATCH package lets you check patch information from SQL*Plus..

That should be convenient in RAC environments with many database nodes..


Queryable Patch Inventory

Using DBMS_QOPATCH, Oracle Database 12c provides a PL/SQL or SQL interface to view the database patches that are installed. The interface provides all the patch information available as part of the OPatch lsinventory -xml command. The package accesses the Oracle Universal Installer (OUI) patch inventory in real time to provide patch and patch meta information.

Using this feature, users can:

  • Query what patches are installed from SQL*Plus.

  • Write wrapper programs to create reports and do validation checks across multiple environments.

  • Check patches installed on Oracle RAC nodes from a single location instead of having to log onto each one in turn.




Gather statistics over the weekend, come to work Monday morning,
and find that a SQL plan has changed and performance has tanked, causing trouble..
probably everyone has had that experience.
Since 10g, whenever statistics are gathered the previous statistics are retained for a configurable retention period,
so once you identify when the statistics went bad, you can restore the old, well-behaved statistics to calm the problem down first.

Below is how to check the automatically saved historical statistics and how to restore them.

1. Check the retention period
select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

2. Check the oldest date available for restore.
select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;

3. Check the statistics history for a given table
select OWNER,TABLE_NAME,PARTITION_NAME, STATS_UPDATE_TIME from dba_tab_stats_history where table_name = '&TABLE_NAME' order by STATS_UPDATE_TIME;

4. Back up the current statistics
exec dbms_stats.create_stat_table(ownname => 'sys', stattab => 'old_stats3');
exec dbms_stats.export_table_stats(ownname=>'SCOTT',tabname=>'EMP', stattab=>'old_stats3',statown  => 'SYS');

5. Restore the statistics.

-- execute DBMS_STATS.RESTORE_TABLE_STATS ('owner','table',date)
-- execute DBMS_STATS.RESTORE_DATABASE_STATS(date)
-- execute DBMS_STATS.RESTORE_DICTIONARY_STATS(date)
-- execute DBMS_STATS.RESTORE_FIXED_OBJECTS_STATS(date)
-- execute DBMS_STATS.RESTORE_SCHEMA_STATS('owner',date)
-- execute DBMS_STATS.RESTORE_SYSTEM_STATS(date)

Example:
execute dbms_stats.restore_table_stats ('SCOTT','EMP','25-JUL-07 12.01.20.766591 PM +02:00');

Reference: Restoring table statistics in 10G onwards (Doc ID 452011.1)


 




References:

http://oracle-randolf.blogspot.kr/2014/05/12c-hybrid-hash-distribution-with-skew.html

http://www.oaktable.net/content/12c-hybrid-hash-distribution-skew-detection-handling-failing


Table recovery, one of the RMAN new features in Oracle 12c.

Looking at the recovery method below, it spins up an AUXILIARY database and extracts a dump from the datafiles there.


The recovery procedure we used to run by hand to restore a dropped table is now a single command. heh


Recover the tables EMP and DEPT using the following clauses in the RECOVER command: DATAPUMP DESTINATION, DUMP FILE, REMAP TABLE, and NOTABLEIMPORT.


The following RECOVER command recovers the EMP and DEPT tables.


RECOVER TABLE SCOTT.EMP, SCOTT.DEPT

    UNTIL TIME 'SYSDATE-1'

    AUXILIARY DESTINATION '/tmp/oracle/recover'

    DATAPUMP DESTINATION '/tmp/recover/dumpfiles'

    DUMP FILE 'emp_dept_exp_dump.dat'

    NOTABLEIMPORT;


Reference: http://docs.oracle.com/cd/E16655_01/backup.121/e17630/rcmresind.htm#BRADV703



Besides the heavily promoted 12c features such as Multitenant and ADO, a few small features...

Not sure where I'd use them, but..


Invisible Columns

• The new 12c feature allows you to hide columns 

• If a user or developer selects ALL columns from a table (i.e. select *…)  the invisible columns will NOT be displayed. 

• If a user specifically selects the invisible column (i.e. select salary,…) the column WILL be displayed in the output (you have to know it’s there). 

• You can set column(s) to be visible/invisible with an alter table : 

 

SQL> ALTER TABLE EMPLOYEE MODIFY (SSN INVISIBLE); 


So with the invisible features you can now hide indexes, rows (12c new feature - valid time temporal), and columns.. heh


Create Views as Tables  

Export a view as a table and then import it: 


SQL> create view emp_dept as 

(select a.empno, a.ename, b.deptno, b.dname, b.loc 

 from emp a, dept b 

 where a.deptno=b.deptno); 


View created. 

 

$ expdp scott2/tiger VIEWS_AS_TABLES=emp_dept 

 

Processing object type TABLE_EXPORT/VIEWS_AS_TABLES/TABLE 

. . exported "SCOTT2"."EMP_DEPT"                         7.140 KB      14 rows 


A feature that exports a view as a table.

Not sure how it would be used in real work, but..

if DB performance views can be dumped to tables this easily, building a performance history gets simple.



Question: Can DD, TAR or other OS tools be used to back up a database on storage managed by ASM?

Answer: DD, tar and other OS tools are not supported with ASM. The answer is to use RMAN with ASM. Matrix of supported backup or cloning options:

-----------------------X
|  DD           |  No  |
-----------------------| 
|  Atomic Snaps | Yes  |
-----------------------|
|  TAR          |  No  |
-----------------------|
|  RMAN         | YES! |
-----------------------X


Atomic snaps are split-mirror technologies that support atomic splits across several LUNs as a consistent point-in-time copy. Products like TimeFinder or MirrorView. The database must be placed in HOT BACKUP mode for this type of backup for the duration of the split. This is because the split cannot capture a block-consistent view of all datafiles while the database is open for write. The split typically takes only a few seconds, so the performance impact is minimal.


Oracle 12c introduces a new concept called Oracle Multitenant.

The database is split into a container DB and pluggable DBs, a layout well suited to cloud environments.


See the link below for the details..

http://www.oracle.com/technetwork/database/multitenant/overview/index.html


Looking at the Oracle 12c Multitenant feature, it seems easy to lose track of which DB you are currently connected to..

If you mean to shut down a PDB (pluggable DB) but shut down the container DB by mistake, every PDB in that container can go down with it (yes, I worry too much), and that would be no small incident. heh


So I put together a quick hack that shows which DB you are connected to in the SQL*Plus prompt.

Just add the following to the familiar $ORACLE_HOME/sqlplus/admin/glogin.sql.

define _editor=vi

column sqlprompt_col new_value sqlprompt_value

set termout off

define sqlprompt_value='NOT CONNECTED'

SELECT SYS_CONTEXT('USERENV','CURRENT_USER')||'('||SYS_CONTEXT('USERENV','CON_NAME')||')'

  as sqlprompt_col

from dual;

set termout on

set sqlprompt '&sqlprompt_value >'


Whether you connect to the container DB or to a pluggable DB, the prompt now shows it, as below.

However, when you connect while the DB is down it shows 'NOT CONNECTED'; that part still needs fixing --;



Since around Oracle 11g there was much talk about whether raw devices would stay supported,

and at last with Oracle 12c the manual states outright that they are not.


8.1.10.1 About Upgrading Oracle Database Release 10.2 or 11.1 and OCFS and RAW Devices


If you are upgrading an Oracle Database release 10.2.0.5 or release 11.1.0.7 environment that stores Oracle Clusterware files on OCFS on Windows or RAW devices, then you cannot directly upgrade to Oracle Database 12c. You must first perform an interim upgrade to Oracle Database release 11.2 and migrate the Oracle Clusterware files to Oracle Automatic Storage Management (Oracle ASM). Then you can upgrade from release 11.2 to Oracle Database 12c.


8.1.12 Desupport for Raw Storage Devices


Starting with Oracle Database 12c, block file storage on raw devices is not supported. You must migrate any data files stored on raw devices to Oracle ASM, a cluster file system, or Network File System (NFS).

This also affects the OCR and voting files for Oracle Clusterware. You cannot store the OCR or voting files on raw devices. Oracle Clusterware files must be moved to Oracle ASM before upgrading.


Source: Oracle® Database Upgrade Guide 12c Release 1 (12.1)

(http://docs.oracle.com/cd/E16655_01/server.121/e17642/deprecated.htm#UPGRD60124)


Now it's time to study ASM ...

Oracle ASM Strategic Best Practices





When a SQL-related problem occurs, the hardest part of the analysis, or of an Oracle SR, is reproducing the problem.


If the issue is already well known, or traces and dumps from the error exist, you may get by without reproducing it;

otherwise you may have to apply the various diag patches the SR recommends and wait for the next occurrence.

And even when a problem does reproduce, packaging it up nicely to hand over is work in itself.

When reproduction fails on other servers, testing and analysis sometimes proceed directly on the server where it was reproduced.


For these SQL-related problems, Oracle 11g slipped in a diag tool that collects object state information centered on the SQL: SQL Test Case Builder. SQL Test Case Builder (TCB) runs from EM or from SQL*Plus as an Oracle package and automatically collects the following two kinds of information needed for analysis.

1. Permanent information

  • SQL text
  • PL/SQL functions, procedures, packages
  • Statistics
  • Bind variables
  • Compilation environment
  • User information (like privileges)
  • SQL profiles, stored outlines, or other SQL Management Objects
  • Meta data on all the objects involved
  • Optimizer statistics
  • The execution plan information
  • The table content (sample or full). This is optional.


2. Transient information

  • dynamic sampling results
  • cached information
  • some run time information (like the actual degree of parallelism used)
  • etc.


Usage is simple: create a directory, grant the privileges, and then just run the package.

declare

  tc_out clob;

begin

   dbms_sqldiag.export_sql_testcase(directory=>'&dump_dir', 

                                    sql_id=>'&sqlid', 

                                    testcase => tc_out);

end;

/


For detailed examples and explanations, see the Oracle blog post below.

Oracle keeps closing my TAR because I cannot provide a testcase, can you help?




An Oracle blog(?) posted six reasons to upgrade to Oracle 11g.

End of the Oracle support period, fixes for security issues, and the benefits of the new features..
Granted, there are plenty of excellent features..
Still, the biggest reason is probably the end of the support period.

#1: Oracle support period will end soon or has ended.

#2: The application provider is pushing you to upgrade.

#3: CPU or PSUs - security fixes.

#4: Potential cost savings part 1.

#5: Potential cost savings part 2.

#6: Faster access to LOB data - move to Secure Files.


Source: Why upgrade?


Multitable insert, introduced in Oracle 9i, lets a single INSERT statement load multiple rows into one table or load several tables at once. Before 9i this had to be implemented in PL/SQL; since 9i a single statement does it.

INSERT ALL
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr, cust_nbr, emp_id,ord_dt, ord_dt + 7, status)
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr + 1, cust_nbr, emp_id,add_months(ord_dt, 1), add_months(ord_dt, 1) + 7, status)
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr + 2, cust_nbr, emp_id,add_months(ord_dt, 2), add_months(ord_dt, 2) + 7, status)
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr + 3, cust_nbr, emp_id,add_months(ord_dt, 3), add_months(ord_dt, 3) + 7, status)
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr + 4, cust_nbr, emp_id,add_months(ord_dt, 4), add_months(ord_dt, 4) + 7, status)
INTO cust_order (order_nbr, cust_nbr, sales_emp_id,order_dt, expected_ship_dt, status)
VALUES (ord_nbr + 5, cust_nbr, emp_id,add_months(ord_dt, 5), add_months(ord_dt, 5) + 7, status)
SELECT 99990 ord_nbr, c.cust_nbr cust_nbr, e.emp_id emp_id,last_day(SYSDATE) ord_dt, 'PENDING' status
FROM customer c CROSS JOIN employee e
WHERE e.fname = 'MARY' and e.lname = 'TURNER'
and c.name = 'Gentech Industries';

INSERT ALL
INTO employee (emp_id, fname, lname, dept_id, hire_date) VALUES (eid, fnm, lnm, did, TRUNC(SYSDATE))
INTO salesperson (salesperson_id, name, primary_region_id) VALUES (eid, fnm || ' ' || lnm, rid)
SELECT 1001 eid, 'JAMES' fnm, 'GOULD' lnm,d.dept_id did, r.region_id rid
 FROM department d, region r
 WHERE d.name = 'SALES' and r.name = 'Southeast US';

As the examples above show, all of the source rows can be inserted as multiple rows into the same or different tables, but inserts driven by conditions are also possible. For that, use INSERT FIRST: once one condition matches, the remaining conditions are skipped for that row. INSERT ALL, by contrast, evaluates every condition.

INSERT FIRST
 WHEN order_dt < TO_DATE('2001-01-01', 'YYYY-MM-DD') THEN
  INTO cust_order_2000 (order_nbr, cust_nbr, sales_emp_id,sale_price, order_dt)
  VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt)
 WHEN order_dt < TO_DATE('2002-01-01', 'YYYY-MM-DD') THEN
  INTO cust_order_2001 (order_nbr, cust_nbr, sales_emp_id,sale_price, order_dt)
  VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt)
 WHEN order_dt < TO_DATE('2003-01-01', 'YYYY-MM-DD') THEN
  INTO cust_order_2002 (order_nbr, cust_nbr, sales_emp_id,sale_price, order_dt)
  VALUES (order_nbr, cust_nbr, sales_emp_id, sale_price, order_dt)
SELECT co.order_nbr, co.cust_nbr, co.sales_emp_id,co.sale_price, co.order_dt
 FROM cust_order co
 WHERE co.cancelled_dt IS NULL
 AND co.ship_dt IS NOT NULL;
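The FIRST-versus-ALL semantics can be illustrated with a small routing sketch, with plain Python standing in for the SQL engine. The table names mirror the example above; the predicates and row values are just placeholders.

```python
# Models INSERT FIRST vs INSERT ALL for the conditional form above:
# FIRST routes each row to the first matching branch only;
# ALL inserts into every branch whose condition matches.
def route(rows, branches, first=True):
    """branches: list of (predicate, target_table_name) pairs."""
    targets = {name: [] for _, name in branches}
    for row in rows:
        for pred, name in branches:
            if pred(row):
                targets[name].append(row)
                if first:  # INSERT FIRST: stop at the first match
                    break
    return targets

branches = [
    (lambda r: r["order_dt"] < 2001, "cust_order_2000"),
    (lambda r: r["order_dt"] < 2002, "cust_order_2001"),
    (lambda r: r["order_dt"] < 2003, "cust_order_2002"),
]
rows = [{"order_dt": 2000}, {"order_dt": 2001}]

first_match = route(rows, branches, first=True)   # each row lands once
all_match = route(rows, branches, first=False)    # a row can land in many
```

Under FIRST, the year-2000 order lands only in cust_order_2000; under ALL, the same row also satisfies the later WHEN clauses and lands in all three tables.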




As the blog below describes, I don't recall a v$lock query ever taking that long myself, but
querying v$lock performs a 'MERGE JOIN CARTESIAN' when joining the internal fixed tables, and that is where the cost goes.
If your v$lock query results come back slowly, try the ordered hint...


SQL statement for V$LOCK!!! 
select s.inst_id,l.laddr,l.kaddr,s.ksusenum,r.ksqrsidt,r.ksqrsid1, r.ksqrsid2,l.lmode,l.request,l.ctime,decode(l.lmode,0,0,l.block) from v$_lock l, x$ksuse s, x$ksqrs r where l.saddr=s.addr and concat(USERENV('Instance'),l.raddr)=concat(r.inst_id,r.addr) and s.inst_id = USERENV('Instance'); 




There is plenty of debate about the criteria for rebuilding Oracle indexes, and about whether rebuilds are needed at all..

The piece below explains the structural limits of an index rebuild policy based on del_lf_rows in the index_stats view.
The approach seems fine if applied right after a mass delete,
but it is worth knowing the following so the numbers don't catch you off guard when they look odd~

But, as the Oracle myth busters like Richard Foote have been saying for years,  that's not how Oracle's B-tree indexes work. When you delete an index entry, Oracle marks it as deleted but leaves it in place. When you commit your transaction Oracle does nothing to the index entry – but other processes now know that the entry can be wiped from the block allowing the space to be re-used.      (Source: Index Rebuilds « Oracle Scratchpad)




The alert log is the most basic log file that must be checked to operate an Oracle database.
True to its name, this basic log alerts(?) on plenty of information that hardly warrants an alert,
and when the online redo logs are too small or the change volume is heavy, it gets so noisy that administrators grow careless about checking it.
But it must be read!!

Below is an awk command that reshapes the alert log into a 'date|message' format.
When an incident occurs you sometimes need to line up related servers and applications in time order, and I put this together to make that work a little easier.
In addition, loading the alert log into the DB via an external table is included at the end.

1. Converting to "timestamp|message" format with awk

+ alert log file

Fri Jul 29 13:40:01 2011
Thread 1 advanced to log sequence 292352
  Current log# 2 seq# 292352 mem# 0: /FS/redo02a.log
  Current log# 2 seq# 292352 mem# 1: /FS/redo02b.log
Thread 1 advanced to log sequence 292353
  Current log# 3 seq# 292353 mem# 0: /FS/redo03a.log
  Current log# 3 seq# 292353 mem# 1: /FS/redo03b.log
Fri Jul 29 13:42:19 2011
Thread 1 advanced to log sequence 292354
  Current log# 4 seq# 292354 mem# 0: /FS/redo04a.log
  Current log# 4 seq# 292354 mem# 1: /FS/redo04b.log
Fri Jul 29 13:50:01 2011
Thread 1 advanced to log sequence 292355
  Current log# 5 seq# 292355 mem# 0: /FS/redo05a.log
  Current log# 5 seq# 292355 mem# 1: /FS/redo05b.log
Thread 1 advanced to log sequence 292356
  Current log# 1 seq# 292356 mem# 0: /FS/redo01a.log
  Current log# 1 seq# 292356 mem# 1: /FS/redo01b.log
Fri Jul 29 13:53:57 2011
Thread 1 advanced to log sequence 292357
  Current log# 2 seq# 292357 mem# 0: /FS/redo02a.log
  Current log# 2 seq# 292357 mem# 1: /FS/redo02b.log

+ Merge the date with each log line using awk.
 
$ tail -100 /bdump/alert_SID.log | awk '{if (($5=="2011") && $6 =="") {vdate = $0} else {print vdate,"|", $0} }' | grep 2011 > /fs/app/oracle/product/rdbms/log/alert_test.log

$ cat  /fs/app/oracle/product/rdbms/log/alert_test.log
 
Fri Jul 29 13:42:19 2011 | Thread 1 advanced to log sequence 292354
Fri Jul 29 13:42:19 2011 |   Current log# 4 seq# 292354 mem# 0: /FS/redo04a.log
Fri Jul 29 13:42:19 2011 |   Current log# 4 seq# 292354 mem# 1: /FS/redo04b.log
Fri Jul 29 13:50:01 2011 | Thread 1 advanced to log sequence 292355
Fri Jul 29 13:50:01 2011 |   Current log# 5 seq# 292355 mem# 0: /FS/redo05a.log
Fri Jul 29 13:50:01 2011 |   Current log# 5 seq# 292355 mem# 1: /FS/redo05b.log
Fri Jul 29 13:50:01 2011 | Thread 1 advanced to log sequence 292356
Fri Jul 29 13:50:01 2011 |   Current log# 1 seq# 292356 mem# 0: /FS/redo01a.log
Fri Jul 29 13:50:01 2011 |   Current log# 1 seq# 292356 mem# 1: /FS/redo01b.log
Fri Jul 29 13:53:57 2011 | Thread 1 advanced to log sequence 292357
Fri Jul 29 13:53:57 2011 |   Current log# 2 seq# 292357 mem# 0: /FS/redo02a.log
Fri Jul 29 13:53:57 2011 |   Current log# 2 seq# 292357 mem# 1: /FS/redo02b.log
Fri Jul 29 14:00:02 2011 | Thread 1 advanced to log sequence 292358
Fri Jul 29 14:00:02 2011 |   Current log# 3 seq# 292358 mem# 0: /FS/redo03a.log
Fri Jul 29 14:00:02 2011 |   Current log# 3 seq# 292358 mem# 1: /FS/redo03b.log
Fri Jul 29 14:00:02 2011 | Thread 1 advanced to log sequence 292359
Fri Jul 29 14:00:02 2011 |   Current log# 4 seq# 292359 mem# 0: /FS/redo04a.log
Fri Jul 29 14:00:02 2011 |   Current log# 4 seq# 292359 mem# 1: /FS/redo04b.log
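The one-liner above only matches lines containing 2011, so it stops working once the year changes. A year-agnostic variant can be sketched as follows (assuming the classic pre-11g alert log format, where timestamp lines begin with a weekday name):

```shell
# Sample input (assumption: pre-11g alert log timestamp format).
cat > /tmp/alert_sample.log <<'EOF'
Fri Jul 29 13:40:01 2011
Thread 1 advanced to log sequence 292352
  Current log# 2 seq# 292352 mem# 0: /FS/redo02a.log
EOF

# Remember each timestamp line in vdate and prefix it to every following line;
# timestamp lines themselves are consumed by "next" and not printed.
awk '/^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) [A-Z][a-z][a-z] /{vdate=$0; next}
     {print vdate, "|", $0}' /tmp/alert_sample.log > /tmp/alert_test.log
```

The same pattern can be pointed at the real alert_SID.log in place of the sample file.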

2. Loading the converted alert log file into the DB

+ Check the directory objects in the DB
SQL> select * from dba_directories;

OWNER                          DIRECTORY_NAME                 DIRECTORY_PATH
------------------------------ ------------------------------ -----------------------------------
SYS                            DATA_PUMP_DIR                  /fs/app/oracle/product/rdbms/log/

+ Create the external table
 
SQL> drop table t_alert_log;
SQL> create table t_alert_log (ldate varchar2(25), text_line varchar2(150)) 
 organization external
 (
   type oracle_loader 
   default directory DATA_PUMP_DIR
   ACCESS PARAMETERS 
   ( 
    records delimited by newline 
    fields terminated by '|' 
   ) 
 location ('alert_test.log')); 

+ Query it however you like
 
SQL> select rownum,a.* from t_alert_log a;
 
    ROWNUM LDATE                     TEXT_LINE
---------- ------------------------- ------------------------------------------------------------------------
       386 Fri Jul 29 13:30:17 2011     Current log# 1 seq# 292351 mem# 0: /FS/redo01a.log
       387 Fri Jul 29 13:30:17 2011     Current log# 1 seq# 292351 mem# 1: /FS/redo01b.log
       388 Fri Jul 29 13:40:01 2011   Thread 1 advanced to log sequence 292352
       389 Fri Jul 29 13:40:01 2011     Current log# 2 seq# 292352 mem# 0: /FS/redo02a.log
       390 Fri Jul 29 13:40:01 2011     Current log# 2 seq# 292352 mem# 1: /FS/redo02b.log
       391 Fri Jul 29 13:40:01 2011   Thread 1 advanced to log sequence 292353
       392 Fri Jul 29 13:40:01 2011     Current log# 3 seq# 292353 mem# 0: /FS/redo03a.log
       393 Fri Jul 29 13:40:01 2011     Current log# 3 seq# 292353 mem# 1: /FS/redo03b.log
       394 Fri Jul 29 13:42:19 2011   Thread 1 advanced to log sequence 292354
       395 Fri Jul 29 13:42:19 2011     Current log# 4 seq# 292354 mem# 0: /FS/redo04a.log
       396 Fri Jul 29 13:42:19 2011     Current log# 4 seq# 292354 mem# 1: /FS/redo04b.log

Since an external table returns rows in the order the text file is read, the output follows the order of the log file;
however, when copying this data into another table, be sure to carry rownum along with it.
Otherwise the ordering can end up scrambled.
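A copy that preserves the original order could look like this (a sketch; t_alert_hist is a hypothetical target table name):

```sql
-- Capture rownum at copy time so the original line order survives in the target table.
create table t_alert_hist as
select rownum as line_no, ldate, text_line
  from t_alert_log;

-- Later, always order by the saved line number.
select ldate, text_line from t_alert_hist order by line_no;
```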

$sqlplus / as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Tue May 31 10:33:11 2011
Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

/usr/lib/pa20_64/dld.sl: Unable to find library 'libskgxn2.sl'.
ERROR:
ORA-12547: TNS:lost contact


When you copy a database whose source is RAC and start it as a single instance, running sqlplus can raise the error above.
In that case the Oracle binary needs to be relinked with the RAC option turned off.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle
 

Starting with Oracle 11g, the number of default users has grown enormously.

Default users by component (9iR2 / 10gR2 / 11gR1 / 11gR2):

Oracle Data Mining : ODM, ODM_MTR / DMSYS / SYS / SYS
Oracle Enterprise Manager : DBSNMP, SYSMAN / DBSNMP, SYSMAN, MGMT_VIEW / DBSNMP, SYSMAN, MGMT_VIEW / DBSNMP, SYSMAN, MGMT_VIEW
Oracle interMedia (Oracle Multimedia from 11gR1) : ORDPLUGINS, ORDSYS / ORDPLUGINS, ORDSYS, SI_INFORMTN_SCHEMA / ORDPLUGINS, ORDSYS, SI_INFORMTN_SCHEMA / ORDDATA, ORDPLUGINS, ORDSYS, SI_INFORMTN_SCHEMA
Oracle OLAP : OLAPSYS / OLAPSYS / OLAPSYS / OLAPSYS
Oracle Label Security : LBACSYS / LBACSYS / LBACSYS / LBACSYS
Oracle Spatial : MDSYS / MDDATA, MDSYS / MDDATA, MDSYS, SPATIAL_CSW_ADMIN_USR, SPATIAL_WFS_ADMIN_USR / MDDATA, MDSYS, SPATIAL_CSW_ADMIN_USR, SPATIAL_WFS_ADMIN_USR
Oracle Text : CTXSYS / CTXSYS / CTXSYS / CTXSYS
Oracle XML Database : XDB / XDB / XDB / XDB
Oracle Ultra Search : WKSYS, WKPROXY / WKSYS, WKPROXY / WKSYS, WKPROXY, WK_TEST / (the WKUSER role and the WKSYS, WK_TEST, WKPROXY schemas have been deprecated)
Oracle Workspace Manager : WMSYS / WMSYS / WMSYS / WMSYS
Oracle Warehouse Builder : NA / NA / OWBSYS / OWBSYS
Oracle Rule Manager & Expression Filter : NA / EXFSYS / EXFSYS / EXFSYS


Tidy-minded DBAs, seeing that these accounts are locked and password-expired anyway,
will no doubt be itching to clean them up.

However, removing them the way you would drop an ordinary user can cause problems, so be careful.
As listed in the references below, if you drop the OUTLN user the database will not start.
For the other default users that can be removed, follow the documents below to clean them up properly.

For a description of the database default users, and for how to drop and re-create them safely, see the notes below.

References:
Information On Installed Database Components and Schemas (Doc ID 1608310.1)
Unable To Start The Database With OUTLN Schema Dropped ORA-01092 & ORA-18008 (Doc ID 855104.1)
http://jonathanlewis.wordpress.com/2010/03/11/dropping-outln/
http://abcdba.com/abcdbaserver11gdefaultschema



Since 10g, Oracle stores a history of active sessions by default, with no special configuration. ASH can be queried through two views, v$active_session_history and dba_hist_active_sess_history; v$active_session_history exposes data sampled at 1-second intervals and kept in the ASH buffer inside the shared pool.

dba_hist_active_sess_history holds the active-session information from the ASH buffer, sampled at 10-second intervals and persisted to disk.

The ashdump introduced below writes the last N minutes of this information, counted back from the present, to the user dump directory in SQL*Loader format. In other words, you can quickly collect the active-session information around the time of a problem.
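As a quick use of these views, the dominant wait events among recent active sessions can be summarized like this (a sketch; the 10-minute window and the 'ON CPU' label for null events are arbitrary choices):

```sql
-- Top wait events sampled in v$active_session_history over the last 10 minutes.
select nvl(event, 'ON CPU') as event, count(*) as samples
  from v$active_session_history
 where sample_time > sysdate - 10/1440
 group by event
 order by samples desc;
```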

The command looks like the following, where level is in minutes; let's dump 10 minutes of history.

1. SQL> alter session set events 'immediate trace name ashdump level 10'; 
or 
2. SQL> alter system set events 'immediate trace name ashdump level 10'; 
or 
3. SQL> oradebug setmypid 
    SQL> oradebug dump ashdump 10;

The collected ashdump file can be loaded into the DB with SQL*Loader, using the control file rdbms/demo/ashldr.ctl under the Oracle home.

2594829169,1,161390,"07-18-2003 16:05:21.098717000",13,1,0,"",65535,0,0,2,0,0,0,4294967295,0,0,2,35,100,0,0,1005855,0,"oracle@usunrat21 (MMNL)","","",""
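The SQL*Loader invocation could look roughly like this (a sketch; the credentials and the trace file name are placeholders, and the control file location follows the note above):

```
sqlldr userid=system/password \
       control=$ORACLE_HOME/rdbms/demo/ashldr.ctl \
       data=/oracle/admin/SID/udump/sid_ora_12345.trc
```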



References:
10g and above Active Session History (Ash) And Analysis Of Ash Online And Offline (Doc ID 243132.1)
http://www.oracle.com/technetwork/database/manageability/ppt-active-session-history-129612.pdf



Checking the state of CRS resources with the crs_stat command, you will occasionally find a resource in UNKNOWN state.
Sometimes the CRS resource really failed to start, and sometimes it seems to have started normally yet still shows UNKNOWN.

A CRS resource generally changes to UNKNOWN when the action script run on resource start/stop/check fails. The following are common reasons a CRS resource's state changes to UNKNOWN.
    

1.  The permission of the resource trace file is incorrect.
2.  The permission of the action script and other racg script is incorrect.
3.  The server load is very heavy and the action script times out.
4.  The look up to NIS hangs or takes very long time and causes the action script to time out.


The action script of a CRS resource can be checked with the following command:
crs_stat -p <resource name such as ora.node1.vip> | grep ACTION_SCRIPT

The CRS resource names can be listed with:
crs_stat | grep -i name

The following items should be checked for each of the common causes of the UNKNOWN state described above.

1.  Check whether the permission of the resource trace file is incorrect. 
The resource trace file is in the HOME/log/<node name>/racg directory, where HOME is the home directory of the action script for the resource.

2.  Check whether the permissions of the racg scripts in the HOME/bin directory are incorrect. 
HOME is the home directory of the action script for the resource. Issue "ls -l HOME/bin/racg*" as user oracle, or as the user who normally starts the failing resources, to get the permissions of the racg scripts.
If any of the racg scripts is a soft link to another file, also check the permission of the file it links to.

3.  Check crsd.log and see if the resource action script timed out. 
If it did, then check if the server load was heavy (95% used or higher) for a minute or longer at the time of the failure. Setting up OSWatcher or IPD/OS can help troubleshooting this if the timeout occurs intermittently. Also, check if the NIC was having a problem at the time of the failure. 


Reference: Common Causes for CRS Resources in UNKNOWN State (Doc ID 860441.1) 

In versions before Oracle 11gR2, a bug makes bind values of type TIMESTAMP appear as NULL in V$SQL_BIND_CAPTURE. As a workaround, however, they can be viewed through ANYDATA.AccessTimestamp(value_anydata).

Reference: V$SQL_BIND_CAPTURE Does Not Show The Value For Binds Of Type TIMESTAMP (Doc ID 444551.1)

SQL> declare 
bindts timestamp; 
begin 
bindts := systimestamp(); 
execute immediate 'select /* BIND_CAPTURE_TEST */ 1 from dual where :b1 is 
not null' using bindts; 
execute immediate 'select /* BIND_CAPTURE_TEST */ 1 from dual where :b1 is 
not null' using bindts; 
execute immediate 'select /* BIND_CAPTURE_TEST */ 1 from dual where :b1 is 
not null' using bindts; 
end; 
/


PL/SQL procedure successfully completed. 

SQL> select sql_id from v$sql where sql_fulltext like '%BIND_CAPTURE_TEST%' 
and sql_fulltext not like '%xxx%' and command_type = 3; 

SQL_ID 
------------- 
1mf1ch9vsr06a 

SQL> select name, position, datatype_string, was_captured, value_string, 
anydata.accesstimestamp(value_anydata) from v$sql_bind_capture where sql_id = 
'1mf1ch9vsr06a'; 

NAME POSITION DATATYPE_STRING WAS 
------------------------------ ---------- --------------- --- 
VALUE_STRING 
------------------------------------------------------------------------------ 
-- 
ANYDATA.ACCESSTIMESTAMP(VALUE_ANYDATA) 
--------------------------------------------------------------------------- 
:B1 1 TIMESTAMP YES 
05-JUL-07 12.20.23.311417000 PM


1. Defining the module name and per-step action names inside a program

+ Define the module and action
SQL> exec dbms_application_info.set_module('Module_name','Action');

+ Define the action for each step
SQL> exec dbms_application_info.set_action('select member table..');
select ...
SQL> exec dbms_application_info.set_action('update member table');
update ...

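Once set, the module and action can be observed from a monitoring session (a sketch; 'Module_name' is the example value set above):

```sql
-- Find sessions by the module/action registered via dbms_application_info.
select sid, serial#, module, action
  from v$session
 where module = 'Module_name';
```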