
Tuesday 10 January 2012

TKPROF INTERPRETATION

TkProf is an executable that 'parses' Oracle trace files to produce more
readable output. Remember that all the information in the TkProf output is
available in the base trace file.

If you have a system that is performing badly, a good way to identify 
problem SQL statements is to trace a typical user session and then use TkProf
to format the output using the sort functions on the tkprof command 
line. 

There are a huge number of sort options that can be accessed by simply 
typing 'TkProf' at the command prompt. A useful starting point is the 
'fchela' sort option, which orders the output by elapsed time spent fetching 
(remember that timing information is only available with timed_statistics 
set to true at the database level). The resultant .prf file will contain the 
most time-consuming SQL statement at the start of the file.

Another useful parameter is sys. This can be used to prevent SQL statements 
run as user SYS from being displayed. This can make the output file much 
shorter and easier to manage.
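
For example, a typical invocation combining both options might look like the
following (the trace and output file names here are just placeholders):

   tkprof ora_12345.trc ora_12345.prf sort=fchela sys=no

This sorts the statements by elapsed time spent fetching and suppresses
statements run as user SYS.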

Again, remember to always check that the TIMED_STATISTICS parameter is set to TRUE
as otherwise no time-based comparisons can be made.
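
If in doubt, one way to make sure timing data is collected is to enable it
(together with tracing) in the session being traced before the workload runs,
for example:

   ALTER SESSION SET timed_statistics = TRUE;
   ALTER SESSION SET sql_trace = TRUE;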


Interpreting TkProf Output Guidelines
=====================================

Column Meanings
===============

call :   Statistics for each cursor's activity are divided into 3 areas: 

           Parse:   statistics from parsing the cursor. This 
                    includes information for plan generation etc.
           Execute: statistics for the execution phase of a cursor
           Fetch  : statistics for actually fetching the rows

count :  number of times we have performed a particular activity on this 
         particular cursor

cpu:     cpu time used by this cursor

elapsed: elapsed time for this cursor

disk:    This indicates the number of blocks read from disk. Generally you want
         to see blocks being read from the buffer cache rather than disk.
 
query :  This column is incremented if a buffer is read in Consistent mode.
         A Consistent mode buffer is one that has been generated to give
         a consistent read snapshot for a long running transaction. The buffer
         actually contains this status in its header.
 
current: This column is incremented if a buffer is found in the buffer cache 
         that is new enough for the current transaction and is in current mode
         (and it is not a CR buffer). This applies to buffers that have been 
         read in to the cache as well as buffers that already exist in the 
         cache in current mode.

rows:    Rows retrieved by this step 


Explain plan 
============

Firstly, we advise that the autotrace feature of SQL*Plus be used 
on statements rather than TkProf's explain plan, mainly because the TkProf 
output can be confusing with regard to whether the Rule or Cost Based 
optimizer has been used. 
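
For example, in SQL*Plus the plan and execution statistics for a statement
can be displayed with autotrace (this assumes a PLAN_TABLE exists and the
user has the necessary privileges):

   SET AUTOTRACE ON
   SELECT ...;

or, to show only the plan without executing the statement:

   SET AUTOTRACE TRACEONLY EXPLAIN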

Because TkProf explain plan does not show any costs or statistics, it is 
sometimes not possible to tell definitively which optimizer has been used.

That said, the following output from Tkprof explain plan is useful.
The Rows column next to the explain plan output shows the number of 
rows processed by that particular step. The information is gathered from the 
STAT lines for each cursor in the raw trace output. 
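
To get TkProf to produce an explain plan alongside these row counts, the
explain parameter can be supplied on the command line, for example (the
connect string below is a placeholder):

   tkprof ora_12345.trc ora_12345.prf explain=scott/tiger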

Remember that if the cursor is not closed then you will not see any output. 
Setting SQL_TRACE to false DOES NOT close PL/SQL child cursors. 
Cursors are closed in SQL*Plus immediately after execution.


TkProf Examples and Discussion
==============================

Examples:

Step 1 - Look at the totals at the end of the tkprof output
===========================================================

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
 
| call    | count |  cpu | elapsed |    disk |  query | current |   rows |
|---------|-------|------|---------|---------|--------|---------|--------|
| Parse   | [A] 7 | 1.87 |    4.53 |     385 |[G] 553 |      22 |      0 |
| Execute | [E] 7 | 0.03 |    0.11 | [P]   0 |[C]   0 | [D]   0 | [F]  0 |
| Fetch   | [E] 6 | 1.39 |    4.21 | [P] 128 |[C] 820 | [D]   3 | [F] 20 |
--------------------------------------------------------------------------
 
Misses in library cache during parse: 5
Misses in library cache during execute: 1
 
     8  user  SQL statements in session.
    12  internal SQL statements in session.
[B] 54  SQL statements in session.
     3  statements EXPLAINed in this session.

1. Compare [A] & [B] to spot over parsing. In this case we 
   have 7 parses for 54 statements which is ok.

2. You can use [P], [C] & [D] to determine the hit ratio.

The hit ratio is the proportion of logical reads that are satisfied without a physical read:

Logical Reads = Consistent Gets + DB Block Gets
Logical Reads = query           + current
Logical Reads = Sum[C]          + Sum[D]
Logical Reads = 0+820           + 0+3
Logical Reads = 820             + 3
Logical Reads = 823

Hit Ratio = 1 - (Physical Reads /  Logical Reads)
Hit Ratio = 1 - (Sum[P]         /   Logical Reads)
Hit Ratio = 1 - (128            /   823)
Hit Ratio = 1 - (0.16)
Hit Ratio = 0.84 or 84%

3. We want fetches to be less than the number of rows as this 
   will mean we have done less work (array fetching). 
   To see this we can compare [E] and [F].

[E] =  6 = Number of Fetches
[F] = 20 = Number of Rows

So we are doing 6 fetches to retrieve 20 rows - not too bad.
If array fetching were configured then the rows could be retrieved with 
fewer fetches.

Remember that an extra fetch will be done at the end to check that
the end of fetch has been reached.
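
For example, in SQL*Plus the fetch array size can be raised so that more
rows are returned per fetch call:

   SET ARRAYSIZE 100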

4. [G] Shows reads on the Dictionary cache for the statements.

   - this should not be a problem on Oracle7. 
   In this case we have done 553 reads from the 
   Dictionary cache.

STEP 2 - Examine statements using high resource
===============================================

update ...
where  ...

| call    | count | cpu | elapsed | disk |   query | current |   rows |
|---------|-------|-----|---------|------|---------|---------|--------|
| Parse   |     1 |   7 |     122 |    0 |       0 |       0 |      0 |
| Execute |     1 |  75 |     461 |    5 | [H] 297 |   [I] 3 | [J]  1 |
| Fetch   |     0 |   0 |       0 |    0 |       0 |       0 |      0 |
-----------------------------------------------------------------------

[H] shows that this query is visiting 297 blocks to find the rows to 
    update
[I] shows that only 3 blocks are visited performing the update
[J] shows that only 1 row is updated.

297 blocks to update 1 row is a lot. 
Possibly an index is missing?
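
As an illustration only (the table and column names below are hypothetical),
an index on the column used in the WHERE clause would allow Oracle to locate
the target row directly instead of visiting hundreds of blocks:

   CREATE INDEX orders_cust_idx ON orders (customer_id);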


STEP 3 - Look for over parsing
==============================

select ...

| call    | count |     cpu | elapsed | disk |  query | current |  rows |
|---------|-------|---------|---------|------|--------|---------|-------|
| Parse   | [M] 2 | [N] 221 |     329 |    0 |     45 |       0 |     0 |
| Execute | [O] 3 | [P]   9 |      17 |    0 |      0 |       0 |     0 |
| Fetch   |     3 |       6 |       8 |    0 | [L]  4 |       0 | [K] 1 |
-------------------------------------------------------------------------

Misses in library cache during parse: 2 [Q]

[K] shows that the query has returned 1 row.
[L] shows that we had to read 4 blocks to get this row back.
This is fine.

[M] shows that we are parsing the statement twice - this is not desirable,
    especially as the cpu usage [N] is high in comparison to the execute 
    figures: [O] & [P]. [Q] shows that these parses are hard parses. If
    [Q] were 1 then the statement would have had 1 hard parse followed by
    a soft parse (which just looks up the already parsed detail in the 
    library cache). See Note:32895.1 for more details.


This is not a particularly bad example since the query has only been 
executed a few times. However, excessive parsing should be avoided as far 
as possible by:

o Ensuring that code is shared:

   - use bind variables (see the sketch below)
   - make the shared pool large enough to hold query definitions in memory
     long enough to be reused.
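
As a minimal sketch (using the standard EMP demo table), compare a statement
written with literal values against the same statement using a bind variable;
the latter can be parsed once and then executed many times:

   -- Literal values: each distinct value creates a new cursor (hard parse)
   SELECT ename FROM emp WHERE empno = 7369;
   SELECT ename FROM emp WHERE empno = 7499;

   -- Bind variable (SQL*Plus syntax): the statement text stays the same,
   -- so the parsed cursor in the library cache can be shared
   VARIABLE v_empno NUMBER
   EXECUTE :v_empno := 7369
   SELECT ename FROM emp WHERE empno = :v_empno;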


NOTE:
====
We need to ensure that we do not process a 9i (or later) 10046 raw trace file 
with an 8i (or earlier) tkprof utility. This is because the timings in the raw 
trace file are in centiseconds for 8i and earlier versions but in microseconds 
for 9i and later versions, so the older utility would over-report every timing 
by a factor of 10,000.
