Vishal Gupta's Blog


Slow Statspack Snapshots

Posted by Vishal Gupta on Oct 10, 2008

 

For quite some time we had been experiencing slow Statspack snapshots, typically taking about 300 seconds; in the worst case one took 7 hours. A colleague investigated it, and it turned out that on this particular database "_optimizer_ignore_hints" was set to TRUE, so all the optimizer hints Oracle built into the Statspack snapshot code were being ignored.
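As an aside, hidden (underscore) parameters do not show up in v$parameter, so checking them means querying the x$ fixed tables as SYS. This is a sketch of the standard approach, not the exact query from our investigation:

SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_optimizer_ignore_hints';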

 

Environment

OS – Linux

Database – 10.2.0.3

Disable Optimizer Hints

 

SQL> set timing on

SQL> alter session set "_optimizer_ignore_hints" = true;

Session altered.

Elapsed: 00:00:00.03

 

SQL> exec statspack.snap

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.73

Enable Optimizer Hints

 

SQL> alter session set "_optimizer_ignore_hints" = false;

Session altered.

Elapsed: 00:00:00.01

 

SQL> exec statspack.snap

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.75

 

SQL> exec statspack.snap

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.32

Nobody knows why this parameter was set at the instance level; the DBA who was handling these databases has since left. We will follow up with the vendor, and if there is no specific reason to disable optimizer hints, the _optimizer_ignore_hints hidden parameter will be removed from the init.ora. For the time being, the Statspack snapshot DBMS job has been altered to include the following statements.

 

alter session set "_optimizer_ignore_hints" = false;

statspack.snap;
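Since ALTER SESSION cannot be called directly inside PL/SQL, the job body needs EXECUTE IMMEDIATE for the first statement. A minimal sketch of one way the altered job block could be written (not necessarily verbatim what our job contains):

BEGIN
   -- Re-enable hints for this session only, then take the snapshot.
   EXECUTE IMMEDIATE 'alter session set "_optimizer_ignore_hints" = false';
   statspack.snap;
END;
/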

 

 

 

Posted in Oracle, Performance, Statspack

Loading File into a Blob

Posted by Vishal Gupta on Sep 23, 2008

 

Yesterday I received a request from a developer to load a file into a BLOB. To be honest, I had never loaded a file into a BLOB before, but I had some idea that I would have to use the DBMS_LOB PL/SQL package to achieve it.

Here are the steps to do this.

1. Create an Oracle directory object.


create directory tmp as '/tmp';

 

2. Load the file into the BLOB.


DECLARE
   l_blob        BLOB;
   l_bfile       BFILE;
   l_offset_dest INTEGER := 1;
   l_offset_src  INTEGER := 1;
BEGIN
   /* Get a BFILE pointer to the OS file. */
   SELECT bfilename('TMP', 'CLIENT_CUST_BLUEPRINT.xml')
     INTO l_bfile
     FROM dual;

   /* Open the BFILE. */
   DBMS_LOB.FILEOPEN(l_bfile);

   /* Initialize the BLOB. */
   DBMS_LOB.CREATETEMPORARY(l_blob, TRUE);

   /* Load the whole file into the temporary BLOB. */
   DBMS_LOB.LOADBLOBFROMFILE(dest_lob    => l_blob
                            ,src_bfile   => l_bfile
                            ,amount      => DBMS_LOB.LOBMAXSIZE
                            ,dest_offset => l_offset_dest
                            ,src_offset  => l_offset_src
                            );

   /* Store the loaded BLOB in the target column.
      Note there is no WHERE clause, so every row of table1 is updated. */
   UPDATE table1
      SET col1 = l_blob;

   COMMIT;

   /* Close the BFILE. */
   DBMS_LOB.FILECLOSE(l_bfile);
END;
/
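To sanity-check the load, the stored BLOB's length can be compared against the file size on disk (table1 and col1 being the placeholder names from the block above):

SELECT dbms_lob.getlength(col1) AS blob_bytes FROM table1;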

 

Posted in Oracle

Slow drop user

Posted by Vishal Gupta on Aug 24, 2008

For the past 3 weeks I was covering for a project-dedicated DBA who had gone on leave for 3 weeks (some people are just lucky). It is an Oracle PeopleSoft General Ledger application on an Oracle 10.2.0.3 database. Several copies of the application are hosted in the same database, one copy for each stage of the software lifecycle.

I was asked by the application team to refresh 6 schemas from 2 other schemas over the weekend. How hard could it be, I thought? It seemed like a simple request: just take a Data Pump export of the source schemas, drop the destination schemas, and import using Data Pump with remapped schema and tablespace names.

So I set out exporting the source schemas. It was taking longer than usual, but I did not pay too much attention as I was very busy, and it eventually finished in a couple of hours. Then on to the next job of dropping the existing users, and this is where the real problem started. A simple "drop user ... cascade;" had been running for more than 1 hour. Looking at the database session, it was waiting on "db file sequential read", and the wait details were changing every now and then, which suggested it was doing single-block reads again and again for different data. So I started a 10046 trace of the session.
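For reference, one supported way on 10g to switch on a 10046-style trace (SQL plus wait events) for another session is DBMS_MONITOR; the SID 123 and SERIAL# 45678 below are placeholders:

exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45678, waits => TRUE, binds => FALSE);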

It turned out that PeopleSoft had about 100,000 objects in each copy, and this database had 10 copies of it. So there were over 1 million objects (tables, indexes, views, etc.) in the database! That is a huge number of objects for one database. On top of that, there were some old Data Pump import master tables in the SYSTEM schema, each over 1GB, so the SYSTEM tablespace had grown to 23GB in size. That made me even more suspicious: for each import job of each copy, Data Pump was creating a master table over 1GB in size, and somehow it was not cleaning them up. I tried to attach to the Data Pump jobs, but could not, as they were no longer running. So the only way to get rid of them was to drop the Data Pump master tables for all the old jobs (a query for spotting them is sketched below). That reduced the size of the SYSTEM tablespace, but it still did not speed up the drop user command, which had now been running for over 24 hours. A trace of the drop user session showed that after dropping each object it was fetching from the obj$ table to get the next object's details, and this was slowing things down.
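Orphaned master tables can be spotted through dba_datapump_jobs; the master table carries the same name as the job, in the job owner's schema. A sketch (the job name in the comment is just an example):

SELECT owner_name, job_name, state
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';

-- Then drop the corresponding master table, e.g.:
-- DROP TABLE system.SYS_IMPORT_FULL_01 PURGE;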

Ultimately I generated a script to drop all 100,000 objects individually and ran it; after that I dropped the user, which by then owned no objects. That cut the time to get rid of the user from over 24 hours down to 45 minutes.
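A minimal sketch of how such a drop script can be generated (PSOWNER is a placeholder schema name; indexes and triggers are left out because they are dropped along with their tables):

SELECT 'DROP ' || object_type || ' ' || owner || '.' || object_name
       || CASE object_type WHEN 'TABLE' THEN ' CASCADE CONSTRAINTS PURGE' END
       || ';' AS ddl
  FROM dba_objects
 WHERE owner = 'PSOWNER'
   AND object_type IN ('TABLE', 'VIEW', 'SEQUENCE', 'SYNONYM',
                       'PROCEDURE', 'FUNCTION', 'PACKAGE');

Spool the output from SQL*Plus and run it as a script.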

This definitely looks like inefficient code in the "DROP USER CASCADE;" command. I will be raising a TAR with Oracle to see what they have to say about it.

Posted in Oracle, Performance

 