Posted by Vishal Gupta on Dec 2, 2008
Here are some interesting articles regarding log file writing and associated I/O waits.
Jonathan Lewis – Log File Write
Riyaj Shamsudeen – Log file sync tuning
Christian Bilien – Log file sync wait
Posted in Oracle, Performance
Posted by Vishal Gupta on Oct 10, 2008
For quite some time we had been experiencing slow Statspack snapshots, typically taking about 300 seconds; in the worst case a snapshot took 7 hours. A colleague investigated it, and it turned out that on this particular database "_optimizer_ignore_hints" was set to true, so the optimizer was ignoring all the hints Oracle has put into the Statspack snapshot code.
Environment
OS – Linux
Database – 10.2.0.3
Disable Optimizer Hints
SQL> set timing on
SQL> alter session set "_optimizer_ignore_hints" = true;
Session altered.
Elapsed: 00:00:00.03
SQL> exec statspack.snap
PL/SQL procedure successfully completed.
Elapsed: 00:00:20.73
Enable Optimizer Hints
SQL> alter session set "_optimizer_ignore_hints" = false;
Session altered.
Elapsed: 00:00:00.01
SQL> exec statspack.snap
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.75
SQL> exec statspack.snap
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.32
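(As an aside, hidden parameters are not visible in v$parameter; one way to confirm the instance-level value, connected as SYS, is a query against the x$ fixed tables. This is a sketch, not taken from the original session.)

SQL> select p.ksppinm name, v.ksppstvl value
       from x$ksppi p, x$ksppsv v
      where p.indx = v.indx
        and p.ksppinm = '_optimizer_ignore_hints';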
Nobody knows why this parameter was set at instance level; the DBA who was handling these databases has left. We will follow up with the vendor, and if there is no specific reason to disable optimizer hints, the _optimizer_ignore_hints hidden parameter will be removed from init.ora. For the time being, the Statspack snapshot DBMS job has been altered to include the following statements.
alter session set "_optimizer_ignore_hints" = false;
statspack.snap;
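Since alter session cannot be issued directly as a statement inside PL/SQL, the altered job body presumably runs it via dynamic SQL; a minimal sketch of what that looks like:

begin
   -- re-enable optimizer hints for this session before taking the snapshot
   execute immediate 'alter session set "_optimizer_ignore_hints" = false';
   statspack.snap;
end;
/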
Posted in Oracle, Performance, Statspack
Posted by Vishal Gupta on Aug 24, 2008
For the past 3 weeks I was covering for a project-dedicated DBA who had gone on leave for 3 weeks (some people are just lucky). It is an Oracle PeopleSoft General Ledger application on an Oracle 10.2.0.3 database. Many copies of the application are hosted in the same database, one for each stage of the software lifecycle.
I was asked by the application team to refresh 6 schemas from 2 other schemas over the weekend. How hard could it be, I thought? It seemed like a simple request: just take a Data Pump export of the source schema, drop the destination schema, and import using Data Pump with remapped schema and tablespace names, roughly as sketched below.
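(All directory, schema, and tablespace names in this outline are placeholders, not the real ones.)

$ expdp system directory=DPUMP_DIR schemas=GL_SRC dumpfile=gl_src.dmp logfile=gl_src_exp.log

SQL> drop user gl_dev cascade;

$ impdp system directory=DPUMP_DIR dumpfile=gl_src.dmp logfile=gl_dev_imp.log remap_schema=GL_SRC:GL_DEV remap_tablespace=GL_SRC_DATA:GL_DEV_DATA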
So I set out exporting the source schemas. It was taking longer than usual, but I did not pay too much attention as I was very busy, and it eventually finished in a couple of hours. Then on to the next job of dropping the existing users, and this is where the real problem started. A simple "drop user ... cascade;" had been running for more than 1 hour. I looked at the database session: it was waiting on "db file sequential read", and the wait details were changing every now and then, which suggested it was doing single-block reads again and again against different data. So I started a 10046 trace of the session.
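(On 10g the 10046 trace can be switched on for another session with DBMS_MONITOR; the sid and serial# below are placeholders for the values picked up from v$session.)

SQL> exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 45, waits => true, binds => false);

The resulting trace file lands in user_dump_dest and is equivalent to a level 8 10046 trace.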
It turned out that PeopleSoft had about 100,000 objects for each copy, and this database held 10 copies of it. So there were over 1 million objects (tables, indexes, views, etc.) in the database!!! That is a huge number of objects for one database. On top of that, there were some old Data Pump import master tables in the SYSTEM schema, each over 1GB, so the SYSTEM tablespace had grown to 23GB in size. That made me even more suspicious: for each import job of each copy, Data Pump had created a master table over 1GB in size, and somehow it was not cleaning those master tables up. I tried to attach to the Data Pump jobs, but could not, as they were not running. So the only way to get rid of them was to drop the Data Pump master table for every old job, which reduced the size of the SYSTEM tablespace. But it still did not speed up the drop user command, which by now had been running for over 24 hours. A trace of the drop user session showed that after dropping each object it was fetching from obj$ to get the next object's details, and this was slowing things down.
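(The leftover jobs, and hence the master tables to drop, can be listed from dba_datapump_jobs; a sketch, with an invented job name. Note that dropping a master table makes its job impossible to restart, which was exactly the intent here.)

SQL> select owner_name, job_name, state
       from dba_datapump_jobs
      where state = 'NOT RUNNING';

SQL> drop table system.SYS_IMPORT_FULL_01 purge;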
Ultimately I generated a script to drop all 100,000 objects individually and ran it, then dropped the now-empty user. That took the time to get rid of the user down from over 24 hours to 45 minutes.
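(The drop script can be generated straight out of dba_objects; a sketch along these lines, where the schema name is a placeholder and indexes, triggers, and constraints are skipped because they are dropped along with their tables.)

SQL> set heading off feedback off pagesize 0 linesize 200
SQL> spool drop_objects.sql
SQL> select 'drop ' || object_type || ' ' || owner || '.' || object_name
            || case object_type when 'TABLE' then ' cascade constraints purge' end
            || ';'
       from dba_objects
      where owner = 'GL_DEV'
        and object_type in ('TABLE','VIEW','SEQUENCE','SYNONYM',
                            'PROCEDURE','FUNCTION','PACKAGE');
SQL> spool off
SQL> @drop_objects.sql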
This definitely looks like inefficient code behind the "DROP USER ... CASCADE;" command. I will be raising a TAR with Oracle to see what they have to say about it.
Posted in Oracle, Performance