Vishal Gupta's Blog

Slow drop user

Posted by Vishal Gupta on Aug 24, 2008

For the past 3 weeks I have been covering for a project-dedicated DBA who had gone on leave for 3 weeks (some people are just lucky). It is an Oracle PeopleSoft General Ledger application on an Oracle 10.2.0.3 database. Several copies of the application are hosted in the same database, one for each stage of the software lifecycle.

The application team asked me to refresh 6 schemas from 2 other schemas over the weekend. How hard could it be, I thought? It seemed like a simple request: take a Data Pump export of the source schema, drop the destination schema, and import with Data Pump using remapped schema and tablespace names.
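That plan can be sketched as the following command sequence. The schema, tablespace, directory and dump-file names (SRCADM, DSTADM, DP_DIR, SRC_TS, DST_TS) are hypothetical, and credentials are elided; a real refresh would add options such as parallelism as needed:

```
# 1. Export the source schema with Data Pump:
expdp system schemas=SRCADM directory=DP_DIR \
      dumpfile=srcadm.dmp logfile=srcadm_exp.log

# 2. In SQL*Plus, as a DBA, drop the destination schema:
#      DROP USER dstadm CASCADE;

# 3. Import the dump into the destination schema, remapping
#    the schema and tablespace names:
impdp system directory=DP_DIR dumpfile=srcadm.dmp logfile=srcadm_imp.log \
      remap_schema=SRCADM:DSTADM remap_tablespace=SRC_TS:DST_TS
```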

So I set out exporting the source schemas. It was taking longer than usual, but I did not pay too much attention as I was very busy, and it eventually finished in a couple of hours. Then on to the next job: dropping the existing users. This is where the real problem started. A simple “drop user cascade;” had been running for more than 1 hour. Looking at the database session, it was waiting on “db file sequential read”, and the wait details were changing every now and then, which suggested it was doing single-block reads again and again against different data. So I started a 10046 trace of the session.
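One way to capture such a trace, for reference, is with DBMS_MONITOR from another session; the SID and serial# here (123, 456) are placeholders for the session actually running the drop:

```sql
-- Enable a 10046-style trace with wait events on the target session:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, -
                                       waits => TRUE, binds => FALSE);

-- ... let the slow statement run for a while, then switch tracing off:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```

The resulting trace file in user_dump_dest can then be examined directly or summarised with tkprof.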

It turned out that PeopleSoft had about 100,000 objects for each copy, and this database held 10 copies of it. So there were over 1 million objects (tables, indexes, views etc.) in the database!!! That is a huge number of objects for a database. On top of that, there were some old Data Pump import master tables in the SYSTEM schema, each over 1GB, so the SYSTEM tablespace had grown to 23GB in size. That got me even more suspicious. For each import job of each copy, Data Pump had created a master table over 1GB in size, and somehow it was not cleaning those master tables up. I tried to attach to the Data Pump jobs, but could not, as they were no longer running. So the only way to get rid of them was to drop the master tables of all the old jobs. That reduced the size of the SYSTEM tablespace, but it still did not speed up the drop user command, which had by now been running for over 24 hours. The trace of the drop user session showed that after dropping each object the session did a fetch on the obj$ dictionary table to get the next object's details, and this was what slowed things down.
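Orphaned jobs like these can be listed through the DBA_DATAPUMP_JOBS view; the master table is named after the job, so dropping it removes the leftover. The job name below is hypothetical:

```sql
-- Jobs whose master table survives but which are no longer executing
-- show up with state 'NOT RUNNING':
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs
WHERE  state = 'NOT RUNNING';

-- For each orphaned job, dropping its master table cleans it up,
-- e.g. (hypothetical owner and job name):
DROP TABLE system.sys_import_full_01;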

Ultimately I generated a script to drop all (100,000) objects individually and ran it. After that I dropped the now-empty user. This reduced the time to get rid of the user from 24 hours to 45 minutes.
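A minimal version of such a generator, spooled from SQL*Plus, might look like this. The schema name DSTADM is hypothetical; LOBs and package bodies are skipped because they are dropped along with their parent object, and likewise indexes and triggers go with their tables:

```sql
-- Generate one DROP statement per standalone object in the schema:
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF
SPOOL drop_objects.sql
SELECT 'DROP ' || object_type || ' ' || owner || '.' || object_name ||
       CASE object_type WHEN 'TABLE' THEN ' CASCADE CONSTRAINTS PURGE' END || ';'
FROM   dba_objects
WHERE  owner = 'DSTADM'
AND    object_type NOT IN ('LOB', 'PACKAGE BODY', 'INDEX', 'TRIGGER');
SPOOL OFF
```

Running the spooled drop_objects.sql, then issuing “drop user cascade;” against the emptied schema, is what brought the total down to 45 minutes.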

This definitely looks like inefficient code in the “DROP USER CASCADE;” command. I will be raising a TAR with Oracle to see what they have to say about it.


7 Responses to “Slow drop user”

  1. 45 minutes for 100K objects is quite good. It takes us the same for 35K objects in the SYSADM schema.
    The drop user cascade command takes 48 hours (or so I have been told)

    Which script are you using for the drop objects, as it sounds quicker than mine (which I will post tomorrow from work)?

    John

  2. John,

    Mine was a simple spool script that generates drop statements for all objects (except LOBs and package bodies) listed in dba_objects for a schema. Nothing fancy in my script.

  3. A PeopleSoft application database exists in a single schema in an Oracle database. Unfortunately, PeopleSoft sometimes uses those terms interchangeably.

    You have multiple PeopleSoft databases (schemas) in a single Oracle database. This is supported, but not recommended because of the additional complexity and lack of flexibility. And in your case it sounds like the sheer size of the resultant data dictionary may also have a bearing. If you had only one PeopleSoft schema per database, then instead of refreshing schemas you could clone whole databases.

    If you are thinking of doing any performance testing on these refreshed schemas, then bear in mind that you have repacked the data in the tables, and rebuilt the indexes.

    Have the PeopleSoft tablespaces been created with a 1M uniform extent size (which is in the delivered DDL script)?

  4. David,

    I will have to check regarding extent size when i am in office again on Tuesday.

  5. Andrew said

    Hi Vishal,

    Any update from Oracle on the drop user slowness?

    Thanks
    Andrew

  6. Amit Rath said

    Nice post Vishal…. it helped me a lot while dropping a user in my 430 GB database with 3153 objects in that schema.
