All 41 entries tagged Oracle-Tune
February 24, 2014
This morning users couldn't connect to one test database server. The listener log showed the following error:
TNS-12540: TNS:internal limit restriction exceeded
I logged in on the global zone (not the db zone).
prstat -Z — pay attention to the load history "1, 5, 15". The load over the 15-minute period was 63. psrinfo showed 32 cores, and 32 * 2 = 64 hardware threads. That meant all CPUs were busy.
The top command also worked fine, but the 15G each oracle process shows in SIZE is misleading: it means all oracle processes have 15G in total, not 15G each.
"vmstat -S 5" command:
check the si and so columns for swap in/out.
Swapping in/out is different from paging in/out. See Here
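The core of the check above is just load average versus hardware thread count. A minimal JVM-side sketch of the same arithmetic (class name and messages are mine; the real diagnosis used prstat and psrinfo on the global zone):

```java
import java.lang.management.ManagementFactory;

public class LoadCheck {
    public static void main(String[] args) {
        // Hardware threads visible to the JVM (psrinfo showed 32 cores * 2 = 64 here)
        int threads = Runtime.getRuntime().availableProcessors();
        // 1-minute system load average; -1.0 if the platform cannot report it
        double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
        System.out.println("threads=" + threads + ", load=" + load);
        if (load >= threads) {
            // Comparable to the 63-on-64-threads situation that triggered TNS-12540
            System.out.println("CPU saturated: new connections are likely to queue or fail");
        }
    }
}
```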
March 02, 2011
Check Apache process
while true; do sleep 3; pgrep httpd | wc -l; done
Check server log
tail -f /usr/local/jboss/server/websignon/log/localhost_access_log.2011-03-02.log | grep sentry | awk '$6 > 1000'
Check the number of connections to Oracle.
If it increases steadily, then SSO is slow and connections are not released back to the connection pool quickly enough, so the application has to use more connections.
February 24, 2011
The following query (SQL ID: 2avwvgg1qp025) took a long time and a lot of CPU to run.
select count(*) as y0_ from eventlog this_ where this_.event_type='Login' and this_.providerId like :1 and this_.success=:2 and this_.userName<>:3 and this_.eventdate>=:4 and this_.eventdate<=:5
The cost in the execution plan is not big: the IO cost is 2 and the CPU cost is 7,121.
I queried the v$sql_bind_capture view and found that the values of the bind variable eventdate were null in both positions (:4 and :5).
We then found out the values actually passed in were one year ago and now. That explained the cost: the query fetched one year's data, not the 3 days of data we supposed.
I wrote a Java program to check why the captured values of eventdate were null. If you pass a java.sql.Date value, Oracle records the value correctly; if you pass a java.sql.Timestamp value, however, Oracle records it as null. It turned out to be a bug (Note ID 444551.1) which is fixed in 11.2. Our db is 10.2.
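Until the database is on 11.2, one workaround on the Java side (my sketch, not the fix we actually applied) is to bind java.sql.Date instead of java.sql.Timestamp wherever day granularity is acceptable, so the value shows up in v$sql_bind_capture:

```java
import java.sql.Date;
import java.sql.Timestamp;

public class BindDates {
    // Convert a Timestamp to java.sql.Date so the capture is not null on 10.2.
    // Note: this drops the time-of-day portion, so only use it where day
    // granularity is acceptable for the predicate.
    static Date toSqlDate(Timestamp ts) {
        return new Date(ts.getTime());
    }

    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2011-02-24 10:30:00");
        Date d = toSqlDate(ts);
        System.out.println(d); // prints "2011-02-24"
        // then call ps.setDate(4, d) instead of ps.setTimestamp(4, ts)
    }
}
```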
March 16, 2009
We need to upgrade to Oracle driver version 10.2.0.4 to fix a multi-byte character display problem in our web app.
But with this driver, some SQLs run very slowly.
With a lower version of the driver, the application works fine.
After investigation, I found the root cause of the problem:
the JBoss data source file has a property
that tells JBoss to send java.lang.String values as the NVARCHAR2 type instead of the normal VARCHAR2 type to the database.
Since the column datatypes in the database are defined as VARCHAR2, the Oracle optimizer has to do an internal data conversion, converting the data in the VARCHAR2 column to NVARCHAR2. This defeats the index we created on the table.
The Oracle trace file shows the following:
1 - filter(SYS_OP_C2C("THIS_"."OWNERID")=:1)
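For Oracle's thin driver, the connection property that controls this behaviour is, as far as I know, defaultNChar (the property name is my assumption, since the original post does not show it). In a JBoss *-ds.xml it would look something like this, with false restoring plain VARCHAR2 binds:

```xml
<datasources>
  <local-tx-datasource>
    <jndi-name>MyDS</jndi-name>
    <!-- assumed property name: when true, java.lang.String binds are sent as
         NVARCHAR2 and defeat indexes on VARCHAR2 columns (SYS_OP_C2C in the plan) -->
    <connection-property name="defaultNChar">false</connection-property>
  </local-tx-datasource>
</datasources>
```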
December 10, 2008
I was asked to create an index for a simple query; it was using a FULL table scan. The index works fine on the test server.
On the live server, the query refuses to use the index.
I checked the table stats; they are up-to-date.
When I manually executed the query, it used the index.
When we configured our test app server to use the live data, it used the index correctly.
But when the JDBC driver used on the live server was copied to the test server, the query ran very slowly.
No doubt the JDBC driver caused the problem again.
December 13, 2007
September 07, 2007
Combining placeholders and literals
select * from . where col=:x and status='yes'
select * from . where col=:x and status='no'
If the number of rows with 'yes' and 'no' values in the status column is very different, a single execution plan is likely to be non-optimal for one of them.
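One common workaround keeps the bind for the selective column but leaves the skewed two-valued status column as a literal, so each value can get its own plan. A minimal sketch (table and column names are placeholders, as in the entry; the whitelist guards against SQL injection):

```java
public class SkewedLiteral {
    // Keep ':x' as a bind; inline the skewed status value as a literal so the
    // optimizer can choose a different plan for 'yes' and for 'no'.
    static String buildQuery(String status) {
        // Whitelist the two known values so the literal cannot be abused.
        if (!status.equals("yes") && !status.equals("no")) {
            throw new IllegalArgumentException("unexpected status: " + status);
        }
        return "select * from t where col = :x and status = '" + status + "'";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("yes"));
        // prints: select * from t where col = :x and status = 'yes'
    }
}
```

The trade-off is one cursor per literal value, which is fine for two values but not for a high-cardinality column.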
July 11, 2007
According to Oracle doc
- User calls: Number of user calls such as login, parse, fetch, or execute. This metric probably covers all the activity we are interested in from a Java developer's perspective. It includes select/update/delete/insert.
- Recursive calls: Number of recursive calls generated at both the user and system level. An example of a user-level call: a SQL statement calls a stored procedure which contains a chunk of SQL. System recursive calls are additional calls for housekeeping such as space management, etc.
- Execute count: Total number of calls (user and recursive) that execute SQL statements. Does that include "select" statements? I assume not.
To verify my understanding of these concepts I wrote a program doing DML against a test database. The result is here
Based on a Java 3-tier app using JDBC:
- Execute Count = number of select/insert/delete/update statements. For easier understanding, I do not think it includes recursive calls as the doc indicated.
- Recursive Calls = number of update/insert/delete + ceiling(hard parses)
- User Commits = min(number of update/insert/delete)
- User Calls = number of select/insert/delete/update + N * hard parses (where N = 2 for a select and N = 1 otherwise)
- All SQLs in a function are included in Execute Count and User Calls.
- A function call itself accounts for one Recursive Call, one Execute Count and one User Call.
- A sample taken from the live server: User Calls = 509; Execute Count = 174; Recursive Calls = 57; User Commits = 5
- The app does not use functions, so Execute Count is the number of DMLs sent from Hibernate to the database.
- User Calls is high, and that comes from hard parsing, so the app should use bind variables. (Might not be true?)
- Recursive Calls come from hard parses and update/delete/insert. Since User Commits are low, most of the Recursive Calls are hard parses.
Another sample: User Calls = 3872; Execute Count = 1237; Recursive Calls = 28.
Recursive Calls are low, so function calls and hard parses are low. Why are User Calls so much higher than Execute Count?
delete from user_preferences WHERE forumid=550 and userid=30387 and name='mailPriority' and value='0'
Apparently cached hard parses keep User Calls high. A cached hard parse won't incur many recursive calls, since it is cached; however, one cached hard parse will always generate one user call.
Action: Ask our developer to use bind variable for this SQL
Expectation: User Calls will drop to the Execute Count value.
May 17, 2007
A local index does not enforce partition elimination. Suppose a table has partition key CA and a local index on (CB, CC). Even if CA and CC are of the same type, or hold the same values, the query "select * from tableA where CC=? and CB=?" won't be able to prune partitions.
SELECT STATEMENT
  PX COORDINATOR
    PX SEND QC (RANDOM)
      PX PARTITION RANGE ALL
        TABLE ACCESS BY LOCAL INDEX ROWID
          INDEX SKIP SCAN
You must somehow use the partition key in the query.
- PARTITION RANGE ITERATOR: Executes child operations for each partition in the table specified by a range of partition keys
- PARTITION RANGE SINGLE: Executes child operations for a single partition in the table
- PARTITION RANGE ALL: Executes child operations for each partition in the table
- INDEX SKIP SCAN: allows the Oracle CBO to use a composite index even when the index prefix column in a SQL statement has been omitted.
- INDEX RANGE SCAN
January 09, 2007
The following SQL comes from the same Java application and the same war file, deployed on two different JBoss servers, using the same database. The two JBoss servers also use the same JDBC driver.
Yet it has different execution plans.
select referrer, count(*) amount from hitlog where url=:1 and trun_logdate between :2 and :3 group by referrer order by amount desc
One version of the SQL uses a full table scan:
SELECT STATEMENT
  SORT ORDER BY
    HASH GROUP BY
      FILTER
        PARTITION RANGE ALL
          TABLE ACCESS FULL
I ran the SQL Tuning Advisor in Grid Control. After half an hour, it gave me the following suggestion:
Restructure SQL: The predicate "HITLOG"."TRUN_LOGDATE">=:B1 used at line ID 5 of the execution plan contains an implicit data type conversion on indexed column "TRUN_LOGDATE". This implicit data type conversion prevents the optimizer from selecting indices on table "HITLOG"."HITLOG". Rewrite the predicate into an equivalent form to take advantage of indices.
The optimizer is unable to use an index if the predicate is an inequality condition or if there is an expression or an implicit data type conversion on the indexed column.
I then noticed that the leading column of the index is trun_logdate, which is used in an inequality condition in my query.
It took 10 hours (8pm – 6am) to create the new index on (url, trun_logdate).
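On the application side, the usual fix for the implicit conversion is to make the bind type match the DATE column: either bind java.sql.Date directly, or wrap string binds in an explicit TO_DATE with a format mask. A sketch of the second option (the helper name is mine; only the column name comes from the entry):

```java
public class DatePredicate {
    // Build an explicit TO_DATE predicate so the optimizer never has to
    // convert the indexed DATE column to match a string bind.
    static String rangePredicate(String col) {
        return col + " >= TO_DATE(?, 'YYYY-MM-DD') and "
             + col + " <= TO_DATE(?, 'YYYY-MM-DD')";
    }

    public static void main(String[] args) {
        System.out.println("select referrer, count(*) amount from hitlog where url = ? and "
            + rangePredicate("trun_logdate")
            + " group by referrer order by amount desc");
    }
}
```

With the conversion moved onto the bind value instead of the column, the index on trun_logdate stays usable.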