Favourite blogs for Jeff's Irregular Movements

My favourites » Oracle/Java/Others

May 01, 2014

memory_target sga_target and db_cache_size

On my two databases, the buffer caches were way too small. One was 128 MB, the default value from db_cache_size; the other was 64 MB while db_cache_size was 0.

On both servers, the memory_target parameter was set (5 GB and 3 GB respectively) while sga_target was set to zero. It seemed the automatic memory management (AMM) had sized the buffer cache poorly.

After I set sga_target to a reasonable value, the memory manager began to adjust the buffer cache and set it to a much higher value: 700 MB on the first database and 608 MB on the second.

Both databases are on Solaris 10.

Maybe we should either set sga_target, or set a high value for db_cache_size.
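For example (a sketch; the values are illustrative, not recommendations):

```sql
-- Give the memory manager a floor to work from:
alter system set sga_target = 2g scope = both;

-- Or, alternatively, pin a minimum buffer cache size directly:
alter system set db_cache_size = 512m scope = both;
```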

February 23, 2012

Instantiate table in Oracle Streams

Writing about web page http://krish-dba.blogspot.com/2009/01/re-synchronizingrefresh-table-in.html

Credit to this article

1. Stop Capture

2. Stop propagation

3. Stop Apply Process on Target(T)


4. Delete the apply errors on Target

Check the errors in the error queue:

SQL> select apply_name, LOCAL_TRANSACTION_ID, ERROR_MESSAGE from dba_apply_error;

To delete a specific transaction:

SQL> exec DBMS_APPLY_ADM.DELETE_ERROR('local_transaction_id');

To delete all errors:

SQL> exec DBMS_APPLY_ADM.DELETE_ALL_ERRORS;

5. Truncate table 

Truncate table UWTABS.UW_TPRS

6. Re-instantiate the problem table on Source(S)

DECLARE
  iscn NUMBER;
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name => 'UWTABS.UW_TPRS',
    source_database_name => 'source.warwick.ac.uk',
    instantiation_scn => iscn);
END;
/

7. Synchronize the problem table through Data Pump exp/imp

8. Start the Apply Process on Target(T)


9. Start the propagation Process on the Source(S)


10. Start the Capture Process on the Source(S)
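The stop/start steps above map onto the standard Streams admin packages; a sketch in which the process names (CAPTURE_S, PROP_S, APPLY_T) are placeholders I made up:

```sql
-- Stop, in order: capture, propagation, apply.
exec DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'CAPTURE_S');
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'PROP_S');
exec DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'APPLY_T');

-- ... re-instantiate the table, then restart in reverse order:
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'APPLY_T');
exec DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'PROP_S');
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'CAPTURE_S');
```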


February 21, 2012

job name in datapump

Pay attention to the format of the job name.

oracle@db$ expdp / attach=nightly_pump_210212-1059

Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job nightly_pump_210212-1059 for user NIGHTLY_PUMP_210212
ORA-31632: master table "NIGHTLY_PUMP_210212.nightly_pump_210212-1059" not found, invalid, or inaccessible
ORA-00942: table or view does not exist

Data Pump found the correct user after I changed the job name.

oracle@db$ expdp / attach=nightly_pump_210212_1059
Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job NIGHTLY_PUMP_210212_1059 for user OPS$ORACLE
ORA-31632: master table "OPS$ORACLE.NIGHTLY_PUMP_210212_1059" not found, invalid, or inaccessible
ORA-00942: table or view does not exist

It seems the "-" in the job name caused the problem: note how, in the first attempt, everything before the "-" was parsed as the user name (NIGHTLY_PUMP_210212).
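If you are unsure of the exact job name and owner, they can be read from the dictionary:

```sql
-- Lists every defined Data Pump job and its state.
select owner_name, job_name, state
from dba_datapump_jobs;
```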

January 25, 2012

Use Timestamp type instead of Date type

Today I noticed another problem caused by Date type.

select page3_.*
from sb2.NEWS_ITEM this_ inner join sb2.CONTENT_FETCHER cf1_ on this_.news_id=cf1_.ID
 inner join sb2.CONTENT content2_ on cf1_.content_id=content2_.id 
  inner join sb2.PAGE page3_ on content2_.page_id=page3_.id
where ( this_.discriminator = 'calendar_item') and page3_.LAST_UPDATED_DATE>:1 
order by page3_.LAST_UPDATED_DATE asc, this_.publication_date asc

The SQL Tuning Advisor suggested that "page3_.LAST_UPDATED_DATE>:1" involves an implicit data type conversion that prevents the optimizer from using the index.

The LAST_UPDATED_DATE column is of “DATE” type.

From V$SQL_MONITOR, the bind variable is:
<bind name=":1" pos="1" dty="180" dtystr="TIMESTAMP" maxlen="11" len="7" format="hexdump">78640101010101</bind>

After inspecting the source code, it appears the value is bound as java.util.Date.

I think it should be bound as java.sql.Date, not java.sql.Timestamp or java.util.Date. See http://databaseperformance.blogspot.com/2009/01/java-data-types-and-oracle-index-usage.html

Oracle saw the value it was getting was of type TIMESTAMP, which has sub-second values in it. And rather than lose precision by truncating its value, it was instead converting the DATE value in each data row to a TIMESTAMP value before comparing.

Better to use Timestamp type instead of Date type. The index should work properly whether you bind java.sql.Date or java.sql.Timestamp.

Data Type Conversion
Oracle will convert the value to the more precise type:

Column type DATE, predicate value is TIMESTAMP => right side is more precise => convert every row to TIMESTAMP
Column type TIMESTAMP, predicate value is DATE => left side is more precise => convert the predicate value to TIMESTAMP
Column type VARCHAR2, predicate value is NUMBER => right side is more precise => convert every row to NUMBER
Column type NUMBER, predicate value is VARCHAR2 => left side is more precise => convert the predicate value to NUMBER
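Assuming the application cannot be changed immediately, one possible workaround is to cast the bind value down to DATE on the database side, so the column is compared untouched and the index stays usable (a sketch, not tested against this schema):

```sql
-- Cast the TIMESTAMP bind down to DATE instead of letting Oracle
-- convert every LAST_UPDATED_DATE row up to TIMESTAMP.
where page3_.LAST_UPDATED_DATE > CAST(:1 AS DATE)
```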

January 12, 2012

Delete old rman backup piece manually

My Nagios check reported an old backup dating back to 2010. There is no such backup on disk, and RMAN crosscheck plus delete were not able to remove it.

I tried to delete it with the "nice" RMAN commands:
RMAN> delete backup tag = 'TAG20101029T101009';

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=317 device type=DISK
specification does not match any backup in the repository

RMAN> delete backuppiece tag = 'TAG20101029T101009';
using channel ORA_DISK_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of delete command at 01/12/2012 09:59:49
RMAN-06168: no backup pieces with this tag found: TAG20101029T101009
So I resorted to deleting it manually from the recovery catalog base tables:
SELECT * FROM rc_backup_set where db_id = '1320220071' order by start_time;
select * from bp where db_key = 10492312 and tag = 'TAG20101029T101009';
delete from bp where db_key = 10492312 and tag = 'TAG20101029T101009';

January 10, 2012

Install Oracle Instant Client on Ubuntu

1. Download instantclient-basic-linux.x64- and instantclient-sqlplus-linux.x64- and unzip them into /apps/oracle/instantclient_11_2

2. sudo apt-get install libaio1

3. Append the following line into ~/.bashrc

export LD_LIBRARY_PATH=/apps/oracle/instantclient_11_2
export SQLPATH=~/code/oracle-scripts/scripts:/app/oracle/instantclient_11_2
export TNS_ADMIN=/apps/oracle/instantclient_11_2

TNS_ADMIN is not needed if you just want to use easy connect.

4. (optional)
create tnsnames.ora

5. Now sqlplus should work:
sqlplus system/password@oracle_server

This doc helped me a lot.

Use online redefinition to change a table into a partitioned table

Created a test user hsun

drop table log purge;
drop table children purge;
drop table parent purge;
drop view children_less_10_v;

create table parent (id number);
alter table parent add constraint pk_parent_id primary key (id);

create table log (msg varchar2(50));

create table children (id number,pid number,  constraint  pk_id  primary key  (id));
alter table children add constraint fk_pid foreign key (pid) references parent (id);
create view children_less_10_v as select * from children where id < 10;

create or replace trigger children_tri
before insert or update on children
for each row
begin
  insert into log values ('new' || :new.id);
end;
/

insert into parent  select rownum from dual connect by level < 15;
insert into children  select rownum, rownum from dual connect by level < 15;

Run the script status.sql that contains the following SQL:

set lin 200;
col object_name format a20
select object_name, object_type, status from user_objects order by 2, 1 ;
select trigger_name , table_name from user_triggers order by 2;
select view_name, text from user_views;
select CONSTRAINT_NAME, table_name, status from user_constraints order by 2;
select table_name, PARTITION_NAME from user_tab_partitions order by 1;

SQL> @/tmp/status.sql
OBJECT_NAME          OBJECT_TYPE         STATUS
-------------------- ------------------- -------
PK_ID                INDEX               VALID
PK_PARENT_ID         INDEX               VALID
CHILDREN             TABLE               VALID
LOG                  TABLE               VALID
PARENT               TABLE               VALID
CHILDREN_TRI         TRIGGER             VALID
CHILDREN_LESS_10_V   VIEW                VALID

7 rows selected.

TRIGGER_NAME                   TABLE_NAME
------------------------------ ------------------------------
CHILDREN_TRI                   CHILDREN

VIEW_NAME                      TEXT
------------------------------ --------------------------------------------------------------------------------
CHILDREN_LESS_10_V             select "ID","PID" from children where id < 10

CONSTRAINT_NAME                TABLE_NAME                     STATUS
------------------------------ ------------------------------ --------
FK_PID                         CHILDREN                       ENABLED
PK_ID                          CHILDREN                       ENABLED
PK_PARENT_ID                   PARENT                         ENABLED

no rows selected

Run as SYSDBA in a different session:

-- Check the table can be redefined
exec DBMS_REDEFINITION.can_redef_table('HSUN', 'CHILDREN');

drop table hsun.children_p purge;
create table hsun.children_p (id number,pid number) 
 partition by range (id)
 ( partition p1 values less than (5),  
   partition p2 values less than (10),   
   partition p3 values less than (maxvalue));

-- Start Redefinition

DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.start_redef_table(
       uname => 'HSUN',
       orig_table => 'CHILDREN',
       int_table => 'CHILDREN_P');
  DBMS_REDEFINITION.copy_table_dependents(
       uname => 'HSUN',
       orig_table => 'CHILDREN',
       int_table => 'CHILDREN_P',
       copy_indexes => DBMS_REDEFINITION.cons_orig_params,
       num_errors => l_num_errors);
  DBMS_OUTPUT.put_line('l_num_errors=' || l_num_errors);
END;
/

-- Optionally synchronize new table with interim data before index creation
-- exec DBMS_REDEFINITION.sync_interim_table('HSUN', 'CHILDREN', 'CHILDREN_P');

-- Complete redefinition
exec DBMS_REDEFINITION.finish_redef_table('HSUN', 'CHILDREN', 'CHILDREN_P');

drop table hsun.children_p purge;

Run status.sql again:

SQL> @/tmp/status.sql

OBJECT_NAME          OBJECT_TYPE         STATUS
-------------------- ------------------- -------
PK_ID                INDEX               VALID
PK_PARENT_ID         INDEX               VALID
CHILDREN             TABLE               VALID
LOG                  TABLE               VALID
PARENT               TABLE               VALID
CHILDREN_LESS_10_V   VIEW                INVALID

10 rows selected.

TRIGGER_NAME                   TABLE_NAME
------------------------------ ------------------------------
CHILDREN_TRI                   CHILDREN

VIEW_NAME                      TEXT
------------------------------ --------------------------------------------------------------------------------
CHILDREN_LESS_10_V             select "ID","PID" from children where id < 10

CONSTRAINT_NAME                TABLE_NAME                     STATUS
------------------------------ ------------------------------ --------
FK_PID                         CHILDREN                       ENABLED
PK_ID                          CHILDREN                       ENABLED
PK_PARENT_ID                   PARENT                         ENABLED

TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------
CHILDREN                       P1
CHILDREN                       P3
CHILDREN                       P2

The database version:

SQL> select * from v$version;

Oracle Database 11g Enterprise Edition Release - 64bit Production
PL/SQL Release - Production
CORE      Production
TNS for Solaris: Version - Production
NLSRTL Version - Production
  • The primary key and foreign key constraints remain "ENABLED"
  • The view and the trigger become "INVALID"

January 05, 2012

Solaris df reported disk usage differently to du

I dropped the hitlogging tablespace, whose size was 200+ GB.

Run the df report:

bash-3.00# df -h
la-ora01-db            588G    31K    44G     1%    /oradb
la-ora01-db/data       588G   489G    44G    92%    /oradb/data

bash-3.00# du -s -h /oradb/data
 217G   /oradb/data

-bash-3.00# for pid in `ps -ef | grep ora_ | awk '{print $2}'`; do 
 pfiles  $pid | grep log  ; done | sort | uniq
The files I dropped were hitlogging*.dbf and none of them appeared in the list above.
After waiting 3-4 hours to rule out any caching effect, I restarted the database:

SQL>shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

-bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/                       39G   8.8G    30G    23%    /
/dev                    39G   8.8G    30G    23%    /dev

la-ora01-db            588G    31K   316G     1%    /oradb
la-ora01-db/data       588G   217G   316G    41%    /oradb/data

Maybe some other user process held the files? I only inspected the Oracle background processes, and usually the most likely suspect is the database writer (DBWn).
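The df/du discrepancy can be sketched with any file: a deleted-but-still-open file disappears from the namespace (so du no longer counts it) but df keeps counting its blocks until the last file descriptor is closed.

```shell
tmpfile=$(mktemp)
exec 3> "$tmpfile"        # keep a file descriptor open on it
rm "$tmpfile"             # unlink: du no longer sees it, df still counts it
if ls "$tmpfile" >/dev/null 2>&1; then
  status=present
else
  status=unlinked
fi
exec 3>&-                 # close the descriptor: the space is really released
echo "$status"
```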

December 20, 2011

UKOUG 2011

Another year of the UKOUG conference has closed its curtain. I attended all four days of sessions, including the OakTable Sunday afternoon. I have lost track of how many times I have been to this year-end festival.
I now have some time to reflect on what was good:

Good presentations:
A Deep Dive into the SQL Monitoring Report by Greg Rahn
Oracle Optimizer: Upgrading to 11g Without Pain by Maria Colgan
Oracle Optimizer: Best Practices for Managing Optimizer Stats by Maria Colgan
Implementing Effective Database Auditing in 3 Steps

Not too bad:
User Experiences with Data Guard Fast-Start Failover
Net Services – Best Practices for Performance, Scalability & High Availability by Kuassi Mensah

Round tables:
Oracle security
RAC and HA round tables

The keynote by Cary Millsap started to become interesting just as I planned to leave. I ended up staying to the end.

The last day of the conference is not the best day. So if you can only go to the conference for one day, don't choose the last day.

February 16, 2011

Oracle locks

Follow-up to oracle lock – look it again from Oracle/Java/Others

Step 1

create table test (id number);
insert into test values (1);

Before commit or rollback, check the v$lock table.
Two locks:
The first one's lock type is TM (DML enqueue), lock mode is 3 – row exclusive (SX), request is 0 (none).
The second one's lock type is TX (transaction enqueue), lock mode is 6 – exclusive (X), request is 0 (none).

Check the v$locked_object view. It displays similar information as well.

Step 2
In the same session

update test set id = 2

The locks in v$lock and v$locked_object show the same information as in step 1.

Step 3
Open a new session

update test set id = 3

Now check the v$lock table.
The locks associated with the first session are still the same: two locks.
But the "block" column of its TX lock is now 1, meaning it is blocking another session.
The "block" column of the TM lock is still 0.

Session 2 has two locks now:
a TM lock, lmode is 3 (SX), request is 0;
a TX lock, lmode is 6 (X), request is 6 (X – exclusive).

Check v$locked_object:

The row attached to session 1 is the same.
A new row attached to session 2 has the same (locked_mode, object_id), but XIDUSN, XIDSLOT and XIDSQN are all zero.

v$locked_object is very useful. It displays 1) the user, 2) the object_id, and 3) blockers and waiters.

How do you find the locked object in v$lock?
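One way to answer that (a sketch): for a TM lock row, ID1 is the object_id, and a blocker/waiter pair share the same lock type, ID1 and ID2:

```sql
-- Map a session's TM lock to the object it covers:
select o.owner, o.object_name
from v$lock l join dba_objects o on o.object_id = l.id1
where l.type = 'TM' and l.sid = :sid;

-- Pair blockers with waiters:
select b.sid blocker, w.sid waiter
from v$lock b join v$lock w
  on b.id1 = w.id1 and b.id2 = w.id2
where b.block = 1 and w.request > 0;
```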

Other tables

- dba_waiters: includes waiter and blocker sessions, mode_held & mode_requested
- dba_blockers: just one column, holding_session
- dba_lock_internal: very slow to access
- v$access: very slow to access

November 01, 2010

Restore point test

set db_flashback_retention_target = 120

I created a restore point; two days later the flashback logs had still not been deleted.

Some archive logs were not deleted either, since:
"RMAN-08139: WARNING: archived redo log not deleted, needed for guaranteed restore point"

On dropping the restore point, the flashback logs were dropped as well by that command.
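For reference, the commands involved look like this (the restore point name is made up):

```sql
create restore point before_change guarantee flashback database;

-- Check which restore points exist and whether they are guaranteed:
select name, guarantee_flashback_database from v$restore_point;

drop restore point before_change;
```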

August 02, 2010

oracle listener config

I puppetized the Oracle listener.ora, then suddenly I could not connect to Oracle:

sqlplus system/xxx@db_service

ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Solaris-AMD64 Error: 2: No such file or directory

I checked the configuration, and everything seemed fine.

After some fruitless effort, I noticed the value of $ORACLE_HOME in /var/opt/oracle/oratab was /usr/local/oracle, which is a symlink to /package/oracle. I changed the value in /var/opt/oracle/oratab to /package/oracle and reset the environment variable ORACLE_HOME to /package/oracle. I restarted the database and could then connect to it.

February 26, 2008

Wake up Oracle RFS

Follow-up to Force Oracle primary DB to reset connection to logical standby from Oracle/Java/Others

After another short network outage, the primary database would not send archive logs to the standby server.

Setting log_archive_dest_state_n = defer|enable could not wake up Oracle this time.

Fixed one server using the following:

alter system set log_archive_dest_2='service=boston valid_for=(online_logfiles,primary_role) db_unique_name=boston';

alter system set log_archive_dest_2='service=boston lgwr async valid_for=(online_logfiles,primary_role) db_unique_name=boston';

The preceding method only worked slightly better. During the nightly RMAN backup, several logs went missing (Omit 100 words …). Eventually, setting log_archive_max_processes to a higher value fixed the problem.

July 13, 2007

11G features


  • Incident packaging service: automatically packs all diagnostic data into a zip file, so raising a service request is easy
  • Automatic Diagnostic Repository replaces all the XX_DUMP_DEST directories
  • Flashback Data Archive can keep UNDO data as long as required
  • 11g network-based duplication will duplicate a source DB to a clone DB without requiring the source DB to have existing backups
  • Data Recovery Advisor will identify which recovery option is feasible

For Developer

  • Creating an index with the INVISIBLE attribute causes the CBO to ignore the presence of the index
  • Adding a column with a default value to a table with many rows could take a significant amount of time and hold a lock on the table. This restriction has been removed
  • SecureFiles: store/read files in the DB; performance is 10X faster than LOBs, with similar speed to Linux NFS

July 11, 2007

Oracle metric – User Calls, Recursive Calls, Execute Count

According to the Oracle doc:

  • User calls: number of user calls such as login, parse, fetch, or execute. This metric probably incorporates all the activities we are interested in from a Java developer's perspective. It includes select/update/delete/insert
  • Recursive calls: number of recursive calls generated at both the user and system level. An example of a user-level recursive call: a SQL statement calls a stored procedure which contains a chunk of SQL. System recursive calls are additional calls for housekeeping such as space management, etc.
  • Execute count: total number of calls (user and recursive) that execute SQL statements. Does that include "select" statements? I assume not.


To verify my understanding of these concepts, I wrote a program doing DML against a test database. The results are here.
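The counters referred to above can be read before and after a test run from v$sysstat:

```sql
select name, value
from v$sysstat
where name in ('user calls', 'recursive calls', 'execute count', 'user commits');
```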


Based on a Java 3-tier app using JDBC:

  • Execute Count = no. of select/insert/delete/update statements. For easier understanding, assume it does not include recursive calls, despite what the doc indicates.
  • Recursive Calls = no. of update/insert/delete + ceiling(hard parses)
  • User Commits = min(no. of update/insert/delete)
  • User Calls = no. of select/insert/delete/update + N * hard parses (where N = 2 for a select and N = 1 otherwise)
With a function:
  • All SQLs in a function will be included in EC and UC.
  • A function call will have one RC, one EC and one UC.


  • A sample taken from a live server: User Calls = 509; Execute Count = 174; Recursive Calls = 57; User Commits = 5.
  • The app does not use functions, so Execute Count is the number of DMLs sent from Hibernate to the database.
  • User Calls is high, and it comes from hard parsing. So the app should use bind variables. (Might not be true?)
  • Recursive Calls come from hard parses and update/delete/insert. Since User Commits is low, most of the RCs are hard parses.

Another sample: User Calls = 3872; Execute Count = 1237; Recursive Calls = 28.
Recursive is low, so function calls and hard parses are low. Why is User Calls so much higher than Execute Count?

Looking at v$sql, I found hundreds of SQL statements like

delete from user_preferences WHERE 
forumid=550 and userid=30387 and name='mailPriority' and value='0'

Apparently the cached hard parses cause the high User Calls. A cached hard parse won't incur many recursive calls, since it is cached; however, it will always generate one user call.

Action: ask our developers to use bind variables for this SQL.
Expectation: User Calls will drop to the EC value.

June 07, 2007

RMAN vs OS backup on NetApp

Before RMAN emerged, OS backup was the main way to back up a database. It has shortcomings:
  • Cannot do incremental backups. If the datafiles are big, the backup time is long
  • It takes more disk space
  • You cannot really know whether your backup will come to the rescue in a disaster. RMAN offers the ability to validate the backup

RMAN offers the following bonuses beyond a normal backup:

  • RMAN can compress the backup
  • RMAN can check the corrupted data block during backup.
  • RMAN can delete obsolete backup sets

One shortcoming of RMAN is that you have to maintain a repository server to make full use of RMAN.

The pendulum has swung so far in favour of RMAN that OS backup seems to be dead.

NetApp's Snapshot changes the balance a bit, if it does not completely reverse the trend.

With Snapshot, revisiting the points above:
  • "The backup time is long" – the backup process takes almost no time and the size of the DB is irrelevant. I now take a backup every hour.
  • "It takes more disk space" – Snapshot does incremental backup at the OS level.
  • "RMAN can compress the backup" – with OS-level incremental backup, this is not important.
  • "RMAN can check for corrupted data blocks during backup" – still an advantage.
  • "RMAN can delete obsolete backup sets" – snapshots can be deleted by a script.
Tremendous advantages of Snapshot are:
  • The backup is super fast
  • Restore is easy: just a normal OS copy. It should be a little faster than an RMAN restore, since RMAN needs to restore the backup sets to OS files.
  • It gets rid of the repository server.

I have not yet had a chance to test "SnapManager for Oracle", which combines Snapshot with RMAN. Maybe after I do that, I will have a different opinion.
So far, Snapshot + OS backup supersedes RMAN.

May 30, 2007

NetApp and Oracle backup and restore

Oracle OS backup:

  1. SQL>alter tablespace test begin backup
  2. Within netapp command line: snap create volume_name snap_name
  3. SQL> alter tablespace test end backup
  4. insert some data into a table belonging to tablespace test
  5. rm one datafile test.dbf
  6. SQL>alter system checkpoint; the alert log will report missing datafile 9
  7. cp ../.snapshot/../test.dbf dest_location
  8. alter database recover datafile 9;
  9. alter database datafile 9 online;

RMAN backup with SnapManager for Oracle:

May 25, 2007

Easy connect naming method

The easy connect naming method is very convenient to use, so you do not need to modify tnsnames.ora any more.

The format copied from Oracle doc is as follows:
CONNECT username/password@host[:port][/service_name][/instance_name]

However, the instance_name form did not work during my test.
The service name works. The service name = SID + domain name (e.g. orcl.xyz.com).

sqlnet.ora needs to be changed as well.
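A minimal sqlnet.ora sketch that enables easy connect alongside tnsnames.ora lookups:

```
NAMES.DIRECTORY_PATH = (EZCONNECT, TNSNAMES)
```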


May 07, 2007

Oracle Listener Error

06-MAY-2007 23:52:30 * (CONNECT_DATA=(SID=wblogs)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST
= * establish * wblogs * 0
06-MAY-2007 23:52:33 * ping * 0
WARNING: Subscription for node down event still pending

07-MAY-2007 20:20:06 * (CONNECT_DATA=(SID=wblogs)(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))) * (ADDRESS=(PROTOCOL=tcp)(HOST=137.2
05.243.25)(PORT=37842)) * establish * wblogs * 12518
TNS-12518: TNS:listener could not hand off client connection
 TNS-12547: TNS:lost contact
  TNS-12560: TNS:protocol adapter error
   TNS-00517: Lost contact
    Solaris Error: 32: Broken pipe
07-MAY-2007 20:20:06 * 12560
TNS-12560: TNS:protocol adapter error
 TNS-00530: Protocol adapter error
  Solaris Error: 130: Software caused connection abort

According to MetaLink Note 284602.1, setting SUBSCRIBE_FOR_NODE_DOWN_EVENT_<listener_name> = OFF will remove the "Subscription for node down event still pending" message and possibly fix the problem.

May 01, 2007

Apache rewriting

A good article about Apache rewriting: Module mod_rewrite

mod_rewrite, the Swiss Army Knife of URL manipulation!

April 27, 2007

Solaris 10 kernel parameters for Oracle software

Oracle doc is inaccurate and misleading.
Metalink doc 317257.1 is worth looking at.

Since Solaris default values are higher than Oracle recommended values, the only resource control that might need to be set is project.max-shm-memory. !!!

shmsys:shminfo_shmmax (max size of a shared memory segment) is replaced by project.max-shm-memory (total amount of System V shared memory allowed for this project). The default value is 1/4 of physical RAM.

To change the value: prctl -n project.max-shm-memory -v 10gb -r -i project default

shmsys:shminfo_shmmni (max number of shared memory identifiers on system) is replaced by project.max-shm-ids (Maximum number of shared memory IDs allowed for this project) default 128
To change the value: prctl -n project.max-shm-ids -v 100 -r -i project

seminfo_semmni (the maximum # of semaphores sets in the system) is replaced by project.max-sem-ids default 128
To change the value: prctl -n project.max-sem-ids -v 100 -r -i project

seminfo_semmsl=256 (# of semaphores in a semaphore set) is replaced by process.max-sem-nsems, default 512.
To change the value: prctl -n process.max-sem-nsems -v 256 -r -i project [project name]

The doc here gives a very good explanation of shared memory and semaphores.

Use ipcs -b to display the above info.

You can get all project names from /etc/project.
To list the processes in a project: prstat -j [project name]
Oracle is in the project "default"; use prstat -j default to find its processes.

April 01, 2007

A good entry about crontab

Writing about web page http://www.dbanotes.net/techmemo/crontab_tips.html

A very good article about crontab.

September 04, 2006

Quotes in Unix Shell and JSTL

Double quotes and single quotes are different.

Double quotes: everything between the quotes is taken literally, except for the following characters, which keep their special meaning:
  • $ variable substitution will occur
  • ` command substitution will occur
  • \ escapes the next character
  • ! history character
  • newline
Single quotes: everything between the quotes is taken literally, except for:
  • ! history
  • newline

For my scripts, it is much safer to use double quotes.
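A quick sketch of the difference:

```shell
var=world
double="hello $var"    # $ keeps its special meaning inside double quotes
single='hello $var'    # taken literally inside single quotes
echo "$double"
echo "$single"
```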

One of my scripts was like this:

/usr/bin/egrep '$2' $1
if [ $? -eq 0 ] ; then

The single quotes surrounding $2 caused the confusion: egrep searched for the literal string $2 instead of the value of the second parameter. It should be "$2".

I am not so sure about their difference in JSTL. In the "Core JSTL" book, single quotes are used.

March 07, 2006

shell conditional testing

Writing about web page http://www.oracle.com/technology/pub/articles/dulaney_test.html

I keep forgetting the syntax of bash shell conditions, so I have to write it down now.

[ condition ] :
Note the space on each side of the condition when using square brackets.
"[" is actually a command (a synonym for test), which is why the spaces are required.

String comparison:
=, !=, -z (string is null), -n (not null)

Numeric comparison:
-eq, -ne, -gt, -lt, -le, -ge
[ "$number" -eq "10" ] or [ $number -gt 30 ]

An exit status of "0" means true,
"1" means false.

September 08, 2005

Users are impatient

Follow-up to Play with Suse Linux from Oracle/Java/Others

I tried to install Oracle HTML DB on my SUSE Linux box. The pre-install check told me that I should install compat-db-4.0.14 first. I downloaded the rpm, the check passed, and the installation process began.
The post-install configuration reported it could not find libdo.so.2. I dug around and could not find an appropriate rpm for SUSE.

I became impatient, since I only wanted to do a quick test of this product. I went back to the Solaris platform, installed the same product there, and it worked straight away.

It is a shame that there are so many different variants of Linux out there. It is fun to learn Linux, but for a quick test you had better choose a platform distributed by a sole vendor, like Microsoft or Sun.