All 31 entries tagged Oracle-Availability
December 12, 2013
What’s SCAN IP used for?
It receives connections from clients and passes them on to local listeners.
- Each SCAN VIP has a SCAN listener associated with it, and the two work as a pair: one SCAN listener is created for each SCAN VIP.
- Whenever a new connection request is made, the SCAN listener listening on a SCAN IP address and the SCAN port is contacted on the client's behalf. Because all services on the cluster (including those on nodes that have no SCAN listener running) are registered with the SCAN listeners, the SCAN listener replies with the address of the local listener (the node VIP address) on the least-loaded node (each SCAN listener keeps updated cluster load statistics), and the connection is routed to that node.
What’s VIP used for ?
The local listener runs on the VIP. If the local node fails, the VIP is moved to a surviving node, avoiding the long delay of a TCP timeout.
srvctl status vip -n rad1dev
srvctl config scan
srvctl config scan_listener
Very good article:
- The virtual IP must be associated with the same interface name on every node that is part of the cluster.
- The virtual IP address should not be in use at the time of the installation, because it is an address that Oracle manages internally for the RAC processes.
- The virtual IP address does not require a separate NIC. The VIPs should be registered in DNS.
- The virtual IP (VIP) enables fast connection establishment in failover situations.
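To illustrate how clients use the SCAN, here is a minimal tnsnames.ora sketch. The SCAN name, port and service name are made-up placeholders, not values from this cluster:

```text
DB_SERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = db_service)
    )
  )
```

With a single SCAN name in the HOST field, the client no longer needs one address entry per node; the SCAN listener hands the connection to the least-loaded node's local listener as described above.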
February 06, 2012
This morning we did a switchover between our primary database and one of the two logical standby databases.
The switchover went smoothly. When data began to be replicated from the primary, however, the standby database suddenly started doing successive log switches at a very high frequency. This was a bit puzzling, since the primary database was not very busy. There were loads of wait events from the "SYS" user:
LogMiner builder: DDL oracle@si-sbr-db (MS01)
LogMiner reader: buffer oracle@si-sbr-db (MS00)
Streams apply: waiting for dependency oracle@si-sbr-db (AS05)
Streams apply: waiting for dependency oracle@si-sbr-db (AS04)
The frequently executed update SQL was:
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
We had started replicating a big log table a couple of days earlier; that table had previously been skipped in log apply. The old logical standby coped with the change very well. Did this change cause a problem on the new standby?
I increased the cache size of the Hibernate sequence from the default value of 20 to 10,000:
alter sequence hitlog.hibernate_sequence cache 10000;
The log switch immediately stopped. I was surprised how effective the SQL was.
At the developers' request, I did the switchover again, and it went smoothly. Then I was asked to switch over again (for some reason, of course), and I encountered an error on the current standby database:
SYS@standby SQL> ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY;
ALTER DATABASE PREPARE TO SWITCHOVER TO PRIMARY
*
ERROR at line 1:
ORA-16282: operation not permitted during rolling upgrade
After faffing around on Google for a while, I recalled that I had forgotten to start log apply on the current standby.
Well, the "PREPARE TO SWITCHOVER TO PRIMARY" succeeded after I started log apply.
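For the record, starting log apply on a logical standby is a one-liner (this is the standard command, not a transcript from that day):

```sql
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```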
January 18, 2012
RFS: Assigned to RFS process 18298
RFS: Database mount ID mismatch [0x5f0edd7b:0x5788f47c] (1594809723:1468593276)
RFS: Destination database ID mismatch [0x5bb6deb1:0x5775211a] (1538711217:1467293978)
RFS: Not using real application clusters
I unset fal_client, fal_server, and log_archive_config, but the messages wouldn't go away.
Then I suddenly realized that the old primary was still sending archive logs to this new standby.
SQL> alter system reset log_archive_dest_2; (this is a 10g db)
ERROR at line 1:
ORA-00905: missing keyword
SQL> alter system set log_archive_dest_state_2=defer;
System altered.
That stopped the messages.
February 25, 2011
1. When I ran the command
alter database recover to logical standby pewsos
It just hung there and the log reported “Media Recovery Waiting for thread 1 sequence xxx”
alter system set log_archive_dest_2='service=pewsos LGWR ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=pewsos';
I had to manually copy the log files over for the command to complete.
It hung again later. I got rid of "LGWR ASYNC" from the destination setting and it then worked properly.
2. It failed to restart, reporting:
ORA-16181: SGA specified for Logical Standby is too large
Cause: MAX_SGA is larger than the larger of initialization parameters
It turned out that the MAX_SGA parameter in the logical standby settings was too big.
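A minimal sketch of lowering that parameter with DBMS_LOGSTDBY.APPLY_SET (the value 512 MB is an illustrative guess, not the setting used here; run it on the logical standby with apply stopped):

```sql
-- Reduce the SGA that SQL Apply may use (value in MB; illustrative)
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SGA', 512);
```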
September 30, 2010
You can do that.
The old primary database needs to be shut down.
You need to let your application use the standby before you start. Can you open your physical standby in read-only mode?
1. On primary:
SQL>SELECT SWITCHOVER_STATUS FROM V$DATABASE;
2. On primary:
SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
3. Shutdown and then mount former primary
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
4. Verify that the switchover target is ready to be switched to the primary role.
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;
5. Switch the target physical standby database role to the primary role.
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
6. Open the new primary database.
SQL> ALTER DATABASE OPEN;
7. Start Redo Apply on the new physical standby database.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
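As a sanity check on steps 1 and 4, it is worth noting the SWITCHOVER_STATUS values you should expect to see (per the Data Guard documentation):

```sql
SELECT SWITCHOVER_STATUS FROM V$DATABASE;
-- On the primary (step 1): expect TO STANDBY (or SESSIONS ACTIVE)
-- On the standby (step 4): expect TO PRIMARY (or SESSIONS ACTIVE)
```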
August 02, 2010
I puppetized the Oracle listener.ora, and then suddenly I could not connect to Oracle:
sqlplus system/xxx@db_service
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Solaris-AMD64 Error: 2: No such file or directory
I checked the configuration; everything seemed fine.
After some fruitless effort, I noticed that the value of $ORACLE_HOME in /var/opt/oracle/oratab was /usr/local/oracle, which is a symlink to /package/oracle. I changed the value in /var/opt/oracle/oratab to /package/oracle and reset the environment variable ORACLE_HOME to /package/oracle. After restarting the database, I could connect again.
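The underlying gotcha is that a symlinked ORACLE_HOME and its canonical path compare as different strings. A tiny self-contained sketch of the situation (the paths below are throwaway stand-ins, not the real oratab entries):

```shell
# Build a throwaway layout that mimics the symlink situation:
# package/oracle is the real home, usr_local_oracle points at it.
tmp=$(mktemp -d)
mkdir -p "$tmp/package/oracle"
ln -s "$tmp/package/oracle" "$tmp/usr_local_oracle"

# The two strings differ even though they name the same directory
oratab_home="$tmp/usr_local_oracle"
real_home=$(readlink -f "$oratab_home")
echo "$real_home"
```

Using the canonical path consistently in oratab and in $ORACLE_HOME avoids the mismatch that made the instance look unreachable.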
July 08, 2010
In 10g, you can manually create the password file on the standby.
In 11g, you have to copy the password file from the primary. Otherwise, the following error is reported:
*** 2010-07-08 14:46:17.196 4132 krsh.c
Error 1031 received logging on to the standby
*** 2010-07-08 14:46:17.196 869 krsu.c
Error 1031 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'xxx'
ORA-01031: insufficient privileges
*** 2010-07-08 14:46:17.197 4132 krsh.c
Error 1031 for archive log file 1 to 'xxxx'
*** 2010-07-08 14:46:17.197 2747 krsi.c
krsi_dst_fail: dest:2 err:1031 force:0 blast:1
ORA-01031: insufficient privileges
Destination is specified with ASYNC=61440
You also need to change the name of the password file to match the standby SID.
So the command will be:
scp oracle@primary:/usr/orahome/dbs/orapwPrimarySid /usr/orahome/dbs/orapwStandbySid
The database does not need to be restarted.
December 01, 2009
Some new features that might help developers:
1. Offload reads to Active Data Guard: set STANDBY_MAX_DATA_DELAY = 0. It throws an exception if the standby is not in sync with the primary. SB can catch the exception and read the data again from the primary.
2. Flashback Data Archive can keep historical data for weeks or months. Helpful for audit trails or edit history.
3. Direct NFS
4. Make a table read only
5. Adding a NOT NULL column with a default value is much faster.
Invisible index: test dropping an index by first making it invisible.
bdump, cdump and udump are deprecated, but adump is kept.
Colored SQL: lets AWR capture a statement even if it is not in the top SQL.
OLTP table compression: rows are inserted uncompressed in the routine way (and compressed later in batches).
alter index in_tab_01 rebuild online; No blocking at all
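The invisible-index test mentioned above can be sketched like this (the index name is illustrative):

```sql
-- Make the index invisible: the optimizer stops using it,
-- but it is still maintained on DML.
ALTER INDEX in_tab_01 INVISIBLE;

-- ...run the workload and watch for plan regressions...

-- If nothing regressed, drop it for real; otherwise revert:
DROP INDEX in_tab_01;
-- ALTER INDEX in_tab_01 VISIBLE;
```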
May 28, 2009
When you run DBMS_LOGSTDBY.INSTANTIATE_TABLE, it actually creates a table SYS_IMPORT_TABLE_nn (I ran it as SYS; if you run it as another user, the table name might be different).
After the job runs successfully, Oracle does not delete the table.
It seems Oracle only allows 100 such tables (i.e., nn between 00 and 99). If you run the procedure more than 100 times, you will receive the error ORA-31634: job already exists.
Quite a silly omission from Oracle.
You can verify with: select * from dba_datapump_jobs order by job_name;
The solution is quite simple: just drop the table SYS_IMPORT_TABLE_nn. Before dropping it, make sure the job associated with the table is not running, i.e. its state is "NOT RUNNING" in dba_datapump_jobs.
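A minimal cleanup sketch (the master table name below is illustrative; double-check the job state first):

```sql
-- List leftover Data Pump jobs and their states
SELECT owner_name, job_name, state
  FROM dba_datapump_jobs
 ORDER BY job_name;

-- For a job whose state is NOT RUNNING, drop its master table
DROP TABLE sys.sys_import_table_01;
```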
March 17, 2009
It only gave the following error messages:
krvxerpt: Errors detected in process 52, role LOGICAL STANDBY COORDINATOR.
Tue Mar 17 10:00:05 2009
krvsqn2s: unhandled failure 604
This happens when you set the logical standby to skip some schemas. The workaround is documented in MetaLink note 748208.1:
set echo on
set pagesize 100
spool workaround.log
select * from system.logstdby$skip;
select distinct nvl(esc, 'NULL') from system.logstdby$skip;
select * from system.logstdby$skip where esc is null;
update system.logstdby$skip set esc = '\' where esc is NULL;
-- Following should return no rows (due to update above)
select * from system.logstdby$skip where esc is null;
-- should no longer see any NULL in output
select distinct nvl(esc, 'NULL') from system.logstdby$skip;
-- Capture a snapshot of the final results
select * from system.logstdby$skip;
-- commit changes
commit;