Pages

Friday, January 29, 2016

Change Hostname

To change the hostname permanently, you need to change it in two places:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=newHostName
and (a good idea if you have any applications that need to resolve the IP of the hostname):
vi /etc/hosts 
127.0.0.1 newHostName
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
and then reboot the system.
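The two files above make the change permanent. If you also want the running system to pick up the new name immediately, without waiting for the reboot, the following works on the RHEL/CentOS-era systems this post targets; the hostnamectl variant applies only to newer systemd-based releases:
hostname newHostName
# On systemd-based distributions (e.g. RHEL/CentOS 7 and later) a single command
# updates both the running system and the persistent configuration:
# hostnamectl set-hostname newHostName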

Wednesday, January 27, 2016

Executing Linux / UNIX commands from web page

A web interface is used by almost all routers and by many other sophisticated programs such as Webmin. So why build a web interface or execute commands from a web page? For automation purposes, a web interface is very convenient. Another advantage is that you can access it from any computer, running any operating system, anywhere in the world.
In this first part, you will see how to serve a simple bash (shell) script from a web page. In order to execute commands or a shell script from a web page you need:
  1. CGI support in the Apache / lighttpd web server.
  2. A properly configured web server (I'm assuming you already have one).
You need to store the program in the cgi-bin directory. On Debian Linux the default location for the cgi-bin directory is /usr/lib/cgi-bin; under Red Hat / Fedora it is /var/www/cgi-bin. Use a text editor such as vi to create the first.cgi program:
$ cd /usr/lib/cgi-bin
$ vi first.cgi
first.cgi code listing:
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><head><title>Bash as CGI"
echo "</title></head><body>"
echo "<h1>Hello world</h1>"
echo "Today is $(date)"
echo "</body></html>"
Save and close the file. Set the execute permission on the script:
$ chmod +x first.cgi
Fire up your web browser and test the script, for example by typing the URL http://localhost/cgi-bin/first.cgi or http://your-ip/cgi-bin/first.cgi
You need to send headers; the first three lines are almost the same for every script:
  • #!/bin/bash : The first line tells Linux/UNIX how the file first.cgi should be run, so the /bin/bash interpreter is used to execute the rest of the program.
  • echo "Content-type: text/html" : Sends the HTML content-type header; you must include this line.
  • echo "" : Sends a blank line to end the headers; you must include this line.
The rest is HTML code. Take a close look at the following echo command:
echo "Today is $(date)"
It uses a shell feature called command substitution, which allows the output of a command to replace the command itself:
$(command)
Your bash shell performs the expansion by executing the command and replacing the whole $(command) expression with its output, so the date command is executed and its output is substituted into the string.
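The same expansion can be tried at an interactive prompt, outside the CGI script; a quick illustration (the variable name "files" is arbitrary):
$ files=$(ls /etc | wc -l)
$ echo "/etc contains $files entries"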

Real life example

Here is simple script that collects system information. Create script in cgi-bin directory:
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html><head><title>Bash as CGI"
echo "</title></head><body>"
echo "<h1>General system information for host $(hostname -s)</h1>"
echo ""
echo "<h1>Memory Info</h1>"
echo "<pre> $(free -m) </pre>"
echo "<h1>Disk Info:</h1>"
echo "<pre> $(df -h) </pre>"
echo "<h1>Logged in user</h1>"
echo "<pre> $(w) </pre>"
echo "<center>Information generated on $(date)</center>"
echo "</body></html>"
Save and close the file. Set the execute permission on the script:
$ chmod +x script.cgi
Fire up web browser and test it (http://localhost/cgi-bin/script.cgi):
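If no browser is handy, the same check can be done from the command line with curl (assuming curl is installed):
# fetch the page and print the raw HTML that the script produced
$ curl http://localhost/cgi-bin/script.cgi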

Next time you will see:
  • How to use and place form elements (from POSTs and GETs); a small preview sketch follows this list
  • Cookies in your environment
  • Use of Perl scripting
  • And finally, the use of special tools
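As a small preview of the form-handling part: for GET requests the web server passes the parameters to a CGI script in the QUERY_STRING environment variable. A minimal sketch (the script name form.cgi and the parameter name are only examples):
#!/bin/bash
echo "Content-type: text/html"
echo ""
# For a URL such as http://localhost/cgi-bin/form.cgi?name=foo the server sets
# QUERY_STRING to "name=foo"; a real script should also URL-decode the value.
echo "<html><body><h1>You sent: ${QUERY_STRING}</h1></body></html>"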

Oracle GoldenGate: Using Macros

Introduction
Oracle GoldenGate (OGG) Macros provide functionality for sharing parameters or other run-time configuration settings across multiple components and externalizing complex configuration settings in order to streamline parameter file contents. In this article we shall discuss how to create a Macro Library and access specific Macros via the OGG Group parameter file.
Main Article
A Macro Library is a collection of OGG Macros used to externalize OGG parameters shared across multiple Groups. The library can be a single file containing multiple Macro definitions, or multiple files. Best practice is to create a directory “dirmac” as part of the OGG installation environment to hold the library files. Another best practice is to use the suffix “.mac” on all library files. This way Macro Library files can be recognized without having to open and read one.
Using these best practice tips, the following Macro Library would be stored inside of the file “$OGG_BASE/dirmac/macrolib.mac”.
MACRO #dbconnect
BEGIN
userid gguser, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE, encryptkey default
END;

MACRO #bpsettings
BEGIN
-- The following are "best practice" runtime options which may be
-- used for workload accounting and load balancing purposes.
-- STATOPTIONS RESETREPORTSTATS ensures that process
-- statistics counters will be reset whenever a new report file is
-- created.
STATOPTIONS RESETREPORTSTATS

-- Generate a report every day at 1 minute after midnight.
-- This report will contain the number of operations, by operation
-- type, performed on each table.
REPORT AT 00:01

-- Close the current report file and create a new one daily at 1 
-- minute after midnight. Eleven report files are maintained on disk
-- in the dirrpt directory/folder for each GoldenGate group. The 
-- current report file names are <group name>.rpt. The older reports
-- are <group name>0.rpt through <group name>9.rpt, with the
-- older report files having larger numbers.
REPORTROLLOVER AT 00:01

-- REPORTCOUNT denotes that every 60 seconds the Replicat report file
-- in the dirrpt directory/folder will have a line added to it that reports the
-- total number of records processed since startup, along with the rated
-- number of records processed per second since startup, and the change
-- in rate, or "delta" since the last report.
-- In a production environment, this setting would typically be 1 hour.
REPORTCOUNT EVERY 60 SECONDS, RATE

-- End of "best practices" section
END;

MACRO #funcsmap
PARAMS (#src_table, #target_table)
BEGIN
   -- Map the source table provided in the variable #src_table to the target
   -- table listed in the variable #target_table. There are extra columns in the
   -- target we need to populate, so get the data from either the environment
   -- variable, or the user token data sent over from Extract
   MAP #src_table, TARGET #target_table,
    colmap (usedefaults,
            orders_trans_ts = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
            trans_rec_loc8tr = @STRCAT (@GETENV ("RECORD", "FILESEQNO"), 
                                        @GETENV ("RECORD", "FILERBA")),
            extract_lag_ms = @TOKEN ("TKN-EXTLAG-MSEC"),
            replicat_lag_ms = @GETENV ("LAG", "MSEC"),
            src_db_name = @TOKEN ("TKN-SRC-DBNAME"),
            src_db_version = @TOKEN ("TKN-SRC-DBVERSION"),
            src_txn_csn = @TOKEN ("TKN-TXN-CSN")
     );
END;
Macros are identified by the keyword “Macro” followed by the macro name “#<name>”. The Macro body is contained within the “BEGIN” and “END” statements. The three Macros provided in this example library file are #dbconnect, #bpsettings, and #funcsmap.
  • The #dbconnect Macro provides a centralized location for storing database connection information. The database login password is Blowfish encrypted using the “DEFAULT” GoldenGate encryption key.
  • #bpsettings activates best practices settings for generating hourly and daily activity counts.
  • #funcsmap accepts input parameters and uses them to build a Replicat map statement.

Using the Macro Library

Consider the following Replicat parameter file:
nolist
include ./dirmac/macrolib.mac
list

replicat rfuncb
#dbconnect ()
#bpsettings ()
sourcedefs ./dirdef/mydefs.defs
discardfile ./dirrpt/rfuncb.dsc, purge
#funcsmap (amer.orders, euro.funcs_test)
The include statement loads the Macro Library file ./dirmac/macrolib.mac when the Replicat RFUNCB is started. The parameter nolist tells the Replicat process not to write any of the following text into its report file, while list restores normal output listing.
If list is not included in the parameter file after using nolist, no information will be written to the Group report file.
As the Replicat process reads its parameter file during the startup process, text substitution will occur for each Macro specified. So, the text “#dbconnect ()” will be replaced with the text from the Macro #dbconnect in the Macro Library file. The same will occur for “#bpsettings ()” and “#funcsmap (amer.orders, euro.funcs_test)”.
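One quick way to confirm that the substitution happened as expected is to look at the parameter listing in the Group's report file after startup; a sketch assuming a default installation and the Replicat name RFUNCB used above:
# run from the OGG home; the expanded MAP statement from #funcsmap should appear
# in the parameter listing because "list" was restored after the include
less dirrpt/RFUNCB.rpt
# (the same listing is available inside GGSCI with: VIEW REPORT RFUNCB)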
You may have noticed that the call to the Macro #funcsmap is a bit different from the other two in that it has two arguments. Because the Macro builds the Replicat MAP statement, it requires source and target table definitions. The PARAMS option is used to define input arguments for the Macro. The following applies when PARAMS is specified:
  • Parameter names are not case-sensitive.
  • Do not use quotes, or else the parameter name will be treated as text and ignored.
  • A parameters clause is optional.
  • The maximum size of a PARAMS statement is 9999 bytes and can contain no more than 99 parameters.
  • Every parameter used in a macro must be declared in the PARAMS statement, and when the macro is invoked, the invocation must include a value for each parameter.

Summary

Oracle GoldenGate (OGG) Macros can be an important trick in your toolkit for sharing parameters or other run-time configuration settings across multiple components and externalizing complex configuration settings in order to streamline parameter file contents.

Tuesday, January 26, 2016

How to drop all indexes in a SCHEMA

How to drop all indexes in a SCHEMA
-- Log in as the schema user whose indexes you want to drop
SQL>set pages 10000
SQL>set wrap off
SQL>set heading off
SQL>set feedback off
SQL>spool /u2/scripts/indx.sql
SQL> select ' drop index ' || index_name ||';' from user_indexes;
drop index IND1;
drop index IND2;
drop index IND3;
SQL> spool off;
-- This spools the generated statements to the file above; now run the script as the schema user
SQL> conn scott/tiger
SQL>@/u2/scripts/indx.sql
-- in the particular schema in which you want to drop the indexes.
A similar script can be used for synonyms, especially when you import a schema into a different name than the one it was exported from.
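The generate-and-run cycle can also be wrapped in a small shell script so nothing has to be typed interactively; a sketch assuming sqlplus is on the PATH and reusing the example credentials and spool path from above:
#!/bin/bash
# Generate DROP INDEX statements for every index owned by the connecting user,
# then execute the generated file in a second sqlplus session.
sqlplus -s scott/tiger <<'EOF'
set pages 0 heading off feedback off
spool /u2/scripts/indx.sql
select 'drop index ' || index_name || ';' from user_indexes;
spool off
EOF
sqlplus -s scott/tiger @/u2/scripts/indx.sql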


When logged in as GGADMIN with DBA privileges:

set head off
select ' drop index ' || index_name ||';' from dba_indexes where TABLE_NAME like 'AR_%';
-- Note: dba_indexes covers every schema, so you may also want to filter on OWNER and
-- generate owner-qualified statements, e.g. owner || '.' || index_name.

Example:
Indexes on the following tables need to be dropped:
GGODS.AR_ADJUSTMENTS_ALL;
GGODS.AR_BATCH_SOURCES_ALL;
GGODS.AR_BATCHES_ALL;
GGODS.AR_CASH_RECEIPT_HISTORY_ALL;
GGODS.AR_CASH_RECEIPTS_ALL;
GGODS.AR_DISTRIBUTIONS_ALL;
GGODS.AR_MISC_CASH_DISTRIBUTIONS_ALL;
GGODS.AR_PAYMENT_SCHEDULES_ALL;
GGODS.AR_RECEIPT_CLASSES;
GGODS.AR_RECEIPT_METHODS;
GGODS.AR_RECEIVABLE_APPLICATIONS_ALL;
Run:
SQL> select ' drop index ' || index_name ||';' from dba_indexes where TABLE_NAME like 'AR_%';

The result is:
 drop index I_ARGUMENT1;
 drop index I_ARGUMENT2;
 drop index AR_ADJUSTMENTS_N10;
 drop index AR_ADJUSTMENTS_N11;
 drop index AR_ADJUSTMENTS_N13;
 drop index AR_ADJUSTMENTS_N14;
 drop index AR_ADJUSTMENTS_N15;
 drop index AR_ADJUSTMENTS_N16;
 drop index AR_ADJUSTMENTS_N17;
 drop index AR_ADJUSTMENTS_N2;
 drop index AR_ADJUSTMENTS_N3;
 drop index AR_ADJUSTMENTS_N4;
 drop index AR_ADJUSTMENTS_N5;
 drop index AR_ADJUSTMENTS_N6;
 drop index AR_ADJUSTMENTS_N7;
 drop index AR_ADJUSTMENTS_N8;
 drop index AR_ADJUSTMENTS_N9;
 drop index AR_ADJUSTMENTS_U1;
 drop index AR_ADJUSTMENTS_U2;
 drop index AR_BATCH_SOURCES_U1;
 drop index AR_BATCHES_N1;
 drop index AR_BATCHES_N2;
 drop index AR_BATCHES_N3;
 drop index AR_BATCHES_N4;
 drop index AR_BATCHES_N5;
 drop index AR_BATCHES_N6;
 drop index AR_BATCHES_N7;
 drop index AR_BATCHES_N8;
 drop index AR_BATCHES_N9;
 drop index AR_BATCHES_U1;
 drop index AR_CASH_RECEIPT_HISTORY_N1;
 drop index AR_CASH_RECEIPT_HISTORY_N10;
 drop index AR_CASH_RECEIPT_HISTORY_N2;
 drop index AR_CASH_RECEIPT_HISTORY_N3;
 drop index AR_CASH_RECEIPT_HISTORY_N4;
 drop index AR_CASH_RECEIPT_HISTORY_N6;
 drop index AR_CASH_RECEIPT_HISTORY_N8;
 drop index AR_CASH_RECEIPT_HISTORY_N9;
 drop index AR_CASH_RECEIPT_HISTORY_U1;
 drop index AR_CASH_RECEIPTS_CE_N1;
 drop index AR_CASH_RECEIPTS_N1;
 drop index AR_CASH_RECEIPTS_N10;
 drop index AR_CASH_RECEIPTS_N11;
 drop index AR_CASH_RECEIPTS_N12;
 drop index AR_CASH_RECEIPTS_N13;
 drop index AR_CASH_RECEIPTS_N14;
 drop index AR_CASH_RECEIPTS_N15;
 drop index AR_CASH_RECEIPTS_N2;
 drop index AR_CASH_RECEIPTS_N3;
 drop index AR_CASH_RECEIPTS_N5;
 drop index AR_CASH_RECEIPTS_N6;
 drop index AR_CASH_RECEIPTS_N7;
 drop index AR_CASH_RECEIPTS_N8;
 drop index AR_CASH_RECEIPTS_N9;
 drop index AR_CASH_RECEIPTS_U1;
 drop index AR_DISTRIBUTIONS_N1;
 drop index AR_DISTRIBUTIONS_N2;
 drop index AR_DISTRIBUTIONS_N3;
 drop index AR_DISTRIBUTIONS_U1;
 drop index AR_MISC_CASH_DISTRIBUTIONS_N1;
 drop index AR_MISC_CASH_DISTRIBUTIONS_N2;
 drop index AR_MISC_CASH_DISTRIBUTIONS_N3;
 drop index AR_MISC_CASH_DISTRIBUTIONS_N4;
 drop index AR_MISC_CASH_DISTRIBUTIONS_N5;
 drop index AR_MISC_CASH_DISTRIBUTIONS_U1;
 drop index AR_PAYMENT_SCHEDULES_N1;
 drop index AR_PAYMENT_SCHEDULES_N10;
 drop index AR_PAYMENT_SCHEDULES_N11;
 drop index AR_PAYMENT_SCHEDULES_N12;
 drop index AR_PAYMENT_SCHEDULES_N13;
 drop index AR_PAYMENT_SCHEDULES_N14;
 drop index AR_PAYMENT_SCHEDULES_N15;
 drop index AR_PAYMENT_SCHEDULES_N16;
 drop index AR_PAYMENT_SCHEDULES_N17;
 drop index AR_PAYMENT_SCHEDULES_N18;
 drop index AR_PAYMENT_SCHEDULES_N2;
 drop index AR_PAYMENT_SCHEDULES_N3;
 drop index AR_PAYMENT_SCHEDULES_N4;
 drop index AR_PAYMENT_SCHEDULES_N5;
 drop index AR_PAYMENT_SCHEDULES_N6;
 drop index AR_PAYMENT_SCHEDULES_N7;
 drop index AR_PAYMENT_SCHEDULES_N9;
 drop index AR_PAYMENT_SCHEDULES_V1;
 drop index AR_PAYMENT_SCHEDULES_U1;
 drop index AR_PAYMENT_SCHEDULES_U2;
 drop index AR_PAYMENT_SCHEDULES_U3;
 drop index AR_RECEIPT_CLASSES_U1;
 drop index AR_RECEIPT_METHODS_U1;
 drop index AR_RECEIPT_METHODS_U2;
 drop index AR_RECEIVABLE_APPLICATIONS_N1;
 drop index AR_RECEIVABLE_APPLICATIONS_N10;
 drop index AR_RECEIVABLE_APPLICATIONS_N11;
 drop index AR_RECEIVABLE_APPLICATIONS_N12;
 drop index AR_RECEIVABLE_APPLICATIONS_N13;
 drop index AR_RECEIVABLE_APPLICATIONS_N14;
 drop index AR_RECEIVABLE_APPLICATIONS_N15;
 drop index AR_RECEIVABLE_APPLICATIONS_N16;
 drop index AR_RECEIVABLE_APPLICATIONS_N2;
 drop index AR_RECEIVABLE_APPLICATIONS_N3;
 drop index AR_RECEIVABLE_APPLICATIONS_N4;
 drop index AR_RECEIVABLE_APPLICATIONS_N5;
 drop index AR_RECEIVABLE_APPLICATIONS_N6;
 drop index AR_RECEIVABLE_APPLICATIONS_N7;
 drop index AR_RECEIVABLE_APPLICATIONS_N8;
 drop index AR_RECEIVABLE_APPLICATIONS_N9;
 drop index AR_RECEIVABLE_APPLICATIONS_U1;
 drop index AR_RECEIVABLE_APPLICATIONS_V1;
107 rows selected.

Monday, January 25, 2016

GoldenGate Stats

STATS EXTRACT ILFCOAP, TOTAL, HOURLY, REPORTRATE MIN, RESET, REPORTFETCH



Real-time:
STATS EXTRACT EXT_1, TOTAL, LATEST, REPORTRATE MIN, RESET, REPORTFETCH

Active-Active GoldenGate Part1-Configuration

Install GG for Oracle on Linux:
1. Unzip the installation media.
2. Move the extracted files to /u01/app/ggs.
3. vi ~/.bash_profile
 export PATH=$PATH:/u01/app/ggs
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/u01/app/ggs
 export GG_HOME=/u01/app/ggs
4. Prepare the database:
create tablespace ggadmin datafile '/u01/oradata/centos62/ggadmin01.dbf' size 200m;
create user ggadmin identified by ggadmin default tablespace ggadmin temporary tablespace temp;
alter user ggadmin default tablespace ggadmin;
grant DBA to ggadmin;
-- grant connect,resource to ggadmin;
-- grant select any dictionary, select any table to ggadmin;
-- grant create table to ggadmin;
-- grant flashback any table to ggadmin;
-- grant execute on dbms_flashback to ggadmin;
-- grant execute on utl_file to ggadmin;
SQL> SELECT supplemental_log_data_min, force_logging FROM v$database;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE FORCE LOGGING;
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=true;
5. GGSCI 1> DBLOGIN USERID ggadmin, PASSWORD ggadmin
6. EDIT PARAMS MGR
PORT 7809
USERID ggadmin, PASSWORD ggadmin
PURGEOLDEXTRACTS /u01/app/ggs/dirdat/ex, USECHECKPOINTS
7. checkpoint table
GGSCI (devu007) 2> EDIT PARAMS ./GLOBALS
GGSCHEMA GGADMIN
CHECKPOINTTABLE GGADMIN.CHKPTAB
GGSCI (devu007) 4> DBLOGIN USERID ggadmin, PASSWORD ggadmin
Successfully logged into database.
GGSCI (devu007) 6> ADD CHECKPOINTTABLE GGADMIN.CHKPTAB

8. Use user scott/scott as test schema:
alter user scott identified by scott account unlock;
9. CREATE TABLE cust
( customer_id number(10) NOT NULL,
  customer_name varchar2(50),
  city varchar2(50),
  CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);
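One step the list above does not show, but which is normally required before capture will work, is enabling table-level supplemental logging on the test table. A sketch via a GGSCI here-document (adjust the login and the OGG home to your environment):
# enable table-level supplemental logging so before images are available for CDR
cd /u01/app/ggs
./ggsci <<'EOF'
DBLOGIN USERID ggadmin, PASSWORD ggadmin
ADD TRANDATA scott.cust
EOF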
10. Extract EXT_1
ADD EXTRACT ext_1, INTEGRATED TRANLOG, BEGIN now
ADD EXTTRAIL dirdat/e1, EXTRACT ext_1
REGISTER EXTRACT ext_1, DATABASE

-- Identify the Extract group:
EXTRACT ext_1
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Specify the local trail that this Extract writes to
-- and the encryption algorithm:
-- ENCRYPTTRAIL algorithm
EXTTRAIL dirdat/e1
-- Exclude Replicat transactions. Uncomment ONE of the following:
-- DB2 z/OS, DB2 LUW, DB2 IBM i, Oracle (classic capture), and
-- Sybase:
-- TRANLOGOPTIONS EXCLUDEUSER Replicat_user
-- Oracle (classic capture) alternative to EXCLUDEUSER:
-- EXCLUDEUSERID Oracle_uid
-- Oracle integrated capture:
TRANLOGOPTIONS EXCLUDETAG 02
-- SQL Server and Sybase:
-- TRANLOGOPTIONS EXCLUDETRANS transaction_name
-- SQL/MX:
-- TRANLOGOPTIONS FILTERTABLE checkpoint_table_name
-- Teradata:
-- SQLEXEC 'SET SESSION OVERRIDE REPLICATION ON;'
-- SQLEXEC 'COMMIT;'
-- Specify API commands if Teradata:
-- VAM library name, PARAMS ('param' [, 'param'] [, ...])
-- Capture before images for conflict resolution:
-- ------------------ Important for CDR
-- GETBEFORECOLS(ON UPDATE ALL)
-- GETBEFORECOLS(ON DELETE ALL)
-- Log all scheduling columns for CDR and if using integrated Replicat
LOGALLSUPCOLS
-- Specify tables to be captured and (optional) columns to fetch:
TABLE scott.cust;

11. Pump Pump_1
ADD EXTRACT pump_1, EXTTRAILSOURCE dirdat/e1, BEGIN now
ADD RMTTRAIL dirdat/r1, EXTRACT pump_1
-- Identify the data pump group:
EXTRACT pump_1
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Decrypt the data only if the data pump must process it.
-- DECRYPTTRAIL
-- Specify the name or IP address of the secondary system
-- and optional encryption of data over TCP/IP:
RMTHOST 192.168.137.63, MGRPORT 7809
-- Specify remote trail and encryption algorithm on secondary system:
-- ENCRYPTTRAIL algorithm
RMTTRAIL dirdat/r1
-- Pass data through without mapping, filtering, conversion:
PASSTHRU
-- Specify tables to be captured:
TABLE scott.cust;

12. Replicat 1
ADD REPLICAT rep_1, INTEGRATED , EXTTRAIL dirdat/r1, BEGIN now
-- Identify the Replicat group:
REPLICAT rep_1
-- State that source and target definitions are identical:
ASSUMETARGETDEFS
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Specify error handling rules:
-- REPERROR (error, response)
-- Set redo tag for Oracle only replicat via settag
-- Default is 00.
DBOPTIONS SETTAG 01
-- Specify tables for delivery, threads if coordinated Replicat
-- and conflict-resolution:
MAP scott.cust, TARGET scott.cust;
-- Specify mapping of exceptions to exceptions table:
-- MAP [container.|catalog.]owner.*, TARGET owner.exceptions, EXCEPTIONSONLY;

13.  Extract EXT_2
ADD EXTRACT ext_2, INTEGRATED TRANLOG, BEGIN now
ADD EXTTRAIL dirdat/e2, EXTRACT ext_2
REGISTER EXTRACT ext_2, DATABASE
-- Identify the Extract group:
EXTRACT ext_2
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Specify the local trail that this Extract writes to
-- and the encryption algorithm:
-- ENCRYPTTRAIL algorithm
EXTTRAIL dirdat/e2
-- Exclude Replicat transactions. Uncomment ONE of the following:
-- DB2 z/OS, DB2 LUW, DB2 IBM i, Oracle, and Sybase:
-- TRANLOGOPTIONS EXCLUDEUSER Replicat_user
-- Oracle alternative to EXCLUDEUSER:
-- EXCLUDEUSERID Oracle_uid
-- Oracle integrated capture:
TRANLOGOPTIONS EXCLUDETAG 01
-- SQL Server and Sybase:
-- TRANLOGOPTIONS EXCLUDETRANS transaction_name
-- SQL/MX:
-- TRANLOGOPTIONS FILTERTABLE checkpoint_table_name
-- Teradata:
-- SQLEXEC 'SET SESSION OVERRIDE REPLICATION ON;'
-- SQLEXEC 'COMMIT;'
-- Oracle:
-- TRACETABLE trace_table_name
-- Log all scheduling columns for CDR and if using integrated Replicat
LOGALLSUPCOLS
-- Capture before images for conflict resolution:
-- GETBEFORECOLS(ON UPDATE ALL, ON DELETE ALL)
-- Specify tables to be captured and (optional) columns to fetch:
TABLE scott.cust;

14. Pump pump_2
ADD EXTRACT pump_2, EXTTRAILSOURCE dirdat/e2, BEGIN now
ADD RMTTRAIL dirdat/r2, EXTRACT pump_2
-- Identify the data pump group:
EXTRACT pump_2
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Decrypt the data only if the data pump must process it.
-- DECRYPTTRAIL
-- Specify the name or IP address of the primary system
-- and optional encryption of data over TCP/IP:
RMTHOST 192.168.137.62, MGRPORT 7809
-- Specify the remote trail and encryption algorithm on the primary system:
-- ENCRYPTTRAIL algorithm
RMTTRAIL dirdat/r2
-- Pass data through without mapping, filtering, conversion:
PASSTHRU
-- Specify tables to be captured:
TABLE scott.cust;

15. Replicat rep_2
ADD REPLICAT rep_2, INTEGRATED , EXTTRAIL dirdat/r2, BEGIN now
-- Identify the Replicat group:
REPLICAT rep_2
-- State that source and target definitions are identical:
ASSUMETARGETDEFS
-- Specify database login information as needed for the database:
USERID ggadmin, PASSWORD ggadmin
-- Specify error handling rules:
-- REPERROR (error, response)
DBOPTIONS SETTAG 02
-- Specify tables for delivery, threads if coordinated Replicat
-- and conflict-resolution:
MAP scott.cust, TARGET scott.cust;
-- Specify mapping of exceptions to exceptions table:
-- MAP [container.|catalog.]owner.*, TARGET owner.exceptions, EXCEPTIONSONLY;
16. Test:
select * from cust;
insert into cust values (1, 'test1', 'city1');
insert into cust values (2, 'test2', 'city2');
insert into cust values (3, 'test3', 'insert on Primary, not commit');
insert into cust values (4, 'test4', 'insert on Secondary, not commit');
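After committing the test transactions on each side, you can check that both directions are flowing. A sketch using a GGSCI here-document, to be run from the OGG home on each server (use rep_2 on the second server):
cd /u01/app/ggs
./ggsci <<'EOF'
INFO ALL
STATS REPLICAT rep_1, TOTAL
EOF
# and verify the replicated rows in the database with: select * from scott.cust;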

Sunday, January 24, 2016

Identifies the position in the transaction log of the data: AuditPos

If source is Oracle:

AuditPos
Identifies the position in the transaction log of the data.

To retrieve AuditPos as a user token, use a TOKENS clause on the TABLE statement in the Extract parameter file:

TOKENS ( RBA = @GETENV ("GGHEADER","LOGPOSITION"));





@GETENV ("TRANSACTION", "CSN")
Returns the same value on the source and on the target, because the value is also stored in the trail's token area. On the target it is not the target-side commit number!

Tuesday, January 12, 2016

WebLogic Server can't start after upgrading Java from 1.7 to 1.8

user_projects\domains\MedRecDomain\bin\setDomainEnv.cmd
 
Edit the setDomainEnv file (setDomainEnv.cmd on Windows, setDomainEnv.sh on Linux), change the JAVA_HOME value so it points to the new JDK 1.8 installation, and restart the server.
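On Linux the equivalent file is setDomainEnv.sh, and the change amounts to pointing the existing JAVA_HOME assignment at the new JDK. A rough sketch (the JDK path below is hypothetical and must match your actual installation):
# in user_projects/domains/MedRecDomain/bin/setDomainEnv.sh, update the existing
# JAVA_HOME assignment so it points at the JDK 1.8 installation, for example:
JAVA_HOME="/usr/java/jdk1.8.0_66"
export JAVA_HOME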

NFS Tricks in OVM

http://www.oracle.com/technetwork/server-storage/vm/ovm3-demo-vbox-1680215.pdf


STARTING NFS QUOTAS: CANNOT REGISTER SERVICE: RPC: UNABLE TO RECEIVE; ERRNO = CONNECTION REFUSED

While trying to set up NFS on an OVM 3.2 configuration on OEL 5.9, I followed the documentation to the letter, but when the time came to start the NFS daemon I kept getting the error above.
[root@ovm /]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp).
[FAILED]
Starting NFS daemon: [FAILED]
After a little googling, it turns out that RPCBind may not be running.
[root@ovm /]# rpcinfo -p
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
Perhaps I needed to update my NFS Utility packages?
[root@ovm /]# yum install nfs-utils
Loaded plugins: rhnplugin, security
This system is not registered with ULN.
You can use up2date --register to register.
ULN support will be disabled.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.0.9-70.el5 set to be updated
--> Processing Dependency: initscripts >= 8.45.43 for package: nfs-utils
--> Running transaction check
---> Package initscripts.x86_64 0:8.45.44-3.0.1.el5 set to be updated
--> Finished Dependency Resolution
 
Dependencies Resolved
 
=============================================================================================================================================================
Package Arch Version Repository Size
=============================================================================================================================================================
Updating:
nfs-utils x86_64 1:1.0.9-70.el5 el5_latest 409 k
Updating for dependencies:
initscripts x86_64 8.45.44-3.0.1.el5 el5_latest 1.6 M
 
Transaction Summary
=============================================================================================================================================================
Install 0 Package(s)
Upgrade 2 Package(s)
 
Total download size: 2.0 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): nfs-utils-1.0.9-70.el5.x86_64.rpm | 409 kB 00:00
(2/2): initscripts-8.45.44-3.0.1.el5.x86_64.rpm | 1.6 MB 00:01
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 598 kB/s | 2.0 MB 00:03
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Updating : initscripts 1/4
warning: /etc/sysctl.conf created as /etc/sysctl.conf.rpmnew
Updating : nfs-utils 2/4
Cleanup : nfs-utils 3/4
Cleanup : initscripts 4/4
 
Updated:
nfs-utils.x86_64 1:1.0.9-70.el5
 
Dependency Updated:
initscripts.x86_64 0:8.45.44-3.0.1.el5
 
Complete!
Then I tried the RPC info command again, but no luck!
[root@ovm /]# rpcinfo -p
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
Portmap? Oh, that would explain a lot, since NFS requires the port mapper service to be running.
[root@ovm /]# chkconfig portmap on
[root@ovm yum.repos.d]# service portmap start
Try RPC info again. Aha, that did it!
[root@ovm yum.repos.d]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
Finally, restart the NFS Service and set it to autostart.
[root@ovm /]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
[root@ovm /]# chkconfig nfs on
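To make sure everything survives a reboot and that the exports are actually visible, a quick check (showmount only lists something once /etc/exports has entries):
# confirm both services are set to start on boot
[root@ovm /]# chkconfig --list | grep -E 'nfs|portmap'
# confirm the NFS-related RPC programs are now registered
[root@ovm /]# rpcinfo -p | grep -E 'nfs|mountd'
# list the exported directories, if any are configured in /etc/exports
[root@ovm /]# showmount -e localhost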
Hope this was helpful!
Cheers!