Friday, January 31, 2014

Opatch: ApplySession failed during prerequisite checks: Prerequisite check "CheckApplicable" failed.



Trying to apply a patch:

/opt/oracle/fmw11_1_1_5/oracle_common/OPatch/opatch apply -jdk /opt/oracle/java/ -invPtrLoc /opt/oracle/fmw11_1_1_5/oraInst.loc
Invoking OPatch 11.1.0.8.2



Oracle Interim Patch Installer version 11.1.0.8.2

Copyright (c) 2010, Oracle Corporation.  All rights reserved.





Oracle Home       : /opt/oracle/fmw11_1_1_5/oracle_common

Central Inventory : /opt/oracle/orainventory

   from           : /opt/oracle/fmw11_1_1_5/oraInst.loc

OPatch version    : 11.1.0.8.2

OUI version       : 11.1.0.9.0

OUI location      : /opt/oracle/fmw11_1_1_5/oracle_common/oui

Log file location : /opt/oracle/fmw11_1_1_5/oracle_common/cfgtoollogs/opatch/opatch2014-01-31_11-44-50AM.log



Patch history file: /opt/oracle/fmw11_1_1_5/oracle_common/cfgtoollogs/opatch/opatch_history.txt





OPatch detects the Middleware Home as "/opt/oracle/fmw11_1_1_5"



ApplySession applying interim patch '17279791' to OH '/opt/oracle/fmw11_1_1_5/oracle_common'



Running prerequisite checks...

Prerequisite check "CheckApplicable" failed.

The details are:

Patch 17279791: Required component(s) missing : [ oracle.osb.top, 11.1.1.5.0 ]

ApplySession failed during prerequisite checks: Prerequisite check "CheckApplicable" failed.

System intact, OPatch will not attempt to restore the system



OPatch failed with error code 74




The patch requires the component oracle.osb.top, so OPatch must be pointed at the OSB home, not at oracle_common. Remember, you should ALWAYS set these variables:
export MW_HOME=/opt/oracle/fmw11_1_1_5/
export ORACLE_HOME=/opt/oracle/fmw11_1_1_5/osb
export JDK_HOME=/opt/oracle/java

(thanks to Monica Crugnola for reminding me of this)
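A sketch of the fix, using the same paths as above; the re-run command at the end is my reconstruction (the OPatch location under the OSB home may differ in your install):

```shell
# Environment for patching the OSB home (paths from this post; adjust to your install)
export MW_HOME=/opt/oracle/fmw11_1_1_5/
export ORACLE_HOME=/opt/oracle/fmw11_1_1_5/osb   # the OSB home, NOT oracle_common
export JDK_HOME=/opt/oracle/java

# then re-run the apply from the OSB home's OPatch, e.g.:
# $ORACLE_HOME/OPatch/opatch apply -jdk $JDK_HOME -invPtrLoc $MW_HOME/oraInst.loc
```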



Dedicated to all developers



Thursday, January 30, 2014

Outage of JavaMonAmour.org

I deleted the blog by mistake... luckily you have 90 days on blogger.com to recover a blog. Unfortunately the link between javamonamour.org and javamonamour.blogspot.ch was lost, and it was a real challenge to restore it.

Go to Settings, Basic, and add a domain from which to forward the traffic. It MUST begin with www. (this is where I lost a LOT of time... I thought http://javamonamour.org would be OK, but you need to provide http://www.javamonamour.org).

Then you must log into the Google Apps console, export the DNS zone file, log into the GoDaddy control panel, and import the DNS zone file.

For some reason direct authentication between blogger.com and GoDaddy was not working, even after I set the TXT authentication record in GoDaddy. So I had to do everything manually.

Tuesday, January 28, 2014

difference between java and javaw

create this file JavaConsole.java:

import java.io.Console;

public class JavaConsole {
 public static void main(String[] args) {
  Console console = System.console();
  System.out.println("the console is " + console);
 }
}


compile it:
javac JavaConsole.java
run it:
java JavaConsole > outputjava.txt
javaw JavaConsole > outputjavaw.txt

In the second case, the Console object is null, because javaw does not associate a Console with the process for stdin and stdout. We can't even print to a console output; that's why I had to redirect to a file in order to observe the result of the println.

When should I use javaw? Well, frankly, I guess when you don't want a console output, like in most server-side applications. Why does WebLogic use java and not javaw? Good question... in fact, you always want to redirect the stdout of WebLogic to a file (hoping that nothing is written to it...)...


Monday, January 27, 2014

Need a X-Server? Say welcome to Mobaxterm, and farewell to XMing

http://mobaxterm.mobatek.net/download.html
I was really fed up with XMing hanging and crashing... MobaXterm is a lot sleeker and loaded with options... I might even consider going PRO and using it instead of good old stinky PuTTY...


jvisualvm: java.lang.OutOfMemoryError: PermGen space

I get an OOM while profiling a JVM.
If you launch
jvisualvm -J-XX:MaxPermSize=512m
you can verify with ps that the option is passed through to the VisualVM JVM:
ps -ef | grep visual
soa      16602 15386  0 18:06 pts/0    00:00:00 /bin/bash /opt/oracle/java/bin/../lib/visualvm//platform/lib/nbexec --jdkhome /opt/oracle/java/bin/.. --branding visualvm --clusters /opt/oracle/java/bin/../lib/visualvm//visualvm:/opt/oracle/java/bin/../lib/visualvm//profiler: --userdir /home/soa/.visualvm/7 -J-client -J-Xms24m -J-Xmx256m -J-Dsun.jvmstat.perdata.syncWaitMs=10000 -J-Dsun.java2d.noddraw=true -J-Dsun.java2d.d3d=false -J-XX:MaxPermSize=512m

soa      16702 16602  8 18:06 pts/0    00:00:07 /opt/oracle/java/bin/java -Djdk.home=/opt/oracle/java -classpath /opt/oracle/java/lib/visualvm/platform/lib/boot.jar:/opt/oracle/java/lib/visualvm/platform/lib/org-openide-modules.jar:/opt/oracle/java/lib/visualvm/platform/lib/org-openide-util.jar:/opt/oracle/java/lib/visualvm/platform/lib/org-openide-util-lookup.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/boot_ja.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/boot_zh_CN.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-modules_ja.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-modules_zh_CN.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-util_ja.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-util-lookup_ja.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-util-lookup_zh_CN.jar:/opt/oracle/java/lib/visualvm/platform/lib/locale/org-openide-util_zh_CN.jar:/opt/oracle/java/lib/dt.jar:/opt/oracle/java/lib/tools.jar -Dnetbeans.dirs=/opt/oracle/java/bin/../lib/visualvm//visualvm:/opt/oracle/java/bin/../lib/visualvm//profiler: -Dnetbeans.home=/opt/oracle/java/lib/visualvm/platform -client -Xms24m -Xmx256m -Dsun.jvmstat.perdata.syncWaitMs=10000 -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/soa/.visualvm/7/var/log/heapdump.hprof org.netbeans.Main --userdir /home/soa/.visualvm/7 --branding visualvm
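To make the larger PermGen permanent, the option can go in VisualVM's configuration file; a sketch, assuming the standalone VisualVM layout (for the JDK-bundled jvisualvm the file lives under lib/visualvm/etc, and the exact path may differ per version):

```shell
# etc/visualvm.conf, in the VisualVM installation directory
visualvm_default_options="-J-client -J-Xms24m -J-Xmx512m -J-XX:MaxPermSize=512m"
```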


If your VisualVM hangs on "computing description", just restart the monitored JVMs (see http://stackoverflow.com/questions/6222210/visualvm-hanging-on-startup-computing-description )
Unfortunately, VisualVM has a limit of 64K profiled methods. There are options to limit to a subset of methods though...

Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match

I got this message logging in as user1 with X11 forwarding in PuTTY, then doing sudo su - user2, then running JConsole.
Solution:
login as user1
xauth list
(you get here a list of entries, say line1, line2...)
sudo su - user2
xauth add line1
xauth add line2
It works!
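The manual copy of the entries can be scripted; a sketch, assuming user2 can read a file dropped in /tmp (the temp-file handoff is my own convention, not from the original commands):

```shell
# as user1: dump the X authority entries to a file user2 can read
xauth list > /tmp/xauth-entries.txt

# as user2 (after sudo su - user2): re-add every entry, then clean up
while read entry; do xauth add $entry; done < /tmp/xauth-entries.txt
rm /tmp/xauth-entries.txt
```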

Sunday, January 26, 2014

How to demo Garbage Collection, JConsole and VisualVM

First, create with JDK 7 this Java Project in Eclipse:
package com.pierre.gctests;

import java.lang.management.ManagementFactory;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;

public class GCTestMain {

 private static void init() throws MalformedObjectNameException, InstanceAlreadyExistsException, MBeanRegistrationException, NotCompliantMBeanException {
  MBeanServer mbs = null;
  mbs = ManagementFactory.getPlatformMBeanServer();
  GCTestAgent agent = new GCTestAgent();
  ObjectName agentName;
  agentName = new ObjectName("PVTests:name=GCTestAgent");
  mbs.registerMBean(agent, agentName);
 }
 
 public static void main(String[] args) throws Exception {
  init();
  for (;;) {
   Thread.sleep(1000);
  }
 }
}
package com.pierre.gctests;

public interface GCTestAgentMBean {
 void newThread(String threadName);
 void newCollectableObject(int size);
 void newLeakedObject(int size);
 void clearLeaked();
 void cpuIntensiveOperation(int iterations);
}


package com.pierre.gctests;

import java.util.ArrayList;
import java.util.Date;

public class GCTestAgent implements GCTestAgentMBean, Runnable {
 ArrayList<Object> leakingMap = new ArrayList<Object>(); 
 volatile double val = 10;

 @Override
 public void newThread(String threadName) {
  Thread newThread = new Thread(this);
  newThread.setName(threadName);
  newThread.start();
 }

 @Override
 public void newCollectableObject(int size) {
  createObject(size);
 }

 private Object createObject(int size) {
  ArrayList<String> list = new ArrayList<String>();
  for (int i = 0; i < size; i++) {
   list.add( (new Date()).toString() + " " +  i);
  }
  return list;
 }

 @Override
 public void newLeakedObject(int size) {
  leakingMap.add(createObject(size));
 }

 @Override
 public void run() {
  for (;;) {
   System.out.println(Thread.currentThread().getName());
   try {
    Thread.sleep(10000);
   } catch (InterruptedException e) {
    e.printStackTrace();
   }
  }
 }

 @Override
 public void clearLeaked() {
  leakingMap.clear();
 }

 @Override
 public void cpuIntensiveOperation(int iterations) {
  int[] myArrayToBeSorted = new int[] {4,2,6,7,2,1,6};
  for (int i = 0; i < iterations; i++) {
   for (int j = 0; j < myArrayToBeSorted.length - 1; j++) {
    myArrayToBeSorted[j] = myArrayToBeSorted[j] + myArrayToBeSorted[j + 1];
   }
  }
 }

}






Then, install the VisualVM GC plugin. Run the GCTestMain main, using these JVM arguments: -verbose:gc -Xms256m -Xmx256m. Then connect with JConsole and with VisualVM (I downloaded the one from the main website... beware that there can be issues on connection when running on Windows if the username has uppercase characters (Windows sucks, don't forget)).



Friday, January 24, 2014

Getting rid of Windows line feeds

This script finds infected files:
#!/bin/sh

PATH_TO_SCAN=$1

if [ "${PATH_TO_SCAN}" == "" ]; then
    PATH_TO_SCAN="."
fi

FILELIST=$(find "${PATH_TO_SCAN}" |egrep '\.(java|xml|js|groovy|sql|csv|txt|py)$')

IFS_BCK=$IFS
IFS="
"

for FILEPATH in ${FILELIST}; do
    cat -v "${FILEPATH}" | grep -I -q '\^M'
    if [ $? -eq 0 ]; then echo $FILEPATH; fi
done
IFS=$IFS_BCK


In Eclipse: Window > Preferences, then search for "encoding" and set the line delimiter to Unix and the encoding to UTF-8

To convert infected files, in Eclipse do "File > Convert Line Delimiters To..."

Otherwise, in Notepad++ use Edit > EOL Conversion
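Outside an IDE you can also convert from the shell; a minimal sketch using tr (dos2unix, where installed, does the same job, and GNU sed can do it in place with sed -i 's/\r$//' file):

```shell
# Create a sample file with Windows (CRLF) line endings
printf 'line one\r\nline two\r\n' > /tmp/demo_crlf.txt

# Detect: list the file if it contains a carriage return
CR=$(printf '\r')
grep -l "$CR" /tmp/demo_crlf.txt

# Convert: strip every carriage return (same effect as dos2unix)
tr -d '\r' < /tmp/demo_crlf.txt > /tmp/demo_unix.txt
```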



Wednesday, January 22, 2014

Poll result: Automation and Puppet

In my organization we automate configuration of our servers..

It looks like Puppet is still not that popular... infrastructure is still largely configured with ad-hoc, home-grown methodologies.



Tuesday, January 21, 2014

Garbage Collection: testing the GC plugin for VisualVM

Watch the excellent tutorial

Install the VisualVM GC plugin (I had to download it locally to install it, I could not install it directly...) (beware: you should use the VisualVM for Java 7, otherwise the plugin installation will fail)

I run this test code:

package com.pierre.gctests;

import java.util.ArrayList;

public class PVOOM {
 public static void main(String[] args) throws InterruptedException {
  ArrayList<Animal> al = new ArrayList<Animal>();
  for (int i = 0; i < 1000000; i++) {
   for (int j = 0; j< 1000; j++) {  
    al.add(new Animal(Integer.toString(i)));
   }
   Thread.sleep(1);
   
  }
 }
}

class Animal {
 public Animal(String name) {
  super();
  this.name = name;
 }

 String name;
 
}



I run it with Java 7 and watch the GC activity in the Visual GC plugin.



Monday, January 20, 2014

zxJDBC, invoking stored procedures passing parameters (zxjdbc callproc)

For the 2 users of zxJDBC in the world:
This works, the stored procedure is defined as:
create or replace 
PROCEDURE PVTESTPROC AS 
BEGIN
  INSERT INTO PVTEST (COLUMN1) VALUES ('mamma');
  commit;
END PVTESTPROC;



and the Python code to invoke it:

#grab somehow a connection object (conn) for the DB
....
#then invoke stored procedure
procedure='PVTESTPROC'
c  = conn.cursor()
params = [None]
c.callproc(procedure, params)


This fails, I have simply added a parameter:

create or replace 
PROCEDURE PVTESTPROC
(
  PARAM1 IN VARCHAR2  
) AS 
BEGIN
  INSERT INTO PVTEST (COLUMN1) VALUES (PARAM1);
  commit;
END PVTESTPROC;


and the Jython code is the same as before, but with params = ['PLUTO']. This fails with "PLS-00306: wrong number or types of arguments in call to 'PVTESTPROC'"

See also the same problem reported here: http://code.activestate.com/lists/python-list/291477/ . Frankly, I give up... I think there is definitely some problem with such an old version of Python.

Saturday, January 18, 2014

Never, please NEVER buy an iPhone

I wanted to install a FREE App to read cbz files - CloudReaders.
Well, blood-sucking Apple forces you to LOG IN to the App Store and GIVE THEM your credit card details, including the security code. THIS IS RIDICULOUS.

If you don't provide your Credit Card, you are not allowed to install the FREE App.

Buy an iPhone and you'll be a slave. Stay away. For your good and the good of your family.



MACBOOK PRO ? No, thanks

I gave a MacBook Pro as a gift to a friend in late 2012.

In 1 year, the motherboard broke twice and the battery died.

The first time they replaced the motherboard under warranty, but now the warranty has expired and they are asking a ridiculously high amount of money to service it.

Needless to say, after this traumatic experience I will never buy a Mac. I am told it's a superior machine, much more productive than anything else, but I hate having my finances looted like that.



Wednesday, January 8, 2014

osb, logback, logstash, grok and elasticsearch, putting it all together

the Java class that I invoke through a custom XPath function to trace events:
  • "interface" is like "service/operation"
  • "eventtype" is like "FileConsumed", "JMSMessageConsumed", "WSInvoked"
  • TechnicalMessageID and BusinessID are unique identifiers for the request/payload
  • ServerName is the WebLogic managed server
  • Priority is P1... P5 in case of error
  • Payload is the actual request... it's traced only if level is "DEBUG"
  • Fault is the error description
package com.acme.osb.logging;

import org.apache.xmlbeans.XmlObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Used by OSB to report an Event
 * 
 * @author NNVernetPI
 * 
 */

public class MessageTrackerSLF4J {

 public static String logMessage(String technicalMessageid,
   String businessId, String eventType, String interfaceName,
   XmlObject payload) {

  Logger projectLogger = LoggerFactory.getLogger(interfaceName);

  if (projectLogger.isDebugEnabled()) {
   projectLogger.info(" ::InterfaceName:: {} ::EventType:: {} ::TechnicalMessageID:: {} ::BusinessID:: {} ::ServerName:: {} ::Priority:: {} ::Payload:: {} ::Fault:: {}", 
     interfaceName, eventType, technicalMessageid, businessId, System.getProperty("weblogic.Name"), "NONE", payload != null ? payload.xmlText().replaceAll("\\r\\n|\\r|\\n", " ") : "NONE", "NONE");

  } else {
   projectLogger.info(" ::InterfaceName:: {} ::EventType:: {} ::TechnicalMessageID:: {} ::BusinessID:: {} ::ServerName:: {} ::Priority:: {} ::Payload:: {} ::Fault:: {}", 
     interfaceName, eventType, technicalMessageid, businessId, System.getProperty("weblogic.Name"), "NONE", "NONE", "NONE");
  }

  String logmessage = "Info Message Logged for:: " + interfaceName;
  return logmessage;
 }

 public static String errorLogger(String technicalMessageid,
   String businessId, String interfaceName, XmlObject payload,
   String priority, XmlObject fault) {

  Logger projectLogger = LoggerFactory.getLogger(interfaceName);

  projectLogger.error(" ::InterfaceName:: {} ::EventType:: {} ::TechnicalMessageID:: {} ::BusinessID:: {} ::ServerName:: {} ::Priority:: {} ::Payload:: {} ::Fault:: {}", 
    interfaceName, "ERROR", technicalMessageid, businessId, System.getProperty("weblogic.Name"), priority, payload != null ? payload.xmlText().replaceAll("\\r\\n|\\r|\\n", " ") : "NONE", fault != null ? fault.xmlText().replaceAll("\\r\\n|\\r|\\n", " ") : "NONE");

  String responseMessage = "Error Message Logged for:: " + interfaceName;
  return responseMessage;
 }

}



To make my parsing easier, I decided to remove all newlines from payload and fault, so as to have a single line per event in the logs.

You can also deal with multiline entries with the multiline codec.

For a valid message, logback will log:
#### 2014-01-08 18:04:01,980 INFO [[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'] - ::InterfaceName:: ProdDeclAven ::EventType:: FileConsumed ::TechnicalMessageID:: ProdDeclAven^BAD_TST_PDA_REV.xml^AVE^1389200641480 ::BusinessID:: 000000000006741320_00376401092900001530 ::ServerName:: osbdev1ms1 ::Priority:: NONE ::Payload:: NONE ::Fault:: NONE ####

logback.xml file:
<?xml version="1.0" ?>
<configuration debug="true" scan="true" scanPeriod="30 seconds">
 <jmxConfigurator/>
 <property name="LOG_DIR" value="/opt/var/log/weblogic/server/"/>
 <appender class="ch.qos.logback.core.ConsoleAppender" name="STDOUT">
  <encoder>
   <pattern>
    #### %date{ISO8601} %level [%thread] - %msg ####%n            
   </pattern>
  </encoder>
 </appender>
 <appender class="ch.qos.logback.core.rolling.RollingFileAppender" name="MyService">
  <file>
   ${LOG_DIR}acmeMyService.log
  </file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
   <fileNamePattern>
    acmeMyService.%i.log.zip
   </fileNamePattern>
   <minIndex>
    1
   </minIndex>
   <maxIndex>
    10
   </maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
   <maxFileSize>
    50MB
   </maxFileSize>
  </triggeringPolicy>
  <encoder>
   <pattern>
    #### %date{ISO8601} %level [%thread] - %msg ####%n                
   </pattern>
  </encoder>
 </appender>

 <logger additivity="false" level="INFO" name="MyService_PSDB_RoutingService">
  <appender-ref ref="MyService"/>
 </logger>

 <root level="DEBUG">
  <appender-ref ref="ALL"/>
 </root>
</configuration>





esgrok.conf will be

input {
  file {
    path => "/opt/var/log/weblogic/server/nesoa2*.log"
  }

}
filter {
  grok {
    match => [ "message", "#### %{TIMESTAMP_ISO8601:timestamp} %{WORD:level} \[\[%{WORD:threadstatus}\] %{GREEDYDATA:threadname}\] -  ::InterfaceName:: %{WORD:interfacename} ::EventType:: %{WORD:eventtype} ::TechnicalMessageID:: %{GREEDYDATA:technicalmessageid} ::BusinessID:: %{GREEDYDATA:businessid} ::ServerName:: %{WORD:servername} ::Priority:: %{WORD:priority} ::Payload:: %{GREEDYDATA:payload} ::Fault:: %{GREEDYDATA:fault} ####" ]
  }
}

output {
  elasticsearch {
    embedded => true
  }
}



Preparing the grok match regexp was very time consuming; the grok debugger at http://grokdebug.herokuapp.com/ was essential. It worked for me only in Chrome, not in Firefox.

The list of grok patterns is also priceless.

Run with nohup java -jar logstash-1.3.2-flatjar.jar agent -f esgrok.conf -- web > logstash.log 2>&1 &
The result is impressive.
Happy Kibana to you!


Groovy book: Groovy 2 Cookbook

http://bit.ly/1axk40q
Packt has given me the honor of reviewing this book... it was written by an excellent coder, Luciano, so I am sure it's a very accurate book... I will elaborate more once I have read it.

(some time later) This is a VERY good book; the authors show a deep IT culture and their examples are simple enough to be understood, yet far from trivial.

Here is a list of topics:

Groovy Object Browser
GVM
groovysh
CodeNarc
closure

Pogo
@TupleConstructor

Builders
@Canonical
@Delegate
@Mixin

XmlSlurper
GPath
MarkupBuilder

JsonBuilder
JSON Schema
JsonSlurper

Yaml

groovy.sql.Sql
Blob.getBinaryStream

DSL
ExpandoMetaClass
AST

jsch
NLP


Tuesday, January 7, 2014

Oracle DB: which SQL is being run by a given OS user?

assuming the OS user is nnvernetpi:
select * from v$sql where sql_id in (select sql_ID from gv$session where lower(OSUSER) = 'nnvernetpi');


On RAC, v$session refers only to the current node; gv$session covers the whole RAC.
(thanks Alain for the explanation)


Monday, January 6, 2014

oracle.jdbc.ReadTimeout for oracle.sql.CLOB.getChars

we often have this stuck thread on oracle.sql.CLOB.getChars:

####<Jan 6, 2014 10:34:39 AM CET> <Error> <WebLogicServer> <hqchacme110> <osbpr1ms3> <[ACTIVE] ExecuteThread: '19' for queue:
'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <d1a7b26be2c41106:7f955c62:143498f1087:-8000-00000000001dcb6a> <1389
000879486> <BEA-000337> <[STUCK] ExecuteThread: '15' for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "708"
seconds working on the request "weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl@16c083ce", which is more than the conf
igured time (StuckThreadMaxTime) of "600" seconds. Stack trace:
        java.net.SocketInputStream.socketRead0(Native Method)
        java.net.SocketInputStream.read(SocketInputStream.java:129)
        oracle.net.nt.MetricsEnabledInputStream.read(TcpNTAdapter.java:718)
        oracle.net.ns.Packet.receive(Packet.java:295)
        oracle.net.ns.DataPacket.receive(DataPacket.java:106)
        oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:317)
        oracle.net.ns.NetInputStream.read(NetInputStream.java:262)
        oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:107)
        oracle.jdbc.driver.T4CMAREngine.getNBytes(T4CMAREngine.java:1579)
        oracle.jdbc.driver.T4C8TTILobd.unmarshalLobData(T4C8TTILobd.java:455)
        oracle.jdbc.driver.T4C8TTILob.readLOBD(T4C8TTILob.java:796)
        oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:389)
        oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:204)
        oracle.jdbc.driver.T4C8TTIClob.read(T4C8TTIClob.java:245)
        oracle.jdbc.driver.T4CConnection.getChars(T4CConnection.java:3630)
        oracle.sql.CLOB.getChars(CLOB.java:756)
        oracle.sql.CLOB.getSubString(CLOB.java:398)
        weblogic.jdbc.wrapper.Clob_oracle_sql_CLOB.getSubString(Unknown Source)
        oracle.tip.adapter.db.sp.oracle.TypeConverter.toString(TypeConverter.java:241)
        oracle.tip.adapter.db.sp.oracle.TypeConverter.toString(TypeConverter.java:275)
        oracle.tip.adapter.db.sp.oracle.XMLBuilder.DOM(XMLBuilder.java:198)
        oracle.tip.adapter.db.sp.AbstractXMLBuilder.buildDOM(AbstractXMLBuilder.java:264)
        oracle.tip.adapter.db.sp.SPInteraction.executeStoredProcedure(SPInteraction.java:148)
        oracle.tip.adapter.db.DBInteraction.executeStoredProcedure(DBInteraction.java:1102)
        oracle.tip.adapter.db.DBInteraction.execute(DBInteraction.java:247)
        oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.performOperation(WSIFOperation_JCA.java:529)
        oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.executeOperation(WSIFOperation_JCA.java:353)
        oracle.tip.adapter.sa.impl.fw.wsif.jca.WSIFOperation_JCA.executeReque





at the same time on the Oracle DB side we have:
Fatal NI connect error 12170.

  VERSION INFORMATION:
        TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
        TCP/IP NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
        Oracle Bequeath NT Protocol Adapter for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
  Time: 06-JAN-2014 10:22:25
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12535

TNS-12535: TNS:operation timed out
    ns secondary err code: 12560
    nt main err code: 505

TNS-00505: Operation timed out
    nt secondary err code: 78
    nt OS err code: 0
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.56.34.53)(PORT=1125))
Mon Jan 06 10:23:48 2014



I will try to fix this by setting the property oracle.jdbc.ReadTimeout to 1800000 milliseconds (30 minutes).
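Where the property goes depends on how the connection is created; for a WebLogic data source it is set as a connection property on the pool. A sketch (the console path is from memory and may differ per WebLogic version):

```properties
# WebLogic console: Services > Data Sources > [your DS] > Connection Pool > Properties
oracle.jdbc.ReadTimeout=1800000
```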

Saturday, January 4, 2014

Logstash, getting my feet wet

Some instructions on how to get started: http://logstash.net/docs/1.3.2/tutorials/getting-started-simple

Also this video tutorial is a lifesaver.

mkdir /opt/logstash/
cd /opt/logstash/
wget https://download.elasticsearch.org/logstash/logstash/logstash-1.3.2-flatjar.jar -O logstash.jar

Exercise one: simple input, simple output:
vi sample.conf
input {
  stdin { }
}
output {
  stdout {
    debug => true
  }
}

run it:
java -jar logstash.jar agent -v -f sample.conf
Pipeline started {:level=>:info}
pippo
output received {:event=>#<LogStash::Event @data={"message"=>"pippo", "@version"=>"1", "@timestamp"=>"2014-01-04T11:11:42.559Z", "host"=>"osb-vagrant.acme.com"}, @cancelled=false>, :level=>:info}
{
       "message" => "pippo",
      "@version" => "1",
    "@timestamp" => "2014-01-04T11:11:42.559Z",
          "host" => "osb-vagrant.acme.com"
}

Running "java -jar logstash.jar agent -vv -f sample.conf" can be quite educational.

Removing the "debug => true" from the sample.conf:

java -jar logstash.jar agent -f sample.conf
pippo
2014-01-04T11:34:40.255+0000 osb-vagrant.acme.com pippo



To activate the embedded elasticsearch:
vi es.conf
input {
  file {
    path => "/opt/logstash/myfile.log"
  }
}

output {
  elasticsearch {
    embedded => true
  }
}


At this point, whatever you add to myfile.log will automatically appear in Elasticsearch.
If you run logstash with the "web" option:
java -jar logstash.jar agent -f es.conf -- web
then access kibana: http://yourhost:9292
Here http://logstash.net/docs/1.3.2/ you find detailed documentation of each input, codec, output, filter stanzas.

Friday, January 3, 2014

Book: The Logstash Book

http://www.logstashbook.com/ .
I was hoping this book would take me by the hand in small, progressive steps to harness the power of Logstash.
So far it has proved to be a confusing book, piling up a lot of disparate information in a quite chaotic way.
It tries to make you set up a very complicated example, juggling a lot of products (syslog with Redis and Elasticsearch, the forwarder...) instead of proceeding in a progressive, step-by-step way.
Also, scripts are provided only for Ubuntu, so if you use RHEL you are on your own.
Which proves, once more, that being a great technician is one thing, and being a great pedagogue is another.

However, it's still worth buying and reading; logs are of capital importance and most of the time they are very poorly managed, so I need to educate myself on state-of-the-art technology.

Wednesday, January 1, 2014

Redis

Training here http://try.redis.io/
To install:

yum install redis
vi /etc/redis.conf
comment out the "bind 127.0.0.1" line
/usr/sbin/redis-server /etc/redis.conf
ps -ef | grep redis
root      4092     1  0 21:27 ?        00:00:00 /usr/sbin/redis-server /etc/redis.conf


/usr/bin/redis-cli ping
PONG

netstat -an | grep 6379
tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN


if in /etc/redis.conf you specify a "bind IP" (e.g. IP=10.0.2.15), then you have to use the -h option:
/usr/bin/redis-cli -h 10.0.2.15 shutdown

you can issue several commands:
set key value
get key
incr key



Elasticsearch

download the rpm: http://www.elasticsearch.org/download/
rpm -i elasticsearch-0.90.9.noarch.rpm
ps -ef | grep elasti
496       1734     1  1 10:32 ?        00:00:10 /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-0.90.9.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.ElasticSearch


less /var/log/elasticsearch/elasticsearch.log

[2014-01-01 10:29:43,058][INFO ][node                     ] [Kala] version[0.90.9], pid[1734], build[a968646/2013-12-23T10:35:28Z]
[2014-01-01 10:29:43,058][INFO ][node                     ] [Kala] initializing ...
[2014-01-01 10:29:43,063][INFO ][plugins                  ] [Kala] loaded [], sites []
[2014-01-01 10:29:45,289][INFO ][node                     ] [Kala] initialized
[2014-01-01 10:29:45,289][INFO ][node                     ] [Kala] starting ...
[2014-01-01 10:29:45,377][INFO ][transport                ] [Kala] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.0.2.15:9300]}
[2014-01-01 10:29:48,424][INFO ][cluster.service          ] [Kala] new_master [Kala][Z9XUjvk0QxK6aXyCOhExqg][inet[/10.0.2.15:9300]], reason: zen-disco-join (elected_as_master)
[2014-01-01 10:29:48,498][INFO ][discovery                ] [Kala] elasticsearch/Z9XUjvk0QxK6aXyCOhExqg
[2014-01-01 10:29:48,526][INFO ][http                     ] [Kala] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.0.2.15:9200]}
[2014-01-01 10:29:48,527][INFO ][node                     ] [Kala] started
[2014-01-01 10:29:48,543][INFO ][gateway                  ] [Kala] recovered [0] indices into cluster_state
[2014-01-01 10:35:47,549][INFO ][cluster.service          ] [Kala] added {[Grant, Greer][-TR-MXApSOqMcPWw7_qYMg][inet[/10.0.2.15:9301]],}, reason: zen-disco-receive(join from node[[Grant, Greer][-TR-MXApSOqMcPWw7_qYMg][inet[/10.0.2.15:9301]]])
[2014-01-01 10:42:00,830][INFO ][cluster.service          ] [Kala] removed {[Grant, Greer][-TR-MXApSOqMcPWw7_qYMg][inet[/10.0.2.15:9301]],}, reason: zen-disco-node_left([Grant, Greer][-TR-MXApSOqMcPWw7_qYMg][inet[/10.0.2.15:9301]])



Watch the excellent tutorial

curl 10.0.2.15:9200
{
  "ok" : true,
  "status" : 200,
  "name" : "Kala",
  "version" : {
    "number" : "0.90.9",
    "build_hash" : "a968646da4b6a2d9d8bca9e51e92597fe64e8d1a",
    "build_timestamp" : "2013-12-23T10:35:28Z",
    "build_snapshot" : false,
    "lucene_version" : "4.6"
  },
  "tagline" : "You Know, for Search"
}


Let's put some data:
curl -XPUT 10.0.2.15:9200/books/eco/one -d '
> {
>   "author" : "pierre",
>   "title" : "how to cook Pizza"
> }'

I get a response:

{"ok":true,"_index":"books","_type":"eco","_id":"one","_version":1}

I can examine the mapping:

curl 10.0.2.15:9200/books/_mapping

{"books":{"eco":{"properties":{"author":{"type":"string"},"title":{"type":"string"}}}}}

I can get my document back:

curl 10.0.2.15:9200/books/eco/one

{"_index":"books","_type":"eco","_id":"one","_version":1,"exists":true, "_source" :
{
  "author" : "pierre",
  "title" : "how to cook Pizza"
}}


I can search based on an attribute:
curl 10.0.2.15:9200/books/_search?q=_author=pierre
{"took":71,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":0.02250402,"hits":[{"_index":"books","_type":"eco","_id":"one","_score":0.02250402, "_source" :
{
  "author" : "pierre",
  "title" : "how to cook Pizza"
}}]}}


start and stop:
/etc/init.d/elasticsearch start
/etc/init.d/elasticsearch stop


edit configuration:
vi /etc/elasticsearch/elasticsearch.yml

To view configuration: http://youripaddress:9200/ and more http://youripaddress:9200/_status?pretty=true



If you get "Content-Type header [application/x-www-form-urlencoded] is not supported", add this: -H 'Content-Type: application/json'

If you get "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes", put the json in a file pizza.json and invoke curl -H 'Content-Type: application/json' -XPUT 10.0.2.15:9200/books/eco/one -d@pizza.json


References: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html#getting-started