I have just donated to A Jewish Voice For Peace; it seems admirable to me that these people try to maintain some democratic debate in a country dominated by priests and fascists, just like Italy under Mussolini.
Why do I shell out hard earned money for lost causes?
Because if I didn't do it, one day I would despise myself for having lived an empty, selfish, cold-hearted meaningless life.
Thursday, February 27, 2014
Oracle JCA DbAdapter db poller at work
I can see 4 threads:
"[ACTIVE] ExecuteThread: '24' for queue: 'weblogic.kernel.Default (self-tuning)'" waiting for lock oracle.tip.adapter.db.InboundWork@5adcf919 TIMED_WAITING java.lang.Object.wait(Native Method) oracle.tip.adapter.db.InboundWork.run(InboundWork.java:609) oracle.tip.adapter.db.inbound.InboundWorkWrapper.run(InboundWorkWrapper.java:43) weblogic.work.ContextWrap.run(ContextWrap.java:41) weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528) weblogic.work.ExecuteThread.execute(ExecuteThread.java:209) weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
and when it fails I see the full stack trace:
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:476)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:204)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:540)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1079)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1466)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3752)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3887)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1508)
at weblogic.jdbc.wrapper.PreparedStatement.executeUpdate(PreparedStatement.java:172)
at oracle.tip.adapter.db.inbound.DestructivePollingStrategy.poll(DestructivePollingStrategy.java:434)
at oracle.tip.adapter.db.InboundWork.runOnce(InboundWork.java:699)
at oracle.tip.adapter.db.InboundWork.run(InboundWork.java:578)
at oracle.tip.adapter.db.inbound.InboundWorkWrapper.run(InboundWorkWrapper.java:43)
at weblogic.work.ContextWrap.run(ContextWrap.java:41)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
I note down the names of the 4 threads (12, 13, 14, 24) and execute a poll on a record.
Why 4 threads? Because we have 4 Proxy Services using the oracle.tip.adapter.db.DBActivationSpec to poll a table, and each is configured in the JCA file with NumberOfThreads=1:
<endpoint-activation portType="SSS_Batching_Service_ptt" operation="receive">
  <activation-spec className="oracle.tip.adapter.db.DBActivationSpec">
    <property name="DescriptorName" value="SSS_Batching_Service.SssBatchOrder"/>
    <property name="QueryName" value="SSS_Batching_ServiceSelect"/>
    <property name="MappingsMetaDataURL" value="NCube_Batching_Service-or-mappings.xml"/>
    <property name="PollingStrategy" value="LogicalDeletePollingStrategy"/>
    <property name="MarkReadColumn" value="STATUS"/>
    <property name="MarkReadValue" value="PROCESSED"/>
    <property name="MarkReservedValue" value="R${weblogic.Name-2}-${IP-2}"/>
    <property name="MarkUnreadValue" value="CREATED"/>
    <property name="PollingInterval" value="60"/>
    <property name="MaxRaiseSize" value="10"/>
    <property name="MaxTransactionSize" value="10"/>
    <property name="NumberOfThreads" value="1"/>
    <property name="ReturnSingleResultSet" value="false"/>
  </activation-spec>
</endpoint-activation>
See here, under "Polling Thread": "By default, as a performance best practice, the Oracle Database Adapter uses one thread to poll the database (NumberOfThreads=1 property in the activation spec). Because the adapter never releases that thread, which is by design, you may see a stuck thread stack trace in the server log. If you set the NumberOfThreads to more than one, you may see stack traces for all of those threads. You can ignore stuck thread stack traces."
Analyzing the logs, it's evident that THE SAME POLLING THREAD IS USED IN OSB TO PROCESS THE MESSAGE.
We have an issue here: if the Processing Time for a batch is greater than the Polling Period, the Polling Period is not guaranteed (for instance, with PollingInterval=60 and a batch that takes 90 seconds to process in OSB, the next poll can only start once that processing completes).
Something else to explore: the behaviour of the component when NumberOfThreads is set to a value greater than 1.
Labels:
OSB
Booleans in Python
I firmly believe that a boolean value should be represented by 0 and 1. Yesterday I wasted an hour because in Ruby you say "True" but in YAML it's "true" (the Puppet founding fathers who chose Ruby were morons).
How does it work in Python?
value = '1'
booleanValue = (value == '1')
print booleanValue     # True
value = '0'
booleanValue = (value == '1')
print booleanValue     # False
BUT:
value = 1
if value: print "bla"     # prints: bla
value = 0
if value: print "bla"     # prints nothing: the integer 0 is falsy
So there you are: you can work out the rule yourself to convert 0 and 1 into a boolean.
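To make that rule explicit, here is a minimal sketch (my own helper, not from the original post) that converts '0'/'1' (or 0/1) into a real boolean instead of relying on truthiness:

def to_bool(value):
    # map '1'/1 to True and '0'/0 to False; reject anything else explicitly
    s = str(value)
    if s == '1':
        return True
    if s == '0':
        return False
    raise ValueError("expected '0' or '1', got %r" % (value,))

print(to_bool('1'))   # True
print(to_bool(0))     # False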
Labels:
python
Wednesday, February 26, 2014
YAML parsing in Python with PyYAML
I was looking for a simple way of validating YAML structures, so I did the following:
download http://pyyaml.org/download/pyyaml/PyYAML-3.10.tar.gz
tar xvfz PyYAML-3.10.tar.gz
cd PyYAML-3.10
python setup.py install
python
import yaml
mydomains = yaml.load("""
domains:
  osbpl1do:
    soauser: soa
    optpath: /opt
  osbpl2do:
    soauser: soa2
    optpath: /opt2
  osbpl3do:
    soauser: soa3
    optpath: /opt3
""")
print mydomains
print mydomains['domains']
print mydomains['domains']['osbpl2do']
It works like a charm.
More documentation here.
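Since my original goal was validating YAML structures, here is a minimal sketch of my own (the file name domains.yaml is just an example) that uses yaml.safe_load and reports parsing errors:

import yaml

def validate_yaml(path):
    # return the parsed structure, or None if the file is not valid YAML
    try:
        with open(path) as f:
            return yaml.safe_load(f.read())
    except yaml.YAMLError as e:
        print("invalid YAML in %s: %s" % (path, e))
        return None

print(validate_yaml('domains.yaml'))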
Friday, February 21, 2014
OSB load balancing of JMS messages
Obviously, use Distributed queues.
In the Business Service creating the JMS message, enter a single URI which has the cluster address, like:
jms://host1.acme.com:8001,host2.acme.com:8001/jms.jndi.cf.MyCF/jms.jndi.dq.MyDQ
and in the jms.jndi.cf.MyCF, in the "Load Balance" tab, disable the checkbox "Server Affinity Enabled".
See also http://docs.oracle.com/cd/E28280_01/web.1111/e13814/jmstuning.htm#PERFM311
Thursday, February 20, 2014
OSB binary JMS messages
If someone sets the message type to "binary" for a JMS Business Service, this will generate BytesMessage rather than TextMessage. You might have performance reasons to do that. But it's a pain, because in the WebLogic console these messages will not be very readable.
If you want to process those messages with some Java utility, you can still get the content of the message this way:
Enumeration msgs = queueBrowser.getEnumeration();
while (msgs.hasMoreElements()) {
    Message tempMsg = (Message) msgs.nextElement();
    String msgContent = "";
    if (tempMsg instanceof BytesMessage) {
        weblogic.jms.common.BytesMessageImpl bm = (weblogic.jms.common.BytesMessageImpl) tempMsg;
        msgContent = new String(bm.getBodyBytes());
    }
    if (tempMsg instanceof TextMessage) {
        msgContent = ((TextMessage) tempMsg).getText();
    }
}
the method getBodyBytes() is not part of the BytesMessage interface, but it's very convenient...
I guess that the message body COULD be compressed, in that case you are screwed, you might try decompressMessageBody() before reading the body, not sure...
OSB processing of gz gzip files
Taking the example from this post, I have developed this Java Callout class:
package com.acme.osb.utilities;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.util.zip.GZIPInputStream;

public class UnzipAndWriteToFile {

    public static String inputFileName = "c:\\pierre\\myfile.txt.gz";
    public static String outputFileName = "c:\\pierre\\myfile.txt";

    public static void main(String[] args) throws Exception {
        unzipGZIPFile(inputFileName, outputFileName);
    }

    /*
     * Read from a gz byte[] and writes to a file
     */
    public static String processBytes(byte[] data, String outputFile) throws Exception {
        ByteArrayInputStream bais = new ByteArrayInputStream(data);
        fromGZStreamToFile(outputFile, bais);
        return "OK";
    }

    /**
     * Read from a gz file and writes the unzipped content to another file
     *
     * @param fileName
     * @param outputFile
     * @throws Exception
     */
    public static void unzipGZIPFile(String fileName, String outputFile) throws Exception {
        FileInputStream in = new FileInputStream(fileName);
        fromGZStreamToFile(outputFile, in);
    }

    /**
     * Persist a gz stream to a file
     *
     * @param outputFile
     * @param in
     * @throws IOException
     * @throws FileNotFoundException
     */
    private static void fromGZStreamToFile(String outputFile, InputStream in) throws IOException, FileNotFoundException {
        GZIPInputStream gzip = new GZIPInputStream(in);
        BufferedReader br = new BufferedReader(new InputStreamReader(gzip));
        PrintWriter pw = new PrintWriter(outputFile);
        String line;
        while ((line = br.readLine()) != null) {
            pw.println(line);
        }
        br.close();
        pw.close();
    }
}
Our aim is to read a gz file from an HTTP server and write it locally, before we process it further.
We do a Service Callout to a Business Service (response type is binary), and it returns us this response: <con:binary-content ref="cid:-34da800e:1442fd94091:-7fae" xmlns:con="http://www.bea.com/wli/sb/context"/> .
We can pass this variable as such to the "public static String processBytes(byte[] data, String outputFile)" function, as this variable represents a reference to an array of bytes which is the binary content of the gz file. processBytes will persist the unzipped content to a given file (specify the full path).
JMA author turning 53 tomorrow
In my childhood, computers could be seen only in some MODERN science-fiction movies, and they were huge cabinets with magnetic tapes spinning...
In those times, knowing how to use a slide rule was a must for an engineer, and I duly learned as a child to extract logarithms and cotangents with this mechanical device... quite accurate, to be honest...
My father, an engineer, brought home some hand-held calculators, with a fluorescent display that would eat the whole battery in half an hour... they were fascinating, but they could only do the 4 operations...
The first time I met someone who actually had a computer at home (a VIC 20, with a prodigious 1 KB of RAM!) I was 21... in 1982... when he described to me what a computer could do, it sounded like magic to me.
And the first time I actually used a computer (a PDP 11), in 1983,
it had PUNCHED CARDS!!! Having a teletype (with actual paper) was a luxury... not to mention having a CRT!!!
But if you ask me "is today's world, full of technology, better than the one where you were born?" I would answer without hesitation: "NO! We had children playing in the street, lots of social gatherings, families were far more united, life was a lot more natural and fun... fewer cars, more nature"
Wednesday, February 19, 2014
git: common options
configure your identity:
git config --global user.name "FirstName LastName"
git config --global user.email "firstname.lastname@nespresso.com"
disable certificate warning
git config --global http.sslVerify false
use '--rebase' automatically when doing a 'pull'
git config --global branch.autosetuprebase remote
For common commands, refer to the cheatsheet or the Git Book.
Labels:
git
git: merging versus rebasing
I used to do:
(edit my files)
git pull
git add myfiles
git commit -m "blablabla"
git push
and life was fine, until it was suggested that I rather use:
(edit my files)
git fetch
git add myfiles
git commit -m "blablabla"
git rebase
git push
the changes are:
- fetch instead of pull
- add the "rebase" step
alternatively you can do:
git pull --rebase
instead of
git fetch + git rebase
Using "rebase" will generate a simpler history in the repository.
All this is very well explained here.
Labels:
git
Monday, February 17, 2014
Bash: reading properties based on an ENV parameter
I hate bash, and I hate that there is no standard way of reading property files based on 2 parameters: the property name and the environment. And we are in 2014... flying people to Mars etc... and still hacking around these very basic requirements.
Here is a possible solution:
create a config.sh executable file:
DEV_prop1=bla
PROD_prop1=blu
you have your environment name in a variable ENV
ENV="DEV"
you load all your properties:
. ./config.sh (note the space after the first . !)
then you build the property name and read its value with "indirect referencing":
propertyname=${ENV}_prop1
myvalue="${!propertyname}";
There are many other ways (I hate the cat/grep/awk solution) but they all stink just the same.
Let's face it, bash stinks. Like a skunk.
Labels:
bash
WebLogic: how to recover a config.xml reduced to 0 size
- if you have "configuration archive enabled" in your domain, you can retrieve the latest archive and apply diffs since last configuration change
- or you can do the same using the config.xml.bck
- if your domain admin server is still running, connect with the console, open a session, make a minor configuration change, and activate the session: this should regenerate the whole config.xml
- otherwise, connect with WLST and run configToScript()
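A minimal WLST sketch of that last option (my own, with placeholder credentials, URL and paths - adapt them to your domain):

# dump the domain configuration to scripts that can help rebuild a damaged config.xml
connect('weblogic', '<password>', 't3://adminhost:7001')
configToScript('/opt/oracle/domains/mydomain', '/tmp/mydomain-configToScript')
disconnect()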
In any case, please take regular backups of your PROD domains!!!
Labels:
weblogic
Sunday, February 16, 2014
Drag and Drop movies disabled in STINKY iTunes
First make sure that your movie type is actually accepted in STINKY Iphone (STINKY Iphone plays only a small subset of movie types)
Solution:
Start STINKY Itunes in safe mode (hold down SHIFT+CONTROL while starting... you should see a message "ITunes is running in safe mode"). Click continue... then try adding movies (on this iphone, music, drag and drop)... IT STILL DOESN'T WORK!
Even "File/Add file to library" fails silently.
Besides, I have some movies showing in ITunes which don't exist any longer on the device.
I have tried with File/Library/reorganize/consolidate. Nothing.
I go to the "Movies" tab, enable the "Sync movies" button and click "Sync"... I get a message saying that you can Sync with only 1 computer, so all my library will be lost...??? I guess that Apple has no notion of merging and Content Management systems.... how deeply pathetic, all this is so monolithic and user unfriendly....
If you google around, it's all a litany of desperate users trying to fix this issue without any help from Apple Support.
STINKY ITunes is the worst piece of crap ever seen on this planet, even a junior developer could code a better UI. And Apple built its empire on this pathetic piece of crap. How sad that we are in the hands of these %"£!@.
Do yourself a favor... DON'T EVER BUY AN IPHONE!
SOLUTION:
Desperately, I checked "manually manage files" and synced the whole library. It erased all the content, but now I am able to add movies. OK, now I will have to painstakingly re-add all the MP3s... luckily my PDFs were not deleted...
If I delivered this kind of crap to my clients I would be out of business in a matter of days.
Friday, February 14, 2014
JMA for Afghanistan, and against war
While here we talk about logstash and puppet, women and children are destroyed daily in one of the most preposterous wars in history, the Afghanistan war. The media don't even mention any more WHY this war was initiated, because the reason was so laughable and questionable - more or less like Hitler's invasion of Poland.
There is not much I can do, apart boycotting as much as I can this insane murderous global system of violence, which is turning this Planet in a huge slaughterhouse.
Anyway here is my pledge: my favorite post, http://www.javamonamour.org/2012/05/if-dogs-worked-in-office.html, is at 9240 hits. When it reaches 10K, I will donate 100 dollars to a Medical Organization that treats people wounded by American, German, Italian etc. mines and bullets in Afghanistan. The number of civilian victims is increasing over time, in the indifference and silence of the media.
What? The Taliban are evil and they deserve to die? Wait, I have heard that one before... but it was about the Jews, some 80 years ago...
These are the enemies America is fighting against, with a huge waste of taxpayer money:
"Afghanistan: increasing civilian casualties
2013 was the worst year for the people of Afghanistan since the war began, 13 years ago.
The Emergency Surgical Centres in Kabul and Lashkar-Gah, capital of Helmand, admitted 4,317 injured patients for causes related to war (about 12 war wounds a day, 365 days per year), 38% more than in 2012 and 60% more than in 2011.
Of them, 2,183 were wounded by a bullet, 1,037 by shrapnel and 613 had been injured by a landmine.
Women and children have always represented more than a third of the wounded who were hospitalized in 2013: 784 children and 668 women."
So, please, click on Vinita's dogs.
managing cron entries with Puppet
Puppet has a predefined cron type, which is very cool. The only issue is that we don't want to hardcode cron stuff in any .pp manifest, since each node has to be configured in a different way.
This is where hiera comes in handy. I personally dislike hiera, and in general all repositories of information which are not bound to a tightly validating schema and which don't natively lend themselves to being queried with a query language. Anyway this is another story.
This is the Puppet class:
class acme::crontab ($acme_crontab_entries = hiera('acme_crontab_entries', {})) {
  create_resources(cron, $acme_crontab_entries)
}
and in your <hostname>.yaml file enter:
acme_crontab_entries :
  logrotate:
    command : '/usr/sbin/logrotate'
    user    : 'soa'
    hour    : '2'
    minute  : '0'
I can't think of anything simpler.
NOTA BENE:
hiera('acme_crontab_entries', {})
the empty hash {} is needed so that, if a <hostname>.yaml doesn't define the acme_crontab_entries variable, you don't get the message
Could not find data item acme_crontab_entries in any Hiera data file and no default supplied at /tmp/vagrant-puppet/modules-0/acme/manifests/crontab.pp:6 on node osb-vagrant.acme.com
However this is a bit of abuse of hiera.... hiera should contain minimal configuration information, letting the Puppet modules to handle the details.
A more structured approach is to define in hiera only a boolean flag determining whether a given cron entry is needed on a specific server: "acme_install_logrotate_service : true", and then in your init.pp you do:
$acme_install_logrotate_service = hiera('acme_install_logrotate_service', false)
if $acme_install_logrotate_service {
  class {'acme::logrotateservices': }
}
and the class logrotateservices contains the Puppet statement:
cron { logrotate:
  command => "/usr/sbin/logrotate",
  user    => root,
  hour    => ['2-4'],
  minute  => '*/10'
}
Labels:
puppet
WLST to create machines
run wlst, and stay in offline mode.
I read the domain "osbpl1do", which has already 1 machine, and I want to add an extra machine.
readDomain('/opt/oracle/domains/osbpl1do')
cd('AnyMachine')
ls()
and here I see a single instance of the previous machine.
Now I will add the second machine:
cd('/')
MACHINENAME='pippomachine'
create(MACHINENAME, 'UnixMachine')
I get this:
Error: create() failed. Do dumpStack() to see details.
I do dumpStack():
com.oracle.cie.domain.script.jython.WLSTException: java.lang.ArrayIndexOutOfBoundsException: 2
however, if I do
cd('AnyMachine')
ls()
I see the new machine listed. However, I see a duplicate entry for the previously existing machine. I do then:
updateDomain()
and I restart the servers. I verify that the "machine" tag is created in config.xml.
I have no clue what is going on... I do the same with WLST online:
connect(...)
cd('Machines')
ls()
and I see the previous machine.
edit()
startEdit()
cd('Machines')
MACHINENAME='pippomachine'
create(MACHINENAME, 'UnixMachine')
and I get success: "MBean type UnixMachine with name pippomachine has been created successfully."
save()
activate()
and all is fine.
However, if I create a brand new domain without machines:
createDomain('/opt/oracle/fmw11_1_1_5/wlserver_10.3/common/templates/domains/wls.jar', '/opt/oracle/domains/pippodomain', 'weblogic', 'weblogic1')
readDomain('/opt/oracle/domains/pippodomain')
MACHINENAME='pippomachine'
create(MACHINENAME, 'UnixMachine')
updateDomain()
closeDomain()
and here again all is fine. But if I read the same domain again, and try to create a pippomachine2, again I get the ArrayIndexOutOfBoundsException. HOWEVER, the machine is correctly added to the config.xml.
CONCLUSION:
it seems that WLST offline fails to behave properly when there is already 1 machine present. However this specific ArrayIndexOutOfBoundsException MAYBE can be ignored, since APPARENTLY the configuration is updated.
If you are confused, so am I.
In Oracle Support I found "Run pasteConfig.sh on Unix, WLST command setName() throw ArrayIndexOutOfBoundsException (Doc ID 1547420.1)"
"This issue was caused by internal Bug 10221694 ( SETNAME FOR MACHINE OF TYPE "UNIX MACHINE" IS THROWING ARRAYINDEXOUTOFBOUNDSEXCEPTION) and Bug 9728926 (CREATE UNIXMACHINE IN WLST OFFLINE DOESN'T WORK PROPERLY)"
so at least we know that there IS an issue and we are not totally stupid.
For patch information, look into "WLSTException When Configuring Whole Server Migration In WLST Offline Mode (Doc ID 1463127.1)"
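In practice, the workaround I would sketch (my own, based only on the behaviour described above) is to ignore the spurious error and verify with ls() that the machine was actually added before updating the domain:

readDomain('/opt/oracle/domains/osbpl1do')
cd('/')
MACHINENAME='pippomachine2'
create(MACHINENAME, 'UnixMachine')   # may print the spurious "create() failed" error
cd('AnyMachine')
ls()                                 # verify that the new machine is actually listed
updateDomain()
closeDomain()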
Labels:
WLST
Wednesday, February 5, 2014
Microsoft Office Communicator chat history
Microsoft stinks, and its products are bloated with useless cosmetic features and lack essential stuff.
Communicator is so ridiculous that it doesn't even keep a chat history, so if you close the chat window you lose all the information. Only a total moron could wish that. Microsoft does. "Tools/View Conversation History" simply doesn't work here. This is because the option "Save my instant message conversations in the Outlook Conversations History folder" (in Tools/Options) is disabled, and I don't have the rights to enable it, not even with a Registry tweak setting IMAutoArchivingPolicy to 1.
Here is a tiny product that saves all chat history:
http://mscommunicatorhistor.codeplex.com/
You should start it on Windows Startup.
Tuesday, February 4, 2014
JProfiler connection through SSH Tunnel
If you don't want to open a firewall for JProfiler, here is a simple trick:
- on the monitored server, run the startManagedWebLogic_jprofiler.sh
- make sure the port 8849 is being listened to (netstat -an | grep 8849)
- open putty
- open a connection to your monitored server, using this setting for SSH tunnel:
(the obscured hostname is the FQDN of the monitored server)
you will notice that on your laptop, BEFORE the connection you have nothing on port 8849, while AFTER you have a LOCAL port open, and being forwarded to the remote host:
netstat -an | find "8849"
  TCP    127.0.0.1:8849         0.0.0.0:0              LISTENING
  TCP    [::1]:8849             [::]:0                 LISTENING

netstat -an | find "10.56.10.126"
  TCP    10.240.21.73:58756     10.56.10.126:22        ESTABLISHED
  TCP    10.240.21.73:58944     10.56.10.126:22        ESTABLISHED
  TCP    10.240.21.73:58969     10.56.10.126:22        ESTABLISHED
(10.56.10.126 is the remote IP, 10.240.21.73 is the local IP... you see several connections, one is my open putty, the others are the tunnel)
You can now open your JProfiler session, but make it point to localhost:8849, not to the remotehost:8849. It works!
Sunday, February 2, 2014
Configuring JProfiler for WebLogic
Download and install JProfiler 7.2 (the latest is 8 at the time of writing). Enter the licensing information. Run the Session/Integration/New Remote Integration.
Then remote Application Server:
this means that it will generate jprofiler_agent_linux-x64.tar.gz in c:\temp
you must previously have copied startWebLogic.sh (from the DOMAIN_HOME/bin folder) to your c:\temp folder. In the snapshot below I have tried with startManagedWebLogic.sh, but this is NOT recognized as a valid script.... please replace it with startWebLogic.sh script!
as said, startManagedWebLogic.sh is not recognized... use startWebLogic.sh instead!
this time it will work:
and at this point we are ready to attach:
but first you must prepare the Application Server (the Managed Server):
sudo mkdir /opt/jprofiler
sudo chown soa:soa /opt/jprofiler
The file jprofiler_agent_linux-x64.tar.gz will be created in c:\temp. Copy it to the /opt/jprofiler folder, then:
tar xvzf jprofiler_agent_linux-x64.tar.gz
and that's it for the libraries.
The modification done in startWebLogic.sh is:
added on top:
JAVA_VENDOR=Sun
added above this line: echo "starting weblogic with Java version:":
JAVA_VM=
export JAVA_VM
JPROFILER_OPTIONS="-agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849,nowait $JPROFILER_OPTIONS"
export JPROFILER_OPTIONS
changed:
echo "starting weblogic with Java version:" ${JAVA_HOME}/bin/java ${JAVA_VM} -version if [ "${WLS_REDIRECT_LOG}" = "" ] ; then echo "Starting WLS with line:" echo "${JAVA_HOME}/bin/java ${JPROFILER_OPTIONS} ${JAVA_VM} ${MEM_ARGS} -Dweblogic.Name=${SERVER_NAME} -Djava.security.policy=${WL_HOME}/server/lib/weblogic.policy ${JAVA_OPTIONS} ${PROXY_SETTINGS} ${SERVER_CLASS}" ${JAVA_HOME}/bin/java ${JPROFILER_OPTIONS} ${JAVA_VM} ${MEM_ARGS} -Dweblogic.Name=${SERVER_NAME} -Djava.security.policy=${WL_HOME}/server/lib/weblogic.policy ${JAVA_OPTIONS} ${PROXY_SETTINGS} ${SERVER_CLASS} else echo "Redirecting output from WLS window to ${WLS_REDIRECT_LOG}" ${JAVA_HOME}/bin/java ${JPROFILER_OPTIONS} ${JAVA_VM} ${MEM_ARGS} -Dweblogic.Name=${SERVER_NAME} -Djava.security.policy=${WL_HOME}/server/lib/weblogic.policy ${JAVA_OPTIONS} ${PROXY_SETTINGS} ${SERVER_CLASS} >"${WLS_REDIRECT_LOG}" 2>&1 fi
The JPROFILER_OPTIONS variable adds an -agentpath option pointing to the JProfiler libraries in charge of doing the instrumentation and profiling.
BEWARE: if you copy the startWebLogic_jprofiler.sh from a different domain, make sure you change the DOMAIN_HOME !
At this point we must change the startManagedWebLogic.sh and startWebLogic.sh scripts to add the JProfiler libraries.... we shall clone the existing scripts, to allow normal operations to be unaffected.
cd DOMAIN_HOME/bin
cp startManagedWebLogic.sh startManagedWebLogic_jprofiler.sh
vi startManagedWebLogic_jprofiler.sh
change the 2 occurrences of startWebLogic.sh into startWebLogic_jprofiler.sh
copy to DOMAIN_HOME/bin the startWebLogic_jprofiler.sh generated by the jprofiler wizard
to start the profiled server, run ./startManagedWebLogic_jprofiler.sh osbpp4ms1 (or whatever your managed server name is).
In the logs you should see:
JProfiler> Protocol version 37
JProfiler> Using JVMTI
JProfiler> JVMTI version 1.1 detected.
JProfiler> 64-bit library
JProfiler> Don't wait for frontend to connect.
JProfiler> Starting up without initial configuration.
JProfiler> Listening on port: 8849.
JProfiler> Instrumenting native methods.
JProfiler> Can retransform classes.
JProfiler> Can retransform any class.
JProfiler> Native library initialized
JProfiler> VM initialized
JProfiler> Hotspot compiler enabled
And you are ready to connect and profile.
Happy profiling!
Labels:
jprofiler