Thursday, September 30, 2010

The Developer turns into an Historian

So you are a developer and you have just received an assignment.

If you work in a normal IT project, this means that someone sent you an email saying "implement me a system which processes credit card transactions", with a couple of documents attached explaining the structure of the records.

If you are lucky, you might get a link to some SVN repository where some developer who has already left the company wrote some undocumented code trying to do the same; they will tell you "yes but this code was not good. Now you are an Agile developer and you will figure out what to do".

If you are a normal human being, you will now have a sense of inadequacy and immense responsibility weighing on your shoulders. And a sense of panic, because the work of 4 people (the business analyst, the architect, the developer and the tester) has been entirely pushed to you, without you having the status/authority in the company to go around and demand that some analysis and design is done.

As a developer, you are at the bottom of the company ladder and your only option is to say "yes" (do I sound "victimistic", i.e. like I have a persecution complex?). Well, no: you do have the option to say "no", and you should exercise it, for the good of the project.

A poorly specified problem will necessarily be poorly implemented; this implies rejection at delivery, followed by expensive rework to get it done right. And everything will be the developer's fault.

My point is: as an experienced developer, your duty is to stand up and say "these requirements are insufficient, we cannot proceed to implementation unless we go through a phase of debate, analysis and design, and we reconstruct the previous attempts that were done on this problem, to learn from the past and gather all the useful information from the people who were involved". It is your duty to say "let me talk to some business analyst, to get the entire picture and document it".

All issues have a history behind them. It is important to track this history in a central place. Letting things stay dispersed is an open door to chaos. The only way to successfully complete your assignment is through full control and tracking. Keeping all the info in a Wiki is indispensable. Better still if you can keep a (b)log of what has been done and needs to be done, with the participants clearly indicated. Keep all participants informed. Blogs and RSS feeds can be useful, although simple emails can do.

Being a good developer is not only about being techie techie... most of the time, it means having a relentless commitment to investigation, communication and documentation. The same skills it takes to be a good historian.

I was once asked to run performance tests on a system. I spent a month setting up the environment, preparing all the Python scripts, measuring the metrics, investigating the memory leaks.... By chance, on a Saturday, speaking informally to a colleague sitting 5 meters from me, I discovered that he had done exactly the same thing 3 months before, and had already reached a lot of diagnoses. Why didn't management tell me before? They would probably say "because you didn't ask". This is the world, you cannot change it; getting mad will lead you nowhere. So, next time, ask "has anybody been working on this before?". There are no stupid questions; you are stupid only if you don't ask.

Wednesday, September 29, 2010

SOA by the book

Udi Dahan gives a great presentation on common errors of "SOA by the book":

I have taken some notes here:

some skepticism about the term SOA.... "Oriented" means nothing, and nobody really agrees on what "Architecture" means.... only "Service" is left!

Loose Coupling: fine at design time, but what about loose coupling at runtime?

The Tables of the Law:
Services are autonomous
Boundaries are explicit
Share Contract and Schema, not Class and Type
Compatibility is based on Policy


You can end up with extremely complex architectures: "everything is a service!".
Layers on Layers on Layers.
It looks good in PowerPoint, but it is Hell on Earth.
A simple interaction can trigger a zillion services calling each other.

The entire thing is NOT Agile. Any change entails LOTS of changes everywhere.

And if you end up with a CYCLE - a service being called twice in an interaction - you are screwed!

Asynchronous invocation only DOUBLES the number of blocked threads.

The longer the interactions last, the more GC problems you can have -> OOM.
Long-running transactions generate deadlocks.

Publish/Subscribe can alleviate the crazy interdependencies: each service gets an Event notification when data changes in another service. No need to ask for the price; we are told every time it changes.

SOA is coupled with EDA: Services are coupled with Events.
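The publish/subscribe idea above can be sketched in a few lines of Java: an in-memory event bus (all names here are hypothetical, for illustration only) where the pricing side publishes once and every subscribed service is notified, instead of each service polling for the price:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal in-memory publish/subscribe sketch (hypothetical names).
// Subscribers register once; each published price change is pushed
// to all of them, removing the need to poll the pricing service.
public class PriceBus {
    private final List<Consumer<Double>> subscribers = new ArrayList<>();

    public void subscribe(Consumer<Double> subscriber) {
        subscribers.add(subscriber);
    }

    // The pricing side publishes; every subscriber is notified.
    public void publishPriceChange(double newPrice) {
        for (Consumer<Double> s : subscribers) {
            s.accept(newPrice);
        }
    }

    public static void main(String[] args) {
        PriceBus bus = new PriceBus();
        bus.subscribe(p -> System.out.println("Billing service saw price " + p));
        bus.subscribe(p -> System.out.println("Catalog service saw price " + p));
        bus.publishPriceChange(9.99); // one publish, every subscriber notified
    }
}
```

In a real EDA setup the bus would of course be a MOM topic rather than an in-process list, but the coupling story is the same: publishers know nothing about their consumers.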

.... continues ....

Break and Seek

So, after months (years?) of development and test, your application is ready for production.
The time is ripe for a new game: "break and seek".

This game requires 2 participants: Attila and Sherlock.

In a test environment, Attila will choose one component of the system to break (it can be anything: a DB instance to bring down, especially if using RAC; a JEE module to undeploy; an LDAP server to shut down; a file system to be filled to 100%; an Application Server to shut down; a password to change; a network cable to unplug....).

Attila will now tell Sherlock to fix it, without telling him what went down.

Sherlock will then verify that:
a) the system fault is monitored and detected via some monitoring tool (eg Nagios)
b) the error message returned to the end user is not some horrible garbage, but something meaningful and reassuring
c) the automated tests are able to capture the fault
d) everybody is aware of the criticality of the fault and its consequences on other systems and on use cases... this can lead to rethinking the fault tolerance/redundancy of the system
e) transactions are properly rolled back and the system is not left in an inconsistent state
f) the logs contain meaningful messages and not loads of repeated stacktraces without any context information
g) the Operations manual contains instructions on how to fix the problem (location of start/stop scripts and other troubleshooting issues)
h) if the fault is not fixed within a given time, the system doesn't diverge (eg some file system gets filled with error reports or something along the line which would lead to a domino effect)
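Checks like (a) and (h) above can be probed by trivial scripts. Here is a minimal sketch in Java, in the spirit of a Nagios-style disk check; the 95% threshold is a hypothetical choice:

```java
import java.io.File;

// A trivial probe that alerts when a file system is nearly full,
// so the fault is detected before the domino effect starts.
// The threshold value is a hypothetical example.
public class DiskCheck {
    // Returns true when used space crosses the given threshold (0.0 - 1.0).
    public static boolean almostFull(long usableBytes, long totalBytes, double threshold) {
        double used = 1.0 - (double) usableBytes / totalBytes;
        return used >= threshold;
    }

    public static void main(String[] args) {
        File root = new File(".");
        boolean alert = almostFull(root.getUsableSpace(), root.getTotalSpace(), 0.95);
        System.out.println(alert ? "ALERT: file system above 95%" : "OK");
    }
}
```

A real monitoring tool adds scheduling, alert routing and history, but the core of each check is usually this small.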

It's a beautiful game, and the day you go to production you will sleep better, knowing that you are ready to tackle all these accidents.

For us Italians, Attila is synonymous with barbaric devastation. I was very surprised to learn in Budapest, from my biking instructor, that in Hungary he is considered a national hero, with streets dedicated to him... the same story seen from Spartacus' side and from Caesar's...

Tuesday, September 28, 2010

Why images and writing are important

I have noticed that, during any discussion, if anyone starts drawing a schema and annotating it,
everybody's attention very soon concentrates on the schema,
fingers point to the drawing to indicate something,
and it becomes much easier to come to a common understanding and solution.

Drawing a schema on an A4 sheet, as opposed to a whiteboard, makes it easier for all interlocutors to freely participate and contribute, and it eliminates the cleavage :o) between the speaker and the listeners.
Also, A4 sheets are easily composable and reusable.

I have noticed that when I explain something to someone and write it down at the same time, it becomes much easier for me to follow a logical thread - it works both ways! - and the interlocutors seem more relaxed and comfortable, knowing that all the information is persisted and they don't need to waste any brain power trying to memorize it.

So, why is this?

As a starting point, I assume that our visual cortex is much more developed than our auditory cortex (Wikipedia mentions about 140 million neurons per hemisphere for the former; I could not find data for the latter).
I also assume that, as a rule of thumb, the more neurons one can involve in processing information, the faster and more accurate that processing will be.
For this reason I started carrying with me everywhere a pile of A4 sheets and a couple of pens of different colors... it's creative and fun!

Language teachers say that when you can show together the WRITTEN sentence with VIDEO of the scene and AUDIO, the 3 sources combined assure the highest retention of the information. That's why I think watching DVDs with subtitles is the best way (apart from live interaction mediated by emotions, e.g. falling in love) to learn a language.

I also read about a woman who became blind and managed to repurpose part of her visual cortex as a processing area for sound... she was able to listen to audiobooks at 4x normal speed! This tells me how powerful and flexible our visual cortex is.

I also assume that cultural biases influence the way we learn. Italians, for instance, absorb a message much better when it is associated with body language - and this was a handicap for me when I started working in the Anglo-Saxon world.

For Italian gestures, here is a priceless visual dictionary:

In the future I will explore the power of gesture and comedy to mimic IT events, like the sending of a message or the raising of an Exception. I personally understand a message flow much better when I can see something in movement.

I envisage a world where, in IT projects, part of the documentation will be expressed with animated cartoons depicting actual examples of Use Cases. How many times have I craved this! The day of Cartoon Driven Development will come :o) ! In the future, the documentation of a Use Case with different scenarios could look like this (Donald Duck being the UML Actor):

and IT people's lives will be a bit more entertaining.

Evidently, most of the visual cortex must be dedicated to the processing of movement.
But then again, culture plays its part; neuron allocation probably changes with culture and time. The brain is, above all, a PLASTIC organ.

Funnily, I see that Computer Science helps neurology through Neuroinformatics,
but the opposite doesn't seem to take place (we could call it Infoneurotics :o) )! God only knows how much Neuroscience would help prevent IT projects from going spaghetti...

Last but not least, let's mention the theory of multiple intelligences by Howard Gardner. Basically, we all have different pathways to learning, so one should mix different techniques to address each specific audience.

OSB, lessons learned and best practices

If someone asked me: "tell me, quick, what is your message to posterity about OSB?",
I would probably say:

- from day 1, set up automated deployment of your project to a development and integration server, automated tests with SOAPUI, and continuous performance monitoring, again with SOAPUI. Use customization files to do the property filtering for the target environment; this will force you to identify from day 1 all the environment-dependent values. You should NEVER, EVER deploy something somewhere manually. NEVER. Was I clear? NEVER.

- learn XQuery; there is a lot you can do with XQuery without resorting to convoluted cascades of message processing actions (replace, rename, assign....). Remember, your message flow should stay ESSENTIAL; all the message processing should be hidden somewhere else

- learn about XmlObject and XmlCursor in Java Callout and custom Java XPaths, they can bring the entire power of countless Java libraries to your message processing needs.

- generate WSDLs automatically, with Groovy or anything you like. They are a pain in the neck to hack manually. The same can be said about XQuery transformations. There is a debate on whether Proxy Services too could be generated automatically, and whether the ActionId is an obstacle or not... if you find yourself doing a lot of copying and pasting, this might be an option for you.

- publish WSDLs and XSDs externally, to an Apache server or any form of Registry. If you feel that keeping your resources inside OSB is cool, think twice.

- wrap each Business Service with a Local Transport Proxy Service; abstraction is good when it comes at a low performance cost

- make sure your Application Specific Business Model doesn't creep into your "Canonical Data Model" - keeping them separate is one of the main reasons people use an ESB in the first place. Also, domain modeling is not something that should be left solely to the developer; make sure the Business too is involved.

- EJB proxies can be tempting - they map XML to Java for you - but they are expensive from a maintenance and performance perspective. If you need to interface with the Java world, check whether you can replace them with a Java Callout.

Fabio adds these lessons:

- Enable Monitoring for BSP and PS with interval of 10-15 minutes
- Enable SLA Alerts for errors avg and max response times. Aggregation interval+15 minutes
- For every BSP, set a timeout
- Create 1 or more Work Managers and associate each PS with one.
- enforce consistent Exception (fault) handling and logging across all services
- Auditing: trace the service, operation, response time, response code, originating username/system, IP address... log this information with log4j (rotation time 1 minute) and have an independent process parse the files and store the information asynchronously in a DB. This proves to be a more efficient way of auditing than using the ReportingMDB; the ReportingMDB should be used only for infrequent events like SLA violations.
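The asynchronous auditing pipeline described above hinges on the log lines having a fixed, parseable shape. A minimal sketch of the parsing side, assuming a hypothetical pipe-delimited format (the field layout is illustrative, not OSB's):

```java
// Hypothetical sketch: the independent process that parses audit log
// lines of the form service|operation|responseTimeMs|responseCode|user|ip
// before storing them asynchronously in a DB.
public class AuditLineParser {
    // Splits one audit line into its 6 fields; rejects malformed lines
    // instead of silently loading bad data into the audit DB.
    public static String[] parse(String line) {
        String[] fields = line.split("\\|", -1);
        if (fields.length != 6) {
            throw new IllegalArgumentException("Malformed audit line: " + line);
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] rec = parse("OrderService|submit|42|200|jdoe|10.0.0.1");
        System.out.println("service=" + rec[0] + " timeMs=" + rec[2]);
    }
}
```

Keeping the parser strict makes malformed lines visible early, which matters when a separate process, rather than the service itself, is the one reading the files.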

Another book for the Winter: Effective Java, Second Edition by Joshua Bloch

Also here

Judging from the reviews I read about this book, I should only be ashamed for not having read it before.

Command Line Parsing with Java

Look no further, here

is all you need!

I often wonder why such cool features are not built into the Java language itself.
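Until the language grows such a feature, a bare-bones --key=value parser takes only a few lines; libraries like JCommander or args4j do the same declaratively with annotations. A hypothetical sketch:

```java
import java.util.HashMap;
import java.util.Map;

// A hand-rolled sketch of --key=value option parsing (hypothetical
// option names). Real libraries add validation, help text and typing.
public class Args {
    public static Map<String, String> parse(String[] argv) {
        Map<String, String> opts = new HashMap<>();
        for (String a : argv) {
            // Only handle the --key=value form; ignore everything else.
            if (a.startsWith("--") && a.contains("=")) {
                int eq = a.indexOf('=');
                opts.put(a.substring(2, eq), a.substring(eq + 1));
            }
        }
        return opts;
    }

    public static void main(String[] argv) {
        Map<String, String> opts = parse(new String[] {"--host=localhost", "--port=8080"});
        System.out.println(opts.get("host") + ":" + opts.get("port"));
    }
}
```

The hand-rolled version is fine for throwaway tools; the moment you need defaults, validation or usage messages, a dedicated library pays for itself.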

Monday, September 27, 2010

DI, JSR-330, Google/Guice

On JSR 330: Dependency Injection for Java

interesting introduction to Guice:

Let me copy the DI motto: "push dependencies from the core to the edges"
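The motto can be illustrated without any framework: the core class declares its dependencies in its constructor, and the edge of the application (a plain main method here; a Guice Module or Spring context in real life) decides which implementation to supply. A minimal sketch, with hypothetical names:

```java
// "Push dependencies from the core to the edges" in plain Java:
// the core (OrderService) never looks up or constructs its collaborator.
interface MessageSender {
    void send(String text);
}

class ConsoleSender implements MessageSender {
    public void send(String text) { System.out.println("sending: " + text); }
}

class OrderService {
    private final MessageSender sender; // dependency pushed in, not looked up

    OrderService(MessageSender sender) { this.sender = sender; }

    String confirm(String orderId) {
        String message = "order " + orderId + " confirmed";
        sender.send(message);
        return message;
    }
}

public class Edge {
    public static void main(String[] args) {
        // Wiring happens here, at the edge; with Guice this decision
        // would live in a Module, with Spring in the context config.
        OrderService service = new OrderService(new ConsoleSender());
        service.confirm("A-1");
    }
}
```

Because the core only sees the interface, tests can push in a fake sender with zero framework involvement; the DI container merely automates this wiring at scale.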

Nice and lightweight book this one:

Google Guice: Agile Lightweight Dependency Injection Framework

I like Guice: it's very specific and focused. Unfortunately, it doesn't seem to be undergoing a lot of development; version 2.0 was released in May 2009....
They claim it's both more compact than Spring 2.0 and definitely faster to execute.
This very accurate test  seems to prove it.
Yet no test covers Spring 3.0.

I would say that, with all my sympathy for Guice, I would rather go for Spring 3.0, especially now that most of the bloody XML has gone... if only as a resume-builder :o).

Actually, Luciano has used Guice and he says "it's better than Spring if you need only IoC; I would highly recommend Guice".

Here is a detailed dissertation on the topic. It definitely favors Guice over Spring. Its conclusion is "Spring wires classes together by bean, whereas Guice wires classes together by type", implying that Guice is more typesafe, more reliable, and less verbose.

Saturday, September 25, 2010

Getting Started With Oracle SOA Suite 11g R1 – A Hands-On Tutorial

This book is packed with information, conceptually very dense... it will take a long time to digest, but it brings amazing value!

Here is the sample code.

Inside, you learn about:

Service Component Architecture - SCA (delivering an integration solution made by many pieces glued together)
Metadata Store - MDS (holder of all SOA Suite artifacts)
Event Delivery Network - EDN (abstraction on MOM)
Service Infrastructure - SI (provides runtime environment, e.g. made by BPEL, Mediator, Business Rules, and Human Workflow engines)
Enterprise Manager - EM (management and monitoring application)
Oracle Web Services Manager - OWSM (maintains security policies)
Exception management



I started my "career" in Distributed Computing back in 1997, as a Forte and Visibroker specialist.

In Forte, it was a matter of seconds to create a Distributed Object, deploy it to a cluster of servers, and create a client to invoke it.

When it came to integrate Forte with Java, I fell immediately in love with IDL and ORBs.
Visibroker was loaded with features, and it came with an idl2java tool to create Java stubs and invoke a CORBA service.
At that time I was in Bangalore, and I used to spend all my weekends at the office experimenting with ORBs. A thrilling experience.

13 years later,

IDL is gone and we have WSDLs and XSDs. IDL was simple, WSDL and XSDs are intricate.

IIOP is gone and we have SOAP. IIOP was fast, SOAP is as slow as molasses.

Performance is pathetic. Garbage collections through the roof.

Standards are multiple. Each vendor implements its own flavour.

Something went terribly wrong in the IT industry in the last 15 years.

Read about performance here

I am 100% with the author of this post. I have worked with both IDL and SOAP, and the former was orders of magnitude better specified and easier to use. The only good point of SOAP is that you can manually carve your payload with Notepad, but is this a good reason to build your Enterprise Solution on top of it? On the contrary, it seems to me proof that most developers will simply tinker and carve hand-crafted messages which may or may not work depending on the SOAP implementation.

Luciano says that SOAP/WSDL is for the weak of mind; really cool people use REST + JSON. I will investigate the technology.

Here is another rant against SOAP.

Every time I hear "SOAP and XML are cool" I kill a kitten.

IaaS, PaaS, and SaaS... beyond the hype and the bullshot

An excellent presentation here:

I am also reading this book by David Linthicum:

available also on Safari

OWSM , cool presentation

Those who have struggled with WS-Security in the past can appreciate how simply and elegantly OWSM manages WS security.

My only wish is that Oracle would use the voice of some sensual girl to advertise their products, that would make them a bit more sexy :o)

UML, Einstein's Razor and Ockham's Razor

I love simplicity, and one of my favorite quotes is Einstein's
"It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." (better know as "Everything should be made as simple as possible, but no simpler").

This is known as Einstein's razor, often used as a warning against oversimplification.

Here is the story of Ockham's Razor. Ockham too was an advocate of simplicity (entia non sunt multiplicanda praeter necessitatem) - and this almost cost him his head, because the Pope was thriving on obscure formulas and Machiavellian politics.

And here is the book I am reading about UML: the 20% of UML that gets 80% of the modeling done.

Once upon a time UML was a prerequisite for getting any job; these days, more often than not, people just code away without any modeling... a sign of our times, which increasingly treat software as a throwaway thing without intrinsic value, paying no attention to quality.

Very useful link:
UML stencils for VISIO :

Here is an open source tool to do UML.

(PS apparently both spellings are correct: modelling and modeling)

Friday, September 24, 2010

Java SOA Cookbook

I highly recommend this book, full of practical examples, showing that the author, Eben Hewitt, is really a hands-on guy...

Alistair Cockburn on Agile: it's the people that matter

Alistair's company is:

These quotes are distilled wisdom! An Agile coach warning against turning Agile into a religion, and putting the focus on the Manager responsibility to knit people together, and interpersonal chemistry...

"small, colocated development team: frequent delivery, reflective improvement, osmotic communication"

"pay attention to people in the design process, and to fine-grained, end-to-end feedback (typically accomplished through shorter iterations)"

"The utility of the term "agile" is not yet done, because people still too quickly drift into process religions or mathematical dreamland and forget that our industry is built on people getting together, inventing and deciding as they go. I repeat yet and yet again: people, human people, with all their weird characteristics. Math and process are relevant, but soooo much easier than dealing with the actual people in front of us."

"All the theory in the world does not guarantee that people will work well together. Individual people have strange effects on each other – either increasing trust or trigger unexpected angry outbursts."

"We still make decisions unconsciously in our teams and organizations, we don't discuss how we come to decisions, and we don't discuss alternative ways to make decisions."

"It is all too easy for any manager to lean on a process to improve output, rather than to develop his or her personal skill in knitting together the people across departments."

Thursday, September 23, 2010

DSL, a book for the coming winter

On Amazon

and on Safari

I keep telling myself that if you master the implementation of DSLs you can quickly become the king of any IT project. There is simply too much mismatch between the domain description and the actual coding of the solution. Too much boilerplate and fluff makes it hard for the Business to review the code - and hard to code the solution, too.

Here is an interesting interview with M. Fowler on DSLs:

from which we learn Sturgeon's Law:
"Nothing is always absolutely so"
"Ninety percent of everything is crud."

Wednesday, September 22, 2010

WebLogic Portal, useful links

Some concepts here:
more advanced stuff:


Portal development guide:

On Beehive, Pollinate, Struts:
(Beehive has not been updated since 2006 :o( )

and the "attic" page, suggesting what to use instead of Beehive (essentially Spring Web Flow, Spring Beans and Axis2):

Some interesting musing here:

On SSO and IdentityAsserter:

The structure of a .portal file:


In other words:
a portal contains a desktop, which has a header, a footer, and one or more books; each book can have multiple pages

Here is an excellent explanation of these elements.

WEB-INF/netuix-config.xml contains important settings... to be continued ...

How do I feel while editing a Portal? A voice inside me screams "these operations should be scripted!!! Why am I clicking away on a WebApp to customize a WebApp?". To be investigated: how do you provide a DSL to create and customize a Portal?

Monday, September 20, 2010

The Agile Antimanifesto (or the antiagile manifesto)

Many years ago, I had the misfortune of working on a project where people who spent all day on online Sudoku and Facebook had the guts to call themselves "Agile". It was one of the most painful experiences of my life: seeing systems being abused, and chaos and approximation ruling unconstrained.

I was eventually kicked off that project, because I was challenging the general misconception of Agility, and the firing manager told me "it's not enough to be right" (maybe he meant: you must also be a sneaky politician and a sycophant). That taught me a lesson: if you want to keep your job, always smile and say yes, even if the Titanic is heading full speed towards an iceberg.

I am not an anti-Agilist; I simply believe that "Agility" requires extreme competence, coordination and organization, rigorous control and discipline. Too often, Agile is an excuse to be slack.

When left in the hands of the wrong people, Agile becomes a Religion, and Religion has never made a project successful AFAIK - unless your target is the transfer of wealth from the Poor to the Elites, which is mostly what religions are for.

Hence here is my Agile Antimanifesto.

We are Agile, therefore we don't send emails so at a later stage you cannot use that email against me

We are Agile, therefore if you resist a last-minute change being put into production without any test, it means you are lazy and un-agile.

We are Agile, therefore we can change architecture every week and make you work every day until midnight re-implementing the same functionality in 10 different ways, and then blame you because you delivered late.

We are Agile, therefore we delegate all the analysis to the developers without even providing any guidelines. Then we blame them because they didn't implement according to the guidelines, which exist only in our heads - or don't exist at all; we are too agile to elaborate guidelines.

We are Agile, therefore we go to all meetings and you must read our minds if you want to know what was decided; we will never inform you - we are Agile, which means you must be Telepathic.

We are Agile, therefore we don't need to work hard.... as we are Agile, we do all the bla bla and you do all the work. We are Agile and Smart, and Smart people work Smart, not Hard. That is, we don't need to work at all, all we need to do is to say "Agile" once in a while and delegate everything.

We are Agile, therefore we don't need to do any analysis; we will improvise everything along the way, and an Agilist God will inspire us and miraculously turn Chaos into Order at the end. We don't design anything and don't publish blueprints; one day they could be used against us, and that is not Agile.

We are Agile, and if we put the word Agile in every sentence we don't need to give any rational answer to your objections. Actually we don't even need to listen to you, because if you object you are not Agile.

10 signs that your project is in trouble... and 10 signs it's in good health

(I am on vacation, so I am publishing posts which are more philosophical than technical)

Your project is presumably on a path to failure when:

1) in meetings, team leads keep saying "we are making progress" without giving much detail, and don't mention difficulties and problems to be solved

2) the words "agile", "soa", "abstraction", "cloud" - or name your favorite methodology or buzzword - are used more often than "analysis", "design", "code review", "automation", "test"

3) everybody around is trying to impose his view about a technology/product without ever having used it

4) the architects don't have a clue about what the developers are doing; anyway, they don't have time for irrelevant details

5) the developers are afraid to talk to the architects because last time they did it they had to scrap 2 weeks of work and do it all over again

6) you come to know that your code has been (or not been, or the wrong version has been) delivered only because a tester insults you in the corridor, and not because your automated tests have been run and failed

7) when you say "sequence diagram" (or state, or class, or deployment, or component...), people look at you with a puzzled or horrified expression

8) the number of emails in your inbox, apart from the invitation to social events and meetings, equals the number of technical emails you have sent

9) it takes a day to restart a server or DB because nobody knows who is in charge

10) you must buy hardware with your own money in order to be able to work

11) after a major delivery failure, all your manager has to say is "the sun keeps shining" and "it's ok to fail" rather than "let's analyze what went wrong"

12) for months in a row, nobody can troubleshoot a system until a specific person arrives at office

13) despite a huge delay in the project, at 3 pm on Friday the office is empty

14) somebody is seriously trying to make management believe that Methodology X or Product Y is the solution to all problems; and management listens to him

15) the main objective of a meeting is to schedule another meeting

16) components are defined as "delivered" while running on a developer box

17) environments point to each other and nobody seems to care as long as things seem to work

18) tests are done manually, with a single happy path

Your project is presumably healthy if:

1) architects are seen often discussing with developers

2) business analysts and auditors walking the development floor are treated with great respect and can freely ask questions of the developers

3) developers are equipped with highly performing machines

4) new environments can be created in minutes, with automation scripts

5) configuration is strictly under control, in a sort of Configuration Management DB accessible to everyone, and changes are strictly controlled

6) the walls are not covered with slogans but with diagrams

7) your manager asks you regularly to show him the progress of your work

8) if you are found bullshitting or finger-pointing other developers, people frown at you; people compliment you if you find a solution, and are eagerly interested to know about it

9) everything is automated; no manual operation - apart from the click of a button - is required to make a delivery from your CMS, running all the conceivable tests

10) if any part of any system goes down, some sort of alert is sent within minutes

11) emails are sent in due time for any event of public interest, such as a new release or a code freeze or a new environment or a configuration change

12) "I'll do it" is the motto

13) People are not afraid to say "I don't know", and other people are happy to teach them

14) A meeting is preceded with an email containing a list of topics to be discussed; it ends only when all the topics have been addressed and all decisions have been made; someone takes minutes and distributes them to all concerned people, and the material produced in the meeting is archived and published

15) the delivery and testing process is defined and in place, and developers stick to a rigorous TDD approach

(ok the number 10 has not been respected... pardon me for being Agile)

Domain-specific multimodeling, Language Cacophony and the DSL Unicorns

(Image from the fabulous Lady and the Unicorn, in the Museum of the Middle  Ages of Cluny, in Paris)

The Unicorn is a mythological creature inspiring a sense of peace, courage and harmony...

From the Wikipedia:
"The unicorn is the only fabulous beast that does not seem to have been conceived out of human fears. In even the earliest references he is fierce yet good, selfless yet solitary, but always mysteriously beautiful. He could be captured only by unfair means, and his single horn was said to neutralize poison."" 

The idea of being able to express all our business needs in a Powerful, Intuitive, Unique Domain Specific Language - avoiding the Language Cacophony, the multi-language neurosis and the impedance mismatch - is something which seems to attract the most powerful minds of this century. Like modern Jasons and Argonauts, they set out in search of the Golden Fleece.

Here are some posts on the topic:

More in general on Model Driven Development:
some real life experience here

Are we only at the beginning of a revolution which will see generalist languages like Java basically disappear as main programming languages, left only to support, at runtime, more specific, business-driven languages? I hope so.

Sunday, September 19, 2010

Keep calm and carry on

Sometimes, some projects go through tough phases of harsh finger-pointing and criticism.

The mediocre are the best at this game; you can recognize them by how often they point their finger at someone.

The best people seek solutions, the worst seek scapegoats.

As the Brits used to say during WWII, "Keep calm and carry on".

Getting emotional about pricks will lead you nowhere.
Stay excited about building solutions, and shrug off the rest.
Just focus on your duties and ignore the negativity.

(for a funny hack of the famous poster, go here )

Friday, September 17, 2010

Communication and Icebreakers

I once read this story, I think in "The Mythical Man-Month" (an excellent book that I highly recommend):

A number of managers in a company got together to decide who should be assigned to a new, strategic IT project.
All the managers agreed on the name of a lady. Nobody could really say why, since she was not particularly skilled in any specific area. Yet people were vaguely aware that all the projects on which she had worked had been highly successful.

On further investigation, it turned out that the real skill of that lady was TALKING TO PEOPLE, and ENABLING COMMUNICATION AMONGST PEOPLE. She was a sort of catalyst, a breaker of communication barriers.

I have seen this in a number of projects: the power of ASKING QUESTIONS. It creates awareness of problems and triggers the birth of new ideas.

I always say: 75% of the problems in an IT project originate in HUMAN-TO-HUMAN communication and coordination, not in intrinsic technical problems.

Groovy for DSL

I am fed up with XML being used for the wrong things (thank you Ant, Maven and the rest),
so I want to be able to design my own DSL.
I have the strong feeling that if you manage to design a good DSL you gain a tool of immense power.

Some 25 years ago I wrote a C compiler in Lex and Yacc, to generate embedded firmware for Z80 controller cards, and it was great fun.
These days we have more powerful tools, like Groovy.

I am reading this book and I find it fascinating:

This link is also very dense with information.

Some concepts:
- Java is cool, but verbose.
- Groovy DSLs are used internally to implement parts of the Groovy framework itself.
- An XML document is a primitive form of DSL.
- A DSL is a programming tool designed for the domain expert.
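To make the "DSL for the domain expert" idea concrete, here is a minimal internal DSL sketched as a fluent builder in Java (Groovy would make it terser; all names here are invented for illustration):

```java
// A tiny internal DSL describing an order, readable by a domain expert.
public class Order {
    private final StringBuilder desc = new StringBuilder("order");

    public static Order order() { return new Order(); }

    public Order of(int qty, String item) {
        desc.append(" of ").append(qty).append(" ").append(item);
        return this;
    }

    public Order shipTo(String city) {
        desc.append(" shipped to ").append(city);
        return this;
    }

    @Override public String toString() { return desc.toString(); }

    public static void main(String[] args) {
        // Reads almost like the business sentence it encodes.
        System.out.println(order().of(10, "widgets").shipTo("London"));
    }
}
```

The whole trick of an internal DSL is that the method names, chained together, read like a sentence in the domain language.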

..... to be continued...

This is a practical example on how to develop a Groovy DSL in minutes:

and here a good presentation on Groovy DSLs

Thursday, September 16, 2010

Log4j and JMX

A correct Logging / Error Reporting mechanism is one of the most vital parts of an IT solution, yet it's one of the most overlooked, and is often left to the individual developer's taste.

This normally leads to frightening GREP nightmares in environments where the application is deployed in a cluster... usually several grepping monkeys (I have often been one of them :o( ) are assigned to the task of grooming the logs in search of error conditions.

The adoption of monitoring tools - often grepping logs based on a set of regexps - normally happens very late in the project, and it entails a painful series of flashbacks and retrospectives to remember all the error conditions we have seen in the past, manually troubleshot and not captured anywhere.
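Such a monitoring tool often boils down to matching a set of regular expressions against the log stream. A minimal sketch in Java (the pattern and the log lines are made up for illustration):

```java
import java.util.List;
import java.util.regex.Pattern;

public class LogScanner {
    public static void main(String[] args) {
        // Hypothetical log excerpt; a real tool would stream the log file.
        List<String> lines = List.of(
            "2010-09-16 10:00:01 INFO  startup complete",
            "2010-09-16 10:00:02 ERROR could not connect to queue manager",
            "2010-09-16 10:00:03 WARN  retrying in 5s");
        // One of the regexps we would otherwise grep for by hand.
        Pattern alert = Pattern.compile("\\b(ERROR|FATAL)\\b");
        lines.stream()
             .filter(l -> alert.matcher(l).find())
             .forEach(l -> System.out.println("ALERT: " + l));
    }
}
```

The point is that the regexp set becomes an explicit, versioned catalog of known error conditions, instead of folklore in the heads of the grepping monkeys.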

In most projects, logging is configured with a single log4j.properties or log4j.xml (I prefer the xml)... which is edited by hand and often requires a restart of the appserver... unless you use the PropertyConfigurator file-watching mechanism, which is not something I particularly like. For instance, it becomes impossible to have any automated tests involving logs, because you don't have a way to configure them programmatically. But since logs are Cinderellas, nobody really wants to test them.

Anyway, I like the idea of using JMX in conjunction with log4j; I have seen it used in the past, and it's definitely something to explore in the next project.

In the past I have also used a JSP console to change log levels at runtime, something like this:
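The original JSP snippet is not reproduced here, but the underlying idea of flipping log levels at runtime can be illustrated with the JDK's own logging and its platform MXBean (a stdlib sketch only; log4j 1.x offers the analogous org.apache.log4j.jmx.HierarchyDynamicMBean):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.PlatformLoggingMXBean;
import java.util.logging.Logger;

public class RuntimeLogLevel {
    // Hold a strong reference so the logger is not garbage collected.
    private static final Logger LOG = Logger.getLogger("com.acme.osb");

    public static void main(String[] args) {
        PlatformLoggingMXBean mx =
            ManagementFactory.getPlatformMXBean(PlatformLoggingMXBean.class);
        // Change the level at runtime - exactly what a JMX console
        // (JConsole, JRMC) would do remotely against the MBean.
        mx.setLoggerLevel("com.acme.osb", "FINE");
        System.out.println(mx.getLoggerLevel("com.acme.osb"));
    }
}
```

The same MBean is reachable from any remote JMX client, which removes both the appserver restart and the file-watching thread.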

Tuesday, September 14, 2010

Waterfall vs Agile

In my life I have seen only 2 projects failing miserably - it all depends on the definition of "failure" anyway.

One was a PURE (pseudo)WATERFALL approach: developers were told to read the requirements, then do ALL the High Level Design with Rational Rose, then ALL the Low Level Design, then ALL the coding, and only then, 6 months later, deliver to the client.
The result was:
- two months spent with people scratching their heads, wondering what on earth should go into a HLD or a LLD, and delivering some approximate sequence diagrams just to please the architect. Then they spent 3 months sort-of-coding against continuously changing standards, trying to interpret the requirements.
They delivered, and the feedback from the client was "but this is not what we actually needed!"

The other was a PURE (pseudo)AGILE approach: just hack things, cannibalizing existing legacy systems without any analysis, design or clear attribution of responsibilities, leaving the developer entirely alone in deciding how to implement things, changing architecture, schema and strategies every week. No design, no analysis, just headless coding. The result was the typical big ball of mud, useless and unmaintainable, because it lacked coherence and solid design. People dedicated a disproportionate amount of energy to discussing methodology issues, and almost none to addressing functionality issues.

What I learned from these 2 extreme experiences is: talking about methodology alone will not get you anywhere, and it is no substitute for good practices, hard work, commitment to quality, automation and good communication (verbal and written). At the end of the game, it's the quality of the people you have on board, and how effectively they cooperate, that matters.

Pure methodologists are often failed professionals. As the proverb says: those who can, do; those who can't, teach. Or criticize those who do. The more often your people repeat the word "Agile", the more CPU cycles they are stealing from the actual resolution of your problems. You can't buzzword your way to success.

Too much stress on methodology means, basically, that you believe your team is made up of kindergarten kids and that you must micromanage them or they will never get anywhere. Macromanagement is good, micromanagement is bad. When I am asked to break a task down into its atomic components and produce an estimate for each one with 1-hour resolution, something is seriously wrong.

Success is a hard target to reach; it requires rigorous analytical minds, experienced professionals relentlessly banging their heads against the wall in search of the best solution and the best design, surfing through zillions of forums and documents looking for a path to a solution, putting together solid POCs and working closely with the business.
It requires passionate, dedicated people with a solid scientific education, good moral values, and integrity as a mission.

There is no silver bullet, no magic product, no wonder methodology.

Monitoring MQ queue running on Linux

The MQ user interface is VERY mainframe... you must get used to a very stern and dry presentation.

Log into the Unix box running the MQ server, then start the MQSC interface:


runmqsc myqueuemanager


display qstatus(myqueue)

4 : DISPLAY QSTATUS(myqueue)

AMQ8450: Display queue status details.

AMQ8426: Valid MQSC commands are:

Message Queue Interface (MQI)
Queue Manager (QM), which hosts Queues and Channels

Queues can be:
Local queue (held in the QM)
Transmission queue (basically a Bridge)
Remote queue definition (basically a Foreign Queue)
Alias queue (just a nickname for an existing queue)
Model queue (a template)
Cluster queue (like a Uniform Distributed Queue)
Shared queue (z/OS only: a queue shared by all queue managers in a queue-sharing group)
Group definition queue (z/OS only: holds shared object definitions for a queue-sharing group)

Monday, September 13, 2010

The Good, the Bad and the Ugly

This post is really inspiring:

Over and over in IT projects I have seen that 70% of the crew is made up of mediocre, 9-to-5 people, and only 15% are self-motivated, innovative, hard-working and focused individuals, who alone deliver 60% of the project and manage to organize the rest of the troops.

Of course it's impossible to have only Clint Eastwoods on board.
At least, try to identify ASAP the Lee Van Cleefs, those who thrive on lies, bullshitting, finger-pointing, backstabbing etc., and get rid of them.
And make sure your Clints are put in positions of control and responsibility, to organize the mass of Eli Wallachs.

Coherence: running 2 coh clusters in OSB

Step 1:

Back up the old library 3.5.2b463 existing at

and replace it with the "latest stable" 3.5.3/465p6

We should definitely avoid using 2 different versions of Coherence for the OSB and the Custom cluster, just to make our life simpler.

Now, OSB should start the internal Coherence cluster using the 3.5.3/465p6 version (check the MS logs)

Step 2:

put somewhere in the MS classpath the tangosol-coherence-override.xml for the Custom node.
This will not interfere with the built-in OSB Coherence, which is loading its config from
com.bea.alsb.coherence-impl.jar!/tangosol-coherence-override.xml and $DOMAIN_HOME/config/osb/coherence/osb-coherence-override.xml

Check the line
Created a new cluster "OSB-cluster" with Member(Id=1, Timestamp=bla, Address=

which corresponds to the $DOMAIN_HOME/config/osb/coherence/osb-coherence-override.xml settings

Step 3:

Add /Oracle/Middleware/coherence_3.5/lib/coherence.jar and /Oracle/Middleware/coherence_3.5/lib/je.jar to the MS classpath, so that we don't need to deploy these JARs in the OSB project making the JavaCallout to the Custom cluster.
REMEMBER: having these libraries in the CLASSPATH doesn't mean that the OSB internal cluster will load the Coherence classes in the System Classloader! The ALSB Coherence Cache Provider EAR specifies to load them in the EAR classloader!

If you are using this utility, add to MS classpath also /coh-sl/lib/coherence-serialization-support.jar which contains the PofSerialized classes (they are available here )

Step 4:
Code your Java callout class to return instances of org.apache.xmlbeans.XmlObject.
Library Path: /Oracle/Middleware/modules/com.bea.core.xml.xmlbeans_2.2.0.0.jar

in practice:

import org.apache.xmlbeans.XmlException;
import org.apache.xmlbeans.XmlObject;

public class CoherenceCallout {
    public static XmlObject myJavaCallout() throws XmlException {
        // ... invoke your code getting data from the Coherence cache ...
        // (the actual XML payload is omitted here)
        XmlObject returnObject = XmlObject.Factory.parse("");
        return returnObject;
    }
}

and make sure that in your OSB Java Callout you DON'T return by reference.

You will see that at the first invocation of Coherence you join the Coherence cluster; from the second invocation onwards this doesn't happen again.

Sunday, September 12, 2010

Mirror mirror on the wall, which appserver is the fairest of all?

My friend Mark is running a survey on which Application Server rates highest:

Please vote! I voted WebLogic! Because it's the only one I know :o) !
Not true: I am also WebSphere certified, and I gave JBoss and GlassFish a try, but I was not impressed. WebSphere has some cool stuff but it's no match for WebLogic IMHO.

Groovy: how to parse all property files in a directory

I also group the files by their suffix (dev2, dev3...),
and I print the property values for names like admin_host, admin_port...

package com.acme.propertyprocessor

class PropertyProcessor {

    static main(args) {
        def envs = ["dev2", "dev3", "pp", "prod", "tst1", "tst2"]
        envs.each { env ->
            println "\n${env}\n"
            new File("c:/properties").eachFile() { file ->
                if (file.getName().startsWith(env)) {
                    def prop = new Properties()
                    prop.load(new FileInputStream(file))
                    println prop.domain_prefix +
                            " http://" + prop.admin_host + ":" + prop.admin_port +
                            " http://" + prop.frontend_host + ":" + prop.frontend_port +
                            " " + prop.ora_host1 + ":" + prop.ora_port +
                            " " + prop.ora_user
                }
            }
        }
    }
}

I suspect it could be made even simpler, for instance using eachFileMatch

Groovy rocks! Too bad I still haven't found a really professional IDE for it.

How to create a XmlObject using XmlCursor

String GEONS = "";
String operationResponse = "getCountriesResponse";

XmlObject result = XmlObject.Factory.newInstance();
XmlCursor cursor = result.newCursor();
QName responseQName = new QName(GEONS, operationResponse);
cursor.beginElement(responseQName);
for (Location loc : locations) {
    cursor.beginElement(new QName(GEONS, "Location"));
    cursor.insertElementWithText(new QName(GEONS, "countryid"), loc.getCountryId());
    cursor.toParent(); // close the Location element before the next iteration
}
cursor.dispose();

this will produce:


If you want to extract the operation name from the $body (an XmlObject), this is how:

XmlCursor newCursor = body.newCursor();
System.out.println("operation=" + newCursor.getName().getLocalPart());

If you want to extract the values of the Operation parameters, you can use the XmlCursor APIs:

Sample log4j.xml

As usual, you never find things when you need them, so it's better to have a sample log4j.xml at hand.

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j=''>
 <appender name="CA" class="org.apache.log4j.ConsoleAppender">
  <layout class="org.apache.log4j.PatternLayout">
   <param name="ConversionPattern" value="%-4r [%t] %-5p %c %x - %m%n" />
 <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="c:/temp/log4jlog.log" />
  <param name="MaxFileSize" value="10MB" />
  <param name="MaxBackupIndex" value="3" />
  <param name="Threshold" value="DEBUG" />
  <layout class="org.apache.log4j.PatternLayout">
   <param name="ConversionPattern" value="%-4r [%t] %-5p %c %x - %m%n" />
 <logger name="com.acme.osb" additivity="false">
  <level value="DEBUG"/>
  <appender-ref ref="FILE"/>

  <level value="WARN" />
  <appender-ref ref="CA" />

Saturday, September 11, 2010

Coherence cache: read-through, write-through, refresh-ahead and write-behind

the presentation is very complete but slightly complicated...


in simple words:

read-through: if hit -> return; if miss -> fetch from DS, put in cache and return

refresh-ahead: tries to improve read performance by reloading from the DS the frequently requested data which is about to expire (better performance in reads)

write-through: upon put, store in cache, store in datasource and only then return (bad performance in writes)

write-behind: upon put, store in cache and return immediately; the DS is updated later
(better performance in updates)
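The read-through idea can be sketched in plain Java (no Coherence APIs here, just a toy cache with a map standing in for the datasource):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReadThroughCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> datasource;
    private int datasourceHits = 0;

    ReadThroughCache(Map<String, String> datasource) { this.datasource = datasource; }

    // read-through: on a miss, load from the datasource and remember the value
    String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            datasourceHits++;
            return datasource.get(k);
        });
    }

    public static void main(String[] args) {
        Map<String, String> ds = new HashMap<>();
        ds.put("GB", "United Kingdom");
        ReadThroughCache c = new ReadThroughCache(ds);
        c.get("GB");  // miss: goes to the datasource
        c.get("GB");  // hit: served from the cache
        System.out.println("datasource hits: " + c.datasourceHits);
    }
}
```

Write-behind is the same structure on the put path, with the datasource update deferred to a background flush instead of done inline.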

I have the sensation that cache synchronization problems can become really nasty when one tries to optimize things in a frequently updated cache... this is why you need Coherence experts; you can't improvise.

Monday, September 6, 2010

Maven: instructions on how to mavenize an EAR project

The project will consist of:
1) an EJB Module
2) an EJB Client
3) an EAR
4) some dependencies on external JARs.

the pom.xml of the EJB Module is:



Note the scope=provided: it keeps the JAR out of the deployment, since it's already available in the server classpath.
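A provided-scope dependency is declared like this (groupId/artifactId are placeholders, not the actual ones from this project):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-server-library</artifactId>
  <version>1.0</version>
  <!-- available at compile time, but NOT packaged in the artifact -->
  <scope>provided</scope>
</dependency>
```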

The EJB Client is a piece of cake:



while the real challenge is correctly configuring the EAR pom.xml:







The more I look into Maven, the more horrific the amount of effort required to set up even a simple thing like an EAR seems to me. It's like going back to EJB 1.0... we have EJB 3.1, but Maven still looks the same as 10 years ago.

Sunday, September 5, 2010

ALSB (OSB) Coherence Cache Provider

From OSB logs:

Creating WorkManager from "weblogic.wsee.mdb.DispatchPolicy" WorkManagerMBean for application "ALSB Coherence Cache Provider"

Oracle Coherence 3.5.3/465p2 (member=n/a): Loaded operational configuration from resource "zip:/Oracle/Middleware/coherence_3.5/lib/coherence.jar!/tangosol-coherence.xml

Oracle Coherence 3.5.3/465p2 (member=n/a): Loaded operational overrides from resource "zip:/Oracle/Middleware/user_projects/domains/OSBDomainBasic/servers/AdminServer/tmp/_WL_user/ALSB Coherence Cache Provider/sm027g/APP-INF/lib/com.bea.alsb.coherence-impl.jar!/tangosol-coherence-override.xml

Oracle Coherence 3.5.3/465p2 (member=n/a): Loaded operational overrides from resource "file:/Oracle/Middleware/user_projects/domains/OSBDomainBasic/config/osb/coherence/osb-coherence-override.xml

Oracle Coherence 3.5.3/465p2 (member=n/a): Created a new cluster "OSB-cluster" with Member(Id=1, Timestamp=2010-09-04 19:00:46.957, Address=, MachineId=32896, Location=site:localdomain,machine:localhost,process:9285, Role=OSB-node, Edition=Grid Edition, Mode=Production, CpuCount=2, SocketCount=2) UID=0x7F0000010000012ADDB16DED80801ED2

So I gather that OSB internally uses the "ALSB Coherence Cache Provider" Enterprise Application, deployed as /Oracle/Middleware/Oracle_OSB1/lib/coherence.ear and targeted to the cluster.

This ear contains:

jar xvf coherence.ear

created: META-INF/
inflated: META-INF/application.xml
created: APP-INF/
created: APP-INF/classes/
created: APP-INF/classes/com/
created: APP-INF/classes/com/bea/
created: APP-INF/classes/com/bea/alsb/
created: APP-INF/classes/com/bea/alsb/coherence/
created: APP-INF/classes/com/bea/alsb/coherence/init/
created: APP-INF/lib/
inflated: APP-INF/classes/com/bea/alsb/coherence/init/CoherenceAppListener$1.class
inflated: APP-INF/classes/com/bea/alsb/coherence/init/CoherenceAppListener.class
inflated: APP-INF/lib/com.bea.alsb.coherence-impl.jar
inflated: META-INF/weblogic-application.xml

The META-INF/weblogic-application.xml is revealing: basically it says "disregard any com.tangosol.* classes already loaded in the system classpath, and load them in my EAR classloader". THIS IS EXTREMELY IMPORTANT! For multiple clusters to work in the same JVM, the Coherence framework classes must be loaded in separate classloaders!



Here is the content of the "client-side" library jar, with the strategic tangosol-coherence-override.xml:

jar xvf com.bea.alsb.coherence-impl.jar
created: META-INF/
created: com/
created: com/bea/
created: com/bea/alsb/
created: com/bea/alsb/coherence/
created: com/bea/alsb/coherence/impl/
created: com/bea/alsb/coherence/impl/messages/
inflated: com/bea/alsb/coherence/impl/messages/CoherenceLogLogger$MessageLoggerInitializer.class
inflated: com/bea/alsb/coherence/impl/messages/CoherenceLogLogger.class
inflated: com/bea/alsb/coherence/impl/messages/
inflated: com/bea/alsb/coherence/impl/messages/
inflated: tangosol-coherence-override.xml
inflated: com/bea/alsb/coherence/impl/CacheValue.class
inflated: com/bea/alsb/coherence/impl/CoherenceCache$1.class
inflated: com/bea/alsb/coherence/impl/CoherenceCache$RemoveProcessor.class
inflated: com/bea/alsb/coherence/impl/CoherenceCache.class
inflated: com/bea/alsb/coherence/impl/CoherenceNotInstalledException.class
inflated: com/bea/alsb/coherence/impl/CoherenceProvider.class
inflated: com/bea/alsb/coherence/impl/CoherenceProviderFactory.class
inflated: com/bea/alsb/coherence/impl/ConfigurationException.class
inflated: com/bea/alsb/coherence/impl/OwnerValueExtractor.class

In this instance, tangosol-coherence-override.xml doesn't override the OOTB settings; it simply defines the cluster name:

        Oracle Coherence {version} (member={member}): {text}

And the initializing class looks like this (skeleton only; method bodies omitted):

package com.bea.alsb.coherence.init;

import com.bea.alsb.coherence.CacheProviderManager;
import com.bea.alsb.coherence.impl.*;
import com.bea.alsb.coherence.init.messages.CoherenceLogLogger;
import java.util.logging.*;
import weblogic.application.ApplicationLifecycleEvent;
import weblogic.application.ApplicationLifecycleListener;
import weblogic.logging.LoggingHelper;

public class CoherenceAppListener extends ApplicationLifecycleListener

    public CoherenceAppListener()

    public void preStart(ApplicationLifecycleEvent evt)
            com.bea.alsb.coherence.CacheProvider provider = CoherenceProviderFactory.newInstance("/coherence/osb-coherence-cache-config.xml");
        catch(CoherenceNotInstalledException ex)
        catch(ConfigurationException ex)

    public void postStop(ApplicationLifecycleEvent applicationLifecycleEvent)
        catch(Exception ex)

    private static void redirectCoherenceLogger()
        Handler h = new Handler() {

            public void close()
                throws SecurityException

            public void flush()

            public void publish(LogRecord record)
                if(serverLogger != null)

            Logger serverLogger;

                serverLogger = LoggingHelper.getServerLogger();
        Logger coherenceLogger = Logger.getLogger("Coherence");

    private static final String CONFIG_FILE = "/coherence/osb-coherence-cache-config.xml";
    private static final String COHERENCE_LOGGER_NAME = "Coherence";

This osb-coherence-cache-config.xml is actually in $DOMAIN_HOME/config/osb/coherence/osb-coherence-cache-config.xml, this file was shown above.

Incidentally, the content of $DOMAIN_HOME/config/osb/coherence/osb-coherence-override.xml contains the network parameters for this cluster; it is evident that each cluster must provide its configuration:


But, wait a second, what is this com.bea.alsb.coherence.CacheProviderManager ?

It turns out that CacheProviderManager lives in /Oracle/Middleware/Oracle_OSB1/lib/modules/com.bea.alsb.coherence-api.jar. It's a basic singleton holder class for a SINGLE instance of CacheProvider, which means that OSB supports only one cache provider. So if we want access to our "custom" cache provider we must find another path: either instantiate the tangosol classes in the same EAR (or WAR) where the client resides, or find a mechanism to register the classloader of the CacheProvider and retrieve it from somewhere else.
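A singleton holder of this kind (my guess at the shape of the class, not the actual decompiled code) looks roughly like this, which makes it clear why a second provider cannot be registered:

```java
// Hypothetical reconstruction of a single-instance provider holder,
// illustrating why only one cache provider can exist per classloader.
public class CacheProviderManager {
    private static Object provider;  // the single CacheProvider instance

    public static synchronized void register(Object p) {
        if (provider != null) {
            throw new IllegalStateException("a provider is already registered");
        }
        provider = p;
    }

    public static synchronized Object get() { return provider; }

    public static void main(String[] args) {
        register("osb-provider");
        try {
            register("custom-provider");  // the second registration fails
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Since the static field is per-classloader, loading the class again in a separate classloader is the only way to get a second "singleton" - which is exactly the isolation trick discussed in the next post.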

Coherence multiple clusters in same jvm

The BIG question is: how can I connect to multiple clusters (different multicast/unicast addresses) from within the same JVM?

The answer I get from the experts is: only by having separate classloaders.

Which is not always easy to do, especially when you don't have control over the way your client is deployed and packaged. Like in OSB.

Hot google searches: IsolatingClassLoader, BlockingClassLoader

See this on running multiple Coherence clusters in the same JVM:

"all you need to do is ensure that Coherence is loaded by two independent class loaders and not from a common parent class loader."

This one is also very revealing:

"They will of course be contained within separate web application class loaders."

More on the 2 caches here:

"You can define a path to the cache configuration descriptor in your operation configuration override file (tangosol-coherence=override.xml) or specify it in the system property "tangosol.coherence.cacheconfig"."

And here:

It is possible to do what you ask but it comes with a whole set of issues that need to be worked around. We run multiple Coherence instances in a single JVM (usually something like 2 storage nodes, an extend proxy and a client) for unit/acceptance tests. This is done with a special implementation of a classloader, so each Coherence instance runs isolated by its own classloader - in the same way that web applications all run isolated from each other in a web container like Tomcat. A number of Coherence settings can be overridden using system properties, and we have also implemented classloader-aware system properties, so each of our Coherence instances has its own system properties related to the classloader that it is contained in.

Here is a very pictorial view, going in depth into the concept of "Coherence Cluster Node Isolation":

Saturday, September 4, 2010

JMX and Log4J

Tired of fighting with log4j configuration files and restarting my application server,
and given that I deeply dislike the idea of having a thread dedicated to monitoring whether a file has changed or not, I decided to investigate the intriguing connections between Log4J and JMX.

This seems a very authoritative article on the specific topic:

and this a very general comprehensive coverage of JMX....

I ran the Hello example; funnily, with JRMC it doesn't work, while with JConsole it does. Interestingly, they recommend running JConsole remotely, not locally.

Here is the Eclipse project.

Deep dependency in OSB Java Client jars

It looks like, when you deploy an OSB project, the WSDL is generated, and this entails parsing annotations in all the Java classes reachable from the public interface of your EJB Business Service. So, unless you deploy all the dependencies, you will get something like this:

[JAM] Error: unexpected exception thrown:
java.lang.ClassNotFoundException: info.politext.coherence.pof.annotation.PofSerialized
at java.lang.ClassLoader.loadClass(
at sun.misc.Launcher$AppClassLoader.loadClass(
at java.lang.ClassLoader.loadClass(
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(
at com.bea.util.annogen.view.internal.JavadocAnnogenTigerDelegateImpl_150.getClassFor(
at com.bea.util.annogen.view.internal.JavadocAnnogenTigerDelegateImpl_150.extractAnnotations(
at com.bea.util.annogen.view.internal.javadoc.ProgramElementJavadocIAE.extractIndigenousAnnotations(
at com.bea.util.annogen.view.internal.AnnoViewerBase.getIndigenousAnnotations(
at com.bea.util.annogen.view.internal.AnnoViewerBase.getAnnotations(
at com.bea.util.annogen.view.internal.AnnoViewerBase.getAnnotation(
at com.bea.util.annogen.view.internal.jam.JamAnnoViewerImpl.getAnnotation(
at com.bea.staxb.buildtime.Java2Schema.ensureDocumentElementsExistFor(
at com.bea.staxb.buildtime.Java2Schema.internalBind(
at com.bea.staxb.buildtime.BindingCompiler.bind(
at com.bea.staxb.buildtime.BindingCompiler.bindAsExplodedTylar(
at weblogic.wsee.bind.buildtime.internal.SoapAwareJava2Schema.bindAsExplodedTylar(
at weblogic.wsee.bind.buildtime.internal.TylarJ2SBindingsBuilderImpl.createBuildtimeBindings(
at sun.reflect.GeneratedMethodAccessor319.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at com.bea.wli.config.derivedcache.DerivedCache.deriveTheValue(
at com.bea.wli.config.derivedcache.DerivedCache.get(
at com.bea.wli.config.derivedcache.DerivedResourceManager.getDerivedValueInfo(
at com.bea.wli.config.derivedcache.DerivedResourceManager.get(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at $Proxy125.getBindingInfo(Unknown Source)
at com.bea.wli.config.impl.ResourceListenerNotifier.sendLoadNotificationsInContext(
at com.bea.wli.config.impl.ResourceListenerNotifier.access$400(
at com.bea.wli.config.impl.ResourceListenerNotifier$1.execute(
at com.bea.wli.config.transaction.TransactionalTask._doExecute(
at com.bea.wli.config.transaction.TransactionalTask.doExecute(
at com.bea.wli.config.impl.ResourceListenerNotifier.sendLoadNotifications(
at com.bea.wli.config.impl.ResourceListenerNotifier.sendLoadNotifications(
at com.bea.wli.config.ConfigService.startListeners(
at weblogic.application.internal.flow.BaseLifecycleFlow$
at weblogic.application.internal.flow.BaseLifecycleFlow$LifecycleListenerAction.invoke(
at weblogic.application.internal.flow.BaseLifecycleFlow.postStart(
at weblogic.application.internal.flow.TailLifecycleFlow.activate(
at weblogic.application.internal.BaseDeployment$
at weblogic.application.utils.StateMachineDriver.nextState(
at weblogic.application.internal.BaseDeployment.activate(
at weblogic.application.internal.EarDeployment.activate(
at weblogic.application.internal.DeploymentStateChecker.activate(
at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(
at weblogic.deploy.internal.targetserver.BasicDeployment.activate(
at weblogic.deploy.internal.targetserver.BasicDeployment.activateFromServerLifecycle(

Sanity check on WebLogic classpath

A constant nightmare for WebLogic administrators is making sure the WebLogic classpath doesn't contain crap.

This (where 3757 is the PID) prints the classpath:

jrcmd 3757 print_properties | grep java.class.path


Here is a script to check the integrity of the classpath:

pid=3757   # the WebLogic server PID
entries=`jrcmd ${pid} print_properties | grep java.class.path | awk -F'=' '{print $2}' | tr ':' '\n'`
for entry in ${entries}; do
  if [ -f "${entry}" ]; then
    echo "FILE ${entry} exists"
  elif [ -d "${entry}" ]; then
    echo "DIRECTORY ${entry} exists"
  else
    echo "NONEXISTING ${entry}"
  fi
done

or, if running a Sun JVM, use
jinfo -sysprops ${pid} | grep java.class.path | awk -F'=' '{print $2}'

If you have problems executing jrcmd $pid print_properties - for instance some security restrictions, you can try this:

allcps=`ps -ef | grep weblogic | tr ' ' '\n' | grep java.class.path | awk -F'=' '{print $2}'`
# here we have many lines, each with a classpath
for line in ${allcps}; do
  entries=`echo ${line} | tr ':' '\n'`
  for entry in ${entries}; do
    if [ -f "${entry}" ]; then
      echo "FILE ${entry} exists"
    elif [ -d "${entry}" ]; then
      echo "DIRECTORY ${entry} exists"
    else
      echo "NONEXISTING ${entry}"
    fi
  done
done

Running the script piped into | sort | more gives interesting results.

WebLogic starting in Production Mode instead of Development Mode

I have tried setting to "false" every "production"-related flag in the various shell scripts (bin/setDomainEnv,, bin/).

The flags are: DOMAIN_PRODUCTION_MODE, PRODUCTION_MODE, -DProductionModeEnabled=false

All in vain, my WebLogic 11g starts in Production mode.

At last, I set the value of production-mode-enabled to false in config/config.xml
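For reference, the element in question sits under the domain root of config/config.xml (surrounding elements omitted):

```xml
<domain xmlns="http://xmlns.oracle.com/weblogic/domain">
  <!-- ... -->
  <production-mode-enabled>false</production-mode-enabled>
  <!-- ... -->
</domain>
```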

This seems to work. The "Production Mode" checkbox on the "Domain" configuration page in WebLogic console becomes unchecked.
The description says:

"Specifies whether all servers in this domain run in production mode. Once enabled, this can only be disabled in the admin server startup command line."

Well, anyway, editing the config.xml works.

Resizing the VMWare disk with Fedora

The utility provided with VMware Player will only allocate extra space for the VM disk; it will not change your filesystem allocation in Fedora.

This post gives great step-by-step instructions on how to do it.

If you install SoapUI 3.5.1 and run the LoadTest runner without a project file, you get this usage summary:

= SOAPUI_HOME = /maerskwas/tools/soapui-3.5.1
soapUI 3.5.1 LoadTest Runner
usage: loadtestrunner [options]
-v Sets password for soapui-settings.xml file
-t Sets the soapui-settings.xml file to use
-D Sets system property with name=value
-G Sets global property with name=value
-P Sets or overrides project property with name=value
-S Saves the project after running the tests
-c Sets the testcase
-d Sets the domain
-e Sets the endpoint
-f Sets the output folder to export to
-h Sets the host
-l Sets the loadtest
-m Overrides the LoadTest Limit
-n Overrides the LoadTest ThreadCount
-p Sets the password
-r Exports statistics and testlogs for each LoadTest run
-s Sets the testsuite
-u Sets the username
-w Sets the WSS password type, either 'Text' or 'Digest'
-x Sets project password for decryption if project is encrypted
Missing soapUI project file..

run as:

./ -r -f/home/tstuser/pierre/ /home/tstuser/pierre/TSTStressTest-soapui-project.xml

After some time the load test finishes and you can find statistics in


A more complete syntax with -s -l -c options:
C:\pierre\SmartBear\soapUI-4.5.0\bin\loadtestrunner.bat -s"TerminalSetupServicePortBinding TestSuite" -c"GetUsers TestCase" -l"LoadTest 1" C:\pierre\workspace\SSS_AutomatedTests\SOAPUIArtifacts\MachineSetupService-soapui-project.xml