Monday, May 21, 2012

Performance test of JDBCStore vs Local FileStore vs NFS FileStore

- created JDBCStore PerfTestJDBCStore1,2,3,4 in PROD, based on SOADataSource (RAC) and targeted to the osbpr1ms1,2,3,4 migratable targets
- created PerfTestJMSServer1,2,3,4 based on PerfTestJDBCStore1,2,3,4
- created PerfTestJMSModule, with subdeployment PerfTestSD targeted to PerfTestJMSServer1,2,3,4
- created PerfTestJMSDQ, a Uniform Distributed Queue, targeted to PerfTestJMSServer1,2,3,4 (a WLST sketch of this setup follows below)
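Since this setup has to be repeated four times, here is a rough WLST (online) sketch of how the first instance could be created; instances 2, 3, 4 follow the same pattern. The admin URL, credentials and the cluster name are placeholders, not the real PROD values, so treat it as a sketch rather than the exact script used.

# WLST online sketch: JDBC store, JMS server, module, subdeployment and UDQ,
# instance 1 only (repeat for 2, 3, 4). URL/credentials/cluster name are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost.acme.com:7001')
edit()
startEdit()

migratable = getMBean('/MigratableTargets/osbpr1ms1 (migratable)')

# persistent JDBC store backed by SOADataSource (RAC)
cd('/')
store = cmo.createJDBCStore('PerfTestJDBCStore1')
store.setDataSource(getMBean('/JDBCSystemResources/SOADataSource'))
store.setPrefixName('PT1')        # distinct table prefix per store
store.addTarget(migratable)

# JMS server using that store
cd('/')
jmsServer = cmo.createJMSServer('PerfTestJMSServer1')
jmsServer.setPersistentStore(store)
jmsServer.addTarget(migratable)

# JMS module, subdeployment and the Uniform Distributed Queue
cd('/')
module = cmo.createJMSSystemResource('PerfTestJMSModule')
module.addTarget(getMBean('/Clusters/osbpr1cluster'))      # placeholder cluster name
sd = module.createSubDeployment('PerfTestSD')
sd.addTarget(jmsServer)                                     # add all four JMS servers in the real setup

cd('/JMSSystemResources/PerfTestJMSModule/JMSResource/PerfTestJMSModule')
udq = cmo.createUniformDistributedQueue('PerfTestJMSDQ')
udq.setJNDIName('jms.PerfTestJMSDQ')
udq.setSubDeploymentName('PerfTestSD')

save()
activate()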

Created a project JMSPerfTest with a Proxy Service (PS) PerfTestJMSPS reading from the queue (jms://pippo2-osbpr1ms1.acme.com:8001,pippo2-osbpr1ms2.acme.com:8001,pippo2-osbpr1ms3.acme.com:8001,pippo2-osbpr1ms4.acme.com:8001/weblogic.jms.XAConnectionFactory/jms.PerfTestJMSDQ) and routing to a Business Service (BS) PerfTestJMSBS which writes to the same queue, through the same endpoint URI:
jms://pippo2-osbpr1ms1.acme.com:8001,pippo2-osbpr1ms2.acme.com:8001,pippo2-osbpr1ms3.acme.com:8001,pippo2-osbpr1ms4.acme.com:8001/weblogic.jms.XAConnectionFactory/jms.PerfTestJMSDQ



In practice the system reads from and writes to the same queue, like a cat spinning in circles trying to catch its own tail (dogs do this more than cats).

I trigger the system by dropping a JMS message into each separate member of the Distributed Queue. The message body is the empty String “”, and it is Persistent (see the sketch below).
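For reference, one way to drop those messages programmatically is a small Jython client run under WLST (which puts weblogic.jar on the classpath). Note that sending to the UDQ JNDI name through each server's URL lets WebLogic pick the member, so this only approximates "one message per member"; the console can target each member explicitly. Hostnames and JNDI names are the ones from the setup above.

# Jython sketch (run under WLST): post one persistent, empty TextMessage
# through each managed server's URL to jms.PerfTestJMSDQ.
from java.util import Hashtable
from javax.naming import Context, InitialContext
from javax.jms import Session, DeliveryMode

def send_empty_message(provider_url):
    env = Hashtable()
    env.put(Context.INITIAL_CONTEXT_FACTORY, 'weblogic.jndi.WLInitialContextFactory')
    env.put(Context.PROVIDER_URL, provider_url)
    ctx = InitialContext(env)
    cf = ctx.lookup('weblogic.jms.XAConnectionFactory')
    queue = ctx.lookup('jms.PerfTestJMSDQ')
    conn = cf.createConnection()
    try:
        session = conn.createSession(False, Session.AUTO_ACKNOWLEDGE)
        producer = session.createProducer(queue)
        producer.setDeliveryMode(DeliveryMode.PERSISTENT)
        producer.send(session.createTextMessage(''))
    finally:
        conn.close()
        ctx.close()

for host in ['pippo2-osbpr1ms1.acme.com', 'pippo2-osbpr1ms2.acme.com',
             'pippo2-osbpr1ms3.acme.com', 'pippo2-osbpr1ms4.acme.com']:
    send_empty_message('t3://%s:8001' % host)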

I enable monitoring on the PS. With the JDBCStore I observe about 74K messages per minute.
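The 74K figure comes from the OSB monitoring screen; as a rough cross-check, the same traffic can also be observed on the JMS side with a WLST snippet like the one below (credentials and URL are placeholders; run it once per managed server).

# WLST sketch: read message counters for PerfTestJMSServer1's destinations on osbpr1ms1.
connect('weblogic', 'welcome1', 't3://pippo2-osbpr1ms1.acme.com:8001')
serverRuntime()
cd('JMSRuntime/osbpr1ms1.jms/JMSServers/PerfTestJMSServer1')
for dest in cmo.getDestinations():
    print dest.getName(), \
          'received:', dest.getMessagesReceivedCount(), \
          'current:', dest.getMessagesCurrentCount()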


I do the same using a local FileStore instead of the JDBCStore. The file location is /opt/oracle/domains/osbpr1do/servers/osbpr1ms1,2,3,4/data/store (local disk, no NFS).
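Switching a JMS server to a local file store can be sketched in WLST roughly as follows (instance 1 only; the directory has to exist on that server's machine, and URL/credentials are placeholders).

# WLST online sketch: create a local FileStore and point the JMS server at it.
connect('weblogic', 'welcome1', 't3://adminhost.acme.com:7001')
edit()
startEdit()

cd('/')
fs = cmo.createFileStore('PerfTestFileStore1')
fs.setDirectory('/opt/oracle/domains/osbpr1do/servers/osbpr1ms1/data/store')
fs.addTarget(getMBean('/MigratableTargets/osbpr1ms1 (migratable)'))

cd('/JMSServers/PerfTestJMSServer1')
cmo.setPersistentStore(fs)

save()
activate()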

I restart the managed servers just in case.
The observed throughput is about 600K messages per minute!


A third experiment is done with the FileStore pointing to the NFS store:
/opt/oracle/domains/osbpr1do/shared/store/jms
Throughput: 140K messages per minute.

The moral is:
- all 3 setups (JDBCStore, Local FileStore, NFS FileStore) seem MORE than fast enough for our needs
- the Local FileStore is the fastest, followed by the NFS FileStore, then the JDBCStore

I haven’t tested scenarios with Transactions Required.
