Wednesday, 24 November 2010

Changing the import binding endpoint of an SCA module

One of the requirements of our application deployment module was to change import bindings when deploying to different environments. For example, if a BPM process calls external services, the endpoints of those services need to change for every environment. Here is how you can change a binding endpoint using wsadmin and Jython:

from java.lang import System

# Get a list of all SCA modules in the connected environment
scaModList = AdminTask.listSCAModules().split(System.getProperty('line.separator'))
# Iterate through the SCA modules to get to the module you are interested in
for scaMod in scaModList:
    # The module name is the first token; adjust the split to match the actual output format
    scaModName = scaMod.split(":")[0]
    # Get a list of all imports for the module
    scaImports = AdminTask.listSCAImports("[-moduleName " + scaModName + "]").split(System.getProperty('line.separator'))
    # Iterate through all imports to get to the import you are interested in
    for scaImport in scaImports:
        # Get the actual binding
        binding = AdminTask.showSCAImportBinding("[-moduleName " + scaModName + " -import " + scaImport + "]")
        # Check that the binding is actually a web services binding
        if binding.find("WsImportBinding") != -1:
            # Check that the binding is the one you want to change
            if binding.find("yourBindingName") != -1:
                # Change the binding endpoint
                AdminTask.modifySCAImportWSBinding("[-moduleName " + scaModName + " -import " + scaImport + " -endpoint " + yourNewBindingEndpoint + "]")

Again, this is just an example, and you will probably want to handle many exception situations, such as verifying that the index is correct for all the split output. The idea is to list all the commands you need to change the binding endpoint; you can build your application logic around them.
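AdminTask itself is only available inside wsadmin, but the string handling in the loop can be sketched and tested on its own. Everything in the sample below is invented; real listSCAImports and showSCAImportBinding output will look different:

```python
# Stand-in for the wsadmin loop: filter web services bindings out of
# line-separated import output. All sample data here is hypothetical.
line_separator = "\n"  # wsadmin would use System.getProperty('line.separator')

sample_imports = "ImportA" + line_separator + "ImportB" + line_separator + "ImportC"
sample_bindings = {
    "ImportA": "WsImportBinding endpoint=http://old-host/serviceA",
    "ImportB": "JMSImportBinding queue=SOME.QUEUE",
    "ImportC": "WsImportBinding endpoint=http://old-host/serviceC",
}

ws_imports = []
for sca_import in sample_imports.split(line_separator):
    binding = sample_bindings[sca_import]
    # keep only the web services bindings, as the wsadmin loop does
    if binding.find("WsImportBinding") != -1:
        ws_imports.append(sca_import)

print(ws_imports)  # ['ImportA', 'ImportC']
```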

Use the Java XML DOM parser in wsadmin Jython

With my application deployment module I had to read many XML files for different elements and attributes. At first I used tag-based parsing, as I thought DOM would be very costly to use, but I soon realised that I needed to work on many XML files and tag-based parsing would mean really ugly-looking code. So I used the Java DOM parser in my Jython, and it is really easy with Jython, wsadmin and Java. I am not posting the complete class I use to parse the different XML files, but if you want to do it, here is how you can get started:

import javax.xml.parsers.DocumentBuilderFactory as DocumentBuilderFactory
import javax.xml.parsers.DocumentBuilder as DocumentBuilder

dbf = DocumentBuilderFactory.newInstance()
db = dbf.newDocumentBuilder()
dom = db.parse("/usr/xml/app.xml")
docEle = dom.getDocumentElement()
attr = docEle.getAttribute("name")
print attr

Here I am printing an attribute of the root element of the XML file I have read. You can look at the Java docs for the complete set of methods available on the DOM parser.
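If you want to experiment with the same DOM calls outside a wsadmin session, CPython's built-in xml.dom.minidom exposes a very similar API; the XML content below is a made-up example:

```python
# Same DOM pattern with CPython's built-in parser; the document
# content here is invented for illustration.
from xml.dom.minidom import parseString

doc = parseString('<application name="MyApp"><module id="m1"/></application>')
docEle = doc.documentElement
print(docEle.getAttribute("name"))  # MyApp
for mod in docEle.getElementsByTagName("module"):
    print(mod.getAttribute("id"))   # m1
```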

Use Log4J in wsadmin and Jython

We all love Log4j, and I was really surprised to find how easy it is to use from within your Jython script and wsadmin. The Log4j classes are loaded with wsadmin (as part of its own classpath), so no extra jars need to be configured. Just use the following lines of code to import the classes you need and start logging:

from org.apache.log4j import Logger
from org.apache.log4j import PropertyConfigurator

# change the path to your properties file
PropertyConfigurator.configure("/path/to/log4j.properties")
logger = Logger.getLogger("YourClassName")
logger.info("statement")

You can do everything you are used to doing with Log4j; this is just a simple example. An example properties file will look like:

#Author Abhijeet Kumar
#Version 1.0
#Root logger
log4j.rootLogger=debug, stdout, R
# The following properties configure the console (stdout) appender.
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
# The following properties configure the rolling file appender.
log4j.appender.R = org.apache.log4j.RollingFileAppender
log4j.appender.R.File = /usr/logs/ApplicationDeployment.log
log4j.appender.R.MaxFileSize = 1000KB
log4j.appender.R.Append = true
log4j.appender.R.layout = org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n

Get the Display Name of your EAR file in Jython to deploy on WebSphere Application Server

As part of my current project to automate deployment of WebSphere Process Server modules and WebSphere Application Server applications, I decided to concatenate the target cluster name (we are running a multiple-cluster topology) and the build version with the application name delivered by developers in the archive. This makes the operations guys aware of the target just by looking at the display name of the application.

We all know we can pass a new name while deploying using AdminApp.install to override the name specified in the archive. What was a little tricky was getting the current name specified in the application. I wrote this method to get an archive's display name:

import zipfile

def getAppNameFromArchive(filepath):
    # This is a little crude way of reading
    # an xml, but it works and is less costly than
    # using a DOM parser just to get a single key
    start_string = "<display-name>"
    end_string = "</display-name>"
    appname = ""
    if filepath.endswith(".jar") or filepath.endswith(".ear"):
        # read the zipfile
        zf = zipfile.ZipFile(filepath, "r")
        # Get the list of files in the zipfile
        nl = zf.namelist()
        for name in nl:
            # Check if application.xml is present in the archive
            # The loop will run for as many files as are in the archive, but
            # there will normally be only one application.xml, or you can
            # tighten the check to match META-INF/application.xml
            if name.lower().find("application.xml") != -1:
                appxml =
                start_index = appxml.find(start_string) + len(start_string)
                end_index = appxml.find(end_string)
                appname = appxml[start_index:end_index]
    return appname

I know the extra parentheses are not required in Jython, but it's just my style of coding, so forgive me for that. I can now use appname, concatenate whatever I need to it, and pass it to AdminApp.install as appOptions.

Friday, 15 October 2010

IPC_CONNECTOR_ADDRESS in Websphere Process Server 7.0

Apparently there is a bug in WebSphere Process Server 7.0 which can make your server environment unstable. Our problem was mostly around the response time of Business Space. The bug relates to the default IPC_CONNECTOR_ADDRESS port, which doesn't get defined if you created your servers with the manageprofiles command or used a deployment environment to create them. You will need to define the port manually. Below is the command to create the port definition.

*Note change the port value if there is a conflict :)

AdminConfig.create('EndPoint', AdminConfig.create('NamedEndPoint', serverEntry, '[[endPointName "IPC_CONNECTOR_ADDRESS"]]'), '[[port "9633"] [host "localhost"]]')

*Note serverEntry is a single server entry from the list of servers on a given node. You can get the list of servers like this:

AdminConfig.list('ServerEntry', AdminConfig.getid( '/Cell:%s/Node:%s/' % ( AdminControl.getCell(), <Name_OF_Your_Node>) )).split(java.lang.System.getProperty('line.separator'))

*Note the above command also returns the nodeagent on the given node, which should already have the IPC port defined, so do not run the create command for your nodeagent.
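The split output can be filtered before looping over it so the nodeagent is skipped; here is a small testable sketch of that filtering, with made-up serverindex entries:

```python
# Skip nodeagent entries before creating the IPC port on each server;
# the entry strings below are made up for illustration.
sample_entries = [
    "nodeagent(cells/myCell/nodes/Node01/serverindex.xml#ServerEntry_1)",
    "AppTarget.Member1(cells/myCell/nodes/Node01/serverindex.xml#ServerEntry_2)",
    "Support.Member1(cells/myCell/nodes/Node01/serverindex.xml#ServerEntry_3)",
]

targets = []
for serverEntry in sample_entries:
    # the nodeagent already has IPC_CONNECTOR_ADDRESS defined
    if serverEntry.lower().find("nodeagent") == -1:
        targets.append(serverEntry)

print(len(targets))  # 2
```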

All about Websphere MQ Clustering

How WMQ Clusters work
>>Role of the CLUSRCVR channel
Every queue manager in the cluster must have a cluster receiver definition, known as a CLUSRCVR.   The CLUSRCVR is very different to a standard WMQ receiver channel as it has more fields, such as the connection name and port that you would normally find in a sender channel.  This is because the CLUSRCVR channel is also used to build the auto-defined CLUSSDR channel when the queue managers start communicating with each other.
The CLUSRCVR definition advertises the queue manager within the cluster. It is used by other queue managers in the cluster to auto-define their corresponding cluster sender (CLUSSDR) channel definitions to that queue manager.  Consequently, once a connection has been established between 2 queue managers in a cluster, any changes to a CLUSSDR channel need to be made on the corresponding CLUSRCVR channel
Note: If you alter a cluster receiver channel, the changes are not immediately reflected in the corresponding automatically defined cluster sender channels. If the sender channels are running, they will not pick up the changes until the next time they restart. If the sender channels are in retry, the next time they hit a retry interval, they will pick up the changes.
>>Role of the CLUSSDR channel
The important thing to remember about CLUSSDR channels is that every queue manager must have at least one manually defined CLUSSDR and this must point to one of the full repository queue managers.   It doesn’t matter which repository queue manager it makes its initial connection with.  Once the initial connection has been made auto-defined cluster channels are defined as necessary to communicate within the cluster.
Note: Even the manually defined cluster sender channels' attributes are overwritten with information from the corresponding CLUSRCVR channel once communication has been established. Manually defined cluster sender channels that have been altered this way with auto-definitions are shown as CLUSSDRB channels in the output of the DISPLAY CLUSQMGR(*) command. Channels that have only been defined automatically are displayed as CLUSSDRA channels.
The manually defined CLUSSDR channels to the full repositories are used to transmit cluster information (as well as user data). For this reason the full repository queue managers must be fully interconnected, with a manually defined CLUSSDR channel to each of the other full repository queue managers in the cluster. Under normal circumstances there should only be 2 full repository queue managers in the cluster.
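As a concrete sketch of the two channel roles above (the channel, host, queue manager and cluster names here are invented), the MQSC definitions on a queue manager QM1 joining cluster MYCLUSTER would look something like:

```
* CLUSRCVR advertises QM1 to the cluster; its CONNAME is what other
* queue managers use to build their auto-defined CLUSSDR channels
DEFINE CHANNEL(MYCLUSTER.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('qm1host(1414)') CLUSTER(MYCLUSTER)
* One manually defined CLUSSDR pointing at a full repository
DEFINE CHANNEL(MYCLUSTER.FR1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr1host(1414)') CLUSTER(MYCLUSTER)
```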
>>Role of the REPOSITORY queue managers.
Repository queue managers maintain a complete set of information about all of the queue managers and clustered objects within the cluster.  Partial repository queue managers build up their repositories by making enquiries to a full repository when they need to access a queue or queue manager within a cluster.  The full repository provides the partial repository with the information required to build an auto-defined cluster sender channel to the required queue manager.
You should always have at least 2 full repositories in the cluster so that in the event of the failure of a full repository, the cluster can still operate. If you only have one full repository and it loses its information about the cluster, then manual intervention on all queue managers within the cluster will be required in order to get the cluster working again. If there are two or more full repositories, then because information is always published to 2 full repositories, the failed full repository can be recovered with the minimum of effort.
It is possible to have more than 2 full repositories, but is not recommended unless there is a compelling reason to do so (such as a geographically dispersed cluster).
Subscriptions to cluster objects from partial repositories are only ever made to 2 full repositories, therefore adding additional repositories doesn’t give any benefit and can actually result in additional administration (as extra manual cluster sender channels are required to connect the full repositories) and there is a higher likelihood that the repositories become out of sync. 
Once a cluster has been set up, the amount of messages that are sent to the full repositories from the partial repositories in the cluster is very small. Partial repositories will re-subscribe for cluster queue and cluster queue manager information every 30 days. Other than this, internal messages are not sent between the full and partial repositories unless a change occurs to a resource within the cluster, in which case the full repositories will notify the partial repositories that have previously registered an interest in that resource.
>>The Repository Process
The repository process, (amqrrmfa) is started with the queue manager and is essential for any cluster processing to take place.  It is not possible to restart the process independently of the queue manager and you should never end the process manually unless you are manually ending the whole queue manager.
When the repository process starts up the contents of the SYSTEM.CLUSTER.REPOSITORY.QUEUE is loaded into memory (cache).  It also processes messages on the SYSTEM.CLUSTER.COMMAND.QUEUE and the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
If the process encounters an error reading from any of these queues it will try to restart itself unless the error is classified as severe. If the process does end abnormally, the first thing to check for is any preceding errors in the error log, which will often confirm what the issue is, e.g.:
AMQ9511: Messages cannot be put to a queue.
The attempt to put messages to queue 'SYSTEM.CLUSTER.COMMAND.QUEUE'
on queue manager 'MYQMGR' failed with reason code 2087.
>>What happens when a queue is opened for the first time?
>An application connects to a queue manager and issues an MQOPEN against a cluster queue.  The local repository cache is checked to see if there is already an entry for this queue.  In this scenario there is no information in the cache as it is the first time an MQOPEN request has been made for this queue on this queue manager.
>A message is put to the SYSTEM.CLUSTER.COMMAND.QUEUE requesting the repository task to subscribe for the queue.
>When the repository task is running, it has the SYSTEM.CLUSTER.COMMAND.QUEUE open for input and is waiting for messages to arrive. It reads the request from the queue.
>The repository task creates a subscription request. It places a record in the repository cache indicating that a subscription has been made, and this record is also hardened to the SYSTEM.CLUSTER.REPOSITORY.QUEUE. This queue is where the hardened version of the cache is kept and is used when the repository task starts, to repopulate the cache.
>The subscription request is sent to 2 full repositories. It is put to the SYSTEM.CLUSTER.TRANSMIT.QUEUE awaiting delivery to the SYSTEM.CLUSTER.COMMAND.QUEUE on the full repository queue managers.
>The channel to the full repository queue manager is started automatically and the message is delivered to the full repository. The full repository processes the message and stores the subscription request.
>The full repository queue manager sends back the information about the queue being opened to the SYSTEM.CLUSTER.COMMAND.QUEUE on the partial repository queue manager.
>The message is read from the SYSTEM.CLUSTER.COMMAND.QUEUE by the repository task.
>The information about the queue is stored in the repository cache and hardened to the SYSTEM.CLUSTER.REPOSITORY.QUEUE.
At this point the partial repository knows which queue managers host the queue. What it would then need to find out is information on the channels that the hosts of the queue have advertised to the cluster, so that it can create auto-defined cluster sender channels to them. To do this, more subscriptions would be made (if necessary) to the full repositories.

Turning off default_host with Websphere Process Server 7.0

It's always good practice to turn off the default host on WebSphere Application Server if you are using an HTTP server. To do this you need to remove all the host aliases other than <IP/SystemName>:443 or 80. When a request is made to the server, the actual request address is preserved in the HTTP headers and is used by the container to resolve to a host. This secures you from anyone trying to access your assets directly on the WebSphere environment; everything has to be routed through the web server first.

But with WebSphere Process Server there is an internal application called Remote Application Loader (RemoteAL) deployed on each cluster (assuming you are running a cluster topology). This application is accessed by the containers on <virtualHost>:9443, and I have still not figured out how to change the container settings to point it to my web server URL. The container always calls the application local to the server the container itself is running on, so we do not have to worry about load balancing or high availability; the containers' access is not managed via the web server. This left us with the problem of having to leave port 9443 on default_host, but that would have meant other applications bound to default_host could still be accessed directly. So we created another virtual host named RAL_HOST with ports 9443 and 443, bound the RemoteAL application (on each cluster) to this host, and left all other applications bound to default_host, which had only port 443.

It would be good to figure out how to route this call via the web server even though it's not a requirement; that is something I am currently working on.

Integrating Websphere Message Broker 7.0 with WSRR

One of my recent attempts to configure the DefaultWSRR service object in WebSphere Message Broker 7.0 to integrate with WSRR hosted on WebSphere Application Server 6.1 via IBM HTTP Server 6.1 gave me real grief when I started thinking about load balancing and failover. Apparently a subscription is created every time the service object is initialized, so that the broker is aware of any changes you make in WSRR. As per the IBM documentation you can configure a broker object against a single WSRR instance, but this should not be read as one physical instance: you can still point it at clustered WebSphere Application Servers hosting identical WSRR nodes. In this scenario you cannot turn on cache notification, as the subscription will be created only on the node which initializes the broker service object. The steps to configure your service object will be something like below:

mqsichangeproperties <BrokerName> -c ServiceRegistries -o DefaultWSRR -n endpointAddress -v https://<virtualhost>/WSRRCoreSDO/services/WSRRCoreSDOPort

mqsichangeproperties <BrokerName> -o BrokerRegistry -n brokerKeystoreFile -v "<Path_Of_Key_Store>"

mqsichangeproperties <BrokerName> -o BrokerRegistry -n brokerTruststoreFile -v "<Path_Of_Trust_Store>"

mqsichangeproperties <BrokerName> -c ServiceRegistries -o DefaultWSRR -n enableCacheNotification -v false
mqsistop <Broker_Name>

mqsisetdbparms <Broker_Name> -n DefaultWSRR::WSRR -u <Broker_User> -p <Broker_Password>

mqsisetdbparms <BrokerName> -n brokerKeystore::password -u <Key_Store_User> -p <Key_Store_Password>

mqsisetdbparms <BrokerName> -n brokerTruststore::password -u <Trust_Store_User> -p <Trust_Store_Password>

mqsisetdbparms <BrokerName> -n jms::DefaultWSRR@jms/SRConnectionFactory -u <Broker_User> -p <Broker_Password>

mqsistart <Broker_Name>

If you want to view the settings of the DefaultWSRR object, you can issue a command like:
mqsireportproperties <Broker_name> -c ServiceRegistries -o DefaultWSRR -r