Wednesday 23 April 2014

Apache and IE8: downloading unknown file types

I stumbled upon this issue while configuring a new Adobe CQ5 system. The file in question is an Adobe InDesign file with the extension .indd. We had this file on our Apache server and it worked perfectly in every browser except IE8 on Windows 7. We obviously had to set up the MIME type etc. first, to make sure the server was sending the right headers back to the browser, but IE8 would simply say it was unable to download the file on the first attempt, then succeed the next time.

Well, I haven't been able to resolve this completely, but I have given our client a workaround that is acceptable if not ideal. Obviously, if you are in the website business you have to find all your solutions on the server side, as you can't expect your entire audience to change things on their side.

Below is what I added to our httpd.conf to fool IE8 into thinking that the file is actually an HTML file and not an unknown type. The ideal way would be to define it as application/octet-stream, but IE8 won't play along, because it needs to find a file association for the download before it will save it.

<FilesMatch "\.(indd)$">
ForceType text/html
Header set Cache-Control ""
Header set Pragma ""
Header set Content-Transfer-Encoding "binary"
#Header set Content-Disposition attachment
SetEnvIf REQUEST_URI "/([^/]+)$" FILENAME=$1
Header set Content-Disposition "attachment; filename=\"%{FILENAME}e\"" env=FILENAME
</FilesMatch>

The <FilesMatch "\.(indd)$"> bit identifies my file extension; you will need to replace indd with your own extension if you are trying this.
ForceType text/html is the line that sets the type to an HTML file, a type every browser already has an association for.

The last two lines of the configuration are there to force the file name, which is the only unacceptable bit in the whole solution, as Windows 7 will then try to add .htm to the extension and you have to remove it manually before saving.
All other browsers are smart enough not to care what you are downloading; they start the download straight away without a save prompt, so you shouldn't have trouble with them.
Also, if you can control the users accessing your site, you can just ask them to associate the file type with a program, and it should download fine after that. For example, ask them to assign, say, Notepad to open that file type; if they then try the download again, it will work fine.
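For reference, the more conventional fix, which as described above works everywhere except IE8, is simply to register the MIME type in httpd.conf (the extension here matches my case; substitute your own):

```
# Serve .indd as a generic binary download (fine in browsers other than IE8)
AddType application/octet-stream .indd
```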

Wednesday 24 November 2010

Changing the import binding endpoint of an SCA module

One of the requirements of our application deployment modules was to change import bindings when deploying to different environments. For example, if a BPM process calls external services, the endpoints of those services need to change for every environment. Here is how you can change a binding endpoint using wsadmin and Jython:

from java.lang import System

# This will get you the list of all SCA modules in the connected environment
scaModList = AdminTask.listSCAModules().split(System.getProperty('line.separator'))
# Now iterate through the SCA modules to get to the module you are interested in
for scaMod in scaModList:
    scaModNames = scaMod.split(':')
    # This will get you the list of all imports for the module
    scaImports = AdminTask.listSCAImports("[-moduleName " + scaModNames[0] + "]").split(System.getProperty('line.separator'))
    # Now iterate through all imports to get to the import you are interested in
    for scaImport in scaImports:
        # Now get the actual binding
        binding = AdminTask.showSCAImportBinding("[-moduleName " + scaModNames[0] + " -import " + scaImport + "]")
        # Check if the binding is actually a web services binding
        if binding.find("WsImportBinding") != -1:
            bindingAtr = binding.split(',')
            # Check if the binding is the one you want to change
            if bindingAtr[0].find("yourBindingName") != -1:
                # Change the binding; YourNewBindingEndPoint is a placeholder for the new endpoint URL
                AdminTask.modifySCAImportWSBinding("[-moduleName " + scaModNames[0] + " -import " + scaImport + " -endpoint " + YourNewBindingEndPoint + " ]")

Again, this is just an example, and you will probably want to handle many more exceptional situations, such as checking that the index is correct for every split output. The idea is to list all the commands you need to change the binding endpoint; you can then build your application logic around them.

Use the Java XML DOM parser in wsadmin Jython

With my application deployment module I had to read a lot of XML files for different elements/attributes. At first I used tag-based parsing, as I thought DOM would be very costly to use, but I soon realised that I needed to work with many XML files and tag-based parsing would mean really ugly-looking code. So I used the Java DOM parser in my Jython, and it is really easy with Jython, wsadmin and Java. I am not posting the complete class I use to parse the different XML files, but if you want to do it, here is how you can get started:

import javax.xml.parsers.DocumentBuilderFactory as DocumentBuilderFactory
import javax.xml.parsers.DocumentBuilder as DocumentBuilder

dbf = DocumentBuilderFactory.newInstance()
db = dbf.newDocumentBuilder()
dom = db.parse("/usr/xml/app.xml")
docEle = dom.getDocumentElement()
attr = docEle.getAttribute("name")
print attr

Here I am printing an attribute of the root element of the XML file I have read. You can look at the Java docs for the complete set of methods available on the DOM parser.
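If you would rather avoid the Java classes altogether, the same pattern works with Python's built-in minidom module, which also ships with Jython. The XML below is just a made-up example:

```python
from xml.dom import minidom

# Parse from an in-memory string rather than a file on disk
doc = minidom.parseString('<application name="MyApp">'
                          '<module id="web"/><module id="ejb"/>'
                          '</application>')
root = doc.documentElement
print(root.getAttribute("name"))       # attribute of the root element
for mod in root.getElementsByTagName("module"):
    print(mod.getAttribute("id"))      # attribute of each child element
```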

Use Log4J in wsadmin and Jython

We all love Log4j, and I was really surprised to find how easy it is to use from within your Jython script and wsadmin. The Log4j classes are loaded with wsadmin (as part of its own classpath), so no extra jars need to be configured. Just use the following lines of code to import the classes you need and start logging:

from org.apache.log4j import Logger
from org.apache.log4j import PropertyConfigurator
# change the path to your properties file
PropertyConfigurator.configure("log4j.properties") 
logger = Logger.getLogger("YourClassName")
logger.info("statement")

You can do everything you are used to doing with Log4j; this is just a simple example. An example properties file will look like this:

#Author Abhijeet Kumar
#Version 1.0
#FileName:log4j.properties
#Root Loggers
log4j.rootLogger=debug, stdout, R
#------------------------------------------------------------------------------
#
#  The following properties configure the console (stdout) appender.
#  See http://logging.apache.org/log4j/docs/api/index.html for details.
#
#------------------------------------------------------------------------------
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
# Pattern to output the caller's file name and line number.
#------------------------------------------------------------------------------
#
#  The following properties configure the Rolling File appender.
#  See http://logging.apache.org/log4j/docs/api/index.html for details.
#
#------------------------------------------------------------------------------
log4j.appender.R = org.apache.log4j.RollingFileAppender
log4j.appender.R.File = /usr/logs/ApplicationDeployment.log
log4j.appender.R.MaxFileSize = 1000KB
log4j.appender.R.Append = true
log4j.appender.R.layout = org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n

Get Display Name of your EAR file in Jython to deploy on Websphere Application Server

As part of my current project to automate deployment of WebSphere Process Server modules and WebSphere Application Server, I decided to concatenate the target cluster name (we are running a multiple-cluster topology) and the build version with the application name delivered by the developers in the archive. This makes the operations guys aware of the target just by looking at the display name of the application.

We all know we can pass a new name while deploying, using AdminApp.install, to override the name specified in the archive. What was a little tricky was getting the current name specified in the application. I wrote this method to get an archive's display name:

import zipfile
def getAppNameFromArchive(filepath):
 # This is a slightly crude way of reading
 # an xml file, but it works and is less costly than
 # using a DOM parser just to get a single key
 start_string = "<display-name>"
 end_string = "</display-name>"
 appname = None
 if filepath.endswith(".jar") or filepath.endswith(".ear"):
  # read the zipfile
  zf = zipfile.ZipFile(filepath, "r")
  # Get the list of files in the zipfile
  nl = zf.namelist()
  for name in nl:
   # Check if application.xml is present in the archive
   # The loop will run for as many files as are in the archive, but I
   # am sure there will be only one application.xml, or
   # you can even narrow the check to META-INF/application.xml
   if name.lower().find("application.xml") != -1:
    appxml = zf.read(name)
    start_index = appxml.find(start_string)
    end_index = appxml.find(end_string)
    if (start_index != -1) and (end_index != -1):
     appname = appxml[start_index + len(start_string):end_index]
    #endif
   #endif
  #endfor
  zf.close()
 #endif
 return appname
#enddef

I know the closing-tag comments are not required in Jython, but it's just my style of coding, so forgive me for that. I can now take the appname, concatenate whatever I need to, and pass it to AdminApp.install in the application options.
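A quick way to exercise the method is to build a throwaway EAR containing a minimal application.xml and read the name back. The archive name and contents below are made up, and the function is repeated compactly so the snippet runs on its own:

```python
import zipfile, tempfile, os

# Compact re-statement of the method above so this snippet is standalone
def getAppNameFromArchive(filepath):
    start_string = "<display-name>"
    end_string = "</display-name>"
    appname = None
    if filepath.endswith(".jar") or filepath.endswith(".ear"):
        zf = zipfile.ZipFile(filepath, "r")
        for name in zf.namelist():
            if name.lower().find("application.xml") != -1:
                appxml = zf.read(name).decode("utf-8")
                start_index = appxml.find(start_string)
                end_index = appxml.find(end_string)
                if start_index != -1 and end_index != -1:
                    appname = appxml[start_index + len(start_string):end_index]
        zf.close()
    return appname

# Build a throwaway EAR with a minimal application.xml inside it
ear = os.path.join(tempfile.mkdtemp(), "demo.ear")  # hypothetical archive name
zf = zipfile.ZipFile(ear, "w")
zf.writestr("META-INF/application.xml",
            "<application><display-name>OrderApp</display-name></application>")
zf.close()

print(getAppNameFromArchive(ear))  # OrderApp
```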

Friday 15 October 2010

IPC_CONNECTOR_ADDRESS in Websphere Process Server 7.0

Apparently there is a bug in WebSphere Process Server 7.0 which can make your server environment unstable. Our problem was mostly around the response time of Business Space. The bug relates to the default IPC_CONNECTOR_ADDRESS port, which does not get defined if you created your servers with the manageprofile command or used a deployment environment to create them. You will need to define the port manually. Below is the command to create the port definition.

*Note change the port value if there is a conflict :)

AdminConfig.create('EndPoint', AdminConfig.create('NamedEndPoint', serverEntry, '[[endPointName "IPC_CONNECTOR_ADDRESS"]]'), '[[port "9633"] [host "localhost"]]')

*Note: serverEntry is a single entry from the list of servers in a given node. You can get the list of servers like this:

import java.lang
AdminConfig.list('ServerEntry', AdminConfig.getid('/Cell:%s/Node:%s/' % (AdminControl.getCell(), <Name_OF_Your_Node>))).split(java.lang.System.getProperty('line.separator'))

*Note: the above command also returns the nodeagent on the given node, which should already have the IPC port defined, so do not run the create command for your nodeagent.

All about Websphere MQ Clustering

How WMQ Clusters work
>>Role of the CLUSRCVR channel
Every queue manager in the cluster must have a cluster receiver definition, known as a CLUSRCVR.   The CLUSRCVR is very different to a standard WMQ receiver channel as it has more fields, such as the connection name and port that you would normally find in a sender channel.  This is because the CLUSRCVR channel is also used to build the auto-defined CLUSSDR channel when the queue managers start communicating with each other.
The CLUSRCVR definition advertises the queue manager within the cluster. It is used by other queue managers in the cluster to auto-define their corresponding cluster sender (CLUSSDR) channel definitions to that queue manager. Consequently, once a connection has been established between 2 queue managers in a cluster, any changes to a CLUSSDR channel need to be made on the corresponding CLUSRCVR channel.
Note: If you alter a cluster receiver channel, the changes are not immediately reflected in the corresponding automatically defined cluster sender channels. If the sender channels are running, they will not pick up the changes until the next time they restart. If the sender channels are in retry, the next time they hit a retry interval, they will pick up the changes.
>>Role of the CLUSSDR channel
The important thing to remember about CLUSSDR channels is that every queue manager must have at least one manually defined CLUSSDR and this must point to one of the full repository queue managers.   It doesn’t matter which repository queue manager it makes its initial connection with.  Once the initial connection has been made auto-defined cluster channels are defined as necessary to communicate within the cluster.
Note: even the manually defined cluster sender channels' attributes are overwritten with information from the corresponding CLUSRCVR channel once communication has been established. Manually defined cluster sender channels that have been altered this way with auto definitions are shown as CLUSSDRB channels in the output of the DISPLAY CLUSQMGR(*) command. Channels that have only been defined automatically are displayed as CLUSSDRA channels.
The manually defined CLUSSDR channels to the full repositories are used to transmit cluster information (as well as user data). For this reason the repository queue managers must be fully interconnected, with a manually defined CLUSSDR channel to each of the other repository queue managers in the cluster. Under normal circumstances there should only be 2 repository queue managers in the cluster.
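As a sketch of the manual definitions described above, in MQSC the two full repositories of a cluster would each carry a CLUSRCVR plus a manually defined CLUSSDR pointing at the other. All queue manager, channel, host and cluster names here are made up; adjust to your topology:

```
* On QM1 (full repository) -- illustrative names throughout
ALTER QMGR REPOS(MYCLUS)
DEFINE CHANNEL(MYCLUS.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('host1(1414)') CLUSTER(MYCLUS)
DEFINE CHANNEL(MYCLUS.QM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('host2(1414)') CLUSTER(MYCLUS)

* On QM2 (full repository) -- mirror image of the above
ALTER QMGR REPOS(MYCLUS)
DEFINE CHANNEL(MYCLUS.QM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('host2(1414)') CLUSTER(MYCLUS)
DEFINE CHANNEL(MYCLUS.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('host1(1414)') CLUSTER(MYCLUS)
```

A partial repository queue manager would then need only its own CLUSRCVR plus one manual CLUSSDR to either QM1 or QM2.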
>>Role of the REPOSITORY queue managers.
Repository queue managers maintain a complete set of information about all of the queue managers and clustered objects within the cluster.  Partial repository queue managers build up their repositories by making enquiries to a full repository when they need to access a queue or queue manager within a cluster.  The full repository provides the partial repository with the information required to build an auto-defined cluster sender channel to the required queue manager.
You should always have at least 2 full repositories in the cluster so that in the event of the failure of a full repository, the cluster can still operate. If you only have one full repository and it loses its information about the cluster, then manual intervention on all queue managers within the cluster will be required in order to get the cluster working again. If there are two or more full repositories, then because information is always published to 2 full repositories, the failed full repository can be recovered with the minimum of effort.
It is possible to have more than 2 full repositories, but is not recommended unless there is a compelling reason to do so (such as a geographically dispersed cluster).
Subscriptions to cluster objects from partial repositories are only ever made to 2 full repositories, therefore adding additional repositories doesn’t give any benefit and can actually result in additional administration (as extra manual cluster sender channels are required to connect the full repositories) and there is a higher likelihood that the repositories become out of sync. 
Once a cluster has been set up, the amount of message traffic sent to the full repositories from the partial repositories in the cluster is very small. Partial repositories will re-subscribe for cluster queue and cluster queue manager information every 30 days. Other than this, internal messages are not sent between the full and partial repositories unless a change occurs to a resource within the cluster, in which case the full repositories will notify the partial repositories that have previously registered an interest in that resource.
>>The Repository Process
The repository process, (amqrrmfa) is started with the queue manager and is essential for any cluster processing to take place.  It is not possible to restart the process independently of the queue manager and you should never end the process manually unless you are manually ending the whole queue manager.
When the repository process starts up, the contents of the SYSTEM.CLUSTER.REPOSITORY.QUEUE are loaded into memory (the cache). The process also handles messages on the SYSTEM.CLUSTER.COMMAND.QUEUE and the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
If the process encounters an error reading from any of these queues, it will try to restart itself unless the error is classified as severe. If the process does end abnormally, the first thing to check for is any preceding errors in the error log, which will often confirm what the issue is, e.g.
AMQ9511: Messages cannot be put to a queue.
The attempt to put messages to queue 'SYSTEM.CLUSTER.COMMAND.QUEUE'
on queue manager 'MYQMGR' failed with reason code 2087.
>>What happens when a queue is opened for the first time?
>An application connects to a queue manager and issues an MQOPEN against a cluster queue.  The local repository cache is checked to see if there is already an entry for this queue.  In this scenario there is no information in the cache as it is the first time an MQOPEN request has been made for this queue on this queue manager.
>A message is put to the SYSTEM.CLUSTER.COMMAND.QUEUE requesting the repository task to subscribe for the queue.
>When the repository task is running, it has the SYSTEM.CLUSTER.COMMAND.QUEUE open for input and is waiting for messages to arrive. It reads the request from the queue.
>The repository task creates a subscription request. It places a record in the repository cache indicating that a subscription has been made, and this record is also hardened to the SYSTEM.CLUSTER.REPOSITORY.QUEUE. This queue is where the hardened version of the cache is kept; it is used when the repository task starts to repopulate the cache.
>The subscription request is sent to 2 full repositories. It is put to the SYSTEM.CLUSTER.TRANSMIT.QUEUE awaiting delivery to the SYSTEM.CLUSTER.COMMAND.QUEUE on the full repository queue managers.
>The channel to the full repository queue manager is started automatically and the message is delivered to the full repository. The full repository processes the message and stores the subscription request.
>The full repository queue manager sends back the information about the queue being opened to the SYSTEM.CLUSTER.COMMAND.QUEUE on the partial repository queue manager.
>The message is read from the SYSTEM.CLUSTER.COMMAND.QUEUE by the repository task.
>The information about the queue is stored in the repository cache and hardened to the SYSTEM.CLUSTER.REPOSITORY.QUEUE.
At this point the partial repository knows which queue managers host the queue. What it would then need to find out is information on the channels that the hosts of the queue have advertised to the cluster, so that it can create auto-defined cluster sender channels to them. To do this, more subscriptions would be made (if necessary) to the full repositories.