WebSphere MQ Server with Temenos T24: Part 2

Imran Adeel Haider   |   02 Mar 2017

This is the second entry on MQ Server with T24.  The previous entry can be found here.

If you read my last post, you might have concluded that I am saying it’s OK to use T24 without MQ Server.  It’s not, and I will shed light on why as I go along.

First off, MQ offers message persistence; without it, a server crash or a network outage means messages are lost, either at a host or in transit.  For instance, suppose a teller clicks the option to deposit cash into a customer’s account.  The request reaches the application server and is forwarded to the T24 server, which processes it and responds with a confirmation.  On the way back, the web server goes down, or the network breaks.  The teller now receives nothing but a timeout message (in case of a network failure), or nothing at all (if the web server crashed) – yet the transaction has already been committed in T24.  The teller re-logs in and posts the transaction again; if the failed component is back up, the transaction goes through and the teller gets a confirmation from T24.  T24 now holds two transactions instead of one.

Message persistence could have helped here, if implemented with some level of clustering at the web (and application) server and MQ server tiers.  Had there been two clustered web servers when one crashed, the surviving web server could have received the message – but only if there had been an MQ server from which it could poll the incoming messages.
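To make the persistence point concrete, here is a minimal sketch of the difference between an in-memory queue and one that writes messages to disk before acknowledging them.  This is an illustration of the concept only – in real IBM MQ, persistence is a per-message/per-queue attribute handled by the queue manager, not something the application implements.

```python
# Sketch: why persistence matters. A file-backed queue survives a broker
# "crash" (here, simply discarding the object); an in-memory one would not.
import json
import os
import tempfile

class FileBackedQueue:
    """Toy persistent queue: every put() is written to disk first."""

    def __init__(self, path):
        self.path = path

    def put(self, msg):
        msgs = self._load()
        msgs.append(msg)
        with open(self.path, "w") as f:
            json.dump(msgs, f)          # persisted before we "acknowledge"

    def get(self):
        msgs = self._load()
        msg = msgs.pop(0)               # FIFO delivery
        with open(self.path, "w") as f:
            json.dump(msgs, f)
        return msg

    def _load(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "queue.json")
q = FileBackedQueue(path)
q.put({"op": "deposit", "amount": 500})

# Simulated crash: drop the object; a fresh instance re-reads the same file,
# so the deposit request is still there waiting to be processed.
q = FileBackedQueue(path)
assert q.get() == {"op": "deposit", "amount": 500}
```

Had the message lived only in process memory, the "crash" would have taken it with it – which is exactly the scenario described above.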


[Figure: Assembly of servers and their interconnections]

Similarly, if the T24 server goes down right when a teller sends a withdrawal request, MQ will hold the message until the T24 server is back.  TC Server will then pick the message from MQ’s “IN” queue and process it – all without the teller ever knowing there was a problem.  This, of course, assumes the T24 server comes back before any timeout expires.
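The store-and-forward behaviour described above can be sketched in a few lines.  The names here are illustrative stand-ins for the real components (the queue plays MQ, the drain loop plays TC Server picking up the backlog once T24 is reachable again):

```python
# Sketch: store-and-forward. While the consumer (standing in for TC Server
# / T24) is down, requests simply accumulate on the queue; once it is back,
# it drains them in arrival order.
from collections import deque

in_queue = deque()                   # MQ's "IN" queue

def teller_sends(request):
    # MQ accepts the message regardless of whether T24 is up.
    in_queue.append(request)

# T24 is down, but tellers keep working:
teller_sends("withdraw 100 from ACCT-42")
teller_sends("withdraw 250 from ACCT-7")

# T24 comes back; the consumer drains the backlog in FIFO order.
processed = []
while in_queue:
    processed.append(in_queue.popleft())

assert processed == ["withdraw 100 from ACCT-42",
                     "withdraw 250 from ACCT-7"]
```

The tellers never see the outage, provided the consumer returns before their sessions time out – which leads to the next problem.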

But therein lies another problem.  You configure a timeout on every Browser channel, typically 180 seconds (3 minutes).  If the web server does not receive a response from T24 (or MQ) within that span, it times out and shows the user an error screen, after which the user can only return to the login page hosted by the web server.  So even if MQ is holding a message in its queue, if the other party or the network is not back within 3 minutes, the session times out anyway and the user cannot reuse that message.  It is orphaned in MQ, as no party is interested in consuming it.  MQ server waits as long as its own timeout setting allows, then removes the message from the main queue and puts it on the dead letter queue.  In short, even with an MQ server, if the transaction is delayed beyond 3 minutes (or whatever your timeout setting is at the Browser/TC Client and TC Server levels), the session expires regardless of whether the transaction ever went through.
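The expiry-to-dead-letter-queue behaviour can be simulated in a short sketch.  The timings and names are illustrative only; in real IBM MQ, expiry is a message attribute and undeliverable messages are routed to the queue manager’s configured dead-letter queue.

```python
# Sketch: message expiry and the dead-letter queue. If no consumer picks
# a message up before its expiry, the broker moves it to the DLQ instead
# of delivering it late.
EXPIRY_SECONDS = 180                 # mirrors the 3-minute browser timeout

def sweep(queue, dlq, now):
    """Move expired messages from the main queue to the DLQ."""
    still_live = []
    for enqueued_at, msg in queue:
        if now - enqueued_at > EXPIRY_SECONDS:
            dlq.append(msg)          # orphaned: no one will consume it
        else:
            still_live.append((enqueued_at, msg))
    return still_live, dlq

# Two requests: one posted at t=0, one at t=100 (seconds).
queue = [(0, "deposit 500"), (100, "withdraw 200")]
dlq = []

# A sweep 200 seconds in: the first message is past its expiry.
queue, dlq = sweep(queue, dlq, now=200)

assert dlq == ["deposit 500"]                     # moved to the DLQ
assert [m for _, m in queue] == ["withdraw 200"]  # still deliverable
```

The point of the sketch: once a message lands on the DLQ, the session that produced it has long since expired, so the outcome of the original transaction has to be checked out-of-band.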

My advice here is that IT executives should tell their users, during the T24 early-adoption/training sessions, that if they face a timeout after posting a transaction and have to log in again, they must first check whether the transaction they just posted actually went through.

In my next post, I will explain the scenarios where MQ Server really comes into play as an essential component in the core banking system.  Your feedback and comments are always welcome.