
Transactions: Part 2: XA. When and how to avoid it?

In the world of container-managed transactions (CMT), Java provides a highly misused feature called XA transactions.
In the simplest terms, an XA transaction is a global transaction that spans multiple resources. The implementation provides a transaction manager that lets applications perform interactions with multiple resources as a single unit of work.

A lot of applications end up using XA adapters when they have multiple databases to deal with, or merely because they might interact with multiple databases in the future. CMT hides transaction management, so developers are not bothered with its nuances until something goes wrong. And with XA, a lot can go wrong.
I planned to explain why, but then I found this very relevant post, so I'll leave it at that.
Distributed transactions are evil

Instead, let's focus on the solution, since interacting with multiple resources is a fact of life. It is neither a good idea nor a practical one to keep all the data an application needs in a single huge database (let's not talk big data).

  1. One of the most important considerations while building better enterprise applications is correctly identifying which data should reside in a particular database. What's important is to keep all related data together. For example, in an online store we can keep customer profile data in one database, the products it sells in another, and the orders in a third database or in either of the previous two. The intent is to keep all associated entities in a single place.
  2. If a particular interaction involves operations on multiple databases, it is probably better to implement our own transaction management. If we have correctly distributed our entities, as in point 1 above, we can run transactions on each database independently as separate workflow steps. Each database interaction can commit or roll back as one unit of work, since all related entities are part of that single transaction. The success or failure of the previous database interaction can drive whether the next database call is made.
  3. It is also a good idea to keep database interactions idempotent. A fair way of doing that is to read before writing. Alternatively, tools like Hibernate provide saveOrUpdate-style methods which offer this functionality out of the box.
  4. If the application uses EJBs, and most will, the container transaction attribute should be set to NOT_SUPPORTED. This is extremely important to ensure that the container does not try to wrap the interactions in a transaction context.
  5. The database drivers should be non-XA. This reduces the overhead associated with database transactions and also acts as a safeguard if someone "enhances" the code in the future and forgets the design consideration in point 1 above: with overlapping entities, the code will either fail fast or look difficult enough to implement that someone catches the miss.
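The workflow idea in point 2 can be sketched in plain Java. The `Step` interface and the lambdas below are hypothetical stand-ins for real DAO calls; in a real application each step would open a connection to its own database, commit on success, and roll back on failure. No XA transaction manager is involved.

```java
import java.util.List;

// Sketch of point 2: each step is one local transaction against one
// database; a later step runs only if every earlier step committed.
public class SequentialWorkflow {

    /** One unit of work against a single database. */
    public interface Step {
        /** Returns true if the local transaction committed, false if it rolled back. */
        boolean execute();
    }

    /**
     * Runs the steps in order and stops at the first rollback.
     * Returns the number of steps that committed.
     */
    public static int run(List<Step> steps) {
        int committed = 0;
        for (Step step : steps) {
            if (!step.execute()) {
                break; // earlier commits stand; later databases are never touched
            }
            committed++;
        }
        return committed;
    }

    public static void main(String[] args) {
        // Stand-ins for, say, a customer-db write and an order-db write.
        Step customerDbWrite = () -> true;  // commits on database 1
        Step orderDbWrite = () -> false;    // rolls back on database 2
        Step auditDbWrite = () -> true;     // never runs
        System.out.println(run(List.of(customerDbWrite, orderDbWrite, auditDbWrite)));
        // prints 1
    }
}
```

Because each step owns exactly one database (point 1), a rollback in a later step leaves no half-committed entity graph behind; at worst, a compensating step can undo an earlier commit if the business flow requires it.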
Additional considerations when using a resource like MQ along with a database:

If the application reads data from a resource like MQ and then processes and stores it in one or more databases, a little more care should go into building it. Developers may object that if CMT is NOT_SUPPORTED and their database interactions fail, the application could end up losing data.
A careful choice of MQ provider can solve this. Many providers support syncpoints, where a message returns to the queue if the listener reading it throws an error, because the provider never received an acknowledgement.
So the database interaction code should be written such that whenever it fails and the application wants to prevent data loss, it throws an exception.
Point 3 above comes in handy here: if database interactions are idempotent, message redelivery ensures a successful end-to-end business interaction.
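A minimal sketch of such a listener body, with a `Map` as a hypothetical stand-in for a database table keyed by a business id: the write is an upsert (point 3), so processing a redelivered copy of the same message leaves the same state, and a failure is thrown rather than swallowed so the provider redelivers instead of losing the message.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an MQ listener body that is safe under redelivery.
public class IdempotentHandler {

    // Stand-in for a database table keyed by order id.
    private final Map<String, String> orders = new HashMap<>();

    /**
     * Applies one message. Throws on failure so the provider sees no
     * acknowledgement and returns the message to the queue (syncpoint).
     */
    public void onMessage(String orderId, String payload) {
        if (payload == null) {
            // Simulated database failure: do not swallow it; let the
            // provider redeliver instead of losing the message.
            throw new IllegalStateException("write failed for " + orderId);
        }
        // Upsert: same outcome whether this is the first or a redelivered copy.
        orders.put(orderId, payload);
    }

    public int size() {
        return orders.size();
    }

    public static void main(String[] args) {
        IdempotentHandler handler = new IdempotentHandler();
        handler.onMessage("o-1", "10 widgets");
        handler.onMessage("o-1", "10 widgets"); // redelivery: no duplicate row
        System.out.println(handler.size());    // prints 1
    }
}
```

With a real database, the same effect comes from reading before writing, or from a saveOrUpdate-style call, so that the message's effect is keyed on a business identifier rather than appended blindly.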

