
Mar 15, 2015


Minimum steps to load balance WSO2 ESB with HTTPD Server

The HTTPD server, also known as the Apache2 server, is a very commonly used server in many production environments. It is tested and trusted. It has many uses, and you can extend its functionality by installing modules such as mod_proxy, proxy_connect, proxy_balancer, and so on. In this blog post I'll show how to use the HTTPD server as a load balancer with a minimum number of configuration steps.

Install and Prepare the HTTPD server 

If you are using a Debian-based Linux distribution such as Ubuntu or Linux Mint, you can simply install it by issuing the following command.
  • apt-get install apache2
Once the server is successfully installed, you need to install the mod_proxy related modules. To do that, execute the following command.
  • aptitude install -y libapache2-mod-proxy-html libxml2-dev
Now you just need to enable proxy_module, proxy_balancer_module and proxy_http_module. This can be done by executing the following commands.
  • a2enmod proxy
  • a2enmod proxy_balancer
  • a2enmod proxy_http
To verify that the modules are installed and enabled properly, use the following command; sample output is shown after it.
  • apache2ctl -M | grep proxy
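If everything went well, the command should list the proxy modules you just enabled. The output will look roughly like this (the exact list and formatting depend on your Apache version):

 proxy_module (shared)
 proxy_balancer_module (shared)
 proxy_http_module (shared)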

Configuring the cluster 

Following is the cluster setup we will be configuring.



Basically, what we are going to have is two WSO2 ESBs fronted by the HTTPD server. As you may have already noticed, I have used port offset 1 for ESB-1 and port offset 2 for ESB-2. You can change the ports of each ESB by configuring the element below in <ESB_HOME>/repository/conf/carbon.xml.

<!-- Ports offset. This entry will set the value of the ports defined below 
to the define value + Offset.  e.g. Offset=2 and HTTPS port=9443 will
set the effective HTTPS port to 9445 -->
<Offset>1</Offset>

Likewise, you can change the port offset to 2 for ESB-2. Apart from that, for testing purposes I have deployed a simple mock Proxy service like the one below in each ESB; it just builds a static OK payload, logs the hit, and sends the message back to the client.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="MyMockProxy" transports="http https" startOnLoad="true">
   <target>
      <inSequence>
         <!-- build a static mock payload -->
         <payloadFactory media-type="xml">
            <format>
               <response xmlns="">
                  <status>OK</status>
                  <code>1</code>
               </response>
            </format>
            <args/>
         </payloadFactory>
         <!-- turn the message around and send it back to the client -->
         <header name="To" action="remove"/>
         <property name="RESPONSE" value="true"/>
         <!-- log each hit so the load distribution is visible in the console -->
         <log>
            <property name="HIT" value="HIT"/>
         </log>
         <send/>
      </inSequence>
   </target>
</proxy>
Now that you have configured the two WSO2 ESBs, let's look at how to configure the HTTPD server. It is very easy. Open the default site configuration file and add the following configuration. The default configuration file is 000-default and it can be found under /etc/apache2/sites-enabled/.

Just before the end of the VirtualHost section, add the following two entries.

ProxyPass /httpd/ balancer://mycluster/
ProxyPassReverse /httpd/ balancer://mycluster/

Once that is done, add the following configuration at the very top of the 000-default file (even before the VirtualHost section).

<Proxy balancer://mycluster>
    # Define back-end servers:
    # Server 1
    BalancerMember http://localhost:8281/
    # Server 2
    BalancerMember http://localhost:8282/
</Proxy>
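By default mod_proxy_balancer distributes the requests round-robin by request count, which is exactly what we want here. If you ever need to send more traffic to one node than the other, you can weight the members with the loadfactor parameter, for example:

<Proxy balancer://mycluster>
    # ESB-1 gets roughly twice as many requests as ESB-2
    BalancerMember http://localhost:8281/ loadfactor=2
    BalancerMember http://localhost:8282/ loadfactor=1
</Proxy>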

Now save the file and restart the HTTPD server. This can be done by executing the following command.
  • sudo service apache2 restart
After restarting the server, start the two WSO2 ESBs. Once the servers are up, use the following request to see if the cluster is working.
  • curl -v http://localhost/httpd/services/MyMockProxy
Upon successful configuration you should be able to observe that the load is distributed equally between the two servers. Each time a server gets a request, a log message like the one below should appear in its console.

[2015-03-15 16:03:54,073]  INFO - LogMediator To: , MessageID: urn:uuid:4a6e90ae-de16-4828-b416-176250a269d9, Direction: response, HIT = HIT

Yep, it is that easy to load balance WSO2 ESBs with the HTTPD server. To learn more about the HTTPD server you can refer to the links below.

[1] https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
[2] http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
[3] http://wiki.centos.org/HowTos/Https
[4] http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html



Mar 15, 2014


WireTap : An Enterprise Integration Pattern with Message Store and Message Processor

I've always wondered why we needed the Sampling Processor when we have the Forwarding Processor, because at first glance it feels like you can do everything that you do with the Sampling Processor by using the Forwarding Processor. But that is not true. I came across an interesting integration that used the Sampling Processor and the Forwarding Processor together in order to wiretap incoming messages. In fact, there is a separate Enterprise Integration Pattern (EIP) for this called Wire Tap, and this blog post explains a comprehensive implementation of it. In addition, as you go through the blog post you will also get to know the nuts and bolts of the Message Store and Message Processor of WSO2 ESB.

Requirement : Wiretap It

Basically, what we are trying to achieve with this solution is to enable wiretapping for a given Proxy service with minimally intrusive configuration and minimal performance loss. In simple English, we need to listen to the incoming messages seamlessly: the Proxy service continues to do its intended job while we keep on listening (just like the FBI does). Err.. why are we listening, you ask? This could be due to many reasons, such as understanding the incoming request, validating it, and so on.

Application of Message Store/Message Processor

Here comes the interesting part: the implementation of the above requirement. Let's start with a diagram that depicts the implementation. It will give you an initial idea and make it easier to grasp what I am talking about in the next paragraphs.


As you can see, there are two Message Stores: the first one for the Sampling Processor and the second one for the Forwarding Processor. Here's what we have done:

  1. Take a copy of the incoming message and store it in the first message store. This is done with the clone mediator and the store mediator.
  2. Then take the message out using the Sampling Processor and do the necessary modifications to the message, such as adding authentication headers, base64 encoding, etc. Then store it in the second message store.
  3. Lastly, take the modified message out using the Forwarding Processor and send it reliably to the back-end, which in this case is an Apache CouchDB.
Following is the gist of the Synapse configuration for the above design. The ActiveMQ connection details, the store and processor names, and the interval values reflect my test setup, so adjust them for your environment.

<!-- Endpoint for the back-end CouchDB; the URI here is from my local setup -->
<endpoint name="CouchDBEndpoint">
   <address uri="http://localhost:5984/wiretap">
      <timeout><duration>15000</duration></timeout>
   </address>
</endpoint>
<!-- First store: holds the raw copies taken from the proxy -->
<messageStore name="JMSMS" class="org.apache.synapse.message.store.impl.jms.JmsStore">
   <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
   <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
   <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
</messageStore>
<!-- Second store: holds the modified messages waiting to be forwarded -->
<messageStore name="JMSMS1" class="org.apache.synapse.message.store.impl.jms.JmsStore">
   <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
   <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
   <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
</messageStore>
<!-- Sampling Processor: picks messages from the first store and runs them through the modifying sequence -->
<messageProcessor name="SamplingProcessor" class="org.apache.synapse.message.processor.impl.sampler.SamplingProcessor" messageStore="JMSMS">
   <parameter name="interval">1000</parameter>
   <parameter name="sequence">WiretapModifySequence</parameter>
</messageProcessor>
<sequence name="WiretapModifySequence">
   <!-- example modification: attach an auth header for the back-end -->
   <header name="Authorization" scope="transport" value="Basic YWRtaW46YWRtaW4="/>
   <!-- point the forwarding processor at the endpoint, then hand the message to the second store -->
   <property name="target.endpoint" value="CouchDBEndpoint"/>
   <store messageStore="JMSMS1"/>
</sequence>
<!-- Forwarding Processor: reliably delivers the modified messages to CouchDB -->
<messageProcessor name="ForwardingProcessor" class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor" messageStore="JMSMS1">
   <parameter name="interval">1000</parameter>
   <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>

You may wonder why we go through such a complex implementation. Imagine you add all the wiretapping logic to the original proxy. It would obviously hinder its original task: the Proxy would get slower, which in turn reduces the number of clients it can serve. Moreover, developers would end up confusing the original Synapse logic with the new intrusive wiretapping Synapse logic. That is why this approach is the better way.

You can use this Synapse configuration with any Proxy of yours to start wiretapping; before that you will have to copy the necessary ActiveMQ client jar files into the ESB's lib directory. All the proxy itself needs is a clone and a store, as shown in the snippet below. Finally, this also shows the capabilities of WSO2 ESB: an ESB that supports not only conventional Enterprise Integration Patterns (EIPs) but also novel EIPs such as this. For the list of EIPs that WSO2 ESB covers, look here.
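For instance, inside the inSequence of the proxy being tapped, a fragment along the following lines (the store name must match the first message store defined above) copies each request onto the wiretap path without disturbing the original flow:

<clone continueParent="true">
   <!-- the cloned copy is parked in the wiretap store; continueParent lets the original message carry on through the inSequence -->
   <target>
      <sequence>
         <store messageStore="JMSMS"/>
      </sequence>
   </target>
</clone>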


Feb 21, 2014


ESB Performance Round 7.5 - The Other Side of The Story


This blog post explains why the message corruptions described in the “ESB Performance Testing - Round 7” and “Why the Round 6.5 results published by WSO2 is flawed” articles are not as catastrophic as they are made out to be. Moreover, as you go through the post you’ll see that those articles are written in an absurd manner, with overly exaggerated statements. However, with this blog post I don’t intend to play the same game; I only want to clear up any misunderstandings those articles may have caused.

Fastest open source ESB in the world


The latest performance study conducted by the WSO2 ESB team clearly shows that WSO2 ESB continues to be the leader in ESB performance. Geared with the latest technology and a dedicated team, WSO2 ESB always provides nothing but the best for its users. The following graph shows a summary of the latest results. For more information please refer to Performance Round 7.5.



However, there has been some invalid criticism on the Net suggesting that WSO2 ESB fails to deliver. That message is simply not true, and the paragraphs below explain why.

The extinct issue of StreamingXpath 


We must admit that enabling StreamingXpath did lead to message corruption when the message size was larger than 16K. While this was a real issue, it was never a default configuration and has NOT really affected the thousands of real WSO2 ESB deployments out there. Furthermore, it has been fixed in the recently released WSO2 ESB 4.8.1, which continues to be the fastest open source ESB.

XSLT and FastXSLT false alarm


The XSLT and FastXSLT mediators never had a message corruption problem. The message corruptions seen in Performance Round 7 were due to a missing Synapse configuration. Given that the engineers who conducted the performance test were ex-WSO2 ESB team engineers, they could easily have figured this out and fixed it during Performance Round 7. Alternatively, they could have informed us about it prior to the test, so that we could have fixed it for them.

They did neither of these things. So, as they themselves have mentioned, their performance test has inherent limitations due to their limited understanding. Therefore, these corruptions cannot be attributed to WSO2 ESB 4.6 or WSO2 ESB 4.7.0.

Stability of Passthrough Transport (PTT)


Over the last year, WSO2 ESBs running PTT have been deployed at many customer sites, and those sites have never encountered any significant issues; rather, they have benefited from the high performance of the deployed ESBs, since the deployments required only a very few ESB instances.

To clear up any confusion: PTT never had message corruption problems. The problems were in StreamingXpath, which is written on top of PTT in order to utilize its high-performance architecture.

Nothing to Worry About


After all, as the sections above explain, the message corruptions discussed in Performance Round 7 either occur only in extreme situations or never really existed. Therefore, we believe the content of the Performance Round 7 article is more or less misleading its audience. However, StreamingXpath did have a problem with messages larger than 16K, and that is fixed in ESB 4.8.1. Apart from that, there aren’t any message corruption issues at all.

Lastly, the only other criticism worth answering is why we didn’t publish the AMI. Yes, we didn’t publish the AMI, but we did publish the configuration files along with clean and clear instructions to recreate the setup if needed. So, if someone wants to reproduce the results, they can simply recreate the setup. Besides, even if we had published the AMI, one would have to load it into an EC2 instance, which is not always guaranteed to be identical.

In conclusion, most of the points published in those articles are trivial matters, overly exaggerated to make a big thing out of nothing. However, I must admit that some of the criticisms they raised were genuinely helpful for improving our product, and I am grateful to them for those.

Feb 12, 2014


WSO2 ESB Passthrough Transport Basics

When I first joined WSO2 I found it hard to get a grasp of this so-called "Passthrough Transport". All I knew was that it was fast, as opposed to the "NHTTP transport" (which I didn't know anything about either). However, over the past year I have gradually come to understand what this Passthrough Transport is and why it is so fast. So, in this blog post I'll explain some "good-to-know" facts about the Passthrough Transport. Since I am no expert on this, there could be a few gaps, but it is still better than nothing.

Passthrough Transport Vs NHTTP Transport

The main difference is that in the Passthrough Transport the incoming message does not always get built, whereas in the NHTTP Transport it always gets built. By building the message we mean taking the message stream from the socket and transforming it into an XML representation.

In reality, you don't always have to build the message. For instance, you may be able to route the message simply by looking at the headers of the HTTP request, as in the sketch below. So rather than blindly building the message, the Passthrough Transport does this selectively, which makes it smarter than its predecessor, the NHTTP Transport.
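As a rough illustration (the header name and endpoint keys here are made up for the example), a Synapse fragment like the following routes purely on a transport header, so the message body never needs to be built:

<filter source="$trp:X-Route-To" regex="backendA">
   <then>
      <!-- only the HTTP headers are inspected; the body stays as a raw stream -->
      <send><endpoint key="BackendA"/></send>
   </then>
   <else>
      <send><endpoint key="BackendB"/></send>
   </else>
</filter>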

The main similarity between the two is that they are both developed on top of the popular Apache HTTP-Core project.

High level view of Passthrough Transport

OK, now that you have an idea, let's look at a high-level view of this transport.



It is not as simple as the diagram depicts, but it is enough to get you started. In a way, the Passthrough Transport is a complex implementation of the Producer-Consumer pattern. Why do I say so? Let me explain. Everything starts on the SourceHandler side. When a client sends a request, it comes through HTTP-Core to the SourceHandler. The SourceHandler then starts producing data into the Pipe. As soon as the SourceHandler starts producing data into the Pipe, the TargetHandler starts consuming data from it. The consumed data is sent through HTTP-Core to the desired endpoint.

As I said earlier, it is not as simple as that. There are quite a lot of classes involved in the process, such as ServerWorker, SourceRequest, SourceResponse, ClientWorker, TargetResponse, TargetRequest, etc. To make matters worse, the entire implementation is asynchronous.

State Machine of Passthrough Transport

So, in order to reduce the complexity of the entire process, it is implemented as a state machine. The SourceHandler and the TargetHandler have their own separate state machines. Following is the state machine used by them.


The vertical split represents the SourceHandler side and the TargetHandler side. The horizontal split represents the HTTP request and the HTTP response. The methods next to each state are the ones that get executed in that state (forget the methods for the moment). Before I explain the state machine, it is important to know that in Passthrough the HTTP message is divided into two parts: the HEADERS and the BODY.

This is how it goes. First, the ESB establishes the connection with the client and sets its state to REQUEST_READY. Then it starts receiving data: first it reads the headers and moves to the REQUEST_HEAD state. Afterwards, it makes itself ready to read the body of the message in the REQUEST_BODY state. Finally, to finish the first quarter, it reads the entire message body and moves to the REQUEST_DONE state.

The same thing continues in the next quarter, but this time the ESB acts as the client to some back-end server. In the last two quarters the same happens for the response. The states of the SourceHandler and the TargetHandler are interconnected through the Pipe's buffer. So sometimes when you debug, even though the error message shows up on the TargetHandler side, the actual cause could be on the other side.

Exact Location of Passthrough Transport

The following diagram shows the location of the Passthrough Transport in the ESB architecture.




Finally, THIS STATE MACHINE REPRESENTS AN IDEAL SCENARIO; in the real world things can deviate a bit from it. Moreover, this blog post only covers a small bite of a complex implementation. Anyway, this knowledge is enough to get you started.

