As Vlad pointed out in a comment on my previous post about using the Split-Join, there are a few things to keep in mind when using it. If you put a Split-Join in your service and let it fire off any number of service calls in parallel, you might be in for some trouble: bogging down your server with requests and potentially losing data are two of the biggest concerns. Throughout this blog I will be referring to my previous post and the code example that came with it, so for your reference:
- Blog post
- Code Example
- WeatherDataTestSuite (To use this example you will also have to run the mockService in SoapUI)
So now that we know how to process (parts of) messages in parallel, how do we control this and make sure things do not get out of hand? There are a few ways to do this, and the way to go depends largely on one question: do we know how many messages to expect? Maybe there is a set number of RainRecords in every session. If there is not, maybe we can still enforce a fixed amount, or require the records to be organized in sets of a fixed size. In those cases the answer to the question is yes, and we can come a long way towards solving our problem by using the Parallel component in the Split-Join. More about that later though; for now let’s look at our original case, where we do not know exactly how many records we will receive.
When we do not know how many records we are going to receive, the solution does not lie in the Split-Join itself; instead we will have to look at the Service Bus Console. We are going to use Throttling to keep the number of service invocations in check. First, navigate to your Service Bus Console; when the server in your OEPE environment is running you can normally find it at http://localhost:7001/sbconsole/. Now open the Project Explorer and go to SplitJoinBlogCase->localServices->storeRecord->business. Select the storeRecordMock business service from the list of resources.
When it is selected you will see a detailed page describing the service, including four tabs. Select the Operation Settings tab, where we can enable Throttling. For those unfamiliar with the console: you might notice that none of the settings can currently be changed. All changes you make here have to be part of a session. You can create a new session by clicking the Create button in the Change Center at the top left of the page; this will make the settings editable.
After creating the session the Create button turns green and its text changes to Activate. Now we can edit the Throttling settings. First of all, check the Throttling State checkbox, which enables the actual throttling. Next we need to put some numbers in. For now, let’s set the Maximum Concurrency to 8: no more than 8 calls to this service will be handled at the same time by the server. For the Throttling Queue let’s set 2000; we are expecting a lot of relatively small messages, and they need to go somewhere to wait for their turn to be processed. Basically this is the line a call to this service must wait in until one of the 8 service windows becomes available to handle the request. When you set the queue length to 0 there will be no queue and any excess messages will be discarded. Lastly we need to set the Message Expiration. If we leave it at 0, messages will never expire; this is fine if we do not want to lose any messages, but it also means we let the caller wait for an indefinite amount of time. Instead we set the expiration time to 10000 msecs: if a message has to wait longer than 10 seconds to be processed, we discard it.
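To get a feel for how these three settings interact, here is a minimal plain-Java sketch of the same idea: 8 workers, a bounded waiting line and an expiration check. This is not OSB code and not how the bus implements throttling internally; the class and method names (ThrottlingAnalogy, storeRecord) are just placeholders for the sake of the example.

```java
import java.util.concurrent.*;

/**
 * Not OSB code: a plain-Java analogy for the three throttling settings.
 * The names used here are hypothetical.
 */
public class ThrottlingAnalogy {

    static final int MAX_CONCURRENCY = 8;      // "Maximum Concurrency"
    static final int QUEUE_LENGTH    = 2000;   // "Throttling Queue"
    static final long EXPIRATION_MS  = 10_000; // "Message Expiration"

    // 8 workers and a bounded wait queue of 2000; anything beyond that is
    // rejected outright, which is roughly what a queue length of 0 does to
    // every excess message.
    private final ThreadPoolExecutor executor = new ThreadPoolExecutor(
            MAX_CONCURRENCY, MAX_CONCURRENCY,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(QUEUE_LENGTH),
            new ThreadPoolExecutor.AbortPolicy());

    /** Submit one record; it may be processed, expire while waiting, or be rejected. */
    public void submit(String rainRecord) {
        final long enqueuedAt = System.currentTimeMillis();
        try {
            executor.execute(() -> {
                long waited = System.currentTimeMillis() - enqueuedAt;
                if (waited > EXPIRATION_MS) {
                    // Rough equivalent of Message Expiration: the record waited
                    // too long in the queue, so we discard it.
                    System.out.println("Discarded (expired after " + waited + " ms): " + rainRecord);
                    return;
                }
                storeRecord(rainRecord); // at most 8 of these run at the same time
            });
        } catch (RejectedExecutionException queueFull) {
            System.out.println("Discarded (queue full): " + rainRecord);
        }
    }

    private void storeRecord(String rainRecord) {
        // placeholder for the actual call to the storeRecordMock business service
    }
}
```

With a queue length of 0 the waiting line disappears and every excess call is rejected on the spot, which is exactly the situation we will use later on to provoke the SOAP fault.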
Once we have entered the throttling settings, scroll down and click the Update button. Next, click the Activate button to let your new settings take effect. Before the settings are actually applied, the SB Console will ask for a description of the changes you made. Enter a short description of what you did, so that others know why the changes were made.
After submitting, you are done enabling Throttling on your business service. It is worth keeping in mind that the throttling value is per domain and not per server, meaning that in a clustered environment the messages will be divided equally among the managed servers. You can read more about this in Oracle’s documentation on the subject.
Unfortunately, when messages are discarded your WeatherDataService will return the SOAP fault generated by the concurrent call that was discarded. This results in a nice BEA-38001 error overriding your response (even though some messages might already have been handled properly). To prevent this we will have to implement some basic error handling. I will not go into the details of error handling in this post, but in short this is what you do:
- Generate a proxy service based on your storeRecord business service.
- Add an Error Handler component to the Routing component.
- Replace the body of the message (the SOAP fault) with an error message of your own (see the sketch after this list). In my case I merely state that the processing of the record was unsuccessful. Obviously, adding a more detailed message will make debugging easier, especially on more complex systems.
- Tell the Error Handler to resume. This tells the service not to stop, but to return your error message and continue handling messages (as far as possible).
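To make the replace-and-resume idea a bit more tangible, here is a small plain-Java sketch of the same pattern: call the service, and if it faults, swallow the fault and hand back a friendly message instead of letting the fault override the whole response. The StoreRecordClient interface and its method are hypothetical stand-ins, not the actual OSB error handler or proxy.

```java
/**
 * Not the OSB error handler itself: a plain-Java sketch of the
 * "replace the fault and resume" idea, with hypothetical types.
 */
public class StoreRecordWithResume {

    /** Hypothetical stand-in for the call to the storeRecord proxy/business service. */
    interface StoreRecordClient {
        String storeRecord(String rainRecord) throws Exception;
    }

    private final StoreRecordClient client;

    public StoreRecordWithResume(StoreRecordClient client) {
        this.client = client;
    }

    /**
     * If the call fails (for example because the throttled service discarded
     * the message), we do not let the fault propagate and abort the whole
     * response; we replace it with a simple error message and carry on.
     */
    public String storeWithFriendlyError(String rainRecord) {
        try {
            return client.storeRecord(rainRecord);
        } catch (Exception fault) {
            // Equivalent of the Replace action in the error handler:
            // hand back our own message instead of the raw SOAP fault.
            return "Processing of record was unsuccessful: " + rainRecord;
        }
    }
}
```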
Creating the SOAP fault is fairly easy: just set the queue size to 0 and send more records than you allow concurrent calls to the business service. In my next posts we will dive deeper into error handling in the OSB and into using the Parallel component in our Split-Join.
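If you prefer to provoke the fault outside of SoapUI, a small client that fires more concurrent calls than the Maximum Concurrency allows will do the trick as well. The sketch below is only an illustration: the endpoint URL and the SOAP envelope are placeholders that you would have to replace with those of your own storeRecord proxy.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Fires more concurrent requests at the throttled service than it allows,
 * so some of them get rejected. Endpoint URL and envelope are placeholders.
 */
public class ThrottleTester {

    private static final String ENDPOINT = "http://localhost:7001/SplitJoinBlogCase/storeRecord"; // hypothetical
    private static final String ENVELOPE =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body><!-- your storeRecord request here --></soapenv:Body>"
          + "</soapenv:Envelope>";

    public static void main(String[] args) throws InterruptedException {
        HttpClient http = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(20); // well above a concurrency limit of 8

        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT))
                        .header("Content-Type", "text/xml; charset=utf-8")
                        .POST(HttpRequest.BodyPublishers.ofString(ENVELOPE))
                        .build();
                try {
                    HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("HTTP " + response.statusCode()); // rejected calls show up as faults
                } catch (Exception e) {
                    System.out.println("Call failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```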
Hello,
Is there a way to persist the throttling queue so that I can be sure that none of the messages will be lost under any circumstance?
Thanks,
Thanks, I have a small doubt regarding the throttling queue:
How do I access the messages that are stored in the throttling queue?
Interesting. In my projects I have never used the wait queue, perhaps because I felt I was losing a bit of control. I shall review some of my services where delayed execution may make sense.
Thanks!
A minor but important correction though.
“Let’s set the Maximum Concurrency to 8: no more than 8 calls to this service will be handled at the same time by the server”
The throttling value is per domain. That is, a value of 8 in a domain with 4 managed servers will limit each managed server to 2 concurrent requests.
I am glad I could clarify some of it. And indeed you are right about the throttling value. The example is based on one server, but I will make sure to add your note.