

In this article, we’ll focus on recipes for designing high performance SOA Suite 11g applications. These recipes look at how you can design your applications for high performance and scalability, where high performance is defined as providing low response times even under load, and scalability is defined as the ability to expand to cope with large numbers of requests.

While many of the recipes in other articles can be applied after the application has been designed and written, those in this article need to be applied while the application is being written, and may require that your application is implemented in a certain way. Designing an application with performance as a requirement from the start is much easier than trying to add performance to an application that is already live. So, the recipes in this article provide some of the best value for money in terms of getting the most performance out of your SOA Suite infrastructure. However, while this book focuses on decisions that should be made during the design stages of a development process, this article is not a list of general SOA Suite design patterns.

As with many of the recipes in other articles, a lot of the focus in this article is on reducing the amount of time your application spends waiting on external services and the SOA Suite database tables.

There are many aspects to the performance of a SOA Suite application, and the design guidelines depend very much on the particular business problems that your application is designed to solve. Factors such as payload size, number of external systems being orchestrated, data transformation complexity, and persistence requirements, all have an impact on the performance of your application. Performance is a relative term, with each application and use-case having its own requirements, but there are a number of basic principles that can help ensure that your application will have a good chance of meeting its goals.

  • Design for peak loads, not average loads. Average loads can be very misleading; there are many situations in which the average load of a system is not a good indicator of the expected load. A good example of this would be a tax return system, where usage for most of the year is very low, building to a peak in the 30 or so days before tax returns are due.
  • Smaller payloads are faster. When designing your application, try to limit the amount of payload data that flows through your composites and processes. It is often better to store the data in a database and pass only the key and metadata through the processes, retrieving the full data only when required.
  • Understand your transaction boundaries. Many applications suffer performance problems because their transaction boundaries are in the wrong places, causing work to be redone unnecessarily when failures happen, or leaving data in an inconsistent state.
  • Understand what causes your application to access the database, and why. Much of the performance overhead of Oracle SOA Suite applications is in repeated trips to the database. These trips add value by persisting state between steps or within processes, but the overuse of steps that cause database persistence is a common cause of performance problems.
  • Follow standard web service design patterns, such as using asynchronous callbacks and stateless invocations, where you are using web services.

Using BPEL process parallelization

By having your BPEL process execute steps in parallel where there are no dependencies between them, you can improve performance by spending less time waiting for external systems to respond.

Getting ready

You will need JDeveloper installed and a BPEL project open.

How to do it…

Follow these steps to use BPEL process parallelization:

  1. Expand the BPEL Constructs section in the component palette.
  2. Drag Flow from the palette onto the process.

  3. Click on the + icon next to the flow to expand it.

  4. Populate the flow with the process steps.
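In BPEL source, the steps above produce a flow activity with one branch per parallel path. The following is a minimal sketch in BPEL 2.0 syntax; the partner link, operation, and variable names are hypothetical, not taken from this recipe:

```xml
<!-- Two independent invocations run as parallel branches of a flow.
     Partner link, operation, and variable names are illustrative. -->
<flow name="ParallelChecks">
  <sequence>
    <invoke name="InvokeCreditCheck"
            partnerLink="CreditCheckService"
            operation="checkCredit"
            inputVariable="creditRequest"
            outputVariable="creditResponse"/>
  </sequence>
  <sequence>
    <invoke name="InvokeAddressCheck"
            partnerLink="AddressCheckService"
            operation="verifyAddress"
            inputVariable="addressRequest"
            outputVariable="addressResponse"/>
  </sequence>
</flow>
```

The flow activity completes only when all of its branches have completed, so the steps that follow it can safely use the results of both invocations.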

How it works…

If you have a number of tasks that do not have dependencies on each other, you can improve performance by executing those tasks in parallel. This is most effective with partner links, where you know you are waiting on an external system to produce a response. Note that the default behavior of these flows is still to use a single thread to execute the branches, even when external systems are invoked. See the Using non-blocking service invocations in BPEL flows recipe to learn how to execute flows that contain partner links in truly parallel threads.

There’s more…

It is possible to include a limited amount of synchronization between branches of a flow, so that tasks on one branch will wait for tasks on another branch to complete before proceeding. This is best used with caution, but it can provide benefits, and allow tasks that would not otherwise easily lend themselves to parallelization to be run in parallel.
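This cross-branch synchronization is expressed with flow links. The sketch below uses BPEL 2.0 syntax under the assumption that your process uses BPEL 2.0; the activity, partner link, and link names are illustrative:

```xml
<!-- The ScoreCustomer branch waits, via the customerLoaded link,
     for LoadCustomer on the other branch to complete first.
     All names here are illustrative. -->
<flow name="LinkedFlow">
  <links>
    <link name="customerLoaded"/>
  </links>
  <sequence>
    <invoke name="LoadCustomer" partnerLink="CustomerService"
            operation="load" inputVariable="custRequest"
            outputVariable="custResponse">
      <sources>
        <source linkName="customerLoaded"/>
      </sources>
    </invoke>
  </sequence>
  <sequence>
    <invoke name="ScoreCustomer" partnerLink="ScoringService"
            operation="score" inputVariable="scoreRequest"
            outputVariable="scoreResponse">
      <targets>
        <target linkName="customerLoaded"/>
      </targets>
    </invoke>
  </sequence>
</flow>
```

Each link adds a synchronization point, so use them sparingly; too many links can serialize a flow and eliminate the benefit of parallelization.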

Using non-blocking service invocations in BPEL flows

If we assign a thread to each branch of a flow, making it multi-threaded, we can reduce the latency of parallel external service invocations in a BPEL process to the execution time of the longest-running branch.

Getting ready

You’ll need a composite loaded in JDeveloper to execute this recipe. This composite will need a flow that makes calls to a partner link external service.

How to do it…

Follow these steps to use non-blocking service invocations:

  1. Right-click on each partner link that is being executed in your BPEL process flow, and select Edit.

  2. In the Property tab, select the green + icon and add nonBlockingInvoke as a property name. In the Value box at the bottom, enter true.

How it works…

This recipe causes flow branches to be executed in parallel, with a new thread used for each branch.

For multiple service invocations that each have a high latency, this can greatly improve the total BPEL execution time. For example, assume we have a BPEL process that calls two web services, one that takes four seconds to execute, and one that takes six seconds to execute. Applying this change will prevent the BPEL process making the calls serially, which would take 10 seconds in total, and enforce parallel service calls in separate threads, reducing the execution time to just over six seconds, or the latency of the longest call plus time to collate the results in the main BPEL process execution thread.

While it may sound like a silver-bullet performance improvement, this recipe will not necessarily improve the execution time of our BPEL process. Consider that we may now be at the mercy of greater thread context switching in the CPU: every invocation of our process now spawns a larger number of threads. If each service invocation has a low latency, the overhead of creating threads and collating callbacks might actually be greater than the cost of invoking the services in a single thread. The example above is contrived, so be sure to test the response time and profile of your composite under operational load (which may result in many threads spawning), as these may well differ once the configuration is applied.

There’s more…

This recipe used an alternative way of setting property values to the one we've used elsewhere in the book. Previously, we edited composite files directly; here, we used the JDeveloper BPEL graphical editor to achieve the same end result. If you check the composite.xml source, you'll see a property named partnerLink.[your service name].nonBlockingInvoke added for each service.
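For example, the resulting entry in composite.xml looks something like the following sketch, where the component and partner link names are illustrative:

```xml
<!-- BPEL component in composite.xml; one nonBlockingInvoke property
     per partner link. Component and service names are illustrative. -->
<component name="MyBPELProcess">
  <implementation.bpel src="MyBPELProcess.bpel"/>
  <property name="partnerLink.CreditCheckService.nonBlockingInvoke">true</property>
  <property name="partnerLink.AddressCheckService.nonBlockingInvoke">true</property>
</component>
```

Setting the property here by hand has the same effect as using the graphical editor, which can be convenient when applying it to many partner links at once.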

Turning off payload validation and composite state monitoring

Payload validation checks all inbound and outbound message data against the associated schemas, adding an overhead, especially for large message types. Composite state monitoring allows administrators to view the results of all instance invocations. We can disable both to improve performance.

Getting ready

You will need to know the administration credentials for your Oracle SOA Suite WebLogic domain, and have access to the Oracle Enterprise Manager console.

How to do it…

By following these steps, we can turn off payload validation:

  1. Log in to Enterprise Manager.
  2. Open the SOA tab, right-click on soa_infra, and select SOA Administration and then Common Properties.

  3. Un-tick the checkbox for Payload Validation to disable this feature.
  4. Un-tick the checkbox for Capture Composite Instance State.

How it works…

In this recipe, we globally disabled payload validation. This instructs SOA Suite not to check inbound and outbound message payloads against the schemas associated with our services. This is clearly useful when payloads come from trusted sources, but it can be applied even when some sources are untrusted: a common alternative to global payload validation is to validate payloads manually at the point where we first receive a request, while not validating messages that have come from internal or trusted sources.
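One way to implement that boundary-only validation, assuming your process uses BPEL 2.0, is the standard validate activity placed immediately after the initial receive, so that only untrusted inbound messages are checked; the names below are illustrative:

```xml
<!-- Validate only the inbound payload at the process boundary.
     Activity, partner link, and variable names are illustrative. -->
<sequence>
  <receive name="ReceiveRequest" partnerLink="Client"
           operation="process" variable="inputVariable"
           createInstance="yes"/>
  <!-- Checks inputVariable against its declared schema and throws
       a standard fault if it does not conform -->
  <validate name="ValidateInput" variables="inputVariable"/>
  <!-- remainder of the process -->
</sequence>
```

This keeps the expensive schema checks at the edge of the system, while internal hops between trusted composites run with global payload validation disabled.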

There are a number of levels of granularity for payload validation; it can also be applied at the SOA engine (BPEL) and composite levels to allow fine-grained application of this property. You can access these properties via the right-click menus on the SOA engines and deployed composites in the Enterprise Manager console. For performance, I would recommend disabling payload validation in all environments above development.

Composite state management is responsible for tracking and representing the health of our running composites. This is a powerful administration feature, but it costs a lot in terms of performance; anecdotal testing shows that it can be responsible for up to 30 percent of processing time. As such, for high-throughput applications, the value of this feature should be weighed against its cost.

There’s more…

See the recipes on audit logging to further control composite recording activities at runtime.

Check the payload validation settings at the engine and composite levels to confirm that they meet your performance requirements.
