Tuning JBoss AS 7 performance

In this article by Francesco Marchioni, author of JBoss AS 7 Configuration, Administration and Deployment, we'll discuss JBoss AS 7 performance tuning.

Performance tuning is a topic which concerns every application once it is rolled into production. The performance of the application server is influenced by a lot of factors; for this reason, a complete guide to tuning would require a book by itself (affectionate readers might remember that last year a book covering JBoss AS 5 performance tuning was authored by me and published by Packt Publishing, available at: http://www.packtpub.com/jboss-5-performance-tuning/book).

In this article, we will try to stress the most important factors that influence the performance of the application server itself and of the applications deployed on it, with a special focus on the factors introduced by the new platform.

This article will basically cover the following topics:

  • First, we will introduce the basics of performance tuning in a nutshell
  • Then, we will show the key elements of the AS configuration that can influence its performance

Definition of tuning

The performance tuning process can be defined as an iterative process that you use to identify and eliminate bottlenecks until your application meets its performance objectives.

You start by establishing a baseline. In this part of the process, you should decide what you are going to measure, and make sure you develop a performance test plan before you start. The test plan typically includes the desired performance, as well as the testing methodology which will be used.

Then you collect data, analyse the results, and make configuration changes based on the analysis. After performance testing is completed, analysis of the test results helps determine the root causes of poor performance. This should be done before attempting any changes for performance.

In order to measure performance it’s essential to establish how performance will be measured. There are two important properties that you should measure for quantifying the performance of your applications:

  • Response Time
  • Throughput

The response time is the time it takes for one user to perform an operation. For example, in an e-commerce site, after the customer puts items in a shopping cart and clicks the Buy button, the time it takes to process the order and for the checkout screen to appear is the response time for the checkout web page.

Throughput is the number of transactions that can occur in a given amount of time. The throughput is usually measured in Transactions Per Second (TPS).
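
For example, if a load test completes 3,000 checkout transactions over a 5-minute (300-second) window, the measured throughput is 3000 / 300 = 10 TPS.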

Although many architects and software engineers agree that about 70-80% of application performance depends on how the application itself is coded, a poorly configured server environment can significantly affect your user experience and, eventually, your application's value.

There are quite a lot of configuration elements that can influence your server performance; however, some of them deserve special attention:

  • JVM tuning
  • Application server resource pooling
  • Logging
  • Caching data

Let’s see each element in detail!

JBoss AS 7 runs within a Java Virtual Machine (JVM); hence, it's natural that the AS can deliver better performance with proper configuration of the JVM parameters.

JVM tuning has evolved over the years and has actually changed with each version of Java. Since release 5.0 of the J2SE, the JVM is able to provide a default configuration ("Ergonomics") which is consistent with your environment. However, the choice made by Ergonomics is not always the optimal one, and without an explicit user setting, the performance can fall below your expectations.

Basically, the JVM tuning process can be divided into the following steps:

  • Choose a correct JVM heap size. This can be divided into setting an appropriate initial heap size (-Xms) and a maximum heap size (-Xmx).
  • Choose a correct Garbage collector algorithm.

Let's see both elements in more detail.

Choosing the correct JVM heap size

Java objects are created in the heap, which is divided into three parts, or generations, for the sake of garbage collection. These are called the young generation, the tenured or old generation, and the permanent area of the heap.

The young generation is further divided into three parts, known as the Eden space, the Survivor 1 space, and the Survivor 2 space. When an object is first created in the heap, it is allocated in the young generation, inside the Eden space. If the object survives a subsequent minor garbage collection, it moves to Survivor 1 and then to Survivor 2, before a major garbage collection moves it to the old or tenured generation.

The permanent generation (or perm area) of the heap is somewhat special: it is used to store metadata related to classes and methods in the JVM, and it also hosts the String pool provided by the JVM.

In order to tune the JVM, you should choose a correct ratio between young generation (where objects are initially placed after instantiation) and the tenured generation (where old living generations are moved). For most applications, the correct ratio between the young generation and the tenured generation ranges between 1/3 and close to 1/2.
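
If you prefer expressing this as a ratio rather than with absolute sizes, the standard -XX:NewRatio flag can be used; as a sketch, -XX:NewRatio=2 makes the young generation one third of the total heap:

java -Xmx1024m -Xms1024m -XX:NewRatio=2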

The appropriate maximum heap size can be determined by testing your application with a peak load for a sustained period. Once you have determined the peak memory demanded by your application, you can allow an extra 25-40 percent additional maximum heap size, depending on the nature of your application. As far as the initial heap size is concerned, a good rule of thumb is to set the minimum heap size to be the same as the maximum heap size, in order to avoid having the JVM allocate memory to expand the heap. This is particularly useful for production environments, while developers (who have limited resources) might choose a smaller initial heap size.

Keep the following suggested configurations as a reference, the first for smaller environments and the second for larger ones:

java -Xmx1024m -Xms1024m -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=6
java -Xmx2048m -Xms2048m -XX:MaxNewSize=896m -XX:NewSize=896m -XX:SurvivorRatio=6

The following table will help you to recap the meaning of the JVM flags:

Property Description
-Xmx Maximum heap size allowed for the JVM
-Xms Initial heap size allowed for the JVM
-XX:MaxNewSize Maximum size of new (young) generation
-XX:NewSize Initial size of new (young) generation
-XX:SurvivorRatio Ratio of Eden/Survivor space size

Tuning the garbage collector

Garbage collection is a mechanism provided by the Java Virtual Machine to reclaim heap space from objects that are eligible for garbage collection.

An object becomes eligible for garbage collection (GC) if it is not reachable from any live threads or any static references. In other words, an object becomes eligible for garbage collection when no live reference to it remains.
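
A trivial sketch of an object becoming eligible for collection:

Object data = new Object(); // reachable through the 'data' reference
data = null;                // no live reference remains: eligible for GC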

Choosing the correct garbage collector algorithm is a key (but often overlooked) factor that plays an important role in reaching your service level requirements. Several garbage collectors are available:

  • Serial collector (-XX:+UseSerialGC): It performs garbage collection using a single thread, stopping the other JVM threads while it runs. This collector is a fit for smaller applications; we don't advise using it for enterprise applications.
  • Parallel collector (-XX:+UseParallelGC): It performs minor collections in parallel and, since J2SE 5.0, can also perform major collections in parallel (-XX:+UseParallelOldGC). This collector is a fit for multiprocessor machines and applications requiring high throughput. It is also a suggested choice for applications that produce a fragmented Java heap by allocating large objects at different times.
  • Concurrent collector (-XX:+UseConcMarkSweepGC): It performs most of its work concurrently, using a single garbage collector thread that runs simultaneously with the application threads. It is a fit for fast processor machines and applications with a strict service-level agreement. It can also be the best choice for applications using a large set of long-lived objects, like HttpSessions.
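
For example, a minimal sketch enabling the parallel collectors on the 2 GB configuration shown earlier (MyApp.jar is a hypothetical application archive):

java -Xmx2048m -Xms2048m -XX:+UseParallelGC -XX:+UseParallelOldGC -jar MyApp.jar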

The new G1 collector

One of the major enhancements in Java 7 is the new G1 (“Garbage first”) low-latency garbage collector planned to replace CMS in the Hotspot JVM. It is a server-style collector, targeted at multiprocessor machines with large amounts of memory.

The G1 collector is a departure from earlier collectors that had a physical separation between the young and old generations. With G1, even though it is generational, there is no physical separation between the two generations. This collector divides the entire space into regions and allows a set of regions to be collected, rather than split the space into an arbitrary young and old generation.

The key features of the G1 collector are:

  1. G1 uses parallelism, which is widely available in hardware today. The main advantage of G1 is that it is designed to make use of all the available CPUs, utilizing their processing power to increase performance and speed up garbage collection.
  2. The next feature that plays a key role in improving garbage collection is that G1 treats young objects (newly created) and old objects (those that have lived for some time) differently. G1 mainly focuses on young objects, as they are the most likely to be reclaimable.
  3. Heap compaction is done to eliminate fragmentation problems. In essence, because G1 compacts as it proceeds, it copies objects from one area of the heap to the other. Therefore, because of compaction, it will not encounter the fragmentation issues that CMS might. There will always be areas of contiguous free space from which to allocate, allowing G1 to have consistent pauses over time.

Compared to CMS, the G1 collector is also much easier to use, because it has a smaller number of switches, and hence tuning the VM is simpler. G1 is already present in JDK 7, and you can try it. To use G1, these two switches need to be passed:

-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
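
When running JBoss AS 7, JVM flags like these are typically appended to the JAVA_OPTS variable in bin/standalone.conf (or bin/standalone.conf.bat on Windows); a minimal sketch:

JAVA_OPTS="$JAVA_OPTS -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC"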

Using large memory pages

Another area of JVM tuning that can yield substantial benefits to your application is the usage of large memory pages. Large memory pages are a feature of modern CPUs that allow memory-hungry applications to allocate memory in 2-4 MB chunks, instead of the standard 4 KB. Beginning with Java SE 5.0, there is a cross-platform flag for requesting large memory pages: -XX:+UseLargePages (on by default for Solaris, off by default for Windows and Linux).

The goal of large-page support is to optimize the processor's Translation Lookaside Buffer (TLB). A Translation Lookaside Buffer is a page translation cache that holds the most recently used virtual-to-physical address translations. The TLB is a scarce system resource, and a TLB miss can be costly, as the processor must then read from the hierarchical page table, which may require multiple memory accesses. By using a bigger page size, a single TLB entry can represent a larger memory range; there will be less pressure on the TLB, and memory-intensive applications may have better performance.

Large memory pages are available with the 64-bit JVM. (Red Hat Enterprise Linux does let you allocate large pages on the 32-bit OS, but you get an illegal argument error when starting the JVM.)

The Sun JVM, as well as OpenJDK, requires the following option, passed on the command line, to use large pages: -XX:+UseLargePages.
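
Note that on Linux, the operating system must also reserve large pages before the JVM can use them; a rough sketch, run as root (the page count is illustrative and must be large enough to cover your heap):

# reserve 1024 huge pages (2 MB each on most x86_64 systems, i.e. 2 GB)
echo 1024 > /proc/sys/vm/nr_hugepages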

Application server resource pools

Application server pools have been used since the very first release of any application server as a means to set boundaries for the resources they contain.

Resource pooling offers several benefits, such as:

  • Improved performance: You can reuse resource-intensive objects, such as database connections, instead of creating and destroying them every time.
  • Improved security: By granting a limited number of resources, you prevent applications from plundering server resources, which could eventually lead to an interruption of the AS services.

JBoss AS 7 uses several resource pools to manage different kinds of services. The application server ships with a default configuration for all resource pools, which may be just fine for simple applications. If you are planning to write mission-critical applications, however, you need to find the appropriate number of resources to be assigned to your pools.

We will discuss in particular the following pools of resources, which ultimately play an important role in performance tuning:

  • The database connection pool
  • The EJB pool used by Stateless EJBs and MDBs
  • The Web server pool of threads

At the time of writing, the application server is not ready to produce performance metrics for the individual subsystems we have mentioned. Although it would be preferable to monitor the application server pools through the management interfaces, you can still have a look inside the application server pools using some other tools or with a minimal sample application. That's what we will do in the next sections (if you want to check all the AS 7 latest updates that couldn't be added in this book, check the author's blog at: http://tinyurl.com/63tvagg).

Tuning the database connection pool

Establishing a JDBC connection with a DBMS can be quite slow. If your application requires database connections that are repeatedly opened and closed, this can become a significant performance issue. The connection pools in JBoss AS datasources offer an efficient solution to this problem.

What is important to stress is that when a client closes a connection from a datasource, the connection is returned to the pool and becomes available for other clients; therefore, the connection itself is not closed. The cost of opening and closing pooled connections can be measured in terms of nanoseconds, so it’s irrelevant in terms of performance.
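
Since pooled connections are only returned to the pool (not destroyed) when they are closed, it is vital that application code always closes them. A minimal sketch, assuming a datasource bound in JNDI under java:/MySqlDS (a hypothetical name; the exact JNDI entry depends on your configuration) and a hypothetical CUSTOMER table:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerDao {
    public int countCustomers() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("java:/MySqlDS");
        // try-with-resources closes the connection, returning it to the pool
        // even if the query fails
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM CUSTOMER")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

The pool boundaries themselves are set within the datasource definition: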

<datasource jndi-name="MySqlDS" pool-name="MySqlDS_Pool"
    enabled="true" jta="true" use-java-context="true" use-ccm="true">
    <connection-url>
        jdbc:mysql://localhost:3306/MyDB
    </connection-url>
    <driver>mysql</driver>
    <pool>
        <min-pool-size>10</min-pool-size>
        <max-pool-size>30</max-pool-size>
        <prefill>true</prefill>
    </pool>
    <timeout>
        <blocking-timeout-millis>30000</blocking-timeout-millis>
        <idle-timeout-minutes>5</idle-timeout-minutes>
    </timeout>
    . . . .
</datasource>

Here, we configured an initial pool capacity of 10 connections, which can grow up to 30. As you can see from the MySQL administration console, when you set the prefill element to true, the application server attempts to pre-fill the connection pool at startup. This can produce a performance hit at boot time, especially if your connections are costly to acquire.

If the application server is not able to serve any more connections because they are all in use, then it will wait up to the blocking-timeout-millis before throwing an exception to the client.

At the same time, connections that have been idle for longer than the idle-timeout-minutes parameter are closed and removed from the pool.

Adjusting the pool size

To determine the proper sizing, you need to monitor your connection usage. As we said, at the time of writing, the application server is not able to produce runtime metrics for the connection pool. However, there are some valid alternatives: the first and most obvious is monitoring the database sessions. The following table shows some useful commands, which can be used to keep track of active database connections on different databases:

Database Command / Table
Oracle Query the V$SESSION view
MySQL Use the command SHOW FULL PROCESSLIST
PostgreSQL Query the PG_STAT_ACTIVITY view

Another option is using a tool such as P6Spy, which acts as a JDBC proxy driver (the author has blogged an article about it at: http://tinyurl.com/6dkmbxn).

Once you have found the peak of connection used by your application, just set the maximum at least 25-30 percent higher. Don’t be concerned about setting the maximum too high, because if you don’t need that many connections, the pool will shrink automatically, provided that you have set idle-timeout-minutes.
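
For example, if monitoring shows a peak of 24 concurrent connections, adding roughly 25 percent of headroom suggests a pool sized like this (values are illustrative):

<pool>
    <min-pool-size>10</min-pool-size>
    <max-pool-size>30</max-pool-size>
</pool>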

On the other hand, your server logs are still an invaluable help to check if your pool is running into trouble. For example, if you start seeing this exception in your server logs, there is a strong clue that you need to look at your connection pooling:

21:57:57,781 ERROR [stderr] (http-executor-threads - 7) Caused by: javax.resource.ResourceException: IJ000655: No managed connections available within configured blocking timeout (30000 [ms])

21:57:57,782 ERROR [stderr] (http-executor-threads - 7) at org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool.getConnection(SemaphoreArrayListManagedConnectionPool.java:355)

EJB connection pool

The creation and destruction of beans can be an expensive operation, especially if they acquire external resources. To reduce this cost, the EJB container creates a pool of beans that, therefore, don’t need to be re-initialized every time they are needed.

The Stateless EJB pool and MDB pool are used to provide stateless business services to their clients, acquiring beans from the pool when they are requested and releasing them back to the pool as soon as they are finished.

A typical EJB pool configuration looks like the following:

<pools>
    <bean-instance-pools>
        <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20"
            instance-acquisition-timeout="5"
            instance-acquisition-timeout-unit="MINUTES"/>
        <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20"
            instance-acquisition-timeout="5"
            instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
</pools>

At the time of writing, AS 7 only supports strict-max-pool as a bean instance pool.

A strict max pool allows you to configure a maximum upper limit for the pool. At runtime, when all the bean instances from the pool are in use and a new bean invocation request comes in, the pool blocks the request until the next bean instance is available or until a timeout (set in instance-acquisition-timeout) is reached.

Monitoring the EJB pools will soon be available through the CLI, which will have a set of minimal operations to check the pool metrics. Setting up a self-made solution to monitor the current pool size is, however, not too complicated: you can use the handy EJB3 interceptor API to monitor the EJBs that have been taken from the pool and those that have been released (if you want to learn more about interceptors, you can check out this link from the JBoss EJB 3 documentation: http://tinyurl.com/cuev69q).

In the following interceptor, we simply update a counter field on a singleton EJB before the target EJB is invoked, and again after it has completed its job and hence returned to the pool.

package com.packtpub.chapter12;

import javax.ejb.EJB;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class EJBInterceptor {

    @EJB
    EJBCounter singleton;

    @AroundInvoke
    public Object defaultMethod(InvocationContext context) throws Exception {
        // Take a bean from the pool
        singleton.getFromPool();

        // Invoke the EJB method
        Object result = context.proceed();

        // Return the bean to the pool
        singleton.returnToPool();

        // Print out the current pool size
        singleton.dumpPoolSize();
        return result;
    }
}

The EJBCounter is a singleton EJB that merely contains the counter variable holding the EJB max-pool-size.

package com.packtpub.chapter12;

import javax.ejb.Singleton;

@Singleton
public class EJBCounter {

    private int count = 20;

    public void getFromPool() {
        count--;
    }

    public void returnToPool() {
        count++;
    }

    public void dumpPoolSize() {
        System.out.println("Current pool size is " + count);
    }
}

You can further refine this approach by adding a @PostConstruct annotation, which loads this variable from an external source such as the DB or a property file. Another viable option is launching a CLI script that collects the value from the max-pool-size attribute. Here's an example:

[standalone@localhost:9999 /] /subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool:read-attribute(name="max-pool-size")
{
    "outcome" => "success",
    "result" => 20
}
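
For the @PostConstruct option, a minimal sketch (reading the value from a hypothetical ejb.max.pool.size system property instead of a DB or property file) could be added to the EJBCounter singleton shown above:

import javax.annotation.PostConstruct;

@PostConstruct
private void init() {
    // ejb.max.pool.size is a hypothetical property name; defaults to 20
    count = Integer.parseInt(System.getProperty("ejb.max.pool.size", "20"));
}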


The interceptors can then be applied to the stateless EJBs used by your application, either by means of a simple annotation or by declaring them in the ejb-jar.xml configuration file. For example, here's how to intercept all EJB invocations via the @Interceptors annotation:

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
@Interceptors(EJBInterceptor.class)
public class EJBTest {
    . . .
}

Turning off the EJB pool

Although this might sound weird to you, there can be some scenarios where you don't want your EJB resources to be managed by a pool, but created on demand. For example, if your EJB does not need a costly initialization (like acquiring external resources), it can be advantageous, in terms of performance, to avoid using the EJB 3 pool (the JBoss 5 Performance Tuning book, for example, shows a case where a so-called heavy stateful EJB can even outperform its stateless counterpart; this is mostly due to the fact that handling the stateless pool is not a trivial task in terms of performance).

Switching off the EJB pool just requires commenting out (or removing) the bean-instance-pool-ref element that refers to the EJB pool:

<stateless>
    <!--
    <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    -->
</stateless>

Of course, it is strongly recommended to run a consistent test bed to demonstrate that your application can benefit from such a change, which will take away any check on the number of EJB instances.

Web server thread pool

There are many tuning aspects that ultimately influence the performance of the web server. One of the most important factors is tuning the HTTP connector thread pool settings to more closely match the web request load you have. This is difficult to do, but it is very important to get right for best performance. Here is a sample connector configuration from the web subsystem:

<subsystem>
    <connector enable-lookups="false" enabled="true"
        executor="http-executor" max-connections="200"
        max-post-size="2048" max-save-post-size="4096"
        name="http" protocol="HTTP/1.1"
        proxy-name="proxy" proxy-port="8081"
        redirect-port="8443" scheme="http"
        secure="false" socket-binding="http"/>
    . . .
</subsystem>

The number of threads allowed by the web server is controlled by the thread pool referenced through the executor attribute:

<subsystem>
    <bounded-queue-thread-pool name="http-executor" blocking="true">
        <core-threads count="10" per-cpu="20"/>
        <queue-length count="10" per-cpu="20"/>
        <max-threads count="10" per-cpu="20"/>
        <keepalive-time time="10" unit="seconds"/>
    </bounded-queue-thread-pool>
</subsystem>

Within the threads subsystem, you can define the number of threads that will be used by the pool, along with the other thread attributes.

The most important attributes are core-threads and max-threads. Setting these values too low means that you may not have enough threads to handle all of the requests, in which case requests have to sit idle for some time without being handled until another request thread is freed up. Too low a value also means that the JBoss Web server will be unable to take advantage of your server machine's hardware.

On the other hand, be careful before increasing these thread counts blindly. By increasing the thread count too much, you will:

  • Consume a good chunk of memory
  • Cause your system to spend too much time context-switching

You should first investigate whether the real problem is individual requests taking too long. Are your threads returning to the pool? If, for example, database connections are not released, threads pile up waiting to obtain a database connection, thereby making it impossible to process additional requests.

In such a scenario, simply adding more threads will make things even worse by putting greater stress on the CPU and on the garbage collector.

You can discover this kind of problem by simply taking a thread dump of your application to find out where your web server threads are stuck. For example, in the following picture, taken from the JConsole Threads tab, you can see what an idle thread looks like by examining its stack trace:

On the other hand, the following HTTP thread is busy doing input/output operations, which could mean, for example, the web server is acquiring data from an external resource.

The above snapshots also give you a clue about how you can monitor the number of running web server threads. Just fill in the executor name (http-executor) in the lower text field, and you will get a filtered list of all your web server threads.
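
If you prefer the command line, an equivalent thread dump can be captured with the JDK's jstack tool; a minimal sketch (replace the placeholder with the process ID of the application server):

jstack <jboss-as-pid> > thread-dump.txt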

Logging tuning

Logging is an essential activity of every application. However, the default configuration is generally appropriate for development, but not for a production environment.

The key elements that you need to consider when switching to production are:

  1. Choose the appropriate handler to output your logs.
  2. Choose a log level that provides just the amount of information you need and nothing else.
  3. Choose an appropriate format for your logs.

As far as log handlers are concerned, in the default configuration, both console logging and file logging are enabled. While this can be fine for development, using console logging in production is an expensive process that causes lots of unbuffered I/O. While some applications may be fine with console logging, high-volume applications benefit from turning off console logging and just using the FILE handler.

In order to remove console logging, you can simply comment out its handler:

<root-logger>
    <level name="INFO"/>
    <handlers>
        <!-- <handler name="CONSOLE"/> -->
        <handler name="FILE"/>
    </handlers>
</root-logger>

The next step is choosing the correct logging verbosity. Obviously, the less you log, the less I/O will occur, and the better your overall application performance will be. The default configuration uses the "INFO" level for the root logger. You could consider raising this to a higher threshold such as "WARN", or (using a fine-grained approach) changing the level of individual logging categories:

<logger category="org.hibernate">
    <level name="WARN"/>
</logger>

In this example, we have just raised the log level for the org.hibernate package to "WARN", which will produce much more concise logging information from Hibernate.

Finally, the pattern used by your logs can also influence the performance of your applications. For example, let’s take the default pattern format, which is:

<pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>

Starting from this basic format, by adding as little as the %l flag you can greatly increase the verbosity of your logs by printing the line number and the class that emitted the log:

<pattern-formatter pattern="%l %d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/>

Once the server configuration has been reloaded, the console output will include the class and line number that emitted each log message. While this information can be quite useful in development, it will result in a huge burden when ported to production.

The same consideration applies to the other location flags: %C (which prints out the caller class information), %M (which outputs the method where logging was emitted), and %F (which outputs the filename where the logging request was issued).

Cache tuning

Most performance issues in Enterprise applications arise from data access, hence caching data is one of the most important tuning techniques. The current release of the application server uses Infinispan as a distributed caching provider, and you can use it to cache anything you like.

In particular, you can use it as a second-level caching provider by adding the following configuration to your persistence.xml file:

<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
    <property name="hibernate.cache.use_second_level_cache" value="true"/>
    <property name="hibernate.cache.use_minimal_puts" value="true"/>
</properties>

Second-level caching is intended for data that is read-mostly. It allows you to store the entity and query data in memory so that this data can be retrieved without the overhead of returning to the database.

On the other hand, for applications with heavy use of write operations, caching may simply add overhead without providing any real benefit.

In order to cache entities, you can use the @javax.persistence.Cacheable annotation in conjunction with the shared-cache-mode element of persistence.xml. When you have enabled a selective cache of your entities, the @Cacheable annotation will load entities into the Hibernate second-level cache.
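
As a sketch, marking a hypothetical read-mostly entity as cacheable would look like this:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
@Cacheable // with ENABLE_SELECTIVE, only entities annotated like this are cached
public class Country {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}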

If you want to monitor the cache statistics, you can additionally enable statistics collection through a property in your persistence.xml file, which will expose the cache statistics via JMX.
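
A minimal sketch of such a property, assuming Hibernate's standard hibernate.generate_statistics switch (check the documentation of the Hibernate version bundled with your application server for the exact property name):

<property name="hibernate.generate_statistics" value="true"/>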

A very simple way to check the MBeans exposed by AS 7 is starting the JConsole application and choosing the MBeans tab in the upper area of the application.

The following table summarizes the meaning of the statistics provided by the Infinispan cache:

Attribute Description
Evictions Number of cache eviction operations
RemoveMisses Number of cache removals where keys were not found
ReadWriteRatio Read/writes ratio for the cache
Hits Number of cache attribute hits
NumberofEntries Number of entries currently in the cache
StatisticsEnabled Enables or disables the gathering of statistics by this component
TimeSinceReset Number of seconds since the cache statistics were last reset
ElapsedTime Number of seconds since cache started
Misses Number of cache attribute misses
RemoveHits Number of cache removal hits
AverageWriteTime Average number of milliseconds for a write operation in the cache
Stores Number of cache attribute put operations
HitRatio Percentage hit/(hit+miss) ratio for the cache
AverageReadTime Average number of milliseconds for a read operation on the cache

Evicting data from the cache is also fundamental in order to save memory when cache entries are not needed anymore. You can configure the cache expiration policy, which determines when the data will be refreshed in the cache (for example, 1 hour, 2 hours, 1 day, and so on) according to the requirements for that entity.

Configuring data eviction can be done either programmatically (see the Infinispan documentation for examples) or declaratively, by means of the following properties:

<property name="hibernate.cache.infinispan.entity.eviction.strategy" value="LRU"/>
<property name="hibernate.cache.infinispan.entity.eviction.wake_up_interval" value="2000"/>
<property name="hibernate.cache.infinispan.entity.eviction.max_entries" value="5000"/>
<property name="hibernate.cache.infinispan.entity.expiration.lifespan" value="60000"/>
<property name="hibernate.cache.infinispan.entity.expiration.max_idle" value="30000"/>

And here's a description of the properties contained in the configuration file:

Property Description
hibernate.cache.infinispan.entity.eviction.strategy The eviction strategy used by Infinispan. Can be UNORDERED, FIFO, LIFO, LRU, or NONE (check the Infinispan docs for more details).
hibernate.cache.infinispan.entity.eviction.wake_up_interval The time interval (in ms) between eviction thread runs.
hibernate.cache.infinispan.entity.eviction.max_entries The maximum number of entries allowed in the cache (after that, eviction takes place).
hibernate.cache.infinispan.entity.expiration.lifespan The time (in ms) after which cached entities expire.
hibernate.cache.infinispan.entity.expiration.max_idle The idle time (in ms) after which cached entities expire.
