
Understanding interceptors

Interceptors are defined as part of the EJB 3.1 specification (JSR 318), and are used to intercept Java method invocations and lifecycle events that may occur in Enterprise JavaBeans (EJB) or named beans from Contexts and Dependency Injection (CDI).

The three main components of interceptors are as follows:

  • The Target class: This is the class monitored or watched by the interceptor. The target class can also hold interceptor methods of its own.
  • The Interceptor class: This class groups the interceptor methods.
  • The Interceptor method: This is the method that will be invoked in response to the intercepted method calls or lifecycle events.

As an example, a logging interceptor will be developed and integrated into the Store application. Following the hands-on approach of this article, we will see how to apply the main concepts through the given examples without going into a lot of details.

Check the Web Resources section to find more documentation about interceptors.

Creating a log interceptor

A log interceptor is a common requirement in most Java EE projects: it is a simple yet very powerful solution because of its decoupled implementation, and it can easily be shared with other projects if necessary. Here’s a diagram that illustrates this solution:

Log and LogInterceptor are the core of the log interceptor functionality. The former can be thought of as the interface of the interceptor: it is the annotation that will decorate the elements of SearchManager that must be logged. The latter carries the actual implementation of our interceptor; its business rule is simply to call a method of the LogService class, which is responsible for creating the log entry.

Here’s how to implement the log interceptor mechanism:

  1. Create a new Java package named com.packt.store.log in the project Store.
  2. Create a new enumeration named LogLevel inside this package. This enumeration will be responsible for mapping the level assigned to the annotation to the levels used by the logging framework:

    package com.packt.store.log;

    public enum LogLevel {
        // As defined at java.util.logging.Level
        SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST;

        public String toString() {
            return super.toString();
        }
    }

    We’re going to create all the objects of this section—LogLevel, Log, LogService, and LogInterceptor—in the same package, com.packt.store.log. This decision makes it easier to extract the logging functionality from the project and build an independent library in the future, if required.

  3. Create a new annotation named Log. This annotation will be used to mark every method that must be logged, and it accepts the log level as a parameter according to the LogLevel enumeration created in the previous step:

    package com.packt.store.log;

    @Inherited
    @InterceptorBinding
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    public @interface Log {
        @Nonbinding
        LogLevel value() default LogLevel.FINEST;
    }

    As this annotation will be attached to an interceptor, we have to add the @InterceptorBinding decoration here. When creating the interceptor, we will add a reference that points back to the Log annotation, creating the necessary relationship between them.

    Also, an annotation can be attached to virtually any Java element. This is dictated by the @Target decoration, where we can set any combination of the ElementType values such as ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER, and TYPE (covering classes, interfaces, and enums), each representing a specific element. The annotation being created can be attached to methods and to classes or interface definitions.

  4. Now we must create a new stateless session bean named LogService that is going to execute the actual logging:

    @Stateless
    public class LogService {
        // Receives the class name decorated with @Log
        public void log(final String clazz, final LogLevel level, final String message) {
            // Logger from package java.util.logging
            Logger log = Logger.getLogger(clazz);
            log.log(Level.parse(level.toString()), message);
        }
    }

  5. Create a new class, LogInterceptor, to trap calls from classes or methods decorated with @Log and invoke the LogService class we just created. The main method must be marked with @AroundInvoke, and it must receive an InvocationContext instance and return an Object:

    @Log
    @Interceptor
    public class LogInterceptor implements Serializable {
        private static final long serialVersionUID = 1L;

        @Inject
        LogService logger;

        @AroundInvoke
        public Object logMethod(InvocationContext ic) throws Exception {
            final Method method = ic.getMethod();
            // check if the annotation is on the method or on the class
            LogLevel logLevel = method.getAnnotation(Log.class) != null
                    ? method.getAnnotation(Log.class).value()
                    : method.getDeclaringClass().getAnnotation(Log.class).value();
            // invoke LogService, passing the name of the class that declares the method
            logger.log(method.getDeclaringClass().getCanonicalName(), logLevel, method.toString());
            return ic.proceed();
        }
    }

    As we defined earlier, the Log annotation can be attached to methods and classes or interfaces by its @Target decoration; we need to discover which one raised the interceptor to retrieve the correct LogLevel value.

    When trying to get the annotation from the class, as shown in the method.getDeclaringClass().getAnnotation(Log.class) line, the engine will traverse the class hierarchy searching for the annotation, up to the Object class if necessary. This happens because we marked the Log annotation with @Inherited, as the hypothetical sketch below illustrates. Remember that this behavior only applies to class inheritance, not interfaces.
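    The following is a minimal sketch of that lookup; BaseManager and ReportManager are hypothetical classes used only for illustration and are not part of the Store project:

    import com.packt.store.log.Log;
    import com.packt.store.log.LogLevel;

    @Log(LogLevel.FINE)
    public class BaseManager {
    }

    // ReportManager declares no @Log of its own, yet
    // ReportManager.class.getAnnotation(Log.class) still resolves to
    // @Log(LogLevel.FINE) because Log is marked with @Inherited.
    public class ReportManager extends BaseManager {
        public void generateReport() {
            // would be picked up by LogInterceptor at the FINE level,
            // assuming the bean is managed by the container
        }
    }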

    Finally, as we marked the value attribute of the Log annotation as @Nonbinding in step 3, all log levels are handled by the same LogInterceptor class. If you remove the @Nonbinding line, the interceptor must be further qualified to handle a specific log level, for example @Log(LogLevel.INFO), so you would need to code several interceptors, one for each existing log level, as sketched below.
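    As a rough, hypothetical sketch of that alternative (this class is not part of the Store project), a level-specific interceptor could look like the following:

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.Interceptor;
    import javax.interceptor.InvocationContext;

    // With @Nonbinding removed from Log, this interceptor would only be
    // applied to elements decorated exactly with @Log(LogLevel.INFO).
    @Log(LogLevel.INFO)
    @Interceptor
    public class InfoLogInterceptor {

        @AroundInvoke
        public Object logMethod(InvocationContext ic) throws Exception {
            // INFO-specific handling would go here
            return ic.proceed();
        }
    }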

  6. Modify the beans.xml (under /WEB-INF/) file to tell the container that our class must be loaded as an interceptor—currently, the file is empty, so add all the following lines:

    <beans xmlns="http://java.sun.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                               http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
        <interceptors>
            <class>com.packt.store.log.LogInterceptor</class>
        </interceptors>
    </beans>

  7. Now decorate a business class or method with @Log in order to test what we’ve done. For example, apply it to the getTheaters() method in SearchManager from the project Store. Remember that it will be called every time you refresh the query page:

    @Log(LogLevel.INFO)
    public List<Theater> getTheaters() {
        ...
    }

  8. Make sure you have no errors in the project and deploy it to the current server by right-clicking on the server name and then clicking on the Publish entry.
  9. Access the theater’s page, http://localhost:7001/theater/theaters.jsf, refresh it a couple of times, and check the server output. If you have started your server from Eclipse, it should be under the Console tab:

    Nov 12, 2012 4:53:13 PM com.packt.store.log.LogService log
    INFO: public java.util.List com.packt.store.search.SearchManager.getTheaters()

Let’s quickly review what we’ve accomplished so far: we created an interceptor and an annotation that perform all the common logging operations for any method or class marked with that annotation. All log entries generated through the annotation follow WebLogic’s logging services configuration.

Interceptors and Aspect Oriented Programming

Interceptors and AOP share some equivalent concepts, but at the same time each provides distinct, critical functionality, and this can lead to quite different overall solutions. In a sense, interceptors work like an event mechanism, but in reality they are based on a paradigm called Aspect Oriented Programming (AOP). Although AOP is a huge and complex topic covered in great detail by several books, the examples shown in this article serve as a quick introduction to an important AOP concept: method interception.

Consider AOP as a paradigm that makes it easier to apply crosscutting concerns (such as logging or auditing) as services to one or multiple objects. Of course, it’s almost impossible to capture in a single phrase the many contexts in which AOP can help, but for the purposes of this article and for most real-world scenarios, this definition is good enough.

Using asynchronous methods

A basic programming concept called synchronous execution defines the way our code is processed by the computer: line by line, one at a time, in a sequential fashion. So, when the main execution flow of a class calls a method, it must wait until that method completes so that the next line can be processed.

Of course, there are structures capable of processing different portions of a program in parallel, but from an external viewpoint, the execution happens in a sequential way, and that’s how we think about it when writing code.

When you know that a specific portion of your code is going to take a little while to complete, and there are other things that could be done instead of just sitting and waiting for it, there are a few strategies that you could resort to in order to optimize the code. For example, starting a thread to run things in parallel, or posting a message to a JMS queue and breaking the flow into independent units are two possible solutions.

If your code is running on an application server, you should know by now that thread spawning is a bad practice—only the server itself must create threads, so this solution doesn’t apply to this specific scenario.

Another way to deal with such a requirement when using Java EE 6 is to create one or more asynchronous methods inside a stateless session bean by annotating either the whole class or specific methods with javax.ejb.Asynchronous.

If the class is decorated with @Asynchronous, all its methods inherit the behavior.

When a method marked as asynchronous is called, the server usually spawns a thread to execute the called method—there are cases where the same thread can be used, for instance, if the calling method happens to end right after emitting the command to run the asynchronous method.

Either way, the general idea is that things are explicitly going to be processed in parallel, which is a departure from the synchronous execution paradigm. To see how it works, let’s change the LogService method to be an asynchronous one; all we need to do is decorate the class or the method with @Asynchronous:

@Stateless
@Asynchronous
public class LogService {
    …

As the call to its log method is the last step executed by the interceptor, and its processing is really quick, there is no real benefit in doing so. To make things more interesting, let’s force a longer execution cycle by inserting a sleep method into the method of LogService:

public void log(final String clazz, final LogLevel level, final String message) {
    Logger log = Logger.getLogger(clazz);
    log.log(Level.parse(level.toString()), message);
    try {
        Thread.sleep(5000);
        log.log(Level.parse(level.toString()), "reached end of method");
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

Using Thread.sleep() when running inside an application server is another classic example of a bad practice, so keep away from this when creating real-world solutions.

Save all files, publish the Store project, and load the query page a couple of times. You will notice that the page is rendered without delay, as usual, and that the reached end of method message shows up in the Console view only after a few seconds. This is a pretty subtle scenario, so you can make it more obvious by commenting out the @Asynchronous line and deploying the project again—this time, when you refresh the browser, you will have to wait for 5 seconds before the page gets rendered.

Our example didn’t need a return value from the asynchronous method, making it pretty simple to implement. If you need to get a value back from such methods, you must declare it using the java.util.concurrent.Future interface:

@Asynchronous
public Future<String> doSomething() {
    …
}

The returned value must then be wrapped as follows:

return new AsyncResult<String>("ok");

The javax.ejb.AsyncResult class is an implementation of the Future interface that can be used to return asynchronous results.

There are other features and considerations around asynchronous methods, such as ways to cancel a request being executed and to check whether the asynchronous processing has finished so that the resulting value can be accessed; the sketch below illustrates the caller’s side. For more details, check the Creating Asynchronous methods in EJB 3.1 reference at the end of this article.
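To illustrate these ideas, here is a minimal, hypothetical sketch of an asynchronous method and its caller; the ReportService and ReportClient beans and the generateReport() method are illustrative names only and are not part of the Store project:

import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class ReportService {

    // Executed by a container-managed thread, not by the caller's thread
    @Asynchronous
    public Future<String> generateReport() {
        // ... long-running work would go here ...
        return new AsyncResult<String>("ok");
    }
}

// In a separate source file: a bean that consumes the asynchronous method
@Stateless
public class ReportClient {

    @EJB
    ReportService reportService;

    public void runReport() throws Exception {
        Future<String> result = reportService.generateReport();

        // Do other useful work here while the report is generated in parallel...

        if (result.isDone()) {
            String status = result.get(); // returns immediately once the work is finished
        }
        // result.cancel(true) would ask the container to cancel the running request
    }
}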

Understanding WebLogic’s logging service

Before we advance to the event system introduced in Java EE 6, let’s take a look at the logging services provided by Oracle WebLogic Server.

By default, WebLogic Server creates two log files for each managed server:

  • access.log: This is a standard HTTP access log, where requests to web resources of a specific server instance are registered with details such as the HTTP return code, the resource path, and the response time, among others
  • <ServerName>.log: This contains the log messages generated by the WebLogic services and the applications deployed to that specific server instance

These files are generated in a default directory structure that follows the pattern $DOMAIN_NAME/servers/<SERVER_NAME>/logs/.

If you are running a WebLogic domain that spans more than one machine, you will find another log file named <DomainName>.log on the machine where the administration server is running. This file aggregates messages from all managed servers of that specific domain, creating a single point of observation for the whole domain.

As a best practice, only messages of higher severity should be forwarded to the domain log, avoiding the overhead of writing to this file. Keep in mind that messages written to the domain log are also found in the log file of the specific managed server that generated them, so there’s no need to redirect everything to the domain log.

Anatomy of a log message

Here’s a typical entry of a log file:

####<Jul 15, 2013 8:32:54 PM BRT> <Alert> <WebLogicServer> <sandbox-lap>
<AdminServer> <[ACTIVE] ExecuteThread: '0' for queue:
'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <> <1373931174624>
<BEA-000396> <Server shutdown has been requested by weblogic.>

The description of each field is given in the following list:

  • ####: Fixed; every log message starts with this sequence
  • <Jul 15, 2013 8:32:54 PM BRT>: Locale-formatted timestamp
  • <Alert>: Message severity
  • <WebLogicServer>: WebLogic subsystem; other examples are WorkManager, Security, EJB, and Management
  • <sandbox-lap>: Physical machine name
  • <AdminServer>: WebLogic Server name
  • <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'>: Thread ID
  • <weblogic>: User ID
  • <>: Transaction ID, or empty if not in a transaction context
  • <>: Diagnostic context ID, or empty if not applicable; it is used by the Diagnostics Framework to correlate messages of a specific request
  • <1373931174624>: Raw time in milliseconds
  • <BEA-000396>: Message ID
  • <Server shutdown has been requested by weblogic.>: Description of the event

The Diagnostics Framework presents functionalities to monitor, collect, and analyze data from several components of WebLogic Server.

Redirecting standard output to a log file

The logging solution we’ve just created is currently using the Java SE logging engine—we can see our messages on the console screen, but they aren’t being written to any log file managed by WebLogic Server. This happens because of the default configuration of Java SE, as we can see from the following snippet taken from the logging.properties file used to run the server:

# "handlers" specifies a comma separated list of log Handler # classes. These handlers will be installed during VM startup. # Note that these classes must be on the system classpath. # By default we only configure a ConsoleHandler, which will only # show messages at the INFO and above levels. handlers= java.util.logging.ConsoleHandler

You can find this file at $JAVA_HOME/jre/lib/logging.properties.

So, as stated here, the default output destination used by Java SE is the console. There are a few ways to change this aspect:

  • If you’re using this Java SE installation solely to run WebLogic Server instances, you may go ahead and change this file, adding a specific WebLogic handler to the handlers line as follows:

    handlers= java.util.logging.ConsoleHandler,weblogic.logging.ServerLoggingHandler

  • If tampering with the Java SE files is not an option (the installation may be shared with other software, for instance), you can duplicate the default logging.properties file into another folder ($DOMAIN_HOME being a suitable candidate), add the new handler, and instruct WebLogic to use this file at startup by adding the following argument to the server's start command:

    -Djava.util.logging.config.file=$DOMAIN_HOME/logging.properties

  • You can use the administration console to set the redirection of the standard output (and error) to the log files. To do so, perform the following steps:
    1. In the left-hand side panel, expand Environment and select Servers.
    2. In the Servers table, click on the name of the server instance you want to configure.
    3. Select Logging and then General.
    4. Find the Advanced section, expand it, and tick the Redirect stdout logging enabled checkbox.

    5. Click on Save to apply your changes.

If necessary, the console will show a message stating that the server must be restarted to acquire the new configuration.

If you get no warnings asking you to restart the server, then the configuration is already in use. This means that both the WebLogic subsystems and any applications deployed to that server are automatically using the new values, which is a very powerful feature for troubleshooting applications without intrusive actions such as modifying the application itself—just change the log level to start capturing more detailed messages!

Notice that there are a lot of other logging parameters that can be configured, and three of them are worth mentioning here:

  • The Rotation group (found in the inner General tab): The rotation feature instructs WebLogic to create new log files based on the rules set on this group of parameters. It can be set to check for a size limit or create new files from time to time. By doing so, the server creates smaller files that we can easily handle. We can also limit the number of files retained in the machine to reduce the disk usage.

    If the partition where the log files are written reaches 100 percent utilization, WebLogic Server will start behaving erratically. Always remember to check the disk usage; if possible, set up a monitoring solution such as Nagios to keep track of this and alert you when a critical level is reached.

  • Minimum severity to log (also in the inner General tab): This entry sets the lowest severity that should be logged by all destinations. This means that even if you set the domain level to debug, the messages will actually be written to the domain log only if this parameter is set to the same or a lower level. It works as a gatekeeper, avoiding an overload of messages being sent to the loggers.
  • HTTP access log enabled (found in the inner HTTP tab): When WebLogic Server is configured in a clustered environment, usually a load-balancing solution is set up to distribute requests between the WebLogic managed servers; the most common options are Oracle HTTP Server (OHS) or Apache Web Server. Both are standard web servers, and as such, they already register the requests sent to WebLogic in their own access logs. If this is the case, disable the WebLogic HTTP access log generation, saving processing power and I/O requests to more important tasks.

Integrating Log4J to WebLogic’s logging services

If you already have an application that uses Log4J and want it to write messages to WebLogic’s log files, you must add a new weblogic.logging.log4j.ServerLoggingAppender appender to your log4j.properties configuration file.

This class works like a bridge between Log4J and WebLogic’s logging framework, allowing the messages captured by the appender to be written to the server log files.
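As an illustration, a minimal log4j.properties along these lines would route the application's Log4J output to the server log; the root logger level and the layout settings shown here are just typical placeholders, not values mandated by WebLogic:

# Send Log4J output to WebLogic's logging framework
log4j.rootLogger=INFO, WL

# Bridge appender shipped with WebLogic (wllog4j.jar)
log4j.appender.WL=weblogic.logging.log4j.ServerLoggingAppender
log4j.appender.WL.layout=org.apache.log4j.PatternLayout
log4j.appender.WL.layout.ConversionPattern=%m%n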

As WebLogic doesn’t package a Log4J implementation, you must add its JAR to the domain by copying it to $DOMAIN_HOME/tickets/lib, along with another file, wllog4j.jar, which contains the WebLogic appender. This file can be found inside $MW_HOME/wlserver/server/lib. Restart the server, and it’s done!

If you’re using a *nix system, you can create a symbolic link instead of copying the files—this makes it easier to keep things consistent when a patch that changes these specific files must be applied to the server.

Remember that having a file inside $MW_HOME/wlserver/server/lib doesn’t mean that the file is being loaded by the server when it starts up; it is just a central place to hold the libraries. To be loaded by a server, a library must be added to the classpath parameter of that server, or you can add it to the domain-wide lib folder, which guarantees that it will be available to all nodes of the domain on a specific machine.

Accessing and reading log files

If you have direct access to the server files, you can open and search them using a command-line tool such as tail or less, or even use a graphical viewer such as Notepad. But when you don’t have direct access to them, you may use WebLogic’s administration console to read their content by following the steps given here:

  1. In the left-hand side pane of the administration console, expand Diagnostics and select Log Files.
  2. In the Log Files table, select the option button next to the name of the log you want to check and click on View.

    The types displayed on this screen, which are mentioned at the start of the section, are Domain Log, Server Log, and HTTP Access. The others are resource-specific or linked to the diagnostics framework. Check the Web resources section at the end of this article for further reference.

  3. The page displays the latest contents of the log file; the default setting shows up to 500 messages in reverse chronological order. The messages at the top of the window are the most recent messages that the server has generated.

    Keep in mind that the log viewer does not display messages that have been converted into archived log files.
