Friday, December 17, 2010

A Mock Service with Groovy and Jetty

I recently needed to do some testing on a project that integrated with a Web service supported by another team of developers. After I completed my development work, I wanted to do some quick integration testing, but I was forced to wait on the completion of the Web service. I was pretty confident that my code changes would work, but I really needed that "live" interactivity to know for sure.

In order to not be caught in this predicament again, I decided to create a simple mock service using Groovy, Jetty and some static XML files. Here is the result using an example domain model and sample data:

I put together a Groovy script to create an instance of a Jetty server. This is very similar to the example that can be found on the Groovy Web site (wink, wink).

import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.*
import groovy.servlet.*

@Grab(group = 'org.mortbay.jetty', module = 'jetty-embedded', version = '6.1.0')
def runServer = {
    def server = new Server(8181)
    def context = new Context(server, "/", Context.SESSIONS);
    
    context.resourceBase = "."
    context.addServlet(new ServletHolder(new TemplateServlet()), "*.html")
    context.addServlet(new ServletHolder(new DefaultServlet()), "*.xml")

    server.start()
}

runServer()

First, we import what we need from Jetty and here I am using @Grab to resolve our Jetty dependency.

Next, runServer is defined to do all of our work. Within runServer, we define a new Jetty Server and use it to create a new Jetty Context. I then set the context's resourceBase to ".", which is the directory where the above Groovy file, mockService.groovy, is located.

Now, we need to handle requests to our server. Instead of using a standard Web descriptor (web.xml) file, we can write a few lines of code to add a new Groovy TemplateServlet mapped to handle the *.html request pattern and a new Jetty DefaultServlet mapped to handle the *.xml request pattern.

The Jetty DefaultServlet will handle resolving our static, test XML files.

The TemplateServlet will handle resolving and running our HTML files, allowing us to create dynamic Groovy templates like the following:

<html>
    <body>
        <h1>File List</h1>
        <%
            def f = new File(".")
            if (f.isDirectory()) {
                f.listFiles([accept: { file -> file ==~ /.*?\.xml/ }] as FileFilter).toList()*.name.each {
        %>
        <a href="${it}">${it}</a><br/>
        <%
                }
            }
        %>
    </body>
</html>


What is great about the TemplateServlet is that the above fileList.html contains Groovy code that will be executed when the file is requested. This code will find all .xml files located in the current directory and display HTML links to each test XML file. For example, I have these two static test files:

12345.xml

<?xml version="1.0" encoding="UTF-8"?>
<account>
    <id>12345</id>
    <name>Account 12345</name>
    <address>
        <street1>123 Test St</street1>
        <city>City</city>
        <state>State</state>
        <postalcode>33333</postalcode>
    </address>
</account>

54321.xml

<?xml version="1.0" encoding="UTF-8"?>
<account>
    <id>54321</id>
    <name>Account 54321</name>
    <address>
        <street1>123 Test St</street1>
        <city>City</city>
        <state>State</state>
        <postalcode>33333</postalcode>
    </address>
</account>


The fileList.html request will render something like:

File List

12345.xml
54321.xml

To get started, all we need to do is open a command prompt, navigate to the development directory where the mockService.groovy file is located and run:

> groovy mockService.groovy

Now, my application can be configured to point to this local service. In this example, when I want to do a GET request for 'Account 12345', the application can make a call to http://localhost:8181/12345.xml. We can follow that pattern and request 'Account 54321' by calling http://localhost:8181/54321.xml or any other xxxxx.xml file I place in my development directory. I am now able to run simple integration tests with my application while the other team works to complete the final service.
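On the client side, only an HTTP GET and an XML parse are needed. Here is a rough sketch in Java of parsing the returned account document (the sample XML is inlined so the snippet runs without the server; in the application the string would come from reading http://localhost:8181/12345.xml):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class AccountClientSketch {

    // in the real application this XML would come from
    // new URL("http://localhost:8181/12345.xml").openStream()
    public static final String SAMPLE =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<account><id>12345</id><name>Account 12345</name></account>";

    public static String accountName(String xml) throws Exception {
        // parse the account document with the JDK's built-in DOM parser
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        // pull the text of the first <name> element
        return doc.getElementsByTagName("name").item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(accountName(SAMPLE)); // prints "Account 12345"
    }
}
```

Nothing fancy, but it shows how little the consuming application cares whether the XML came from the real service or the mock.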

To really understand the simplicity that is involved, here is a quick view of my development directory:

/mockService
- 12345.xml
- 54321.xml
- fileList.html
- mockService.groovy

Nothing too magical going on here, but for me it is just another way to bring Groovy into the work environment to get things done.

Tuesday, December 07, 2010

Hazelcast Monitoring Tool

I have been using Hazelcast on a few small projects and I have just recently started using the Hazelcast Monitoring Tool. The Hazelcast Monitoring Tool is simply awesome. Just deploy the hazelcast-monitor-x.x.war file on Tomcat and add your Hazelcast cluster to monitor. Now you have a simple, Web-based view of your running Hazelcast cluster.

Wednesday, December 01, 2010

Google App Engine, Gaelyk and Twilio

Having fun in the evenings playing with Google App Engine, Gaelyk and Twilio. Using Gaelyk (Groovy) and the Google App Engine platform, I have been able to play around with my trial account at Twilio to create a simple IVR application. What is really nice is that Gaelyk is great for Groovy developers looking for a simple way to get something done on GAE, and Twilio is a full-featured IVR platform that is much more flexible and less expensive than the enterprise IVR platform I work with on a daily basis. SMS, recording and robust call handling are some features I plan to explore in more detail in the upcoming nights. Working these past ~2 years with IVR applications, I have found that IVR and Web development share many cross-cutting concerns like availability, redundancy, reliability and scalability. With GAE providing the platform and Twilio managing the telephony portion of the application, businesses can deploy a simple IVR with less upfront investment and lower maintenance costs.

Tuesday, October 05, 2010

Gaelyk: Routes + Parameters + Binding = Simple Controller

If you haven't heard it from me yet or anyone else, Groovy is an incredible dynamic language for the JVM that brings you (the Java developer) productivity like Parliament Funkadelic brings the funk. When you leverage the power of Grails and the underlying technologies (Spring, Hibernate, GORM...), Java Web development takes a monumental step in the direction of productivity. When you want to run a simple Java Web application on Google App Engine, Groovy/Grails is definitely a great option. Ah, but we do have another option for Groovy on Google App Engine: Gaelyk. It is definitely not Grails, but for simple Web applications, Gaelyk is another technology that helps get things done.

If you are familiar with Groovy and/or Grails and/or Groovlets, you can build a 0.1 version Web app in a few hours. One thing that I stumbled upon was utilizing WEB-INF/routes.groovy, parameters (params) and the binding property in the .groovy files to build a simple, easy-to-read controller. Let's take a look at an example WEB-INF/routes.groovy file first:
// routes
get  "/failure",    forward: "failure.gtpl"

get  "/",           forward: "controller.groovy"
get  "/@a",         forward: "controller.groovy?a=@a"

Above, we map the HTTP method (GET, POST...) and URL patterns to views (.gtpl files) or .groovy files (in WEB-INF/groovy/). A GET request of http://myserver.com/failure would forward to and render the failure.gtpl view. Likewise, a GET request of http://myserver.com/ or http://myserver.com/anything (other than failure) would be forwarded to the WEB-INF/groovy/controller.groovy file below:
def index = {
    // do something
    forward "index.gtpl"
}

def list = {
    // do something
    forward "list.gtpl"
}

def view = {
    // do something
    forward "view.gtpl"
}

binding.setProperty("index", index)
binding.setProperty("list", list)
binding.setProperty("view", view)

if(params.a) {
    try {
        def var = binding.getVariable("${params.a}")
        var.call()
    } catch(Exception e) {
        redirect "/failure"
    }
} else
    index.call()

In the WEB-INF/groovy/controller.groovy file above, I have created Closures for each http://myserver.com/anything that I would like to support. Using the script's binding property, I have added each Closure as a variable. For each request that contains a parameter we expect, in this case parameter a (params.a), I try to get that variable and execute the call() method on it. Here, the application supports http://myserver.com/index, http://myserver.com/list and http://myserver.com/view. If params.a does not exist, we execute the default index Closure. If params.a is not supported (or not bound to the script), the binding.getVariable("${params.a}") call fails and the controller redirects to http://myserver.com/failure.

The nice thing is that we can keep better track of the .groovy files we have and tie them into our model (if needed). Here is a possible future WEB-INF/routes.groovy file:
// routes
get  "/failure",     forward: "failure.gtpl"

get  "/",            forward: "indexController.groovy"
get  "/book/@a",     forward: "bookController.groovy?a=@a"
get  "/author/@a",   forward: "authorController.groovy?a=@a"

Nothing too exciting, it is just something I found to be helpful when working with Gaelyk and I thought I would share it. Gaelyk is pretty fun to work with. I suggest starting with downloading the Google App Engine Java SDK and the Gaelyk template project if you are new to Groovy. Gaelyk also makes working with Google App Engine super simple because of the Groovy-er access to GAE's API (check out the Gaelyk tutorial). Realistically, most Web developers should be able to get a nice application running in the Google App Engine cloud in a few hours and that is very Groovy!

Tuesday, September 07, 2010

Groovy Set or List

Below is a script that might be helpful to new Groovy developers. We sometimes have to be careful when we are working so closely with Java and Groovy at the same time. In this first block, I create a HashSet (java.util.HashSet) with a Collection as an argument to the constructor:

def mySet = new HashSet(["test", "tester"])

assert mySet.size() == 2

mySet << "test"
mySet << "tester"

assert mySet.size() == 2


I then try to add duplicate entries ("test" and "tester") into mySet using the << operator, which does not work, and I verify this by asserting that mySet's size is still 2, as it was when instantiated.

Next, I instantiate an empty HashSet (java.util.HashSet) and then assign a collection of values to mySet. Again, I try to add (<<) duplicate entries ("test" and "tester") into mySet.

def mySet = new HashSet()

mySet = ["test", "tester"]

assert mySet.size() == 2

mySet << "test"
mySet << "tester"

assert mySet.size() == 2


In the above snippet, the second assertion fails. What? mySet is an instance of HashSet (java.util.HashSet)! How did this happen? Maybe I should open a defect? Not so fast. We have to think about the dynamic (not static) typing semantics of Groovy. If we examine the line

mySet = ["test", "tester"]


and focus on the collection

["test", "tester"]

we should know that in Groovy

["test", "tester"].class

is an ArrayList (java.util.ArrayList) and thereafter so is the variable mySet. Now this might seem like a disappointment to Java developers who are starting to learn Groovy, but if we really want a unique collection of entries we can always utilize

mySet.unique()


to take care of this task for us or we can just instantiate HashSet and use the add method or << operator to add values to mySet.

def mySet = new HashSet()

mySet.add("test")
mySet << "tester"

assert mySet.size() == 2
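For comparison, plain Java gives the same duplicate protection through HashSet's add contract, which is what the << operator ultimately delegates to when mySet really is a HashSet. A quick sketch:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SetDemo {

    public static int sizeAfterDuplicates() {
        // same starting state as the Groovy snippet
        Set<String> mySet = new HashSet<>(Arrays.asList("test", "tester"));
        // add() returns false and leaves the set unchanged for duplicates
        mySet.add("test");
        mySet.add("tester");
        return mySet.size();
    }

    public static void main(String[] args) {
        System.out.println(sizeAfterDuplicates()); // prints 2
    }
}
```

In Java, the static type of the variable prevents the list-literal reassignment that tripped us up in Groovy; that is the whole difference.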

Thursday, September 02, 2010

Why I Love Spring 2.5's PropertyPlaceholderConfigurer

I have been a big fan of Spring's PropertyPlaceholderConfigurer since 2006 when I could wire up a datasource bean, or any bean for that matter, with just some references to properties that I knew were going to be in place. A snippet from a Spring context file for example:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${my.db.driver}"/>
    <property name="url" value="${my.db.url}"/>  
    <property name="username" value="${my.db.username}"/>
    <property name="password" value="${my.db.password}"/>
</bean>


Now, I can provide PropertyPlaceholderConfigurer with one or more .properties file locations, or I could depend on the properties existing as part of the runtime, like when using JBoss Application Server's property service. Then one day, I ran into a bit of an issue: I had an application with a properties file containing datasource connection information for each development region, DEV, TEST and PROD, with the region as a 'prefix' on each property.

Something like...

DEV.my.db.driver
DEV.my.db.url
DEV.my.db.username
DEV.my.db.password

TEST.my.db.driver
TEST.my.db.url
TEST.my.db.username
TEST.my.db.password


... and so on. Packaging .properties files into your archive (.war, .jar, .ear) does help make your code a bit more portable. I usually configure properties outside of an archive, but we can't always have our way. So, we had a special class that reads the properties file and the region from the system properties; the region variable, SDLC_REGION, is set in each development region as a VM argument.

-DSDLC_REGION=DEV


And that works great. We can leave our Spring context alone and everything works like we need it to. But, I am always trying to reduce classes or utilities (.jar files) that are no longer needed in our applications. So, I took another look at Spring 2.5's PropertyPlaceholderConfigurer and lo and behold, there is a better way to do things. Check it out. Here is my Spring context file now:

<context:property-placeholder location="classpath:db.properties"/>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${${SDLC_REGION}.my.db.driver}"/>
    <property name="url" value="${${SDLC_REGION}.my.db.url}"/>  
    <property name="username" value="${${SDLC_REGION}.my.db.username}"/>
    <property name="password" value="${${SDLC_REGION}.my.db.password}"/>
</bean>


Now, the VM argument, SDLC_REGION, exists in each environment and it can be a part of our PropertyPlaceholderConfigurer expression. We can now load the correct property for each development region from the packaged .properties file without depending on our utility class anymore. Really cool stuff and again, beautiful work from the people at SpringSource.

Wednesday, September 01, 2010

VXML and Groovy's MarkupBuilder

For the last ~1.5 years I have been working with IVR applications. The framework we use is Java-based and we use an Eclipse-based IDE with a nice drag-and-drop editor to create the application's call flow. The IVR application is packaged into a .war or .ear file by the IDE and we deploy the application to a Web container like Tomcat. The IVR is then pointed at the application's entry point URL. The core flow is mapped into Servlets and each request into the application generates VXML (Voice XML) that is interpreted by the actual IVR, which is responsible for handling call control and communicating with the other telephony technologies.

I think there could/should be a way to use Groovy's MarkupBuilder and most likely Gaelyk to create IVR applications with less ceremony than using the drag-and-drop editor. We can create static VXML very simply with Groovy's MarkupBuilder:

import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def vxml = new MarkupBuilder(writer)

vxml.vxml {
    field (name:"color") {
        grammar ("red | green | blue")
        prompt (count: 1, "Say red, green or blue.")
        prompt (count: 2, "Please say red, green or blue.")
        noinput (count: 1) {
            prompt ("I didn't hear you." )
            reprompt ()
        }
        noinput (count: 2) {
            prompt ("Sorry, I still didn't hear you.")
            reprompt ()
        }
        nomatch (count: 1) {
            prompt ("I didn't understand you." )
            reprompt ()
        }
        nomatch (count: 2) {
            prompt ("Sorry, I still didn't understand you.")
            reprompt ()
        }
    }
}

writer.toString()

Here is the output of the above:

<vxml>
    <field name='color'>
        <grammar>red | green | blue</grammar>
        <prompt count='1'>Say red, green or blue.</prompt>
        <prompt count='2'>Please say red, green or blue.</prompt>
        <noinput count='1'>
            <prompt>I didn't hear you.</prompt>
            <reprompt />
        </noinput>
        <noinput count='2'>
            <prompt>Sorry, I still didn't hear you.</prompt>
            <reprompt />
        </noinput>
        <nomatch count='1'>
            <prompt>I didn't understand you.</prompt>
            <reprompt />
        </nomatch>
        <nomatch count='2'>
            <prompt>Sorry, I still didn't understand you.</prompt>
            <reprompt />
        </nomatch>
    </field>
</vxml>


If a nice, simple DSL (Domain Specific Language) were designed for building IVR applications with Groovy, we could simplify the craft of creating IVR applications. Plus, we would gain the productivity of using Groovy when calling Web services and databases to provide dynamic data for the application. Just a crazy thought.

Monday, August 23, 2010

Having Fun with Groovy

I love browsing over to Groovy Console from time to time to check out scripts that have been recently published. It is a great place to learn Groovy or just have fun with Groovy without having to install anything. In the script below, I wanted to multiply two lists of Strings together so while playing around at the Groovy Console site I wrote:
java.util.ArrayList.metaClass.multiply = { e ->
    def list = new ArrayList()
    delegate.each { a ->
        e.each {
            list.add(a + it)
        }
    }
    list
}

x = ["k1", "k2", "k3"]
y = ["v1", "v2", "v3"]

x * y

Now, there may be a better way of handling this in Groovy, but I get the result I am expecting by implementing the multiply method for the ArrayList MetaClass in the top part of the script.

Then I create my lists, x and y, and multiply (*) them together. Nothing too crazy going on here but this demonstrates the power that Groovy can provide programmers with very little effort. Here is the result:
[k1v1, k1v2, k1v3, k2v1, k2v2, k2v3, k3v1, k3v2, k3v3]


Update:

A few more elegant solutions posted at the Groovy Console site.

Shorter version.*
java.util.ArrayList.metaClass.multiply = { e ->
    delegate.collect { a -> e.collect { a + it } } .flatten()
}

x = ["k1", "k2", "k3"]
y = ["v1", "v2", "v3"]

x * y


Another version.*
java.util.ArrayList.metaClass.multiply = {
    [delegate, it].combinations().collect { a -> a[0] + a[1] }
}

x = ["k1", "k2", "k3"]
y = ["v1", "v2", "v3"]

x * y


No MOP version.**
x = ["k1", "k2", "k3"] 
y = ["v1", "v2", "v3"] 
[x, y].combinations()*.join()


*Courtesy of Paul Holt.
**Courtesy of paulk_asert.
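For Java developers following along, here is my own plain-Java rendering of the same cross product (not one of the Groovy Console submissions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CrossProduct {

    // concatenate every element of xs with every element of ys
    public static List<String> multiply(List<String> xs, List<String> ys) {
        List<String> result = new ArrayList<>();
        for (String x : xs)
            for (String y : ys)
                result.add(x + y);
        return result;
    }

    public static void main(String[] args) {
        List<String> x = Arrays.asList("k1", "k2", "k3");
        List<String> y = Arrays.asList("v1", "v2", "v3");
        System.out.println(multiply(x, y));
        // [k1v1, k1v2, k1v3, k2v1, k2v2, k2v3, k3v1, k3v2, k3v3]
    }
}
```

Same result, quite a bit more ceremony, which is rather the point of the post.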

Thursday, June 10, 2010

Intro to Hazelcast's Distributed Query

When you decide to incorporate a distributed data grid as part of your application architecture, a product's scalability, reliability, cost and performance are key considerations that will help you make your decision. Another key consideration will be the accessibility of the data. One nice feature of Hazelcast that I have been working with lately is distributed queries. In simple terms, distributed queries provide an API and syntax that allow a developer to query for entries that exist in a Hazelcast distributed map. Let's look at a very simple example.

In the demo project (link at the bottom) I have one object, a test case and the Hazelcast 1.8.4 jar file as a project dependency. Below is the class that will be put into a distributed map, ReportData. Once we have a distributed map that is full of ReportData entries, we can use Hazelcast's distributed query to find our ReportData entries.

package org.axiomaticit.model;

import java.io.Serializable;
import java.util.Date;

public class ReportData implements Serializable {

    private static final long serialVersionUID = 2789198967473633902L;
    private Long id;
    private Boolean active;
    private String reportName;
    private String value;
    private Date startDate;
    private Date endDate;

    public ReportData(Long id, Boolean active, String reportName, String value, Date startDate, Date endDate) {
        this.id = id;
        this.active = active;
        this.reportName = reportName;
        this.value = value;
        this.startDate = startDate;
        this.endDate = endDate;
    }

    // all the getters and setters
}


Nothing too complex in the code above. It is just an object that implements Serializable and that contains a few different types (String, Boolean and Date) of attributes. This class will work nicely to help demonstrate Hazelcast's distributed query API and syntax. I omitted the getters and setters for brevity.

// get a "ReportData" distributed map
Map<Long, ReportData> reportDataMap = Hazelcast.getMap("ReportData");

// create a ReportData object
ReportData reportData = new ReportData(...);

// put it into our Hazelcast Distributed Map
reportDataMap.put(reportData.getId(), reportData);

In the test code, I created ~50,000 ReportData objects using a for loop and put them into the "ReportData" distributed map. I used the loop index, 0..50,000, for the ReportData id, and the reportName is set to "Report " + index. I did a few other things so we could have a few different dates represented in our map's entries. Check out the demo project for more detail.
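The population loop looks roughly like the following sketch. Here a plain HashMap stands in for the map returned by Hazelcast.getMap("ReportData") so the snippet runs standalone, and the date handling is illustrative, not the demo project's exact code:

```java
import java.util.Calendar;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

public class PopulateSketch {

    // minimal stand-in for the ReportData class shown above
    public static class ReportData {
        private final Long id; private final Boolean active;
        private final String reportName; private final String value;
        private final Date startDate; private final Date endDate;

        public ReportData(Long id, Boolean active, String reportName,
                          String value, Date startDate, Date endDate) {
            this.id = id; this.active = active; this.reportName = reportName;
            this.value = value; this.startDate = startDate; this.endDate = endDate;
        }
        public Long getId() { return id; }
        public String getReportName() { return reportName; }
    }

    public static Map<Long, ReportData> populate(int count) {
        // in the demo this map comes from Hazelcast.getMap("ReportData")
        Map<Long, ReportData> reportDataMap = new HashMap<>();
        for (int i = 0; i < count; i++) {
            Calendar cal = Calendar.getInstance();
            cal.set(2010, i % 12, 1); // spread entries across a few months
            Date start = cal.getTime();
            cal.add(Calendar.MONTH, 1);
            Date end = cal.getTime();
            ReportData rd = new ReportData((long) i, i % 2 == 0,
                    "Report " + i, "value " + i, start, end);
            reportDataMap.put(rd.getId(), rd);
        }
        return reportDataMap;
    }

    public static void main(String[] args) {
        Map<Long, ReportData> m = populate(50000);
        System.out.println(m.size());                    // 50000
        System.out.println(m.get(995L).getReportName()); // Report 995
    }
}
```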

Set<ReportData> reportDataSet = (Set<ReportData>) reportDataMap.values(new SqlPredicate("active AND id > 990 AND reportName = 'Report 995'"));

The above code queries the distributed map for all ReportData objects where active is equal to true, the id is greater than 990 and the reportName is equal to "Report 995".

Below, reportDataSet will contain all ReportData where active is equal to true and id is greater than 49985.

Set<ReportData> reportDataSet = (Set<ReportData>) reportDataMap.values(new SqlPredicate("active AND id > 49985"));

Below, we have a case where we are building the predicate programmatically using the EntryObject to fetch all ReportData where the id is greater than 49900 and the endDate attribute of ReportData is between two dates, startDate and endDate. I included the code below to show how I am creating a few dates to use in the predicate that eventually gets passed into the map.values(predicate) method.

Calendar calendar1 = Calendar.getInstance();
calendar1.set(2010, 3, 1);
Calendar calendar2 = Calendar.getInstance();
calendar2.set(2010, 3, 31);

Date startDate = new Date(calendar1.getTimeInMillis());
Date endDate = new Date(calendar2.getTimeInMillis());

EntryObject e = new PredicateBuilder().getEntryObject();
Predicate predicate = e.get("id").greaterThan(new Long(49900)).and(e.get("endDate").between(startDate, endDate));

Set<ReportData> reportDataSet = (Set<ReportData>) reportDataMap.values(predicate);

Getting data from your Hazelcast distributed map using the distributed query API and query syntax is pretty straightforward. Most of these queries ran for about 500 milliseconds to 2 seconds in my IDE. The power and performance come from the ability to query objects or map entries that are in memory rather than always relying on a round trip to your RDBMS. Distributed queries are an important feature that makes Hazelcast a great tool that can help offset the workload of your RDBMS. With Hazelcast and a good knowledge of your enterprise data, you can implement a simple and effective solution that will easily scale to as many Hazelcast nodes as your hardware can support. The demo project can be downloaded here. For more information, check out Hazelcast's website or visit the project's home at Google Code.

Sunday, April 11, 2010

Groovy Metaprogramming: propertyMissing

The other day, while working on a Java project, I realized that I could implement an application requirement quite quickly by extending a current domain object and adding an attribute. This is an enterprise-wide, industry-standard domain object model, so I couldn't just add the attribute to the domain object without cutting through some red tape. Plus, it was an attribute that I needed only for my application and it would most likely have no use in other projects. So, I had something like this:
public class Policy {

    private String policyNumber;
    private Double value;
    private Double interestRate;

    /* getters and setters */
    ....
}

Then to get things done, I went ahead and created:
public class MyPolicy extends Policy {

    private String myNewProperty;
    private Policy p;

    public MyPolicy(Policy policy) {
        this.p = policy;
    }

    /* getters and setters */
    ....
}

Now, I could fulfill the requirement with ease because I had a Policy object with the extra attribute I needed, myNewProperty, all in the MyPolicy object. I could handle the Policy returned from the Web service call, pass it into the MyPolicy constructor, derive the value of myNewProperty and then send the instance on to a view, for example. Nice, I think that will work for my application.

Later, I thought about how nice it would have been if the Policy object had been implemented with Groovy. Then I could take advantage of Groovy's metaprogramming features like propertyMissing. With propertyMissing in my language arsenal, I can create the Policy object like so:

class Policy {

    def properties = Collections.synchronizedMap([:])

    String policyNumber
    Double value
    Double interestRate

    def propertyMissing(String name, value)  { properties[name] = value }

    def propertyMissing(String name) { properties[name] }
}

In the implementation above, the propertyMissing(String name, value) method is called when trying to set a value, myNewProperty, that doesn't exist in the Policy object. The propertyMissing(String name) method is called when trying to get a property, myNewProperty, that doesn't exist in the Policy object. By default, the propertyMissing(String name) will return null if the value was never initialized or never dynamically created. Yeah, it is an incredible feature. What is really nice is that I didn't have to create the MyPolicy object at all! In the Groovy world I could write the following:

def p = new Policy()

p.policyNumber = "12345678"
p.value = 12000.00
p.interestRate = 2.3

println p.policyNumber
println p.value
println p.interestRate

/* new property */
p.myNewProperty = "Active"

println p.myNewProperty


Later on I could even write:

/* another new property */
p.myOtherNewProperty = "Wow"

println p.myOtherNewProperty

This would save me some time, and because the Policy object is now expandable, other applications that need to extend the Policy object for application purposes will be able to utilize these features. I am convinced, Groovy IS productivity for Java.

Monday, March 29, 2010

Spring 3, AspectJ and Hazelcast - Cache Advance

I have been working with Java and related technologies at multiple companies since 2004. Most of the major business problems that I have encountered revolve around working with relatively small data objects and relatively small data stores (less than 50GB). One commonality in the development environment at each of these companies, other than Java, has been some form of legacy data store. In most cases, the legacy data store was not originally designed to support all of the various applications that are now dependent on the legacy system. In some cases, performance issues would arise that were most likely due to over utilization.

One approach to help alleviate utilization issues on legacy resources is data caching. With data caching, we can utilize available memory to keep our data objects closer to our running application. We can take advantage of technologies like Hazelcast, a data distribution platform for Java, to provide support for distributed data caching. In particular, this example focuses on Hazelcast's distributed Map to manage our in-memory caching. Because Hazelcast is easily integrated with most Web applications (just include the Hazelcast jar and XML file), the overhead is minimal. When we take advantage of Aspect Oriented Programming (AOP), with the help of Spring and AspectJ, we can leave our current implemented code in place and implement our distributed caching strategy with minimal code changes.

Let's look at a simple example where we are loading and saving objects in a simple Data Access Object (DAO). Below, PersistentObject is the persistent data object we are going to use in this example. Note that this object implements Serializable, which is required if we want to put it into Hazelcast's distributed Map (it is also a good idea for applications that utilize session replication).

public class PersistentObject implements Serializable {

    private static final long serialVersionUID = 7317128953496320993L;

    private Long id;
    private String field1;
    private String field2;
    private String field3;
    private List<String> fieldList1;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }
    // getters and setters for the remaining fields omitted for brevity
}

Here is our simple interface for the DAO. Yes, this interface is ridiculously simple, but it does what we need it to do for this example.

public interface DataAccessObject {

    public PersistentObject fetch(Long id);

    public PersistentObject save(PersistentObject persistentObject);
}

Here is the implementation of the DataAccessObject interface. Again, really simple and in fact, I left out the meat of the implementation for brevity, but it will work for this example. Each of these methods would usually contain some JDBC or ORM related code. The key here is that our DAO code will not change when we re-factor the code to utilize the distributed data cache, because the caching will be implemented with AspectJ and Spring.

public class DataAccessObjectImpl implements DataAccessObject {

    private static Log log = LogFactory.getLog(DataAccessObjectImpl.class);

    @Override
    public PersistentObject fetch(Long id) {

        log.info("***** Fetch from data store!");
        // do some work to get a PersistentObject from data store
        return new PersistentObject();
    }

    @Override
    public PersistentObject save(PersistentObject persistentObject) {

        log.info("***** Save to the data store!");
        // do some work to save a PersistentObject to the data store
        return persistentObject;
    }
}

The method below, "getFromHazelcast", exists in the DataAccessAspect class. It is an "Around" aspect that gets executed when any method "fetch" is called. The purpose of this aspect and pointcut is to allow us to intercept the call to the "fetch" method in the DAO and possibly reduce "read" calls to our data store. In this method, we can get the Long "id" argument from the called "fetch" method, get our distributed Map from Hazelcast and try to return a PersistentObject from the Hazelcast distributed Map, "persistentObjects". If the object is not found in the distributed Map, we will let the "fetch" method handle the work as originally designed.

@Around("execution(* fetch(..))")
public Object getFromHazelcast(ProceedingJoinPoint pjp) throws Throwable {

    // get method args
    Object[] args = pjp.getArgs();

    // get the id
    Long id = (Long) args[0];

    // check Hazelcast distributed map
    Map<Long, PersistentObject> persistentObjectMap = Hazelcast.getMap("persistentObjects");
    PersistentObject persistentObject = persistentObjectMap.get(id);

    // if the persistentObject is not null
    if(persistentObject != null) {
        log.info("***** Found it in Hazelcast distributed map!");
        return persistentObject;
    }

    // continue with the fetch method that was originally called if PersistentObject was not found
    return pjp.proceed();
}

The method below, "putIntoHazelcast", also exists in the DataAccessAspect class. It is an "AfterReturning" aspect that gets executed when any method "save" returns. As each PersistentObject is persisted to the data store in the "save" method, the "putIntoHazelcast" method will insert or update the PersistentObject in the distributed Map "persistentObjects". This way we have our most recent PersistentObject versions available in the distributed Map. If we just keep inserting/updating all PersistentObjects into the distributed Map, we would eventually have to look into our distributed Map's eviction policy to keep more relevant application data in our cache, unless we have excess or abundant memory.

@AfterReturning(pointcut="execution(* save(..))", returning="retVal")
public void putIntoHazelcast(Object retVal) throws Throwable {

    // get the PersistentObject
    PersistentObject persistentObject = (PersistentObject) retVal;

    // get the Hazelcast distributed map
    Map<Long, PersistentObject> persistentObjectMap = Hazelcast.getMap("persistentObjects");

    // put the PersistentObject into the Hazelcast distributed map
    log.info("***** Put this PersistentObject instance into the Hazelcast distributed map!");
    persistentObjectMap.put(persistentObject.getId(), persistentObject);
}
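The eviction policy mentioned above is configured in hazelcast.xml. As a rough sketch (element names follow the Hazelcast 1.x map configuration schema; the values are illustrative, not recommendations):

```xml
<hazelcast>
    <map name="persistentObjects">
        <!-- evict least-recently-used entries once the map passes max-size -->
        <eviction-policy>LRU</eviction-policy>
        <max-size>10000</max-size>
        <!-- optionally expire entries that sit untouched for an hour -->
        <max-idle-seconds>3600</max-idle-seconds>
    </map>
</hazelcast>
```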

I have also included a snippet from the Spring application-context.xml file that provides a simple way to get AspectJ working in the Spring container.

<aop:aspectj-autoproxy proxy-target-class="true"/>

<bean id="dataAccessAspect" class="org.axiomaticit.aspect.DataAccessAspect"/>

<bean id="dataAccessObject" class="org.axiomaticit.dao.DataAccessObjectImpl"/> 


This is a simple example of how Spring, AspectJ and Hazelcast can work together to help reduce "read" calls to a data store. Imagine reducing one application's "read" executions against a legacy data store while improving read performance metrics. This example doesn't really answer all questions and concerns that will arise when implementing and utilizing a Hazelcast distributed data cache with Spring and AspectJ, but I think it shows that these technologies can help lower resource utilization and increase performance. Here is a link to the demo project.