
Sunday, July 31, 2011

Integrating Spring and AVAYA Dialog Designer

For quite a while, I have been an advocate of using the Spring framework for most, if not all, of my recent Java projects. Spring offers many advantages for features such as dependency injection, transaction management, and integration (Web services, messaging) because it abstracts away the underlying implementations, allowing developers to focus less on the 'how-to' and more on the problem at hand.

When working with AVAYA Dialog Designer, an Eclipse-based IDE for developing IVR applications, I have found that I rely heavily on shared Java libraries that have already been tested and proven in our enterprise. With an increasing number of these shared libraries being written with Spring, I found that I needed to bring the Spring context into the AVAYA runtime environment.

Because the AVAYA IVR environment is a Java Web environment, we can start by locating the web.xml file in the project under the WEB-INF directory. Adding a ContextLoaderListener to the application's web.xml file puts us on our way.
<listener>
    <listener-class>
        org.springframework.web.context.ContextLoaderListener
    </listener-class>
</listener>

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/ivr-application-context.xml</param-value>
</context-param>

Next, we need to configure the application context file so that our dependencies' beans are loaded into this application's context. In the XML above, I have set the contextConfigLocation parameter to point to the location of the IVR application's context file. In ivr-application-context.xml, I have configured a few beans that I have written in my Dialog Designer project.
...
<bean id="myBean" class="beans.MyBean"/>
...

If we have other application context files, we can either import them into ivr-application-context.xml or add them to the contextConfigLocation parameter.
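
For example, a shared library's context file could be pulled in with an import. A minimal sketch, assuming a hypothetical shared-services-context.xml available on the classpath:

<!-- in ivr-application-context.xml; the imported file name is hypothetical -->
<import resource="classpath:shared-services-context.xml"/>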

One feature of the Dialog Designer environment is the ability to write your own custom Java classes, like MyBean, and to override some of the application classes that back the graphical call flow nodes. Each node is backed by a Java source file. For example, we can override the requestBegin(SCESession mySession) method of the generated AVAYA framework classes and use the SCESession object to reach the ServletContext, which in turn gives us access to Spring's WebApplicationContext.
...
public void requestBegin(SCESession mySession) {

    // look up Spring's WebApplicationContext via the ServletContext
    WebApplicationContext context = WebApplicationContextUtils.getWebApplicationContext(
            mySession.getServlet().getServletContext());

    // retrieve the Spring-managed bean by name
    MyBean myBean = (MyBean) context.getBean("myBean");
}
...


Now we have access to an instance of MyBean from Spring's context. In the end, we can see how easy it is to bring Spring into an existing Java Web framework that does not itself need to run within the Spring container. The dependent libraries can be accessed and we can wire in any other custom beans if needed. Of course, things would be much easier if the AVAYA framework integrated with Spring directly, but if you find yourself working with a similar constraint, I hope this post illustrates how easy Spring makes integrating with other technologies.

Thursday, May 12, 2011

The Simplicity of Spring MVC 3

The beauty of Spring MVC is the overall simplicity of getting things done. Here is a quick fictional example of the simplicity of a controller in Spring MVC:

@Controller
@RequestMapping("/")
public class WelcomeController {
   ...
}


Above we have the WelcomeController class. All we need to make this class a controller is the @Controller annotation. This controller is mapped to handle all application root requests through the @RequestMapping annotation. Below, we have a few methods that exist within the WelcomeController class that handle GET and POST actions to index.htm.

...
@RequestMapping(value="index.htm", method = RequestMethod.GET)
protected ModelAndView indexGet() { ... }
...


Above we handle the /index.htm GET request and return a Spring ModelAndView object. In the indexGet() method we can add model objects to render in the view and set the view name as shown here:

...
ModelAndView mav = new ModelAndView();
mav.addObject("domainObject", new DomainObject());
mav.setViewName("welcome");
return mav;
...


Below, we handle a POST to /index.htm. Here we expect our fictional DomainObject to be bound from a form, which we denote with the @ModelAttribute annotation.

...
@RequestMapping(value="index.htm", method = RequestMethod.POST)
protected ModelAndView indexPost(@ModelAttribute DomainObject instance) { ... }
...


In the indexPost(@ModelAttribute DomainObject instance) method we can handle the DomainObject instance.

...
validate(instance); // do some validation
myService.save(instance); // persist the DomainObject instance
ModelAndView mav = new ModelAndView();
mav.addObject("domainObject", instance);
mav.setViewName("welcome");
return mav;
...


Spring MVC is great for putting together Web applications, and the controller infrastructure and configuration are quite fantastic and super simple. A developer can work directly with domain objects in a Web application while avoiding getting wrapped up in the technology.
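
For completeness, the configuration behind a controller like this is small as well. Here is a minimal sketch of the DispatcherServlet context; the base package and JSP view paths are assumptions for illustration, and the context namespace is assumed to be declared:

<!-- detect @Controller classes (the package name is hypothetical) -->
<context:component-scan base-package="com.example.web"/>

<!-- resolve logical view names like "welcome" to JSPs -->
<bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/"/>
    <property name="suffix" value=".jsp"/>
</bean>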

Saturday, February 26, 2011

Getting Started with Spring.NET

I have been a big fan of the Spring Framework for quite some time. As a matter of fact, I count on the many benefits and features of Spring for most of my Java projects. I have found Spring to be a gateway to productivity and better practice development for a number of solutions in the enterprise.

Recently, I have been investigating the .NET platform and C#. We have a mixed Java/C# environment at work and I would like to be a more flexible resource in order to help out on more projects. As I started looking into C#, I thought it was only appropriate to look into the Spring.NET project. It is quite similar to its Java counterpart and here is how I got things started with a very simple C# project:

First, I downloaded Visual C# 2010 Express. It is not quite as fancy as the variety of Visual Studio 2010 offerings, but it is a great IDE to get started with learning C#. Next, I downloaded the latest Spring.NET release, which happens to be version 1.3.1. In Visual C# 2010 Express, I then created a new blank project, 'SpringNET1'. Next, I added the Spring.Core.dll and Common.Logging.dll as references in my new project.

I created a new Class, MyApp.cs. This is the entry point of my application. I will go over the details of the Main(string[] args) function shortly, but basically it grabs the application context, gets an instance of the MyService class, calls MyService's GetName() function and writes the output to the console.

using System;
using Spring.Context;
using Spring.Context.Support;

namespace App.Core
{
    class MyApp
    {
        static void Main(string[] args)
        {
            IApplicationContext ctx = ContextRegistry.GetContext();
            ServiceInterface myService = (ServiceInterface) ctx.GetObject("MyService");
            Console.WriteLine(myService.GetName());
            Console.ReadLine();
        }
    }
}

Then I created an interface named ServiceInterface.cs with one function, GetName(), which returns a String.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace App.Core
{
    interface ServiceInterface
    {
        String GetName();
    }
}


Below is an implementation of the ServiceInterface, MyService.cs, which implements the GetName() function by returning my name, "RJ Salicco".

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace App.Core
{
    class MyService : ServiceInterface
    {
        public String GetName()
        {
            return "RJ Salicco";
        }
    }
}

Lastly, we need to create an Application Config file and let the runtime know about Spring and our Spring objects. The important thing to note here is the way that objects are configured. Using the <object> tag, we define the 'id' of the object and the 'type' of the object, which is denoted by the fully qualified class name (namespace and class name) and the Assembly name, separated by a comma. The 'id' is what I use to look up the MyService instance from the context in the MyApp class.

<configuration>
    <configSections>
        <sectionGroup name="spring">
            <section name="context" type="Spring.Context.Support.ContextHandler, Spring.Core"/>
            <section name="objects" type="Spring.Context.Support.DefaultSectionHandler, Spring.Core"/>
        </sectionGroup>
    </configSections>
    <spring>
        <context>
            <resource uri="config://spring/objects"/>
        </context>
        <objects xmlns="http://www.springframework.net">
            <description>Simple bean setup.</description>
            <object id="MyService" type="App.Core.MyService, App.Core"/>
        </objects>
    </spring>
</configuration>


I know this example is really, really simple, but I just wanted to document a starting point. We can all imagine a real-world implementation, wiring in some more objects, referencing objects as constructor arguments or properties, and possibly utilizing the features of Spring AOP. As I dig deeper into C# and the .NET platform I plan on sharing what I learn, and if Spring.NET delivers even some of the features its Java counterpart delivers, I will feel confident delivering C# solutions along with my Java solutions.
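
As a rough illustration of that kind of wiring, here is a sketch of an object definition that receives MyService through its constructor. The MyConsumer class is hypothetical, and I am assuming the constructor-arg element mirrors the Java counterpart's configuration:

<objects xmlns="http://www.springframework.net">
    <object id="MyService" type="App.Core.MyService, App.Core"/>
    <!-- hypothetical consumer object wired with MyService via constructor injection -->
    <object id="MyConsumer" type="App.Core.MyConsumer, App.Core">
        <constructor-arg ref="MyService"/>
    </object>
</objects>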

Thursday, September 02, 2010

Why I Love Spring 2.5's PropertyPlaceholderConfigurer

I have been a big fan of Spring's PropertyPlaceholderConfigurer since 2006 when I could wire up a datasource bean, or any bean for that matter, with just some references to properties that I knew were going to be in place. A snippet from a Spring context file for example:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${my.db.driver}"/>
    <property name="url" value="${my.db.url}"/>  
    <property name="username" value="${my.db.username}"/>
    <property name="password" value="${my.db.password}"/>
</bean>
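
For context, the placeholder syntax above assumes a PropertyPlaceholderConfigurer is registered in the same context. A minimal sketch of that declaration, with an assumed db.properties file name, might look like this:

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:db.properties"/>
</bean>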


Now, I can provide PropertyPlaceholderConfigurer with one or more .properties file locations, or I can depend on the properties existing as part of the runtime, as when using JBoss Application Server's property service. Then one day, I ran into a bit of an issue: an application with a properties file containing datasource connection information for each development region, DEV, TEST and PROD, where the region is a 'prefix' on each property.

Something like...

DEV.my.db.driver
DEV.my.db.url
DEV.my.db.username
DEV.my.db.password

TEST.my.db.driver
TEST.my.db.url
TEST.my.db.username
TEST.my.db.password


... and so on. Packaging .properties files into your archive (.war, .jar, .ear) does help make your code a bit more portable, though I usually configure properties outside of an archive; we can't always have our way. So, we ended up with a special class that reads the properties file and pulls the region from the system properties, since the region variable, SDLC_REGION, is set in each development region as a VM argument.

-DSDLC_REGION=DEV
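
A sketch of what such a helper class might have looked like; the class name and method are hypothetical:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hypothetical helper: loads the packaged properties file and resolves
// region-prefixed keys using the SDLC_REGION system property.
public class RegionProperties {

    private final Properties properties = new Properties();
    private final String region = System.getProperty("SDLC_REGION");

    public RegionProperties(String resourceName) throws IOException {
        InputStream in = getClass().getClassLoader().getResourceAsStream(resourceName);
        try {
            properties.load(in);
        } finally {
            in.close();
        }
    }

    // getProperty("my.db.url") returns the value of "DEV.my.db.url" when SDLC_REGION=DEV
    public String getProperty(String key) {
        return properties.getProperty(region + "." + key);
    }
}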


And that works great. We can leave our Spring context alone and everything works like we need it to. But I am always trying to reduce classes or utilities (.jar files) that are no longer needed in our applications. So, I took another look at Spring 2.5's PropertyPlaceholderConfigurer and, lo and behold, there is a better way to do things. Check it out. Here is my Spring context file now:

<context:property-placeholder location="classpath:db.properties"/>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${${SDLC_REGION}.my.db.driver}"/>
    <property name="url" value="${${SDLC_REGION}.my.db.url}"/>  
    <property name="username" value="${${SDLC_REGION}.my.db.username}"/>
    <property name="password" value="${${SDLC_REGION}.my.db.password}"/>
</bean>


Now, the VM argument, SDLC_REGION, exists in each environment and it can be a part of our PropertyPlaceholderConfigurer expression. We can now load the correct property for each development region from the packaged .properties file without depending on our utility class anymore. Really cool stuff and again, beautiful work from the people at SpringSource.

Monday, March 29, 2010

Spring 3, AspectJ and Hazelcast - Cache Advance

I have been working with Java and related technologies at multiple companies since 2004. Most of the major business problems that I have encountered revolve around working with relatively small data objects and relatively small data stores (less than 50GB). One commonality in the development environment at each of these companies, other than Java, has been some form of legacy data store. In most cases, the legacy data store was not originally designed to support all of the various applications that are now dependent on the legacy system. In some cases, performance issues would arise that were most likely due to over-utilization.

One approach to help alleviate utilization issues on legacy resources is data caching. With data caching, we can use available memory to keep our data objects closer to our running application. We can take advantage of technologies like Hazelcast, a data distribution platform for Java, to provide distributed data caching; in particular, this example focuses on Hazelcast's distributed Map to manage our in-memory cache. Because Hazelcast integrates easily with most Web applications (just include the Hazelcast jar and xml file), the overhead is minimal. By taking advantage of Aspect Oriented Programming (AOP), with the help of Spring and AspectJ, we can leave our currently implemented code in place and add our distributed caching strategy with minimal code changes.

Let's look at a simple example where we are loading and saving objects in a simple Data Access Object (DAO). Below, PersistentObject is the persistent data object we are going to use in this example. Note that this object implements Serializable, which is required if we want to put it into Hazelcast's distributed Map (and is also a good idea for applications that use session replication).

import java.io.Serializable;
import java.util.List;

public class PersistentObject implements Serializable {

    private static final long serialVersionUID = 7317128953496320993L;

    private Long id;
    private String field1;
    private String field2;
    private String field3;
    private List<String> fieldList1;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    // getters and setters for the remaining fields omitted for brevity
}

Here is our simple interface for the DAO. Yes, this interface is ridiculously simple, but it does what we need it to do for this example.

public interface DataAccessObject {

    public PersistentObject fetch(Long id);

    public PersistentObject save(PersistentObject persistentObject);
}

Here is the implementation of the DataAccessObject interface. Again, it is really simple; in fact, I left out the meat of the implementation for brevity, but it will work for this example. Each of these methods would usually contain some JDBC or ORM-related code. The key here is that our DAO code will not change when we refactor the application to use the distributed data cache, because the caching will be implemented with AspectJ and Spring.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DataAccessObjectImpl implements DataAccessObject {

    private static Log log = LogFactory.getLog(DataAccessObjectImpl.class);

    @Override
    public PersistentObject fetch(Long id) {

        log.info("***** Fetch from data store!");
        // do some work to get a PersistentObject from data store
        return new PersistentObject();
    }

    @Override
    public PersistentObject save(PersistentObject persistentObject) {

        log.info("***** Save to the data store!");
        // do some work to save a PersistentObject to the data store
        return persistentObject;
    }
}

The method below, "getFromHazelcast", exists in the DataAccessAspect class. It is an "Around" aspect that gets executed when any method "fetch" is called. The purpose of this aspect and pointcut is to allow us to intercept the call to the "fetch" method in the DAO and possibly reduce "read" calls to our data store. In this method, we can get the Long "id" argument from the called "fetch" method, get our distributed Map from Hazelcast and try to return a PersistentObject from the Hazelcast distributed Map, "persistentObjects". If the object is not found in the distributed Map, we will let the "fetch" method handle the work as originally designed.

@Around("execution(* fetch(..))")
public Object getFromHazelcast(ProceedingJoinPoint pjp) throws Throwable {

    // get method args
    Object[] args = pjp.getArgs();

    // get the id
    Long id = (Long) args[0];

    // check Hazelcast distributed map
    Map<Long, PersistentObject> persistentObjectMap = Hazelcast.getMap("persistentObjects");
    PersistentObject persistentObject = persistentObjectMap.get(id);

    // if the persistentObject is not null
    if(persistentObject != null) {
        log.info("***** Found it in Hazelcast distributed map!");
        return persistentObject;
    }

    // continue with the fetch method that was originally called if PersistentObject was not found
    return pjp.proceed();
}

The method below, "putIntoHazelcast", also exists in the DataAccessAspect class. It is an "AfterReturning" aspect that gets executed when any method "save" returns. As each PersistentObject is persisted to the data store in the "save" method, the "putIntoHazelcast" method will insert or update the PersistentObject in the distributed Map "persistentObjects". This way we have our most recent PersistentObject versions available in the distributed Map. If we just keep inserting/updating all PersistentObject's into the distributed Map, we would have to eventually look into our distributed Map's eviction policy to keep more relavent application data in our cache, unless, we have excess or abundant memory.

@AfterReturning(pointcut="execution(* save(..))", returning="retVal")
public void putIntoHazelcast(Object retVal) throws Throwable {

    // get the PersistentObject
    PersistentObject persistentObject = (PersistentObject) retVal;

    // get the Hazelcast distributed map
    Map<Long, PersistentObject> persistentObjectMap = Hazelcast.getMap("persistentObjects");

    // put the PersistentObject into the Hazelcast distributed map
    log.info("***** Put this PersistentObject instance into the Hazelcast distributed map!");
    persistentObjectMap.put(persistentObject.getId(), persistentObject);
}
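
Both advice methods above live in the DataAccessAspect class. Here is a minimal sketch of how that class might be declared; the @Aspect annotation is what allows the aspectj-autoproxy configuration below to detect it, and the package name is taken from the bean definition below:

package org.axiomaticit.aspect;

import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

import com.hazelcast.core.Hazelcast;

@Aspect
public class DataAccessAspect {

    private static Log log = LogFactory.getLog(DataAccessAspect.class);

    // the getFromHazelcast(..) and putIntoHazelcast(..) advice methods shown above go here
}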

I have also included a snippet from the Spring application-context.xml file that provides a simple way to get AspectJ working in the Spring container.

<aop:aspectj-autoproxy proxy-target-class="true"/>

<bean id="dataAccessAspect" class="org.axiomaticit.aspect.DataAccessAspect"/>

<bean id="dataAccessObject" class="org.axiomaticit.dao.DataAccessObjectImpl"/> 


This is a simple example of how Spring, AspectJ and Hazelcast can work together to help reduce "read" calls to a data store. Imagine reducing one application's "read" executions against a legacy data store while improving read performance metrics. This example doesn't really answer all questions and concerns that will arise when implementing and utilizing a Hazelcast distributed data cache with Spring and AspectJ, but I think it shows that these technologies can help lower resource utilization and increase performance. Here is a link to the demo project.