Friday, 23 October 2009

Cannot Simultaneously Fetch Multiple Bags

It's a gem of an error message, isn't it? I suspect that most people, on encountering it for the first time, wonder if it applies to them. "Did I ask for multiple bags?" or "What the hell is a bag?". It would be cheap to dismiss it as meaningless but, once you investigate it, you realise that the issues behind it are complex and varied. I don't think we can blame the Hibernate developers for adopting an error message that describes the problem that Hibernate is wrestling with internally, rather than trying to second-guess what caused it.

The purpose of this post is not to explain why the error happens; Googling the message will lead you to plenty of that. What I want to do is step through some solutions, beyond the bare "use a Set" or "specify the index column" answers.

Let's start with a simple example that I think is typical of how people might run into this error. You want to maintain countries and regions in an application. A classic one-to-many relationship where country is the parent or master and region is the child or detail. The obvious representation in the database is two tables with a foreign key from the child to the parent:

For many people, their natural inclination will be to map this using Hibernate like this:

public class Country implements Serializable, Comparable {

    private long id;
    private String name;
    private List regions = new ArrayList();


    @OneToMany(mappedBy="country", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
    public List getRegions() {
        return regions;
    }
}

I'm using JPA annotations here. The mappedBy attribute is the equivalent of inverse=true on the collection in a Hibernate schema-based mapping.

public class Region implements Serializable, Comparable {

    private long id;
    private String name;
    private Country country;


    @ManyToOne
    @JoinColumn(name="fk_country_id")
    public Country getCountry() {
        return country;
    }
}

I'm not suggesting you would always use FetchType.EAGER of course. But in this case, let's assume that we always want the regions populated when we get the country and vice versa.

I'm thinking of this in a Spring project, probably with a DAO based on Spring's support classes. So something like this:

public class CountryDAOImpl extends HibernateDaoSupport implements CountryDAO {

    public Country createCountry(Country country) {
        getHibernateTemplate().save(country);
        return country;
    }

    public Country findCountryByPrimaryKey(long id) {
        return (Country)getHibernateTemplate().get(Country.class, id);
    }

    public List findAllCountries() {
        return getHibernateTemplate().find("from Country");
    }
}

And you might have this exposed via a service facade, perhaps combined with calls to other DAOs as part of a transaction, and so on. All wired together as usual with Spring. You put it all together, ask Hibernate hbm2ddl to create your tables, run a simple test case and it works. Tables look OK. Life is good.

Then you decide to add another level. Say you need areas within regions. Same idea as before, it just becomes a three-tiered database structure with area having a foreign key into region.

After a judicious cut & paste session (resolving to try abstract DAOs when your boss isn't breathing down your neck) you end up with everything wired together and you run again. The database looks OK but lo and behold ... Hibernate hits you with "Cannot simultaneously fetch multiple bags".

That might be how you ended up here, when you Googled the error message. You may have read the suggestions to use @IndexColumn on the collections, or perhaps read that using List is just a habit and you should really be using a Set? So let's explore these solutions:

Using @IndexColumn

The idea here is that you annotate the collection mappings with @IndexColumn. Note that this is a Hibernate annotation, not a JPA annotation, which might start alarm bells ringing for you if the boss mumbled something about not tying yourself to specific technologies.

public class Country implements Serializable, Comparable {

    private long id;
    private String name;
    private List regions = new ArrayList();

    @OneToMany(mappedBy="country", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
    @IndexColumn(name="region_idx")  // Hibernate-specific; the column name is just an example
    public List getRegions() {
        return regions;
    }
}

You run again and all is well. That is, until you notice the null entries that can now appear in your collection of regions. Hmm. Something is not quite right here. You might dig around for another solution and find the suggestion that you remove the mappedBy attribute. So you try that and it works. Deep joy. Or maybe not: have a look at the database structure that Hibernate has created. There are join tables between the expected tables, and that is going to take some explaining to your DBA, who is already mistrustful of the idea of having Hibernate create a database schema.

It is possible to get back to a more logical database design, but perhaps we should put a hold on this solution and try the other option that was suggested. After all, this should allow us to avoid the Hibernate-specific @IndexColumn annotation ...

Using a Set

OK. A set is just a collection so this shouldn't be that difficult. Global search and replace on the project, changing all references of List to Set and ArrayList to HashSet. Also make sure you've removed those @IndexColumn annotations and reintroduced the mappedBy attribute, otherwise you'll still get those join tables.

The first problem is that your DAO complains. The find method on Spring's Hibernate template wants to return a List rather than a Set. Maybe that global search and replace wasn't such a good idea? When the solution said you should use a Set, it didn't mean everywhere. ;) It's perfectly OK to have your finder methods in your DAOs and facade return List:

    public List findAllCountries() {
        return getHibernateTemplate().find("from Country");
    }

What happens now depends on whether you were diligent enough to create equals() and hashCode() methods on your entities. It's common, and sounds perfectly logical, to base these on your primary key column; the IDEs will even do the work for you. The problem is that a newly created entity won't have an id (assuming we are letting the database generate them); it only gets one when Hibernate saves it. If you create a series of regions in your country and then save the country, you'll find that only one region survives. The HashSet thought they were all the same entry, because the ids were all null.
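To see why, here's a standalone sketch (plain Java, no Hibernate involved) of an entity whose equals() and hashCode() are based on the surrogate id. All the unsaved instances compare equal, so a HashSet collapses them into a single entry:

```java
import java.util.HashSet;
import java.util.Set;

public class BrokenEqualsDemo {

    // Entity sketch: equals()/hashCode() based on the surrogate id
    static class Region {
        Long id;            // stays null until the database assigns a value
        final String name;

        Region(String name) { this.name = name; }

        @Override public boolean equals(Object o) {
            if (!(o instanceof Region)) return false;
            Region other = (Region) o;
            return id == null ? other.id == null : id.equals(other.id);
        }

        @Override public int hashCode() {
            return id == null ? 0 : id.hashCode();
        }
    }

    public static void main(String[] args) {
        Set<Region> regions = new HashSet<Region>();
        regions.add(new Region("North"));
        regions.add(new Region("South"));
        regions.add(new Region("Central"));
        // All three ids are null, so the set sees one "equal" entry
        System.out.println(regions.size()); // prints 1
    }
}
```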

You could be tempted to try a TreeSet instead of a HashSet at this point. It would work, but the HashSet is generally better performing and it's not a good idea to get yourself forced into a choice like this.

The database-generated "surrogate" ids are fine, but equals() and hashCode() should be based on a "real" business key. In the case of a country, for example, this would be the name; in the case of an invoice, it would be the invoice number as used by the accounts staff. So go back to your equals() and hashCode() methods and make them use the "real" key, not the surrogate one that can be null before the record is inserted.
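Here's a minimal sketch of the fixed version, with equals() and hashCode() based on the name (the business key) rather than the id:

```java
import java.util.HashSet;
import java.util.Set;

public class BusinessKeyDemo {

    // Entity sketch: equals()/hashCode() based on the "real" key
    static class Country {
        Long id;            // surrogate id, still null before the insert
        final String name;  // the business key

        Country(String name) { this.name = name; }

        @Override public boolean equals(Object o) {
            return (o instanceof Country) && name.equals(((Country) o).name);
        }

        @Override public int hashCode() {
            return name.hashCode();
        }
    }

    public static void main(String[] args) {
        Set<Country> countries = new HashSet<Country>();
        countries.add(new Country("France"));
        countries.add(new Country("Spain"));
        // Distinct names mean distinct entries, null ids or not
        System.out.println(countries.size()); // prints 2
    }
}
```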

At this point, it wouldn't be a bad idea to add this annotation to your entity so that the database creates indexes on these "real" primary keys.

@Table(uniqueConstraints=@UniqueConstraint(columnNames={"name"}))
public class Country implements Serializable, Comparable {


With the region table, you might allow the same region name to appear in two countries. It might be "North", "Central", etc. In that case, you'd specify a composite unique constraint:

@Table(uniqueConstraints=@UniqueConstraint(columnNames={"fk_country_id", "name"}))
public class Region implements Comparable {

And you should find that's a decent solution.

In summary:

  1. Use java.util.Set for collection mappings (but not on your DAO and facade methods)
  2. Use mappedBy to make the collection end of the relationship the inverse end
  3. Create equals() and hashCode() based on a "real" business key (with indexes as appropriate)

If you are using JPA/EJB3 of course, you could just switch provider to Toplink and it would be quite happy with the original code using List. But that's another story.

Saturday, 11 April 2009

Toplink JPA and InnoDB

Further to my post on my new-found love of Ubuntu, I've been porting a prototype application I'm working on to MySQL. This is an EJB3/JPA web application that was running on Oracle XE with Toplink as the JPA provider.

When I got it up and running I noticed that the tables were being created in MySQL using the non-transactional MyISAM engine, which doesn't really fit with the whole distributed transaction ethos of the application. I could easily change the storage engine for the tables in MySQL Administrator, but they would revert to MyISAM when I redeployed the application because I am still prototyping and using drop-and-create through Toplink.

My first step was to verify that Toplink wasn't creating these tables with MyISAM through the MySQL dialect that I am using. After some digging into the Toplink documentation I found these handy properties that can be set in persistence.xml:

    <property name="toplink.target-database" value="MySQL4"/>
    <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
    <property name="toplink.ddl-generation.output-mode" value="both"/>
    <property name="toplink.application-location" value="/tmp"/>

The toplink.ddl-generation.output-mode of both asks Toplink to externalize its drop and create scripts to files in the directory specified by toplink.application-location as well as issuing the DDL against the database. Apart from being useful in its own right, this confirmed that Toplink wasn't specifying the storage engine for the tables it was creating.

So the next step was to try setting the default storage engine on MySQL. On Linux, this is the easiest solution:

$ sudo /etc/init.d/mysql stop

Then edit /etc/mysql/my.cnf and add a default-storage-engine entry to the [mysqld] section of the config file:

[mysqld]
default-storage-engine = InnoDB

Then start MySQL again:

$ sudo /etc/init.d/mysql start

Which works a treat, with JPA now creating InnoDB tables instead of MyISAM. But it seems an invasive approach, setting the whole database default to InnoDB.

Far better would be to set the default storage engine on a per-session basis, using SET SESSION storage_engine=InnoDB;. This can also be achieved through the MySQL JDBC driver using the sessionVariables property:
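For example, a JDBC URL along these lines (the host and database name here are just placeholders):

```
jdbc:mysql://localhost:3306/mydb?sessionVariables=storage_engine=InnoDB
```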

Driver/Datasource Class Names, URL Syntax and Configuration Properties for Connector/J

With GlassFish I was able to achieve this quite easily by adding the sessionVariables property to the connection pool I'd created for the MySQL database.

Once the change to the default storage engine on MySQL was reversed and GlassFish restarted to re-establish the connections in the pool, I was able to redeploy my application and see my tables created with the InnoDB engine.

Thursday, 9 April 2009

Jaunty Jackalope!

I've had GNU/Linux boxes for years. I think the first distribution I had was Unifix running on an old 486DX266 with about 500MB disk and 4MB of RAM. Then, as various pieces of upgraded hardware found their way into the box, it became Slackware followed by RedHat 6.2 (before RedHat got too commercial for my liking).

These days my server is a 500MHz Celeron dual-processor box with 768MB RAM running Ubuntu Intrepid Ibex 8.10. Samba file server, software RAID, caching DNS, Oracle XE, GlassFish v2 and Subversion repository.

There have been occasional flirtations with GNU/Linux as a desktop but, much as I wanted it to, it never quite cut it. Toward the end of last year, I decided to try again with Ubuntu Hardy Heron 8.04 on my new AMD64 laptop. I was pretty excited about it. I'd never tried Evolution as an email client before and loved it. Most of the software looked good and worked well. But after struggling with disappearing fonts and badly sized windows in a manually installed Netbeans 6.5, inability to resume after suspend and no 64-bit Flash plugin for Firefox, once again the idea of a GNU/Linux desktop fell by the wayside.

Anyway, this week I've been trying the pre-release version of Ubuntu: Jaunty Jackalope 9.04. I'm finding it better than ever. The distribution upgrade from 8.04 via 8.10 recovered flawlessly despite my laptop running out of battery half way through. In hindsight it wasn't the best time to dislodge the power cable!

I used the package manager to install Netbeans 6.5 (no need for a manual install now) and added all my usual Java EE plugins and GlassFish v3 Prelude as an application server. It all works a treat. I went for MySQL instead of Oracle this time, partly because there's no package installation for Oracle XE and partly because I wanted to test applications against both. The MySQL package install was a breeze. Subversion client and Netbeans plugin are up and running against my repository and Maven is just Maven.

Adobe recently released an alpha version of a Flash plugin for 64-bit Linux but I couldn't get their installation instructions to work with Firefox 3.0 (copying the libflashplayer.so to the ~/.mozilla/plugins directory). However, I used the script on this page and all is well. I suspect I may need to undo this at some point, when the Adobe plugin becomes available as an install in Firefox, but I'll cross that bridge when I come to it.

There are still a few issues (not all Ubuntu issues per se) but I think it's reached a point now where none of them stop me working:

  1. Suspend and resume doesn't work on my Toshiba Equium
  2. GlassFish v3 Prelude Update Tool isn't working but I've not investigated further
  3. Netbeans doesn't understand the way Tomcat is installed on a Unix filesystem

So I'm happy to say that I'm finally using GNU/Linux/Ubuntu - whatever you prefer to call it - exclusively on my laptop now.

The official release date of Ubuntu Jaunty Jackalope 9.04 is 23rd April.

Tuesday, 31 March 2009

Less than Intuitive

I'm struggling with a cold and not feeling very sharp at the moment. Perhaps this accounts for me being over-sensitive about a couple of classes in Java that I've tripped over today? Or maybe writing JUnit tests most of the afternoon has numbed my mind?

The first is java.util.Calendar. It's not something I use every day and this little snippet is typical of mistakes I've made before.

    Calendar calendar = Calendar.getInstance();
    calendar.set(2009, 03, 23);
    System.out.println(calendar.getTime());

I wanted March but my output was, of course, Thu Apr 23 00:00:00 BST 2009. Quite reasonably, if not intuitively, months are zero based in java.util.Calendar. But it's an easy one to slip on if you aren't paying attention or are feeling too lazy to type Calendar.MARCH as the second argument.
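The safe version spells the month out, at the cost of a few keystrokes:

```java
import java.util.Calendar;

public class CalendarMonthDemo {
    public static void main(String[] args) {
        Calendar calendar = Calendar.getInstance();
        // The MONTH field is zero-based: Calendar.MARCH is 2
        calendar.set(2009, Calendar.MARCH, 23);
        System.out.println(calendar.get(Calendar.MONTH) == Calendar.MARCH); // prints true
    }
}
```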

The second thing I fell foul of was the infamously tricky java.math.BigDecimal. Exemplified by this (although my actual problem was not so obvious):

    BigDecimal a = new BigDecimal("123");
    BigDecimal b = new BigDecimal("123.0");
    System.out.println(a.equals(b) ? "Same" : "Different");

The output in this case is Different. Again perfectly reasonable when you consider what BigDecimal sets out to achieve in terms of rounding control, but still less than intuitive for a spluttering and sneezing developer who's not used it for a while. Mental note: remember to specify the scale of numbers:

    BigDecimal a = new BigDecimal("123").setScale(2);
    BigDecimal b = new BigDecimal("123.0").setScale(2);

And all is well. Maybe after a good night's sleep I'll be the same.
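One more alternative worth noting: if you only care about the numeric value and not the scale, compareTo() is the method to reach for, since it treats 123 and 123.0 as equal:

```java
import java.math.BigDecimal;

public class BigDecimalCompareDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("123");
        BigDecimal b = new BigDecimal("123.0");
        // equals() compares value and scale; compareTo() compares value only
        System.out.println(a.equals(b));         // prints false
        System.out.println(a.compareTo(b) == 0); // prints true
    }
}
```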

Tuesday, 17 March 2009

Information from the AOP JoinPoint

I know. Logging is the most overdone and boring application of AOP, but even if you are planning on doing something more interesting with it, logging information about the method that triggered the advice is useful.

Here are a couple of very simple reference examples of getting information about the joinpoint in the advice. They might save you five minutes hunting around in the JavaDocs. Let's start with Spring AOP/AspectJ:

public static final String getJoinPointDetails(JoinPoint joinPoint) {
    String className = joinPoint.getSignature().getDeclaringType().getName();
    String methodName = joinPoint.getSignature().getName();
    String paramList = CollectionUtil.listToString(joinPoint.getArgs());
    return className + " " + methodName + "(" + paramList + ")";
}

And the equivalent in an EJB3 interceptor:

public static final String getJoinPointDetails(InvocationContext invocationContext) {
    String className = invocationContext.getTarget().getClass().getName();
    String methodName = invocationContext.getMethod().getName();
    String paramList = CollectionUtil.listToString(invocationContext.getParameters());
    return className + " " + methodName + "(" + paramList + ")";
}

In both cases getName() is from java.lang.Class and returns the package name as a prefix to the class name. If you prefer a shorter version, you can use getSimpleName() to omit the package name.
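A quick illustration of the difference:

```java
public class ClassNameDemo {
    public static void main(String[] args) {
        // getName() includes the package; getSimpleName() does not
        System.out.println(String.class.getName());       // prints java.lang.String
        System.out.println(String.class.getSimpleName()); // prints String
    }
}
```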

The CollectionUtil.listToString() would go something like this:

public static final String listToString(List list) {
    StringBuilder sb = new StringBuilder();
    if (list != null) {
        for (int i = 0; i < list.size(); i++) {
            Object o = list.get(i);
            String s = o == null ? "<null>" : o.toString();
            sb.append(i == 0 ? s : ", " + s);
        }
    }
    return sb.toString();
}

public static final String listToString(Object[] array) {
    // Guard against a null array before delegating
    return array == null ? "" : listToString(Arrays.asList(array));
}
All very simple but handy things to tuck away in a library somewhere.

Friday, 13 March 2009

Simple Synchronizer Token with Spring MVC

The Problem

If you have used Struts 1.x, you'll probably be familiar with using the synchronizer token functionality provided through the saveToken() and isTokenValid() methods of the Action class. These prevent duplicate form submission, either as a result of double-clicking a submit button, or trying to submit a form from the browser history after using the back button.

Out of the box, Spring MVC doesn't have similar functionality, although it is being addressed with Spring Web Flow:

How to prevent user from double clicking the submit in a form using spring MVC AbstractTokenTransactionSynchronizer

So how could you use the synchronizer token pattern with Spring MVC if you aren't using Spring Web Flow?

Synchronizer Token

The basic idea of the synchronizer token pattern is that you keep a value in session scope that marks a point in the flow of the web application. As each form is rendered, it includes the value of the token from that point in time. On submission, the value that was embedded in the form is included in the request. The application can then compare the "historical" request token against the current session token. If the two are the same, processing continues and the session token is given a new value, effectively making the form out of date. If the two are different, it means that the form's token is lagging behind the current session token, i.e. the form has already been submitted.

So, in implementing the synchronizer token pattern, there are three components to deal with. Firstly, providing and managing the session token itself; secondly, having forms embed and submit the historical value of the token; thirdly, providing a mechanism for the application to check the request and session tokens and act accordingly.

1. Managing the Session Token

The way I choose to do this with Spring MVC is to borrow some ideas from the way that Struts 2 addresses the problem. I start off with a session listener to set up the token in session scope:
package mypackage;

import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class TokenListener implements HttpSessionListener {

    public void sessionCreated(HttpSessionEvent sessionEvent) {
        HttpSession session = sessionEvent.getSession();
        session.setAttribute(
            TokenFormController.getTokenKey()
        ,   TokenFormController.nextToken()
        );
    }

    public void sessionDestroyed(HttpSessionEvent sessionEvent) {
        // Nothing to do when the session is destroyed
    }
}


This listener has to be declared in web.xml of course:


    <listener>
        <listener-class>mypackage.TokenListener</listener-class>
    </listener>

We'll see the code for the TokenFormController in a moment. For the time being, note that the session listener is fired whenever the container creates a session and it initialises the synchronizer token with a generated value.

2. Embedding the Historical Token in the Form

When we want to protect a form from duplicate submission, we need to capture the value of the token, embed it in the form as it is rendered and have the historical value submitted along with the other form data. The obvious way to do this is with a hidden field in the form.

Rather than worry about exactly how to do this in each form, I use a simple tag:

<%@tag description="Synchronizer Token" import="mypackage.TokenFormController" %>
<input type="hidden"
    name="<%= TokenFormController.getTokenKey() %>"
    value="<%= session.getAttribute(TokenFormController.getTokenKey()) %>"/>

This is easily included in the form JSP (assuming the tag file above was saved as token.tag under /WEB-INF/tags):

<%@ taglib prefix="tags" tagdir="/WEB-INF/tags/" %>
<form:form action="myAction.action">
    <tags:token/>
    ...
</form:form>

This example is written as if the Spring MVC dispatcher servlet is mapped to *.action. The mapping itself doesn't matter (it's not a convention commonly used with Spring MVC) but it keeps things clear for this example.

3. The TokenFormController

I find the simplest and most transparent way of getting the token functionality into form controllers is to use a custom controller derived from the SimpleFormController hierarchy. This subclass provides token checking and routing to an "invalid token" view. In my example the controller also handles the generation of the next token value and defines the name of the token attribute. We've seen this in action in steps 1 & 2. You may prefer to factor this out into a separate class.

As I tend to use CancellableFormController for most of my input forms, I've created my TokenFormController as a subclass of that. Here is the complete code:
package mypackage;

import java.util.Random;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.springframework.validation.BindException;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.CancellableFormController;

public class TokenFormController extends CancellableFormController {

    private static final String TOKEN_KEY = "_synchronizerToken";
    private String invalidTokenView;

    protected ModelAndView onSubmit(
        HttpServletRequest request
    ,   HttpServletResponse response
    ,   Object command
    ,   BindException errors
    ) throws Exception {
        if (isTokenValid(request)) {
            return super.onSubmit(request, response, command, errors);
        }
        return new ModelAndView(invalidTokenView);
    }

    private synchronized boolean isTokenValid(HttpServletRequest request) {
        HttpSession session = request.getSession();
        String sessionToken = (String)session.getAttribute(getTokenKey());
        String requestToken = request.getParameter(getTokenKey());
        if (requestToken == null) {
            // The hidden field wasn't provided
            throw new TokenException("Missing synchronizer token in request");
        }
        if (sessionToken == null) {
            // The session has lost the token
            throw new TokenException("Missing synchronizer token in session");
        }
        if (sessionToken.equals(requestToken)) {
            // Accept the submission and increment the token so this form can't
            // be submitted again ...
            session.setAttribute(getTokenKey(), nextToken());
            return true;
        }
        return false;
    }

    public static String nextToken() {
        long seed = System.currentTimeMillis();
        Random r = new Random();
        return Long.toString(seed) + Long.toString(Math.abs(r.nextLong()));
    }

    public String getInvalidTokenView() {
        return invalidTokenView;
    }

    public void setInvalidTokenView(String invalidTokenView) {
        this.invalidTokenView = invalidTokenView;
    }

    public static String getTokenKey() {
        return TOKEN_KEY;
    }
}

It's all fairly straightforward if you are familiar with the way the SimpleFormController hierarchy fits together. The onSubmit() method ties into the standard controller flow - you just override the usual doSubmitAction(), formBackingObject(), and associated methods in your subclass to provide the controller functionality for your input form. You can pretty much forget about token processing. You'll need the TokenException unchecked exception class, but this is just a trivial subclass of RuntimeException.

The isTokenValid() method deals with the token checking. The nextToken() and getTokenKey() methods provide the next value and the name of the token respectively. Refer back to the session listener and tag to see how they make use of these.

The string attribute invalidTokenView, which is returned as the view name of the returned ModelAndView if the request and session tokens don't match, is injected using the dispatcher servlet xml file. Just the same as the way the cancelView property works with the standard CancellableFormController.

A typical configuration in the dispatcher servlet xml file would look something like this:

    <bean id="urlMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
        <property name="mappings">
            <props>
                <prop key="myAction.action">myActionController</prop>
            </props>
        </property>
    </bean>

    <bean id="myActionController" class="mypackage.MyActionController">
        <property name="useCacheControlHeader" value="false"/>
        <!-- Required for CancellableFormController -->
        <property name="cancelParamKey" value="cancel"/>
        <property name="cancelView" value="redirect:myCancel.action"/>
        <!-- Standard FormController properties -->
        <property name="formView" value="myInputForm.jsp"/>
        <property name="successView" value="redirect:mySuccess.action"/>
        <!-- Required for the TokenFormController -->
        <property name="invalidTokenView" value="invalidToken.jsp"/>
    </bean>

For clarity, I've omitted other properties I usually have in here, e.g. a service facade bean that provides access to the Model, and validator references. MyActionController is the subclass of TokenFormController that manages the input form.

That's about it. The only subtlety here, if you want the same back-button behaviour as you might expect from Struts 1.x, is the useCacheControlHeader property. Setting this to false (the default is true, inherited from AbstractFormController) prevents the browser from fetching a fresh copy of the input form as the user works back through history.

You'll probably want to do some tweaking if you want to use this in your applications, but hopefully that's enough to give you some ideas about how to implement the synchronizer token pattern with Spring MVC. I've found it quite a productive method.

Tuesday, 10 March 2009

Colour Schemes

I spend most of my time concentrating on the back-end of a web application and front-end design isn't one of my strong points. So anything that helps me to knock together a passable stylesheet and design for an application is welcome.

Enter the Colour Scheme Designer. Choose your basic colour from a colour wheel and let it build colour schemes based on mono, complementary or other models. To help you choose, it displays mock-up web pages in light and dark designs based on your chosen colours. It can even simulate various forms of colour-blindness. When you are happy, you can export the results as text, e.g. to paste as a handy reference in a comment in a CSS file, or as a GIMP palette.

Even if you don't follow the colour guidelines rigidly it's a good way of getting a start with a colour scheme.

Saturday, 7 March 2009

Composite Views with JSP


Tiles has long been my first consideration for assembling pages from components. I got started with Tiles while working on Struts 1.x projects and went Tiles crazy before settling down to a fairly standard header, menu, content, footer approach. Apart from separating component layout from content, the nice part about Tiles and Struts integration was being able to forward to a Tiles definition without having to have a real page that assembled the components.

Tiles is Apache Tiles 2.0 these days, divorced from the Struts 1.x distribution. There are some minor syntactical differences between Struts 1.x Tiles and Tiles 2.0. <tiles:put> becomes <tiles:putAttribute>, for example, but this is just a minor irritation.

I've used Tiles 2.0 very successfully with both Spring MVC and Struts 2 applications. Integration is very straightforward in both cases and you quickly forget that you are using Tiles.

In both cases you define the TilesListener (or the TilesServlet or the TilesFilter) in your web application deployment descriptor:

    <listener>
        <description>Tiles Listener</description>
        <listener-class>org.apache.tiles.web.startup.TilesListener</listener-class>
    </listener>

With Spring MVC, you define a view resolver for Tiles and you can then use a Tiles definition in the view name that you return from your ModelAndView.

<bean id="viewResolver" class="org.springframework.web.servlet.view.UrlBasedViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.tiles2.TilesView"/>
</bean>

With Struts 2, you define a new result type, optionally making it the default result type with the default="true" attribute. You can then use the Tiles definition in an action result.

    <result-type name="tiles" default="true" class="org.apache.struts2.views.tiles.TilesResult"/>

I've also tried Tiles 2.0 with JSF 1.1 but it didn't really play very nicely. It wasn't possible to specify a Tiles definition in a navigation rule and there was constant fiddling with <f:verbatim> tags in the tiles themselves to prevent content getting out of order.

I note that MyFaces has a non-standard extension in the form of JspTilesViewHandlerImpl, which allows forwarding to Tiles definitions.


Sitemesh looks pretty good for page decoration stuff like headers, footers, sidebars, etc. I like the way you can inject meta-data from the page header into the decorated page. I've not got round to delving much deeper than that.

There are some nice introductory tutorials for Sitemesh around if you want to explore it.

I was aware of issues with Sitemesh and JSF, so my current JSF/JPA project wasn't the time to try and explore it in more detail.

Prelude and Coda

For my current project, all I needed was a basic header with a CSS stylesheet reference and a standard footer. So I used the standard, but perhaps underused, mechanism provided for in the JSP 2.0 specification: a jsp-property-group in web.xml with include-prelude and include-coda entries (the fragment paths below are just examples):

    <jsp-property-group>
        <display-name>All Pages</display-name>
        <url-pattern>*.jsp</url-pattern>
        <include-prelude>/WEB-INF/jspf/header.jspf</include-prelude>
        <include-coda>/WEB-INF/jspf/footer.jspf</include-coda>
    </jsp-property-group>

I like this method for a simple layout or a prototype. Minimal setup. No additional libraries, servlets, filters or tag libraries to consider. But very quickly you can make your pages look consistent and respectable.

The prelude and coda have to follow the same rules as other included fragments, in particular that JSP and tag library start and end tags must always appear in the same document. This means, for example, that I couldn't open my <f:view> tag in the header and close it in the footer. It's OK to deal with HTML tags like <html> and <body> this way of course - there wouldn't be much point otherwise.

Bear in mind also that the include of the prelude and coda content is done when the JSP is first compiled, so a clean and build may be required after you add these directives to web.xml.

Friday, 6 March 2009

Entering the Blogosphere

So, here we go. My first tentative step into the world of blogging.

I've spent the last couple of weeks reviewing some JSF/JPA stuff in preparation for a training course that I will be running toward the end of March. At times it's been a frustrating exercise because of the lack of decent documentation. Fortunately bloggers who have been down the same road and who have had the generosity to write about it have played a major role in filling in the gaps. Thanks to you all.

I guess this is my way of putting something back into the pot.