Saturday, December 6, 2008

GXT Tree and Loading ... Alternative Design (Validated)

In my previous blog entry I proposed a design I could use to dynamically load the tree from several different modeled objects, and I have since implemented it successfully. One of the tricks was to ensure the stores were loaded prior to building the tree. To do this, I wrote a handler which loads each of the stores, listens for the completion of the loads, and then builds the tree from the model structures.
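
For those curious, the shape of that handler is roughly the following. This is a minimal sketch only: LoadableStore and its load(Runnable) callback are hypothetical stand-ins for the actual GXT store and listener APIs, which differ in the details.

   public class StoreLoadBarrier {

      public interface LoadableStore {
         void load(Runnable onLoadComplete);
      }

      private int pending;

      public void loadAll(final Runnable buildTree, LoadableStore... stores) {
         pending = stores.length;
         for (LoadableStore store : stores) {
            store.load(new Runnable() {
               public void run() {
                  // Build the tree only once every store has reported in. Since
                  // GWT client code is single-threaded, a simple counter is safe.
                  if (--pending == 0) {
                     buildTree.run();
                  }
               }
            });
         }
      }
   }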

YSlow and Cache-Headers

If you run YSlow on one of your web applications, you can obtain some really useful information about improving its performance. One common performance improvement is to set expiration headers on static resources (such as CSS, JS and image files). Setting this header helps preserve the file in the browser cache, preventing a reload of the resource on subsequent page refreshes. Web servers like Apache make this easy (with a configuration file change). However, if you are using Tomcat both as the Web server and the application container, the solution is relatively easy but requires some coding: you need to create a servlet Filter to add the Cache-Control header to the response, and then register your filtered types in your web.xml. A blog on this was posted by Byron Tymvios on jGuru here.
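
As a rough sketch, the filter itself only needs a few lines; the class name here is a placeholder of my own (not from the linked post), and the one-week max-age is an arbitrary choice.

   import java.io.IOException;

   import javax.servlet.Filter;
   import javax.servlet.FilterChain;
   import javax.servlet.FilterConfig;
   import javax.servlet.ServletException;
   import javax.servlet.ServletRequest;
   import javax.servlet.ServletResponse;
   import javax.servlet.http.HttpServletResponse;

   public class CacheControlFilter implements Filter {

      public void init(FilterConfig config) throws ServletException { }

      public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
         // Allow the browser (and proxies) to cache the resource for one week.
         ((HttpServletResponse) response).setHeader("Cache-Control", "public, max-age=604800");
         chain.doFilter(request, response);
      }

      public void destroy() { }
   }

The filter is then registered in web.xml with a filter-mapping per static type (eg. *.css, *.js, *.gif), which is the registration step described in the linked post.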

Tuesday, December 2, 2008

GXT Tree and Loading ... Alternative Design

This is a followup to my previous blog entry on GXT Tree and Loading from Restful URLs. While that approach worked, I had concerns over the number of AJAX requests as the tree is fully "expanded". I was also considering another possible issue:
  • The BaseModel objects representing the Country and Album objects are used elsewhere in the client (not just within my tree).
In this sense, the tree model is really a "view" into the real data. Having the tree be dynamically loaded prevents me from easily syncing up changes made to the real data elsewhere. I had already recognized that I likely needed a singleton store pattern for obtaining countries and albums. The only problem was that I was using these for lists and selection drop-downs, which are modeled as ListStores (which, coincidentally, are incompatible with TreeStores). The fact that I have data being manipulated in two places just seems rather dangerous to me. For this reason, the process I intend to use is still a DataProxy for the TreeStore, but one whose load method pulls values from the corresponding singleton CountryStore and AlbumStore. I would then provide some form of a Binder so that I can bind my TreeStore to the CountryStore (thus, when a Country is created, deleted or modified, the TreeStore will be notified of the change in the CountryStore and take appropriate action). This also means I might be able to use a lighter-weight "representation" of the Country in the tree (for example, just the "name" (for display) and the "id" (as the foreign key into the CountryStore)). In this case, my tree node would really boil down to a BaseTreeModel object with three attributes ("name", "id", "type"); a rough sketch of such a model follows the list below. I'd want the type so that I can:

  1. Know which singleton store to access (if I need the full model)
  2. Toggle/display the proper icon in the tree
  3. Handle the leaf/childless cases (such as for Country)
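
As a rough sketch, such a reference node might look like the following (ReferenceTreeModel is a name I am inventing here, not an existing class):

   public class ReferenceTreeModel extends BaseTreeModel {

      public ReferenceTreeModel(String name, long id, String type) {
         set("name", name);
         set("id", id);
         set("type", type); // eg. "collection", "album" or "country"
      }

      public String getName() {
         return (String) get("name");
      }

      public long getId() {
         return ((Number) get("id")).longValue();
      }

      public String getType() {
         return (String) get("type");
      }
   }
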
One challenge in this case is that I would need to ensure the AlbumStore, CountryStore and StampCollectionStore are loaded prior to initializing/loading the tree store. As well, I would need to make sure the JSON output from the Album and StampCollection Restful Web Services includes an array of the ids of their children. (It actually does this today - but I bring it up as a point that others might consider.) For example, an Album JSON string might look like the following:

{"total":1,"albums":[{"name":"Australia","id":154,"countries":[1104,733,3851,704]}]}
In the above example, the album for Australia contains references to the countries 1104, 733, etc. If the StampTreeDataProxy came across a parent BaseModel which was an Album reference, it could obtain the concrete Album from the AlbumStore to get the Country references and thus create an output list for the callback's onSuccess() method.
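
To make the idea concrete, the proxy's load( ) might boil down to something like this. This is a sketch only: AlbumStore.get( ), findById( ), getCountryIds( ) and the ReferenceTreeModel shown earlier are my planned APIs, not working code.

   public void load(DataReader<BaseModel, List<BaseModel>> reader,
         BaseModel parent, AsyncCallback<List<BaseModel>> callback) {
      List<BaseModel> children = new ArrayList<BaseModel>();
      if (parent instanceof ReferenceTreeModel
            && "album".equals(((ReferenceTreeModel) parent).getType())) {
         ReferenceTreeModel ref = (ReferenceTreeModel) parent;
         // Resolve the concrete Album from its singleton store (no AJAX call),
         // then expand its child country id references into lightweight nodes.
         Album album = AlbumStore.get().findById(ref.getId());
         for (Long countryId : album.getCountryIds()) {
            Country country = CountryStore.get().findById(countryId);
            children.add(new ReferenceTreeModel(country.getName(), countryId, "country"));
         }
      }
      callback.onSuccess(children);
   }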

While this approach will take me more time to implement, I think it will provide one major performance improvement: the tree building time is driven by three AJAX calls to obtain the concrete Country, Album and Stamp Collection data models, as opposed to "N" calls, one for each node in the tree. It would also give a consolidated view of the tree and allow management of the models to be controlled by their respective stores in a more uniform fashion.

Finally, using a ReferenceModel approach, I may not even need the DataProxy. I'd have to test this out, but it looks like I can simply use the TreeBinder to bind a Tree to a TreeModel, and the TreeModel can be built programmatically once the singleton data stores are created.

Monday, December 1, 2008

GXT Tree and Loading with REST

Anyone who has used GXT can tell you that it is very powerful, but examples (and the javadoc) are still few and hard to find. The examples that come with the SDK are great (see Ext GWT 1.2 Samples), but if you need something beyond these, you typically have to figure it out on your own. I hope to write a series of blog articles on some of the designs I have worked out during the conversion of my Stamp application from Swing to GXT. (Some of my readers may remember I was experimenting with EXT-JS; more on this in a future blog.)
Getting a tree to work in GXT is not too hard. In fact, the demos do a pretty good job of illustrating this. However, my tree was a little more complex than the trees shown. First, some features of my tree:
  1. Has three separate types of objects (Stamp Collections, Albums and Countries)
  2. Stamp Collections and Albums may contain children (Albums and Countries respectively)
  3. A Restful WebService is used to return the content of a particular item. (for example to get the albums of a stamp collection with id of "5" the URI might look like http://hostname/web-app/resources/collections/albums/5).

My original approach was to make a request on the selected tree node when an expansion occurred. However, when I used filtering, the invocation of the filter would actually load all the nodes (which was very slow on the first filter). You can turn this off, but that defeats the point of the filter. It also didn't seem like the right solution, since at least in the examples the trees were being loaded via an RPC call. So the problem really boiled down to making the tree issue an AJAX call to the Restful Web Service upon requesting the expansion of an unloaded node. First, I should mention that trees in GXT actually act on BaseModel objects. One point of interest here is that BaseModel objects are not TreeModels (in this way GXT does work differently than Swing-based trees).
The approach I took was to create a customized DataProxy and pass this proxy to the store on creation. Thus, as nodes are loaded/expanded, the data proxy will be used to obtain the node's children. My proxy essentially implements the load( ) method and, for convenience, adds another method, getRequestBuilder( ). I send requests using the output of the request builder to the Restful Web Service, process the results with a JSONReader, and pass the load result to the callback's onSuccess( ) method. So let's look at some code:
public class StampTreeDataProxy implements DataProxy<BaseModel, List<BaseModel>> {

   public StampTreeDataProxy( ) {
      super();
   }

   /**
    * Generate the request builder for the object. If it is the root node (ie. null)
    * then request the stamp collections.  Else, get the appropriate Albums or Countries.
    * For any other type of object, since we do not know how to build a Restful Web
    * URI, simply return a null value.
    *
    * @param item   The current (parent) item.
    */
   protected RequestBuilder getRequestBuilder( BaseModel item ) {
      RequestBuilder builder = null;
      if( item == null ) {
         builder = HttpUtils.getRestfulRequest(StampModelTypeHelper.getResourceName(StampCollection.class));
      } else if( item instanceof NamedBaseTreeModel ) {
         String pathInfo = ((item instanceof Album) ?
               StampModelTypeHelper.getResourceName(Country.class) :
               StampModelTypeHelper.getResourceName(Album.class)) + "/" + ((NamedBaseTreeModel)item).getId();
         builder = HttpUtils.getRestfulRequest(
               StampModelTypeHelper.getResourceName(item.getClass()), pathInfo, HttpMethod.GET );
      }
      return builder;
   }

   public void load(DataReader<BaseModel, List<BaseModel>> reader,
         final BaseModel parent, final AsyncCallback<List<BaseModel>> callback) {

      RequestBuilder builder = getRequestBuilder( parent );
      // If the builder is null, then we do not have a Restful builder we can handle.
      if( builder == null ) {
         callback.onSuccess(new ArrayList<BaseModel>());
         return;
      }
      builder.setCallback(new RequestCallback() {

         public void onError(Request request, Throwable exception) {
            GWT.log("error", exception);
         }

         @SuppressWarnings("unchecked")
         public void onResponseReceived(Request request, Response response) {
            if( response.getStatusCode() == Response.SC_OK ) {
               if( HttpUtils.isJsonMimeType(response.getHeader(HttpUtils.HEADER_CONTENT_TYPE)) ) {
                  JSONValue json = JSONParser.parse(response.getText());
                  Class _c = StampCollection.class;
                  if( parent instanceof StampCollection || parent instanceof Album ) {
                     _c = ( parent instanceof StampCollection ) ? Album.class : Country.class;
                  }
                  // Modified JsonReader which can read structures of ModelType definitions.
                  // A regular JsonReader would likely work here, however it would create
                  // instances of BaseModel objects instead of my specific types
                  // (which have nice accessor methods).
                  StructuredJsonReader<BaseListLoadConfig> reader = new
                        StructuredJsonReader<BaseListLoadConfig>(new StampModelTypeHelper(), _c );
                  ListLoadResult lr = reader.read(new BaseListLoadConfig(), json.toString());
                  callback.onSuccess(lr.getData());
               }
            } else {
               GWT.log("received a non-OK status code: " + response.getStatusCode(), null);
            }
         }
      });
      try {
         builder.send();
      } catch (RequestException e) {
         e.printStackTrace();
      }
   }
}

Using this DataProxy, I can create an instance and pass it to the store I am creating for the tree:
public class BrowseStore extends TreeStore<BaseModel> {
  
   public BrowseStore( ) {
      super(new BaseTreeLoader<BaseModel>( new StampTreeDataProxy()));
   }
 
   // Simplified method to create the tree content and load it 
   public void load( ) {
      removeAll();
      getLoader().load();
   }
  
}

In this design, when load() is called (either on the loader of the store or on the store itself), the DataProxy will be called for the root element. Since I am ignoring the reader (the argument to the load( ) method), I create the new StructuredJsonReader. As stated above, the reason for this is threefold:
  1. It handles structured ModelTypes (ie. a ModelType with a field representing another ModelType).
  2. It delegates the creation of the BaseModel to another class (in this case the StampModelTypeHelper), which will create the appropriate instance of the modeled object (eg. Country).
  3. Finally, it uses the class provided to create the proper model type definition.

There is one negative to this solution: it currently issues a single request per node in the tree (eg. given an Album, get all the countries). I plan on redesigning this to be a little more efficient; however, given the mixture of BaseModel types, in order to get an efficient structure downloaded I may need to forgo the clean model structure in preference for a more efficient algorithm.

Friday, October 3, 2008

ExtJS Action.submit response

When an ExtJS form is submitted, the successful completion of the asynchronous call will invoke the function mapped to the success config value. This function takes two parameters, form and action. If you are returning JSON from the call, you can access it directly from the action parameter.

var _form = // ... get your form (eg. formPanel.form)
_form.submit({
   scope: this,
   waitMsg: 'Doing something',
   url: someUrl,
   method: someMethod,
   success: function(form, action) {
      Ext.Msg.alert('Success?', action.result.success);
      Ext.Msg.alert('Data returned.', action.result.data.key1);
   }
});

The return value should look something like the following:

{"success":true,"data":{"key1":"key 1 result value"}}

Collections and JAX-RS

As I had previously reported, both RestEasy and Jersey suffered from the inability to return a collection of JAXB-marshalled objects. I was thinking on this a little, and while I could use the JAXBCollection fix in Jersey, it seemed a bit of a hack until it goes into the 1.0 release. Instead (for now), I have written a few wrapper classes which are themselves XmlRootElements, thus supporting marshalling with JAXB. This actually worked and is a reasonable workaround (since I really only need to return collections for four to five persistent objects). These wrapper objects simply have a collection/list of the persistent objects, with the XmlElement set to the appropriate name for the child elements:

package org.javad.stamp.model.collections;

import java.util.ArrayList;
import java.util.List;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlTransient;

import org.javad.stamp.model.Country;

@XmlRootElement(name="countryList")
public class CountryList {

   @XmlTransient
   public List<Country> countries = new ArrayList<Country>();

   public CountryList( ) {
   }

   public void setCountries( List<Country> c ) {
      countries = c;
   }

   @XmlElement(name="country")
   public List<Country> getCountries() {
      return countries;
   }

   public void addCountry( Country c ) {
      countries.add(c);
   }
}
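
With a wrapper like this in place, a resource method can return it directly and JAXB has a root element to marshal. A minimal sketch follows; the resource path and the lookup method are my own plumbing (the stand-in below just returns an empty list), and @ProduceMime is the pre-1.0 annotation I am using elsewhere.

   import java.util.Collections;
   import java.util.List;

   import javax.ws.rs.GET;
   import javax.ws.rs.Path;
   import javax.ws.rs.ProduceMime;

   import org.javad.stamp.model.Country;
   import org.javad.stamp.model.collections.CountryList;

   @Path("/countries")
   public class CountryResource {

      @GET
      @ProduceMime("application/xml")
      public CountryList getCountries() {
         // Wrap the collection so JAXB has an @XmlRootElement to marshal.
         CountryList list = new CountryList();
         list.setCountries(findAllCountries());
         return list;
      }

      private List<Country> findAllCountries() {
         // Stand-in for the real persistence query.
         return Collections.emptyList();
      }
   }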

Deploying RestEasy in Tomcat 6.0

Since I was successful deploying Jersey to Tomcat 6.0, I decided to check out RestEasy from JBoss. One issue I ran into with Jersey was its inability to XML-serialize a collection of objects. This is bug ID 18, and it is fixed for the 1.0 release (however, a beta of this does not appear to be available yet), but I decided to see if RestEasy (a non-reference implementation) has the same issue.

UPDATE: I have discovered from some testing that neither XML nor JSON is currently supported as a return type for a Collection/List of items. An issue has been filed against RestEasy for JSON (RESTEASY-134) with an RC1 delivery; however, I did not see one for XML.

Here are the steps I used to get this working:

  1. Download RestEasy from the JBoss website: http://www.jboss.org/resteasy/
  2. Follow the steps 2 to 4 of my previous blog Deploying Jersey in Tomcat 6.0
  3. Download and create an Eclipse library for Javassist. Include this in your WEB project (or provide the javassist.jar to Tomcat)
  4. Create a new RestEasy library in Eclipse which contains the contents of the RestEasy installation's lib location. You can probably skip a few of the jars if you do not need all the functionality (such as jyaml.jar and possibly mail.jar)
  5. Modify the web.xml for your project to include the following (this comes right from the sample web.xml in the RestEasy install):

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
             id="WebApp_ID" version="2.5">
       <display-name>TestWeb</display-name>
       <welcome-file-list>
          <welcome-file>index.html</welcome-file>
       </welcome-file-list>
       <context-param>
          <param-name>resteasy.scan</param-name>
          <param-value>true</param-value>
       </context-param>
       <!-- set this if you map the Resteasy servlet to something other than /*
       <context-param>
          <param-name>resteasy.servlet.mapping.prefix</param-name>
          <param-value>/resteasy</param-value>
       </context-param>
       -->
       <!-- if you are using Spring, Seam or EJB as your component model, remove the ResourceMethodSecurityInterceptor -->
       <context-param>
          <param-name>resteasy.resource.method-interceptors</param-name>
          <param-value>
             org.jboss.resteasy.core.ResourceMethodSecurityInterceptor
          </param-value>
       </context-param>
       <listener>
          <listener-class>org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap</listener-class>
       </listener>
       <servlet>
          <servlet-name>Resteasy</servlet-name>
          <servlet-class>org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher</servlet-class>
       </servlet>
       <servlet-mapping>
          <servlet-name>Resteasy</servlet-name>
          <url-pattern>/*</url-pattern>
       </servlet-mapping>
    </web-app>
  6. Start your tomcat in Eclipse and you should be good to go!

Thursday, October 2, 2008

Deploying Jersey in Tomcat 6.0

Jersey is the reference implementation of JAX-RS (JSR-311) for building RESTful Web services, which is nearing approval with the JSR committees. While there are many wikis and articles on using Jersey with Netbeans (the Netbeans 6.1 EE package includes everything you need for JSR-311), there was very little information on using Tomcat 6.0 with Jersey. After piecing together several blogs, I was able to get a simple resource working in Tomcat using JSR-311.

The following are the steps I used to get this working:

  1. Download and unjar the Jersey distribution (I used 0.8) from https://jersey.dev.java.net/ to a location on your system (let's call it JERSEY_HOME for this article).
  2. Download and install the Java-WS distribution (I used 2.1) from https://jax-ws.dev.java.net/ (let's call it JAXWS_HOME for this article).
  3. Rama Pulavarthi wrote a blog (in fact, the key for me) on configuring Tomcat to reference the Jersey distribution jars. To summarize, in TOMCAT_HOME/conf/catalina.properties modify the shared.loader property entry to point to your JAXWS_HOME/lib/*.jar. Mine looks like this:

    shared.loader=C:/dev/jaxws-ri/lib/*.jar
  4. Create a new Dynamic Web Project in Eclipse using Tomcat 6.0 as the application server.
  5. Create a new library in which you add the following JARs:
    1. JERSEY_HOME/lib/asm-3.1.jar
    2. JERSEY_HOME/lib/jersey.jar
    3. JERSEY_HOME/lib/jsr311-api.jar
  6. Next, modify the web.xml to include the adaptor for Jersey. Most of the blogs refer to a different class than what appears in 0.8. I am not certain which is the right class; only the following works for me (and the documented one is not available through the downloads!):

    <servlet>
       <servlet-name>ServletAdaptor</servlet-name>
       <servlet-class>com.sun.jersey.impl.container.servlet.ServletAdaptor</servlet-class>
       <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
       <servlet-name>ServletAdaptor</servlet-name>
       <url-pattern>/resources/*</url-pattern>
    </servlet-mapping>
    <session-config>
       <session-timeout>30</session-timeout>
    </session-config>
  7. If you start Tomcat, you will see the following error:

    com.sun.jersey.api.container.ContainerException: The ResourceConfig instance does not contain any root resource classes.

    This error is due to not providing any root RESTful resources.

  8. Create a new class and include the following:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.ProduceMime;

    @Path("/test")
    public class TestResource {

       @GET
       @ProduceMime("text/html")
       public String getMessage( ) {
          return "hello";
       }
    }
  9. Start and test the application with the following url:

    http://localhost:8080/appName/resources/test

    and you should see a hello in your web-browser.

Making your servlet application URLs more Restful

If you are a developer who likes to work with Java, you may have come to know REST and Restful Web Services. The real advantage of REST is its ability to get rid of some of the "webspeak" and make your URLs a little more platform independent. As a client developer, it is much nicer to make a request to http://somehost/Application/stamp/5533 to retrieve stamp 5533 than the traditional http://somehost/Application/servlet/StampServlet?id=5533. Theoretically, you could rewrite your application services layer without modifying the client. If you are not familiar with REST and Restful Web Services, a good dissertation on the subject by Roger L. Costello can be found here.

So all of this is nice, but how can we apply it to a servlet-based Web Application? There are projects out there such as the Java Restlet API, and while I think this is good, it does mean having an essentially non-servlet-compatible solution (since the Restlet takes the place of a servlet). Instead, you can take advantage of some of the REST themes in servlet-based Web Applications by following these steps:

  • We want to write a Rest-like servlet for accessing Stamps. We have a servlet of the class StampServlet which is mapped in the web.xml of our servlet container as stamp. This would look like the following:
    <servlet>
       <description>Servlet for processing Stamp Restful requests</description>
       <servlet-name>stamp</servlet-name>
       <servlet-class>org.javad.stamp.servlet.StampServlet</servlet-class>
    </servlet>
    <servlet-mapping>
       <servlet-name>stamp</servlet-name>
       <url-pattern>/stamp/*</url-pattern>
    </servlet-mapping>

    The key here is the /* in the url-pattern of the servlet-mapping. This means any URI received that starts with "stamp" will be sent to the StampServlet, and we can use any of the information on the URI chain to help direct the servlet to process the request in a Restful way. For example, if we are retrieving the details of a stamp, our URL would look like:
    http://hostname/StampApp/stamp/563

    Using the GET method, where the ID of the stamp in this case is 563. If this were a modify operation, the URL would look similar, only a method of PUT would be used.
  • REST uses many of the less-popular methods of HTTP, in particular the PUT and DELETE methods. Fortunately, the HttpServlet class does implement these with the doPut(HttpServletRequest,HttpServletResponse) and doDelete(HttpServletRequest,HttpServletResponse) methods. If you are not comfortable using these methods (for example, the HttpServlet Javadoc does mention PUT for use in placing files on the server, like FTP), we can make our URLs a little less Restful, but still Rest-like, by using the POST method and inserting a keyword after the ID of the object in question. I should note that for some client technologies, like GWT, using protocols other than POST or GET is difficult. In GWT we can attach a header value (X-HTTP-Method-Override) to indicate we'd like to use DELETE, but the request is still sent to the POST method receiver initially. In this case, using a POST method URL along with the method override header, we can still have a Restful URL that would look like the following:
    <form action="http://hostname/StampApp/stamp/563" method="POST"> </form>

    More practical would be its use in an AJAX application using the XMLHttpRequest in Javascript (since we can override the X-HTTP-Method-Override header):
    var req = new XMLHttpRequest();
    req.open( "POST", "http://hostname/StampApp/stamp/563" );
    req.setRequestHeader("X-HTTP-Method-Override","DELETE");
    req.send();
  • URL construction is great, but how do I make use of this in my servlet's doPost( )? Since we mapped the servlet with a /*, anything to the right of stamp in the URI is treated as part of the path info. Therefore, within your doPost() method you can request the path info and take the appropriate action, similar to the following:
    protected void doPost(HttpServletRequest request, HttpServletResponse response) {
       String pathInfo = request.getPathInfo();
       String method = request.getHeader("X-HTTP-Method-Override");
       if( pathInfo == null || pathInfo.isEmpty() ) {
          createNewStamp( ); // purely a POST
       } else if( method != null ) {
          // pathInfo starts with a "/", so strip it before splitting.
          long id = Long.parseLong( pathInfo.substring(1).split("/")[0] );
          if( "DELETE".equalsIgnoreCase( method ) ) {
             doDelete( request, response ); // or call deleteStamp(id);
          } else if( "PUT".equalsIgnoreCase( method ) ) {
             doPut( request, response ); // or call modifyStamp(id);
          } else {
             // if there are further elements in the pathInfo, call the appropriate code...
          }
       } else {
          log("POST method called for stamp details without Method Override header. Use GET method to retrieve stamp details or specify a Method Override header.");
       }
    }

While this is not perfect, it is certainly easier to program a client against a URI like stamp/563 than the traditional stamp?id=563&action=DELETE. In the above example, the code for servicing an action like delete likely lives in a dedicated method, so instead of trying to call doDelete( ), a call to deleteStamp(id) would probably be more appropriate. Using this technique allows you to support both methods of processing your objects in a consistent way, while being flexible in the client technology you support. While it does muck up your doPost() method a little, this is a minor tradeoff for more readable URIs.

I should also mention that additional values in the pathInfo are used in Rest to indicate additional actions to take place against that object. For example:

http://hostname/StampApp/stamp/563/catalogues

With a method of GET, this would refer to a request to retrieve the catalogues for the stamp with the ID 563. In this situation, having a controller (such as that provided by a Restful application framework) would definitely help in forwarding these to the correct service. Depending on the complexity of your application, you may be able to handle this internally within your servlet, such as in the else case mentioned above; however, for more than a few actions this can become complex, especially if catalogues (using the example above) can exist outside the context of a stamp. That would mean you'd have to provide a servlet or some application that can process catalogues, and find a way to tie the stamp servlet to the catalogue servlet. The worst case I can think of in my application would be trying to get all of the stamps for a country, filtered by an album in a stamp collection. This might look like:
http://hostname/StampApp/collection/56/album/25/country/76/stamps

As you can see, this is not quite as readable. Since I can also get stamps by album or simply by collection, I might be more prone to simply request the stamps for collection 56 and take the album/country ids as queryString data:

http://hostname/StampApp/collection/56/stamps?album=25&country=76

Not pure Rest, but Rest-like. If I truly wanted to retain the Restful URI, I would likely write a controller which returned stamps, and if the pathInfo of the GET request for the collection servlet contained stamps in it, I would call the controller directly, passing the pathInfo, and allow it to decide how to filter the URI. (In my case, I have a StampFilter object which accepts the three objects and calls the appropriate JPA Query based on the filter setup, so this would be quite easy for me to do.)

Monday, September 29, 2008

Remote Assistance - Not always able to connect

I ran into a situation where I could not connect to my father's PC using Remote Assistance. Given that he lives about 3000 miles away, it is essential that I am able to connect with Remote Assistance to help him with miscellaneous activities. I discovered that the Remote Assistance files are really nothing more than XML, and upon editing the XML I realized there was a series of invalid IPs in the UPLOADDATA block's RPTICKET attribute. Upon removing the invalid IPs, I was able to immediately connect. I am not sure where they all came from (some were the obvious local IPs behind his firewall).

ExtJS ComboBox - Act like HTML Select

At first I struggled to get my ExtJS ComboBoxes to act like a straight HTML select. In my application, I have a grid which loads its data into a store via AJAX. Later, create or edit dialogs display which contain a combobox of items represented by the store. Rather than load these again from the server, I wanted to reuse the data from the store. The problem I had was that selecting a value in the dialog would filter the grid using the same filter; it appeared that ExtJS was applying the filter on the store rather than on the view components. My original solution involved creating an array of items after the store had been loaded and passing this array to a combobox constructor. After further research, I found I was mistaken, and the following needs to be done to make an ExtJS ComboBox filter separately, like an HTML select:

  1. Provide a hiddenId option to the ComboBox constructor.
  2. Set the triggerAction to 'all'. Doing so essentially clears the filter and will treat the combobox like it is querying from the store separately.
  3. Ensure the mode is set to 'local'.

Here is an example of a configuration:

org.javad.CatalogueHelper = {
   getCatalogueSelection : function(id, options) {
      options = options || {}; // initialize if undefined
      var catalogues = new Ext.form.ComboBox(Ext.applyIf(options, {
         id: id,                          // Passed in id to create composite fields.
         fieldLabel: 'Catalogue',
         allowBlank: false,
         hiddenName: id + '-selected',
         name: id,
         mode: 'local',
         editable: false,
         valueField: 'id',                // from store, what to store in FORM
         displayField: 'displayString',   // Value to display in Select
         store: org.javad.CatalogueStore,
         hiddenId: id + '-value',
         width: 200,
         triggerAction: 'all',            // key to clear filter
         selectOnFocus: true
      }));
      return catalogues;
   }
};

Tuesday, August 26, 2008

JPA and Optional Associations

I was writing some unit tests to test a few of my queries and noticed they were failing to return results (even though the resultant rows were clearly in the database). Looking into my changes, the only difference was that I had recently added the EclipseLink eclipselink.join-fetch hint for two one-to-many relationships. From looking at the resultant SQL, it became clear what the problem was. By adding these join-fetch statements, my query became more efficient (since I didn't have subsequent row-by-row lazy fetches later), but it also became invalid in some circumstances. In particular, the fetch join added some AND statements to the WHERE clause whereby the foreign key id was equal to the primary table id. However, if the many-side relationship is empty, the statement will not return any rows. I think the way around this would be to box the AND statement in a compound OR statement with an exists condition. To my knowledge, the JPA implementations do not currently support this, and I am going to research whether this is achievable by manually modifying the join-fetch statement.
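
For reference, the hint in question is applied roughly like this; the entity and relationship names here are placeholders, not my actual model:

   import java.util.List;

   import javax.persistence.EntityManager;
   import javax.persistence.Query;

   public class JoinFetchExample {

      public List findAlbums(EntityManager em) {
         Query query = em.createQuery("SELECT a FROM Album a");
         // The hint folds the child rows into the same SQL statement. If an album
         // has no stamps, the generated join condition eliminates that album from
         // the result set entirely, which is the problem described above.
         query.setHint("eclipselink.join-fetch", "a.stamps");
         return query.getResultList();
      }
   }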

Friday, August 22, 2008

Refactoring Followup - Strange Side Effect

While I was quick to point out that the refactoring of my services ended up being a little painful on the Swing heavy-client side, when it came time to refactor my web-services client (currently a series of dedicated servlets used by my mobile devices), it was a surprisingly clean update. Why was this? What lesson can I pull from it? The main one was that the web-services code was very specific in its function, which was mainly to parse the query parameters and marshall the appropriate calls to the service to obtain the results. Since I already had tests around the service calls, the main work I had to do in the refactor was to re-insert a lot of the JAXB annotations to produce the appropriate XML output.

As an aside, while looking at MOXy with EclipseLink, I noticed their tools to assist in providing a true WebServices implementation. This might be an area to look into adopting (in providing a true JAX-WS implementation).

Friday, August 15, 2008

Refactoring: Biting off more than you can chew?

So, as most can assert from reading my blog, I have developed a stamp tracking software "suite" of tools. The core of this was based on a JPA implementation which was recently moved to EclipseLink. With some of the new tools available in Eclipse for use with JPA (in particular Dali), I decided to reorganize my services and try to streamline some of my code (using MappedSuperclasses as abstract parents, for example). I focussed on my StampModel module, which had pretty decent unit test coverage of around 80% (some of the code not covered includes error handling and a lot of my upgrade code) and was also where all of my entities existed. I even created a Bugzilla entry for this work so that I could track my progress. I estimated it would take me no more than 12 hours. Well, 30+ hours later, I have finally repackaged, rewritten or revamped the module (and database tables). For the first time in nearly a week I was able to open my Swing-based editor last night and manage my collection. I still have some outstanding issues, but my refactoring is pretty much complete. There are a couple of lessons I took away from this.
  1. Never assume how long it will take. Invariably something will come up (and I have to admit the Olympics coverage may have pulled my attention away from Eclipse on occasion).
  2. When I started refactoring, I relied on my unit tests and decided to make updates to the Swing client after the refactoring was completed. This was a mistake, as I often had to think back to what the update was rather than making the change immediately (I am thinking along the lines of repackaging, changing initialization code, etc.). Overall these changes tended to be pretty simple (change package X to package Y), but occasionally a DAO that I did not particularly like in the previous module was changed. On one occasion, the new DAO did not really fit and I had to rewrite some controller code (but at least it is better now).
  3. Part of my refactoring was to move my entities from annotations to being orm.xml based. I should have done this first, without changing the data model. Changing the database schema along with moving the entities to orm.xml created a lot of thrashing to work through.

Tuesday, August 5, 2008

Certifications even for the experienced?

Those of you who know me know that I sometimes shun the world of academia. Not that I do not think education is useful; I do. I am the current holder of two bachelor degrees and numerous certifications, and I can honestly say they have proven useful in my career. But too often I see a candidate interviewing at my company who has years of academic training (multiple bachelor degrees, masters degrees, certifications up the wazoo), but who cannot solve a simple puzzle put in front of them, or who displays a complete lack of communication skills. After I was Java certified in 2003, I decided that I would take additional Java certifications to advance my knowledge. To date this has not happened, largely because my life is considerably busier and I much prefer to tinker and explore technologies than read the official way of doing something. Attending conferences has also been a pleasure to increase my awareness of new technologies.

Recently, while indisposed, I was flipping through a copy of the Web Component Developer prep book put out by Manning some years ago. It was a decent book, and as I flipped through the pages I found myself reading up on the technologies that I use on a daily basis, as well as advise my teams in their use. In general, despite the maturity of many Web Component pieces (such as Servlets, TAGs and EL), I feel many developers really do not understand why they are using them, or when to use one technology (for example, TAGs) over another (such as Servlets). I proposed to the team that reading a book on such subject matter might prove useful. As a technology leader, I therefore felt it only appropriate that I study and become a Sun Certified Web Component Developer.

So this leads me to the conclusion that I need to engage in the duties of reading a book (for this current challenge I have chosen Head First Servlets & JSP from O'Reilly, covering the 2nd edition), preparing myself, and actually taking the exam. Based on my study habits, the best motivator I can think of is to schedule the exam now; that way I have to be prepared prior to the actual "writing" of the exam.

Sunday, July 27, 2008

Automatically recording Create/Update Timestamps with JPA

Now that I am using Bugzilla to track various bugs and enhancements for my projects, I finally got around to addressing the issue "Provide the ability to record the create/update timestamp" for my stamp objects. This seems simple enough, and most database applications record this information; however, by default this is not something that is automated by the JPA frameworks (unlike the primary key with the @Id annotation). Let us first examine the ways we could accomplish this:
  1. Provide a database trigger to try and insert the timestamps automatically.
  2. Manually set them (or have each persistent service set them) before performing a persist() operation with an EntityManager.
  3. Use aspects to dynamically insert the timestamp.
  4. Provide an EntityListener which automatically sets the creation/modification timestamp at persist time

Obviously, the first solution is very database specific and is not really tied to the JPA code. The second solution is likely to be error prone and easily missed. The final solution is the best way to handle this. Daniel Pfeifer provides an excellent walkthrough of this technique in his blog here. I have a few comments on it. First off, the concept of an entity listener does not follow the normal "implement this interface" convention. An entity listener is any POJO class which contains one or more methods in the format:
public void someMethod( Object obj )


It should be noted that there are javax.persistence annotations for each of the JPA lifecycle states. In the case of create timestamps, annotating a method with @PrePersist will allow it to provide the creation timestamp. For the modify timestamp, annotating a method with @PreUpdate will have it called when persisting an entity which was previously persisted. The entity listener can be registered either in the orm.xml file (as described by Daniel Pfeifer) or on the Entity class itself using the annotation @EntityListeners( class ... ). Personally, I prefer this technique, as it allows me to programmatically tie a class to its behavior (such as storing timestamps on create/update) without an external configuration. Using an external configuration across several modules and unit tests quickly becomes muddled, in that you forget to update the file under test, etc. It also increases the developer's awareness of the association and, to be candid, it is unlikely that I will really want to swap out the persistence timestamp handling at package time with a different solution. Since the annotation is applied to an abstract @MappedSuperclass-annotated class, any implementing classes will automatically inherit this behavior. I might change my stance on this approach in the future, but for now this seems to be the right way to approach it.
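
As an illustration, a minimal listener might look like the following. The Timestamped interface and the setter names are my own sketch (Daniel's post covers the full technique):

   import java.util.Date;

   import javax.persistence.PrePersist;
   import javax.persistence.PreUpdate;

   public class TimestampListener {

      /** Hypothetical interface implemented by my @MappedSuperclass parent. */
      public interface Timestamped {
         void setCreateTimestamp(Date date);
         void setModifyTimestamp(Date date);
      }

      @PrePersist
      public void onCreate(Object entity) {
         if (entity instanceof Timestamped) {
            ((Timestamped) entity).setCreateTimestamp(new Date());
         }
      }

      @PreUpdate
      public void onUpdate(Object entity) {
         if (entity instanceof Timestamped) {
            ((Timestamped) entity).setModifyTimestamp(new Date());
         }
      }
   }

The abstract @MappedSuperclass parent then registers it with @EntityListeners(TimestampListener.class), and every concrete entity inherits the behavior.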

Subversion/Bugzilla and Eclipse

I wanted to get my newly installed Bugzilla system configured to work with Eclipse and Subversion. I wanted to be able to reference my Bug IDs on Subversion check-in and be able to navigate to them in Eclipse. Mark Phippard did a fantastic job discussing this in his blog here.

Since I was connecting to a Bugzilla database to which I am allowing anonymous view access, the bugtraq:url property I used was http://<hostname>/bugzilla/show_bug.cgi?id=%BUGID%. Doing so meant that when I check in my files and provide a Bug ID, the Subversion history will have a record of it, which is provided as a link to the information page of the bug in Eclipse. Pretty neat stuff!

Sunday, July 13, 2008

HSQLDB/EclipseLink Bug Filed

Well, I got around to filing the uniqueness column constraint issue for HSQLDB and EclipseLink. The Bug ID is 240618.

EclipseLink Startup Lag

I took a few minutes to look into the startup timing differences between EclipseLink and TopLink Essentials. It looks like with EclipseLink they have changed the way the login information is handled for the decryption of the password. Previously, in TopLink Essentials, the password was decrypted through the SecurableObjectHolder, which forced the JCEEncryptor to be initialized only upon creating the new instance. In EclipseLink, rather than initializing on first use, they are pre-initializing the JCEEncryptor through the SecurableObjectHolder and then decrypting the password. Another difference between the EclipseLink and TopLink Essentials encryptors is that with EclipseLink they are creating a separate cipher for encryption and decryption. Having looked at the code, this makes sense from a performance-scaling perspective: having separate encryptors/decryptors means that you do not need to reinitialize them for each encryption or decryption. Of course, on startup that means the instantiation of not one cipher (for decrypting the password) but two, which accounts for about a 4000 ms* difference in the profile runs. This, interestingly enough, is also the difference in application startup between running EclipseLink and TopLink Essentials.

* Since I am profiling in Netbeans with All Classes, the performance numbers themselves are quite poor. What is more interesting is the ~18% performance degradation on startup this causes.

Saturday, July 12, 2008

EclipseLink - Initial Impressions

As was suggested by Doug Clarke, I took a look at EclipseLink. I was actually caught a little off guard on the whole subject, as with the birth of my daughter I had pretty much gone under a rock for the past few months. EclipseLink is yet another JPA provider, but there are some interesting aspects to it:
  • It was chosen by Sun Microsystems to serve as the reference implementation for Java Persistence 2.0.
  • The JPA development community has essentially switched from TopLink JPA right over to EclipseLink.
  • EclipseLink is bringing further capabilities with support for MOXy (JAXB), SDO and OSGI to name a few.

There are several good articles on the tool itself from the EclipseLink home page.

I downloaded the most recent release from the Eclipse website and set about setting up my applications to use it. Generally speaking, I had tried to make only API calls to the javax.persistence APIs, thus I had very few package dependency changes. Since EclipseLink is based on TopLink, virtually all the classes from TopLink appear in EclipseLink under a different naming convention; basically, oracle.toplink.essentials became org.eclipse.persistence. Since I have a nice level of unit testing around my services, I was able to quickly identify the places that were failing. In particular, I had some Query Hints that I was applying if the provider was the TopLink provider; I had to replicate this functionality for the EclipseLink code paths (fortunately it was only in a few places). I think the biggest impact was on the Upgrade Tooling, in which the SQLSchemaUpdateUtility had package dependencies on TopLink Essentials. Instead, I rolled this into a StatementExecutor interface/implementation and used reflection to call the APIs, doing the same for EclipseLink when I detect it as the persistence provider.
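
Roughly, the executor idea looks like this. The reflective call is illustrative only: the session accessor and method names differ between providers and versions, so treat "executeNonSelectingSQL" as a placeholder rather than the exact API.

   import java.lang.reflect.Method;

   public interface StatementExecutor {

      void execute(String sql) throws Exception;

      /** Reflective implementation: no compile-time dependency on the provider. */
      public static class Reflective implements StatementExecutor {

         private final Object session; // the provider's native session object

         public Reflective(Object session) {
            this.session = session;
         }

         public void execute(String sql) throws Exception {
            Method m = session.getClass().getMethod("executeNonSelectingSQL", String.class);
            m.invoke(session, sql);
         }
      }
   }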

One thing I have noticed is that EclipseLink takes significantly longer to "initialize" than TopLink Essentials. This is most noticeable with the unit tests and HSQLDB, where the execution time differs by approximately 5.0 seconds. I brought up my JavaSE application in Netbeans 6.1 using the Profiler, and there is a significant time lag building and initializing the EntityManagerFactory. I am not convinced yet that there isn't a setting I am missing which is causing this lag. For example, launching my JavaSE application (which will query to see if there are any upgrades needed, query for all collections and countries, display the Swing UI, etc.), almost 56% of the total startup time was spent in the deploy() method of the EntityManagerSetupImpl.

EclipseLink provides a nice, compact javax.persistence_1.0.0.jar file which contains all of the javax.persistence classes required to compile. This is great for application development, and means you can provide your JPA provider at runtime (along with the persistence.xml configuration). Of course, this assumes you have no compile-time dependencies on provider classes for things like QueryHints, etc.

The documentation for EclipseLink seems a little disorganized, but overall there is a lot of information available through the EclipseLink Wiki User Guide.

I am looking forward to using some of the query features (such as fetches for foreign key objects) and it will take some time to really take advantage of the broader features being offered.

Commentary for my HSQLDB and Uniqueness Post

I figured I should post an update about the HSQLDB issue regarding the JPA unique=true constraint. As was suggested by Doug Clarke, I took a look at EclipseLink, and it too suffers from the same issue. I will file a bug sometime this weekend with EclipseLink (last week their Bugs link was broken). As I was thinking about the solution, I realized this could potentially affect more than HSQLDB, and therefore I think the way to handle it would be to add a new method to org.eclipse.persistence.internal.databaseaccess.DatabasePlatform named boolean shouldUniqueKeyBeHandledAsConstraint(), and then the code I mentioned earlier could be genericized further, with the HSQLPlatform class implementing the aforementioned method to return true.

Saturday, July 5, 2008

HSQLDB and Toplink : Uniqueness Constraints

After some effort, I was able to figure out a way to honor the UNIQUE constraint from the @Column definition on a JPA entity bean and have it handled properly by the TopLink JPA. I was actually surprised that even with build 40 of TopLink Essentials v2.1 this was still an issue. The problem was, if you defined a value of unique=true in your JPA column annotation, TopLink Essentials would insert the UNIQUE keyword in the create table routine. This would break on HSQLDB, which does not support this keyword during column descriptor creation. The challenge was to fool TopLink into handling the unique attributes like constraints, which it would add after the table was created with an ALTER TABLE sequence.

The route I have chosen for now, to solve this for my unit testing needs, was to provide replacement TopLink Essentials class files ahead of the TopLink Essentials JAR file used by my application. Therefore my applications all run with the approved TopLink distributable, while my unit tests run with the instrumented files. There were two small changes I had to make:

(1) Modified oracle.toplink.essentials.tools.schemaframework.FieldDefinition to not write out the unique field keyword if the database is HSQL. From line 168 of the appendDBString method:
   if(isUnique() && !session.getPlatform().isHSQL()) {
      session.getPlatform().printFieldUnique(writer, shouldPrintFieldIdentityClause);
   }


(2) Modified oracle.toplink.essentials.tools.schemaframework.DefaultTableGenerator to add the unique constraints if the database is HSQL, on line 253 of initTableSchema():
 if( dbField.isUnique() && databasePlatform.isHSQL()) {
   tblDef.addUniqueKeyConstraint(dbTbl.getName(),dbField.getName());
 }

Currently I have not posted any information to the Glassfish project with these updates. I am not certain this is the ideal way to achieve this, but from my unit testing perspective I appear to be off to the races. If you are interested in these changes, please contact me and I'll look into getting them submitted for Glassfish.

Thursday, July 3, 2008

JPA Unit Testing

Up until now, I have been testing my stamp services using a test schema that lives on my MySQL server. This server is remote (well, in my basement, connected to 100Mb ethernet). While this has worked well for me, as the number of unit tests in my code has increased, so has the running time of the tests. Eskatos wrote a great little blog article which introduced HSQLDB to my vernacular (see Unit test JPA Entities with in-memory database). Of course, in his scenario Hibernate was used in place of TopLink; however, I was intrigued by the idea of using an in-memory-only database for unit tests. So I set about getting my unit tests to run. This turned out to be tricky to get working with TopLink. The first issue I ran into was that the tables refused to be created on startup. It turns out this is due to an issue with the toplink.target-database property missing from the persistence.xml file. This was outlined in a useful blog, TopLink JPA and HSQLDB Quirk.

Even after making these changes, however, I still could not get TopLink to properly create the tables. It turns out I had several entity beans which had name fields defined as unique=true. This caused the UNIQUE keyword to be written in the CREATE TABLE statements by TopLink, which appears to be invalid syntax for the HSQLDB database. After removing this JPA constraint from the affected objects, I was able to successfully create the tables and run my tests. I also had some minor refactoring to do in some SQL utilities to leverage the persistence unit configuration, but I was very impressed with the speed.

Overall, my test suite went from executing in approximately eighteen seconds down to just four. While eighteen seconds may not seem like a long time, it was sufficiently long to disrupt my work efficiency. I decided to retain my MySQL persistence unit (for occasional "live" DB testing), and have now configured two test targets in Eclipse which take an org.javad.jpa.serviceName variable to switch between the hsqldb and toplink-test persistence units.
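
The switch itself is trivial; a sketch of the idea, assuming the value is passed as a -D system property in each Eclipse launch configuration (the fallback unit name here is arbitrary):

   import javax.persistence.EntityManagerFactory;
   import javax.persistence.Persistence;

   public class PersistenceUnitResolver {

      /** Reads -Dorg.javad.jpa.serviceName from the launch configuration. */
      public static EntityManagerFactory createFactory() {
         String unit = System.getProperty("org.javad.jpa.serviceName", "toplink-test");
         return Persistence.createEntityManagerFactory(unit);
      }
   }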

The final activity I have left to do is to determine a way to reinsert the unique statements in my entity beans without having the SQL generated for HSQLDB. There are a few threads out there, so I should be able to come up with something.

Finally, here is my persistence unit configuration for the HSQLDB database:
<persistence-unit name="hsqldb" transaction-type="RESOURCE_LOCAL">
  <provider>oracle.toplink.essentials.PersistenceProvider</provider>
  <class>org.javad.stamp.model.Album</class>
  <class>org.javad.stamp.model.CatalogueNumberReference</class>
  <class>org.javad.stamp.model.Category</class>
  <class>org.javad.stamp.model.Country</class>
  <class>org.javad.stamp.model.Stamp</class>
  <class>org.javad.stamp.model.StampCollection</class>
  <class>org.javad.model.ClassVersion</class>
  <class>org.javad.services.TestEntityWithIdentity</class>
  <properties>
    <property name="toplink.jdbc.user" value="sa"/>
    <property name="toplink.jdbc.password" value=""/>
    <!-- <property name="toplink.logging.level" value="FINEST"/> -->
    <property name="toplink.jdbc.url" value="jdbc:hsqldb:mem:."/>
    <property name="toplink.jdbc.driver" value="org.hsqldb.jdbcDriver"/>
    <property name="toplink.ddl-generation" value="create-tables"/>
    <property name="toplink.target-database" value="HSQL"/>
  </properties>
</persistence-unit>

Monday, June 9, 2008

The Tale of Two Tomcats

The other night I was baffled as to why I could get my Linux server running with Tomcat properly doing BASIC authentication, whereas my desktop development environment would not authenticate through Eclipse to the same URI. Well, it turns out that when you have a web project in Eclipse, while the Tomcat binaries are run from your Tomcat install, a separate "set" of configuration files is used when run in Eclipse. Therefore, the tomcat-users.xml that existed in my Tomcat installation location was being completely ignored in favor of the Eclipse configuration file. Once I discovered this, I was able to quickly get BASIC authentication to work (which is fine for testing purposes).


Sunday, June 8, 2008

MySQL with Servlets - Poor uptime

Now that I have my mobile application working on the smartphone, I have been quite pleased with the client. Then I started getting strange timeouts, and no matter what I tried I was unable to execute queries against the database. Looking in the Tomcat logs, I discovered an interesting exception:

Last packet sent to the server was 3 ms ago.
        at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2579)
        at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2867)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1616)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1708)
        at com.mysql.jdbc.Connection.execSQL(Connection.java:3255)
        at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1293)
        at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1428)



This led me to research whether this could be a MySQL issue. It turns out that MySQL will close connections after eight hours of inactivity, leaving stale connections in the pool. The suggested solution of using the "autoReconnect" property apparently will not work under most circumstances. This is covered in section 26.4.5.3.4 of the MySQL reference manual.

The solution? Well, it has been suggested that writing a small daemon thread which wakes up every hour and executes some small query should be sufficient to keep the connections open. I have not implemented this yet, but it seems reasonable. In my case, I'll probably tie it to one of my servlets in its init() method.
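
A sketch of what that daemon might look like (the one-hour interval and the SELECT 1 probe are arbitrary choices):

   import java.sql.Connection;
   import java.sql.Statement;

   import javax.sql.DataSource;

   public class ConnectionKeepAlive extends Thread {

      private final DataSource dataSource;

      public ConnectionKeepAlive(DataSource dataSource) {
         this.dataSource = dataSource;
         setDaemon(true); // do not keep the container alive on shutdown
      }

      public void run() {
         while (true) {
            try {
               Connection conn = dataSource.getConnection();
               try {
                  Statement stmt = conn.createStatement();
                  stmt.executeQuery("SELECT 1"); // trivial query keeps the link warm
                  stmt.close();
               } finally {
                  conn.close();
               }
               Thread.sleep(60 * 60 * 1000L); // wake up once an hour
            } catch (Exception e) {
               // log and continue; a failed ping should not kill the keep-alive
            }
         }
      }
   }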

Saturday, June 7, 2008

Visual Studio/Cradling Emulators and Your Device

It does not take long searching the Web to see that no one has had much luck cradling the device emulator when you have a device already attached to ActiveSync. Unfortunately, if you are switching synced devices (between a device and an emulator), pretty soon ActiveSync does not recognize the emulator at all. After fooling with this a little, I did discover a way to get it re-detected.

  1. Cradle the emulator through the Emulator Manager. This of course will not be detected by ActiveSync.
  2. Go to ActiveSync, and click on File->Connection Settings...
  3. When the dialog appears, click on the Connect... button to bring up the connect dialog, and click Next, which starts the polling step. Cancel the polling after a second.
  4. You will see the Connection Settings... dialog become unresponsive (all grey; I guess they are performing work on the UI thread, an interesting observation), and after about 5 seconds you will hear the familiar "beep" and the emulator will become cradled.

This process has worked for me switching between my device (an HTC-8900) and my smartphone emulator. After performing this more than twenty times, however, it may no longer work, in which case either restarting ActiveSync (killing and restarting the process) or a reboot of the computer may be required. Typically, though, I find this works fine for a normal development session of a few hours where you are mostly using the emulator and every so often downloading to your device to ensure that it still works.

Friday, May 30, 2008

MS Bugs - You mean they exist?

I have had the pleasure of playing with Microsoft Visual Studio 2008 to try a little Windows Mobile 6 development. This has been a great opportunity to learn more C#, and I have to admit that while I am learning .NET as I go, I am coming up to speed quite quickly. I decided the other night to provide some local storage for some of the values I am retrieving from my server. Microsoft provides a really lightweight version of their SQLServer which runs on the mobile platforms, called SqlCe, and I have been quite impressed with it so far. I read/watched some of the "How do I?" videos provided on the MSDN website and started using SqlCeResultSets. I wrote a simple loop in my code similar to the following:
   ...
   SqlCeResultSet resultSet = _sqlController.getResultSet(ResultSetType.COUNTRIES);
   if (resultSet.HasRows)
   {
      foreach (SqlCeUpdatableRecord record in resultSet)
      {
          // do something with the result
      }
   }


However, every time it hit the foreach statement the program would terminate. I suspected this was an infinite loop or some other issue. Well, it turns out that there is indeed a bug in .NET 3.5 which causes the ResultSet to enter an infinite loop when GetEnumerator( ) is called. An interesting article that outlines this is on Jim Wilson's Blog. Fortunately this issue only cost me a few minutes, and I was glad to see that this was a Microsoft issue and not my beginner C# programming skills at fault.

Friday, May 23, 2008

JPA Identity: Integer/Long or String?

In my previous article JAXB and the nasty XmlID, I discussed how, if you want to use @XmlID and @XmlIDREF as pointer references to XML-serialized objects, the values have to be Strings. In some cases that may be fine, especially if your primary key is a compound key that you are serializing. The more common case for simple persisted objects, however, is a numeric identity value as the primary key. A short search will show that almost all JPA demos, tutorials and examples use either an Integer or a Long wrapper. This is not a restriction of JPA; you can use anything you want as the primary key, but if you are going to leverage the @GeneratedValue options, you will get a numeric value unless you are willing to define your own alpha scheme. Which leads me to my point: if I want to XML-serialize an object containing foreign-key references using JAXB (which is much easier than doing it by hand with a DOM Document), surely I can do so without having to convert my identities to Strings?

The simple answer is yes, but not by applying the JAXB annotations to the persisted property. Instead, you create a proxy method that converts your PK (primary key) into a String representation and tag that method with the @XmlID annotation, since JAXB requires a String property for @XmlID. Let's look at a simple code example.
   import javax.xml.bind.annotation.XmlID;
   import javax.xml.bind.annotation.XmlRootElement;
   import javax.xml.bind.annotation.XmlTransient;
   import javax.xml.bind.annotation.XmlAttribute;
   import javax.persistence.Entity;
   import javax.persistence.Id;
   import javax.persistence.GeneratedValue;
   import javax.persistence.GenerationType;
   import javax.persistence.Transient;

   @XmlRootElement
   @Entity
   public class PersistedObject {
       @Id
        @GeneratedValue(strategy=GenerationType.TABLE, generator="CUST_GEN")
       @XmlTransient // we are not going to write out the id
       private Long id = null;

       @XmlTransient
       @Transient   // this is not an entity managed attribute
       private String identityString = null;

       @XmlTransient
       public Long getId( ) { return id; }

       public void setId( Long id ) { this.id = id; }

       @XmlID
       @XmlAttribute(name="id")
       public String getIdentityString( ) {
          return ( id != null ) ? id.toString() : "0";
       }
   }


In this manner, we can declare our JPA identity with the data type that makes sense (either a Long or an Integer), yet still allow easy XML serialization through the @XmlID annotation on the getIdentityString() method. This is certainly not ideal; I would have preferred to hang the annotation off the numeric property directly, but JAXB requires that the @XmlID property be a String.
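For completeness, marshalling such an object is then the standard JAXB dance. The snippet below is a minimal sketch; the sample id value and the formatting property are just for illustration:

   import java.io.StringWriter;
   import javax.xml.bind.JAXBContext;
   import javax.xml.bind.Marshaller;

   ...
   PersistedObject obj = new PersistedObject( );
   obj.setId( 42L );

   JAXBContext context = JAXBContext.newInstance( PersistedObject.class );
   Marshaller marshaller = context.createMarshaller( );
   marshaller.setProperty( Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE );

   StringWriter out = new StringWriter( );
   marshaller.marshal( obj, out );
   // the id attribute comes from getIdentityString( ), e.g. <persistedObject id="42"/>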

Unfortunately for me, I only thought of this after converting my persistent beans, services and unit tests over to String IDs. Fortunately, SCM tools (Subversion in this case) come to the rescue, and I can easily back out my changes.

Monday, May 19, 2008

Smartphone Connectivity in Visual Studio 2008

Hooking up Microsoft Visual Studio 2008 and using the network on an emulated smartphone is a rather tricky business if you are not familiar with the procedure. Fortunately, a very good blog exists that captures this information: Akhune's Weblog. The limitation, of course, is that you cannot have your mobile device AND the emulator connected to ActiveSync at the same time.

An Update (05/23/2008): Having used the emulator a little more, I think the key point that is often lost is the use of the "cradle" option to dock the emulator with ActiveSync. Once you get in the habit of doing this each development session, you will find it much easier. Also, I have not had good results having both my smartphone and the emulator cradled simultaneously, and recommend using one or the other. This appears to be a pretty common issue.

Tuesday, May 13, 2008

Flexjson limitations

The old saying goes: "If it sounds too good to be true, it probably is". Well, this applies to Flexjson. While the tool is very capable of externalizing an object to the JSON format, there are several shortcomings which make it difficult to use at the moment (the contributors have mentioned they are working to address these issues). Currently Flexjson can only transform primitives, wrappers, Strings and Dates; any other non-collection object is itself recursively externalized into JSON. In a complex data model, you may not wish to serialize the entire downstream object - you may wish to serialize only its ID. This (and my griping about the implementation) is shown in one of my previous posts on JAXB and XmlID. Currently Flexjson has no clean way of supporting this. The only way to attempt it is through the use of include() and exclude() on the JSONSerializer. The downside is that you essentially need a specific handler for each object you want to serialize, since the attributes/conditions of inclusion or exclusion will change. Let's look at an example:
@Entity
public class Stamp implements Serializable {
  private Long id;
  private String description;
  private Country country;
  // ... other attributes and methods
}

@Entity
public class Country implements Serializable {
   private Long id;
   private String name;
   // ...
}
So in this example, if we wanted to serialize all of the Stamps to JSON, by default the country would be serialized for each stamp. In a typical system we might have 200 countries and 50,000 stamps, which means our countries would be fully serialized, redundantly, many times over. In this situation what we really want is just the country's id field. We can get this in the following way:
  Writer out = // ... some writer like a PrintWriter
  Collection<Stamp> stamps = stampService.getAll( );
  JSONSerializer serializer = new JSONSerializer( );
  serializer = serializer.include("id","description","country.id").exclude("*");
  for( Stamp s: stamps ) { 
     out.write( serializer.serialize( s ) );
  }

While this works, if your object has many properties and object relationships, setting up the include and exclude parameters can get a little exhausting. It also means you either (a) need an introspective tool to read this from your beans, or (b) need to provide some handler for each bean to set up the includes and excludes properly. I personally have great faith in the Open Source community, and look forward to leveraging the next version of Flexjson to handle this situation more cleanly with a Transformer (Transformers today only handle Strings, primitives and dates). Until then, I suppose I'll have to come up with some solution that is tied to my object model, along the lines of the sketch below.
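One stop-gap I am considering (purely my own sketch, not a Flexjson feature) is to register the include lists per bean class in one place, so at least the knowledge is not scattered across every call site. Stamp and Country here are the classes from the example above:

   import java.util.HashMap;
   import java.util.Map;
   import flexjson.JSONSerializer;

   public class SerializerFactory {

      // include lists keyed by bean class, maintained by hand alongside the model
      private static final Map<Class<?>, String[]> INCLUDES = new HashMap<Class<?>, String[]>();

      static {
         INCLUDES.put( Stamp.class, new String[] { "id", "description", "country.id" } );
         INCLUDES.put( Country.class, new String[] { "id", "name" } );
      }

      public static JSONSerializer forClass( Class<?> clazz ) {
         String[] includes = INCLUDES.get( clazz );
         if ( includes == null ) {
            throw new IllegalArgumentException( "no include list registered for " + clazz );
         }
         return new JSONSerializer().include( includes ).exclude( "*" );
      }
   }

The serialization loop then becomes out.write( SerializerFactory.forClass( Stamp.class ).serialize( s ) ), and adding a new bean means touching only the registry.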

Saturday, May 10, 2008

JSON Annotating of Java Beans

As I was working through a GWT code serializer for generating Java code to serialize objects as XML or JSON, I started thinking about how nice it is that JPA annotations document persistence behavior right on the beans. Wouldn't it be nice to serialize objects to JSON as easily as we can with JAXB? After searching the Web for a solution, I was almost ready to go ahead and develop an annotation infrastructure for this very problem. That was when I came across Flexjson, which does exactly this. In fact, unlike JAXB's marshalling terminology, Flexjson actually calls it serialization. I have not yet tried Flexjson, but from the examples shown in the link above, it seems to do exactly what one would expect.
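For instance, based on the examples on that site (I have not run this myself, so treat it as a sketch, and Stamp is just a stand-in for any bean with standard getters), serializing appears to be a one-liner:

   import flexjson.JSONSerializer;

   ...
   Stamp stamp = new Stamp( );   // any POJO with standard getters
   String json = new JSONSerializer().serialize( stamp );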

Thursday, May 8, 2008

The Engineers Approach to Baby Crying

I have had the fortunate gift of recently becoming a father to a beautiful little girl, "Angela". However, as a first-time parent, trying to deal with a crying baby was next to impossible - that is, until we figured out what all the sounds meant and what did and didn't work. Now she is three weeks old and almost sleeping through the night. The key principle is to feel confident and calm that you can take care of whatever the baby's needs are. That being said, a little decision matrix can go a long way in assisting this. For that reason, I have put together this nice "binary flow chart" which will guide you through to happier parenting!

Saturday, May 3, 2008

Olympus and Poor Usability - Lesson 1

Today I was helping my father install Olympus Master 2.0 on his Vista laptop. We were taken aback when it asked him to enter his country: the list seemed to include every country in the world, but in no particular order. He was looking for "Canada", and hitting the letter "C" brought up "Cuba". Suffice it to say, we gave up trying to find Canada, since locating it among two hundred odd items in no particular order was impossible and frustrating, and selected the first item, "United States". Which makes me wonder: does all software that prompts for a country put "United States" at the top? Is it because the software engineers feel the average American is too stupid to find their country in an alphabetical list? It is these little shortcut techniques in software that really destroy the usability of a product.

Friday, May 2, 2008

JAXB and the nasty @XmlID

I wonder why the developers of JAXB decided to make the @XmlID annotation support Strings only. You would think that a String or any primitive type would have been acceptable. The Javadoc of XmlID states:
The usage is subject to the following constraints:
  • At most one field or property in a class can be annotated with @XmlID.
  • The JavaBean property's type must be java.lang.String.
  • The only other mapping annotations that can be used with @XmlID are: @XmlElement and @XmlAttribute
The other property annotations, like @XmlAttribute and @XmlElement, support primitives and wrappers. This of course means that if you are using JAXB to XML-serialize a JPA entity, your primary key needs to be a String instead of an Integer/Long value. JPA persistence will still treat this as an integer in your datastore if you have the @GeneratedValue annotation set, so at least from that perspective your data model does not need to change. The advantage of using the @XmlID annotation is that it allows you to use the @XmlIDREF tag in other entities (meaning the entire entity is not XML-serialized, only its @XmlID value). Here is a simple example:
   @Entity @XmlRootElement
   public class Company implements Serializable {
      @Id @Column(name="company_id") @GeneratedValue
      @XmlAttribute @XmlID
      private String id;

      // ... setters, getters and other methods.
   }

   @Entity @XmlRootElement
   public class Employee implements Serializable {
      @Id @GeneratedValue
      @XmlAttribute @XmlID
      private String id;
   
      @JoinColumn(name="COMPANY_REF", referencedColumnName = "company_id")
      @ManyToOne(optional=false)
      @XmlIDREF
      private Company worksfor;

      // ... setters and other methods
   }

Now if we marshal an Employee with an id of "50" who works for a company with an id of "20", the resulting XML would look something like the following:

   <employee id="50">
      <worksfor>20</worksfor>
   </employee>

Thursday, May 1, 2008

Eclipse, Toplink, JPA and a Lost Evening

So I decided to put together a simple web application using the Dynamic Web Project in Eclipse. However, my recent workspace had become rather "corrupted", so I created a new one named "eclipse 3.3" (under my common \dev\workspace area). To my dismay, I simply could not get the web application which was working in my old workspace to work. The crazy thing about this is that I know my persistence.xml and project were set up correctly. The Tomcat log was producing the following:
INFO: The configured persistenceUnitName is: MileageTracker
[TopLink Config]: 2008.05.01 10:23:15.703--ServerSession(2165595)--Thread(Thread[main,5,main])--The alias name for the entity class [class org.javad.mileage.model.Vehicle] is being defaulted to: Vehicle.
javax.persistence.PersistenceException: No Persistence provider for EntityManager named MileageTracker: The following providers:
oracle.toplink.essentials.PersistenceProvider
oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider
Returned null to createEntityManagerFactory.

at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
... 
Now what is curious about this is that TopLink was found, and it determined some information about my entity bean. I spent a lot of time trying to discover what was wrong here, checking my other projects from the previous workspace, etc. It turns out the problem was an Eclipse bug: if your workspace path contains spaces (" "), it will fail to find the EntityManagerFactory. The reference bug ID is 210280. Suffice it to say, this cost me a few hours of productivity.

Tuesday, April 29, 2008

GWT and JPA with Servlets

I am doing some investigation of GWT, and in writing an application I wanted to integrate the servlet end of my application with a JPA service-oriented architecture (either through the servlet itself, or a standalone JPA service). I have written a few JPA applications and there are several things I like about JPA:

  • There is a nice model(bean)/service view of the world.
  • Defining your persistence behavior on your JavaBeans "feels" right.
  • Takes care of all the ugly connection/pooling stuff for you.
  • You do not need to be a service "guru" (personally I prefer the business layer and presentation layer) to get a working application with persistent storage.

While there are certainly disadvantages to JPA, I wanted to leverage my JPA knowledge, tools and services in a GWT application using RPC servlets. The nice thing about RPC servlets is that you can code your servlet in a very DSL-like manner. (I was going to look into REST and Restlets but decided GWT was enough to learn for now.) Getting JPA and RPC servlets working together proved rather difficult at first, and I almost abandoned the approach in favor of a pure servlet "service" approach delivering JSON or XML objects and using HTTPRequests to get the responses from GWT.

My first attempt was to stick my servlet(s) that referenced my JPA services outside of the GWT client packages, and then run these within the Tomcat hosted inside GWT. This proved problematic, since GWT bundles an earlier version of the xerces library which is not compatible with the persistence.xml xsi schema. (I also wasn't convinced this would work anyway, since I am not sure whether the embedded Tomcat runs with a 1.4-compliant JDK, which would prevent JPA annotations.) So I needed another approach. I had read that the GWT hosted browser could run without the embedded Tomcat, allowing you to work and debug within the hosted browser while using a standard out-of-the-box Tomcat service. This appealed to me a lot, since I can easily set up a Dynamic Web Project in Eclipse to host my servlet/service while still working in the natural hosted browser of GWT. After a little tweaking I got this working in Eclipse, and so far, while not as clean as a pure GWT hosted-mode environment, I am able to do everything I need to do.

Application Description

I am going to write an application which allows me to store user profiles (website profiles, not Windows profiles) in a database with encrypted passwords. If you are like me, after signing up for a few sites you can never remember your user IDs and passwords. The application will be called ProfileManager, and it will be written in GWT as a web application, using JPA to access the database through a ProfileService.

GWT Application Setup

I created an Eclipse project for my GWT application using the projectCreator and applicationCreator scripts provided by GWT with the -eclipse flag. This provides you with the basic scaffolding necessary for a GWT application. For the purposes of this blog, the package path of my module is org.javad.profile.gwt.ProfileManager. Executing the ProfileManager.launch script launches the GWT toolkit development shell and my application. I am not going to go into more detail here, as these procedures are well documented in texts and websites.

Step 1: Create the Service Interface

My service interface lives under the package org.javad.profile.gwt.client.rpc. The interface has to extend the Google interface RemoteService. For simplicity, this interface defines only a single method, getAll( ), that looks like the following:


package org.javad.profile.gwt.client.rpc;

import java.util.List;
import com.google.gwt.user.client.rpc.RemoteService;


public interface RPCProfileService extends RemoteService {
   public List getAll( );
}
As more functionality is added, I will add the signatures to the service interface. Along with the interface, I need an Asynchronous interface which is defined in the same package:

package org.javad.profile.gwt.client.rpc;
import com.google.gwt.user.client.rpc.AsyncCallback;

public interface RPCProfileServiceAsync {
   public void getAll( AsyncCallback callback );
}

Step 2: Create your Serializable Bean

Since JPA uses annotations, we unfortunately have to translate the JPA POJO into a GWT-safe transfer bean. One product you might want to look at for this is HiberObjects. For this project, I created a simplified model of my JPA POJO called "ProfileBean" under org.javad.profile.gwt.client.model. Since this overview does not use the ProfileBean directly, I will not say much more about it, other than that you'd need it for a full GWT application; a sketch of what it might look like is shown below.
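As a rough sketch (the field names here are my guesses, not the actual bean), such a class only needs to implement GWT's IsSerializable marker and keep a no-arg constructor:

   package org.javad.profile.gwt.client.model;

   import com.google.gwt.user.client.rpc.IsSerializable;

   // a plain translation of the JPA Profile entity, free of JPA annotations
   // so the GWT compiler can translate it to JavaScript
   public class ProfileBean implements IsSerializable {

      private Long id;
      private String siteName;
      private String userId;
      private String encryptedPassword;

      public ProfileBean( ) { } // required by GWT RPC

      public Long getId( ) { return id; }
      public void setId( Long id ) { this.id = id; }

      public String getSiteName( ) { return siteName; }
      public void setSiteName( String siteName ) { this.siteName = siteName; }

      // ... remaining getters and setters
   }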

Step 3: Create an Externalized JAR

In order to implement the service interface (and retrieve ProfileBean objects) in the web project, you need to bundle these classes into an external JAR and associate it with your web project. Within Eclipse you can do this with the File->Export...->JAR File functionality. This JAR forms the externalized view of our service, which we'll implement within the web project.

Step 4: Modify the .launch Script

When you created your project, the GWT toolkit generated a ProfileManager.launch script which you can use to launch your application. The problem is, this launch script will launch the toolkit development shell with an embedded Tomcat, will attempt to connect on port 8888 (the GWT default), and will not include a web-application root name, which you will need when deploying to a web project.

The GWTShell command can take some arguments which we'll use to adjust this. Editing the ProfileManager.launch script, you need to change the value of the stringAttribute "org.eclipse.jdt.launching.PROGRAM_ARGUMENTS":

  1. First, we need to tell the shell not to launch an embedded Tomcat. This is done by specifying the -noserver option.
  2. The port needs to be specified. In my case, my Tomcat is running on port 8080 which can be defined by specifying the -port 8080 option.
  3. Finally, we need to change the application which is launched by prepending the web-application name to the module's HTML file.

An example of this line from my ProfileManager.launch script looks like the following:

<stringAttribute key="org.eclipse.jdt.launching.PROGRAM_ARGUMENTS" value="-noserver -port 8080 -out www Profiles/org.javad.profile.gwt.ProfileManager/ProfileManager.html"/>

Executing the script will now open the toolkit shell and attempt to load the GWT application with the correct port number and web-application name. A Tomcat instance will not be started by the toolkit shell. The navigator window, of course, will not yet be able to connect to your application (since we have not hooked it up), so expect a connection error at this point.

Step 5: Make a call to the Service

Before we go on to the servlet project, it will be helpful to have a way of confirming, from within GWT, that the connection to the service works once the servlet is ready. To test this, I modified the onModuleLoad( ) method of the application to simply dispatch a call to the RPC service. Obviously if you are going to use a DAO/Controller pattern this would be abstracted, but this is simply a test to know you are on the right path.


   ...
   RPCProfileServiceAsync service = (RPCProfileServiceAsync)GWT.create(RPCProfileService.class);
   ServiceDefTarget endpoint = (ServiceDefTarget) service;
   String moduleRelativeURL = GWT.getModuleBaseURL() + "servlet/ProfileServlet";
   endpoint.setServiceEntryPoint(moduleRelativeURL);
   AsyncCallback callback = new AsyncCallback() {
      public void onSuccess(Object result) {
         System.out.println("in callback");
      }
      public void onFailure(Throwable caught) {
         caught.printStackTrace();
      }
   };
   service.getAll(callback);

If we were to launch the application now, it would fail, since there would be no response from the service. However, once we complete the next section, we should see the "in callback" message in the console of the GWTShell.

Servlet Project

For the servlet project, I am using Eclipse Europa with WTP 2.0. This includes Dali, which allows you to easily define your JPA POJOs using the built-in editor.

Step 1: Create the Project

Create the project within Eclipse using "File->New->Project..." and selecting the Dynamic Web Project under the Web project types. Enter a project name (this will be the default web-application name); I chose "Profiles". For the Project Facets step, make sure you choose the "Java Persistence" facet. This will allow you to manage your JPA objects.

Step 2: Add the GWT RPC Library

When we exported the RPC library from our GWT project (see Step 3 above), we created the contract that the servlet needs to obey. We now need to import this JAR into the web project as a referenced library. After adding it, we also need to ensure that it is copied to the server deployment location. This is done by opening the Properties of our project and selecting the necessary JARs under the J2EE Module Dependencies option. I also included the TopLink JPA, MySQL connector and gwt-servlet.jar libraries.

Step 3: Create your RPC Servlet

To implement the servlet, I created a class GWTProfileServlet which extends RemoteServiceServlet and implements our RPCProfileService. It is located in the org.javad.profile.servlet package under the src folder of my web project.


package org.javad.profile.servlet;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.javad.profile.gwt.client.rpc.RPCProfileService;
import org.javad.profile.model.Profile;
import org.javad.profile.service.ProfileService;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;


public class GWTProfileServlet extends RemoteServiceServlet implements RPCProfileService {
   @Override
   public List getAll() {
      ProfileService service = ProfileService.getInstance();
      Collection profiles = service.getAll();
      System.out.println("debug: got all profiles: " + profiles);
      // for now, return an empty list; the Profile entities would still need
      // to be translated into GWT-safe ProfileBean objects
      return new ArrayList();
   }
}

In the example above, I make a call to the ProfileService, which is a JPA-enabled service for managing profile objects in the datastore. (You could also add your JPA persistence code directly to the servlet.) I am also printing a debug message just to show that the servlet is producing output.
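For illustration, a bare-bones version of what ProfileService.getAll( ) might look like is sketched below; the singleton wiring and the persistence-unit name "Profiles" are assumptions on my part, and Profile is the JPA entity from my model package:

   package org.javad.profile.service;

   import java.util.Collection;
   import javax.persistence.EntityManager;
   import javax.persistence.EntityManagerFactory;
   import javax.persistence.Persistence;
   import javax.persistence.Query;
   import org.javad.profile.model.Profile;

   public class ProfileService {

      private static final ProfileService INSTANCE = new ProfileService();

      // factory creation is expensive, so cache it for the life of the service
      private final EntityManagerFactory factory =
         Persistence.createEntityManagerFactory( "Profiles" );

      public static ProfileService getInstance( ) {
         return INSTANCE;
      }

      @SuppressWarnings("unchecked")
      public Collection<Profile> getAll( ) {
         EntityManager em = factory.createEntityManager( );
         try {
            Query query = em.createQuery( "SELECT p FROM Profile p" );
            return query.getResultList( );
         } finally {
            em.close( ); // results are detached once the manager closes
         }
      }
   }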

Step 4: Create the servlet mapping

Since we want the GWTProfileServlet to be mapped under the GWT Application, in the web.xml for the web project, you need to define the servlet-mapping that includes the GWT Module name. So for our application, the servlet path was appended to the module path, making a url-pattern name as follows:


<servlet>
   <description></description>
   <display-name>RPCProfileServlet</display-name>
   <servlet-name>RPCProfileServlet</servlet-name>
   <servlet-class>org.javad.profile.servlet.GWTProfileServlet</servlet-class>
</servlet>
<servlet-mapping>
   <servlet-name>RPCProfileServlet</servlet-name>
   <url-pattern>/org.javad.profile.gwt.ProfileManager/servlet/ProfileServlet</url-pattern>
</servlet-mapping>

Step 5: Copy over the base GWT Application

In order for the web application to properly serve the GWT application to the embedded browser, we need to provide some files to Tomcat.

First, compile the GWT application that you wrote in the first section above by running ProfileManager-compile.cmd through the External Tools in Eclipse.

Create a folder under the www-root named "org.javad.profile.gwt.ProfileManager". Into this folder, copy the following files from the www-root folder of your GWT project:

   ProfileManager.html
   org.javad.profile.gwt.ProfileManager.nocache.js
   hosted.html
   gwt.js
Now you should be able to start your application server, launch the GWT toolkit browser, and debug the application (both the servlet and the GWT application) using the Eclipse debugger.

Monday, April 28, 2008

Failed Servers in Eclipse

I was trying out the Dali plugin for Eclipse, attempting to create a persistence class within a Dynamic Web Project, but I couldn't get my Apache Tomcat 6.0 server to start within Eclipse. It would appear that I was running into Eclipse Bug 117611, reported here. The resolution? Remove the server from my Eclipse configuration and recreate it. This did the trick, and I was able to synchronize the web application.

Saturday, April 5, 2008

Generated Unit Tests

So recently my employer asked me to evaluate a tool called AgitarOne, which claims to achieve an average of 80% test coverage with generated unit tests. The idea is to test the code using permutations of inputs. The online demos were quite impressive but seemed rather simplistic: they were testing, basically, a bean's validation of its constructor's input parameters. Right away my concern was how it would handle large classes with many "connections" to other classes and complex interplay. One of our developers had a test/demo system we could use to send off our code and see what was generated. Well, their claims were correct: it did reach 77-85% coverage on the files I submitted. But what was I to make of the output? Some of the tests were hundreds of lines long. It turns out the mocking framework used (Mockingbird) was mocking out all occurrences of "B" used by "A". While this seems inevitable to reach the coverage goals in a generated test, after discussion with our testing "round circle" we came to some common conclusions which I think are relevant to discuss in general:
  • The process of manually creating tests forces developers to think about their code: how they would test it, whether they have covered all the conditions, and so on. Generating the tests for them essentially takes away this process, losing that valuable design, test, code, refactor cycle. This, we felt, accounts for 25-50% of the benefit one achieves by writing unit tests.
  • Tests become executable documentation for the code under test. Generated tests can never know the intent of the code (at least not without a brilliant generator that could use Javadoc and other design deliverables to derive test conditions, which to my knowledge does not exist). They only know that a method takes N arguments and returns a value Y (or throws exception Z); smarter tools like AgitarOne can additionally make some assertions about what is modified during the method invocation.
  • Generating tests from existing code assumes the code as written is correct. Therein lies a problem; as mentioned in the previous point, without some sort of language that can derive proofs, I do not see test generation being able to make this assertion.
  • Generation violates one of the key development practices: test code should be maintained and written with the same care, style and detail as production code.
  • One simple question: have you ever come back to a failing test after six months, only to find a hundred lines of mock code followed by 30 assertions, and tried to figure out why your code change broke it? Enough said.

To be clear, I am not condemning the generation of unit tests outright. It is a very novel concept, but I think it removes a crucial component of good software development practice. Automation tends to lead to dependence, and dependence can lead to ignorance. From my viewpoint, generating tests does not inspire one to follow good software development practices, but to ignore them behind a safety net. As one of my colleagues said: "I'd rather have one good test that is comprehensible than 20 tests I cannot read".

Thursday, March 20, 2008

My Idiotic Move...

So I had encountered a bug during a data-entry session. I had entered about 50 stamps from India, and then, as I was completing the entries for India, needed to create a new country for the Indian feudal state of "Gwailor". I added the new country and closed my application. I had forgotten to enter some other stamps I had purchased over the weekend, and thus reopened my application. To my dismay, all the India stamps I had entered had become "Indian States - Gwailor". This looked like a bug, but potentially not an obvious one: why would creating and persisting 50-odd stamps with a reference to a country (India), which clearly had a unique identifier, suddenly turn them into "Indian States - Gwailor"?

Well, it turned out that during my overzealous striving to minimize and re-use persistence setup code, I had made the EntityManager ThreadLocal. I also discovered I was not actually closing the EntityManager; it was only being closed on application closure, which caused everything to be flushed from memory to the DB. Somehow (and I think this is a TopLink bug, though it would be hard to validate) the creation of the new country swapped out the persistent reference to the other country in the stamps. I had put a trace on this and was 100% sure it was not happening in my service code.

After much humming and hawing, I decided on a little refactor. The first step was to move the transaction processing from the ServerFactory into a separate TransactionHandler; each of my services maintains an instance of the TransactionHandler. The second was that, rather than embedding the acquisition of the EntityManager within the transaction code, I added it to each persistence method. After all, the creation of an EntityManager is lightweight (as long as you have a cached EntityManagerFactory). I can then pass the EntityManager to the TransactionHandler and it can manage the inner/outer transaction context.

While I could simply have used inner transactions, I only want to flush and send events from the top-most call. This means that if a method such as setAsPrimary( CatalogueNumber catNum, Stamp s ) calls save( ) for the Stamp on the StampService, and that method calls super.save( ) in PersistenceService, then only the save( ) in the StampService should emit the event and flush the EntityManager prior to commit. Either way, I now have a nice TransactionHandler with a simple begin( EntityManager em ) method. Only the first call to this method in the call stack will actually start a transaction (tied to the first EntityManager); subsequent calls merely return the initial call's EntityManager, so operations within that transaction are executed against the same EntityManager. The API for this class is pretty simple:

   public EntityManager begin( EntityManager em ) throws PersistenceException;
   public void commit( );
   public void rollback( );
   public boolean isOuterTransaction( );
   protected void validateThreadContext( );

The latter method is used by begin( ) to validate that the service instance is not being used in a non-thread-safe way, and will throw an IllegalStateException if the TransactionHandler thread ID does not match the current thread ID. (While this shouldn't happen, I was concerned that some older areas of code might be calling services in a non-thread-safe manner; this ensures that is not happening.) So far this refactoring, while changing a lot of methods, actually went surprisingly cleanly and was easy to verify with the unit tests I had. I was even able to add some additional tests, including one validating the scenario I described above.
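For anyone curious, here is a rough sketch of how such a TransactionHandler might look. This is only an illustration built around the API above, assuming resource-local JPA transactions and one handler instance per service; the internals are my own guesses rather than the actual implementation:

   import javax.persistence.EntityManager;
   import javax.persistence.PersistenceException;

   public class TransactionHandler {

      private final long threadId = Thread.currentThread().getId();

      private EntityManager current = null; // EntityManager owning the outer transaction
      private int depth = 0;                // nesting level of begin( ) calls

      public EntityManager begin( EntityManager em ) throws PersistenceException {
         validateThreadContext( );
         if ( depth == 0 ) {
            current = em;
            current.getTransaction().begin(); // only the outermost call starts a transaction
         }
         depth++;
         return current; // inner calls share the outer EntityManager
      }

      public void commit( ) {
         depth--;
         if ( depth == 0 ) {
            current.getTransaction().commit(); // only the outermost call commits
            current = null;
         }
      }

      public void rollback( ) {
         if ( current != null && current.getTransaction().isActive() ) {
            current.getTransaction().rollback();
         }
         depth = 0;
         current = null;
      }

      public boolean isOuterTransaction( ) {
         return depth <= 1;
      }

      protected void validateThreadContext( ) {
         if ( Thread.currentThread().getId() != threadId ) {
            throw new IllegalStateException( "Service used from multiple threads" );
         }
      }
   }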