Saturday, December 6, 2008
GXT Tree and Loading ... Alternative Design (Validated)
YSlow and Cache-Headers
You can add a Cache-Control header to the response, and then register your filtered types in your web.xml. A blog on this was posted by Byron Tymvios on jGuru here.
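As a sketch of what that registration might look like, here is a hedged web.xml fragment (the CacheFilter class name, package and parameter values are hypothetical, not from the jGuru post):

```xml
<!-- Hypothetical filter that adds a Cache-Control header to matched responses -->
<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>org.example.web.CacheFilter</filter-class>
    <init-param>
        <!-- e.g. cache static resources for one week -->
        <param-name>Cache-Control</param-name>
        <param-value>max-age=604800, public</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.png</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>*.js</url-pattern>
</filter-mapping>
```

The filter's doFilter() would simply call response.setHeader("Cache-Control", value) on the HttpServletResponse before passing the request down the chain.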
Tuesday, December 2, 2008
GXT Tree and Loading ... Alternative Design
- The BaseModel objects representing the Country and Album objects are used elsewhere in the client (not just within my tree).
- Know which singleton store to access (if I need the full model).
- Toggle/display the proper icon in the tree.
- Handle the leaf/childless cases (such as for Country).
{"total":1,"albums":[{"name":"Australia","id":154,"countries":[1104,733,3851,704]}]}
Callback.onSuccess() method.
While the above approach will take me more time to implement, I think it will provide one major performance improvement: the tree-building time is driven by just three AJAX calls to obtain the concrete Country, Album and Stamp Collection data models, as opposed to "N" calls, one for each node in the tree. It would also give a consolidated view of the tree and allow management of the models to be controlled by their respective stores in a more uniform fashion.
Finally, using a ReferenceModel approach, I actually do not even need the DataProxy. I'd have to test this out, but it looks like I can simply use the TreeBinder to bind a Tree to a TreeModel, and the TreeModel can be built programmatically once the singleton data stores are created.
Monday, December 1, 2008
GXT Tree and Loading with REST
Getting a tree to work in GXT is not too hard. In fact, the demos do a pretty good job of illustrating this. However, my tree was a little more complex than the trees shown. First, some features of my tree:
- Has three separate types of objects (Stamp Collections, Albums and Countries)
- Stamp Collections and Albums may contain children (Albums and Countries respectively)
- A Restful WebService is used to return the content of a particular item. (for example to get the albums of a stamp collection with id of "5" the URI might look like http://hostname/web-app/resources/collections/albums/5).
My original approach was to make a request on the selected tree node when an expansion occurred. However, when I used filtering, invoking the filter actually caused all the nodes to load (which was very slow on the first filter). You can turn this off; however, that defeats the point of the filter. It also didn't seem like the right solution, since at least in the examples the trees were being loaded via an RPC call. So the problem really boiled down to making the tree issue an AJAX call to the Restful Web Service upon the requested expansion of an unloaded node. First I should mention that trees in GXT actually act on BaseModel objects. One point of interest here is that BaseModel objects are not TreeModels (in this way GXT works differently than Swing-based trees).
The approach I took was to create a customized DataProxy and pass this proxy to the store on creation. Thus, as nodes are loaded/expanded, the data proxy will be used to obtain the node's children. My proxy essentially overloaded the load() method and for convenience added another method, getRequestBuilder(). I send requests using the output of the request builder to the Restful Web Service, process the results with a JSONReader, and output the load result in the callback's onSuccess() method. So let's look at some code:
public class StampTreeDataProxy implements DataProxy<BaseModel, List<BaseModel>> {

    public StampTreeDataProxy() {
        super();
    }

    /**
     * Generate the request builder for the object. If it is the root node (ie. null)
     * then request the stamp collections. Else, get the appropriate Albums or Countries.
     * For any other type of object, since we do not know how to build a Restful Web
     * URI, simply return a null value.
     *
     * @param item The current (parent) item.
     */
    protected RequestBuilder getRequestBuilder(BaseModel item) {
        RequestBuilder builder = null;
        if (item == null) {
            builder = HttpUtils.getRestfulRequest(
                StampModelTypeHelper.getResourceName(StampCollection.class));
        } else if (item instanceof NamedBaseTreeModel) {
            String pathInfo = ((item instanceof Album)
                ? StampModelTypeHelper.getResourceName(Country.class)
                : StampModelTypeHelper.getResourceName(Album.class))
                + "/" + ((NamedBaseTreeModel) item).getId();
            builder = HttpUtils.getRestfulRequest(
                StampModelTypeHelper.getResourceName(item.getClass()),
                pathInfo, HttpMethod.GET);
        }
        return builder;
    }

    public void load(DataReader<BaseModel, List<BaseModel>> reader,
                     final BaseModel parent,
                     final AsyncCallback<List<BaseModel>> callback) {
        RequestBuilder builder = getRequestBuilder(parent);
        // If the builder is null, then we do not have a Restful Builder we can handle.
        if (builder == null) {
            callback.onSuccess(new ArrayList<BaseModel>());
            return;
        }
        builder.setCallback(new RequestCallback() {
            public void onError(Request request, Throwable exception) {
                GWT.log("error", exception);
            }

            @SuppressWarnings("unchecked")
            public void onResponseReceived(Request request, Response response) {
                if (response.getStatusCode() == Response.SC_OK) {
                    if (HttpUtils.isJsonMimeType(response.getHeader(HttpUtils.HEADER_CONTENT_TYPE))) {
                        JSONValue json = JSONParser.parse(response.getText());
                        Class _c = StampCollection.class;
                        if (parent instanceof StampCollection || parent instanceof Album) {
                            _c = (parent instanceof StampCollection) ? Album.class : Country.class;
                        }
                        // Modified JsonReader which can read structures of ModelType definitions.
                        // For this, I believe a regular JsonReader would work, however
                        // it would create instances of BaseModel objects instead of
                        // my specific types (which have nice accessor methods).
                        StructuredJsonReader<BaseListLoadConfig> reader =
                            new StructuredJsonReader<BaseListLoadConfig>(new StampModelTypeHelper(), _c);
                        ListLoadResult lr = reader.read(new BaseListLoadConfig(), json.toString());
                        callback.onSuccess(lr.getData());
                    }
                } else {
                    GWT.log("a non-status code of OK " + response.getStatusCode(), null);
                }
            }
        });
        try {
            builder.send();
        } catch (RequestException e) {
            e.printStackTrace();
        }
    }
}
Using this DataProxy, I can create an instance and pass it to the store I am creating for the tree:
public class BrowseStore extends TreeStore<BaseModel> {

    public BrowseStore() {
        super(new BaseTreeLoader<BaseModel>(new StampTreeDataProxy()));
    }

    // Simplified method to create the tree content and load it
    public void load() {
        removeAll();
        getLoader().load();
    }
}
In this design, when load() is called (either on the loader of the store or the store itself), the DataProxy will be called for the root element. Since I am ignoring the reader (the argument to the load() method), I create a new StructuredJsonReader. As stated above, the reason for this is threefold:
- It handles structured ModelTypes (ie. a ModelType with a field representing another ModelType).
- It delegates the creation of the BaseModel to another class (in this case the StampModelTypeHelper), which will create the appropriate instance of the modeled object (eg. Country).
- Finally, it uses the class provided to create the proper model type definition.
There is one negative to this solution. Currently it issues a single request per node in the tree (eg. given an Album, get all the countries). I plan on redesigning this to be a little more efficient; however, given the mixture of BaseModel types, in order to get an efficient structure downloaded I may need to forgo the clean model structure in preference for a more efficient algorithm.
Friday, October 3, 2008
ExtJS Action.submit response
success config value. This function will take two parameters, form and action. If you are returning JSON from the call, you can access it directly from the action parameter.
var _form = // ... get your form (eg. formPanel.form)
_form.submit({
    scope: this,
    waitMsg: 'Doing something',
    url: someUrl,
    method: someMethod,
    success: function(form, action) {
        Ext.Msg.alert('Success?', action.result.success);
        Ext.Msg.alert('Data returned.', action.result.data.key1);
    }
});

The return value should look something like the following:

{"success":true,"data":{"key1":"key 1 result value"}}
Collections and JAX-RS
XmlRootElements, thus supporting marshalling with JAXB. This actually worked and is a reasonable workaround (since I really only need to return collections for four or five persistent objects). These wrapper objects simply have a collection/list of the persistent objects, with the XmlElement set to have the appropriate name of the child elements for processing:
package org.javad.stamp.model.collections;

import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlTransient;
import org.javad.stamp.model.Country;

@XmlRootElement(name="countryList")
public class CountryList {

    @XmlTransient
    public List<Country> countries = new ArrayList<Country>();

    public CountryList() {
    }

    public void setCountries(List<Country> c) {
        countries = c;
    }

    @XmlElement(name="country")
    public List<Country> getCountries() {
        return countries;
    }

    public void addCountry(Country c) {
        countries.add(c);
    }
}
Deploying RestEasy in Tomcat 6.0
- Download RestEasy from the JBoss website: http://www.jboss.org/resteasy/
- Follow the steps 2 to 4 of my previous blog Deploying Jersey in Tomcat 6.0
- Download and create an Eclipse library for Javassist. Include this in your WEB project (or provide the javassist.jar for Tomcat).
- Create a new RestEasy library in Eclipse which contains the content of the RestEasy installation lib location. You can probably skip a few of the jars if you do not need all the functionality (such as jyaml.jar and possibly mail.jar).
- Modify your web.xml for your project to include the following (this comes right from the sample web.xml in the RestEasy install):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
    <display-name>TestWeb</display-name>
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
    </welcome-file-list>
    <context-param>
        <param-name>resteasy.scan</param-name>
        <param-value>true</param-value>
    </context-param>
    <!-- set this if you map the Resteasy servlet to something other than /*
    <context-param>
        <param-name>resteasy.servlet.mapping.prefix</param-name>
        <param-value>/resteasy</param-value>
    </context-param>
    -->
    <!-- if you are using Spring, Seam or EJB as your component model, remove the ResourceMethodSecurityInterceptor -->
    <context-param>
        <param-name>resteasy.resource.method-interceptors</param-name>
        <param-value>
            org.jboss.resteasy.core.ResourceMethodSecurityInterceptor
        </param-value>
    </context-param>
    <listener>
        <listener-class>org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap</listener-class>
    </listener>
    <servlet>
        <servlet-name>Resteasy</servlet-name>
        <servlet-class>org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>Resteasy</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>
</web-app>
- Start your tomcat in Eclipse and you should be good to go!
Thursday, October 2, 2008
Deploying Jersey in Tomcat 6.0
- Download and unjar the Jersey distribution (I used 0.8) from https://jersey.dev.java.net/ to a location on your system (let's call it JERSEY_HOME for this article).
- Download and install the Java-WS distribution (I used 2.1) from https://jax-ws.dev.java.net/ (let's call it JAXWS_HOME for this article).
- Rama Pulavarthi wrote a blog (in fact the key for me) on configuring Tomcat to reference the Jersey distribution jars. To summarize, in TOMCAT_HOME/conf/catalina.properties modify the shared.loader property entry to point to your JAXWS_HOME/lib/*.jar. Mine looks like this: shared.loader=C:/dev/jaxws-ri/lib/*.jar
- Create a new Dynamic Web Project in Eclipse using Tomcat 6.0 as the application server.
- Create a new library in which you add the following JARs:
- JERSEY_HOME/lib/asm-3.1.jar
- JERSEY_HOME/lib/jersey.jar
- JERSEY_HOME/lib/jsr311-api.jar
- Next modify the web.xml to include the adaptor for Jersey. Most of the blogs refer to a different class than what appears in 0.8. I am not certain which is the right class; only the following works for me (and the documented one is not available through the downloads!):

<servlet>
    <servlet-name>ServletAdaptor</servlet-name>
    <servlet-class>com.sun.jersey.impl.container.servlet.ServletAdaptor</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>ServletAdaptor</servlet-name>
    <url-pattern>/resources/*</url-pattern>
</servlet-mapping>
<session-config>
    <session-timeout>30</session-timeout>
</session-config>

- If you start Tomcat now, you will see the following error:

com.sun.jersey.api.container.ContainerException: The ResourceConfig instance does not contain any root resource classes.

This error is due to not providing any root RESTful resources.
- Create a new class and include the following:
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.ProduceMime;

@Path("/test")
public class TestResource {
    @GET
    @ProduceMime("text/html")
    public String getMessage() {
        return "hello";
    }
}
- Start and test the application with the following url:

http://localhost:8080/appName/resources/test

and you should see a hello in your web-browser.
Making your servlet application URLs more Restful
http://somehost/Application/stamp/5533 to retrieve stamp 5533 than the traditional http://somehost/Application/servlet/StampServlet?id=5533. Theoretically, you could rewrite your application services layer without modifying the client. If you are not familiar with REST and Restful Web Services, a good dissertation on this can be found here by Roger L. Costello.

So all of this is nice, but how can we apply this to a servlet-based Web Application? There are projects out there such as the Java Restlet API, and while I think this is good, it does mean essentially having a non-servlet compatible solution (since the Restlet takes the place of a servlet). Instead, you can take advantage of some of the REST themes in servlet-based Web Applications by following these steps:
- We want to write a Rest-like servlet for accessing Stamps. We have a servlet of the class StampServlet which is mapped in the web.xml of our servlet container as stamp. This would look like the following:

<servlet>
    <description>Servlet for processing Stamp Restful requests</description>
    <servlet-name>stamp</servlet-name>
    <servlet-class>org.javad.stamp.servlet.StampServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>stamp</servlet-name>
    <url-pattern>/stamp/*</url-pattern>
</servlet-mapping>

The key here is the /* in the url-pattern of the servlet-mapping. This means any URI received that starts with "stamp" will be sent to the StampServlet, and we can use the rest of the URI to direct the servlet to process the request in a Restful way. For example, if we are retrieving the details of a stamp, our URL would look like http://hostname/StampApp/stamp/563 using the GET method, where the ID of the stamp in this case is 563. If this was a modify function, the URL would look similar, only a method of PUT would be used.
- REST uses many of the less-popular methods of HTTP, in particular the PUT and DELETE methods. Fortunately, the HttpServlet class does implement these, with the doPut(HttpServletRequest,HttpServletResponse) and doDelete(HttpServletRequest,HttpServletResponse) methods. If you are not comfortable using these methods (for example, the HttpServlet Javadoc does mention PUT for use in placing files on the server like FTP), we can make our URLs a little less Restful, but still Rest-like, by using the POST method and inserting a keyword after the ID of the object in question. I should note that for some clients, such as GWT, using protocols other than POST or GET is difficult. In GWT we can attach a header value (X-HTTP-Method-Override) to indicate we'd like to use DELETE, but the request is still sent to the POST method receiver initially. In this case, using a POST method URL along with the method override header, we can still have a Restful URL that would look like the following:

<form action="http://hostname/StampApp/stamp/563" method="POST">
</form>
More practical would be the use in an AJAX application using XMLHttpRequest in Javascript (since there we can set the X-HTTP-Method-Override header):

var req = new XMLHttpRequest();
req.open("POST", "http://hostname/StampApp/stamp/563");
req.setRequestHeader("X-HTTP-Method-Override", "DELETE");
req.send();

- URL construction is great, but how do I make use of this in my servlet's doPost()? Since we mapped the servlet with a /*, anything to the right of stamp in the URI is treated as part of the PathInfo. Therefore, within your doPost() method you can request the path info and take the appropriate action, similar to the following:

protected void doPost(HttpServletRequest request, HttpServletResponse response) {
    String pathInfo = request.getPathInfo();
    // getHeader() returns a String
    String method = request.getHeader("X-HTTP-Method-Override");
    if (pathInfo == null || pathInfo.isEmpty()) {
        createNewStamp(); // purely a POST
    } else if (method != null) {
        // pathInfo begins with a "/", so the ID is the second token
        long id = Long.parseLong(pathInfo.split("/")[1]);
        if ("DELETE".equalsIgnoreCase(method)) {
            doDelete(request, response); // or call deleteStamp(id);
        } else if ("PUT".equalsIgnoreCase(method)) {
            doPut(request, response); // or call modifyStamp(id);
        } else {
            // if there are further elements in the pathInfo call the appropriate code...
        }
    } else {
        log("POST method called for stamp details without Method Override header. " +
            "Use GET method to retrieve stamp details or specify a Method Override header.");
    }
}
While this is not perfect, it is certainly easier to program a client to a URI like stamp/563 than the traditional stamp?id=563&action=DELETE. In the above example, the code for servicing an action like Delete likely lives in a dedicated method, so instead of calling doDelete(), a direct call to deleteStamp(id) would probably be more appropriate. Using this technique allows you to support both methods of processing your objects in a consistent way, while being flexible in support for the client technology. While it does muck up your doPost() method a little, this is a minor tradeoff for more readable URIs.
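The dispatch in doPost() really rests on one small, easily tested decision: given the actual HTTP method and the optional X-HTTP-Method-Override header, which logical method should run? A minimal sketch of that decision as a standalone helper (the class and method names here are my own, not from the servlet above):

```java
// Hypothetical helper isolating the method-override decision used in doPost().
public class MethodOverrideResolver {

    /**
     * Resolve the logical HTTP method for a request. Only POST requests may be
     * overridden (to PUT or DELETE) via the X-HTTP-Method-Override header.
     */
    public static String resolve(String httpMethod, String overrideHeader) {
        if (httpMethod == null) {
            return null;
        }
        String method = httpMethod.toUpperCase();
        if ("POST".equals(method) && overrideHeader != null) {
            String override = overrideHeader.trim().toUpperCase();
            if ("PUT".equals(override) || "DELETE".equals(override)) {
                return override;
            }
        }
        return method;
    }
}
```

In the servlet, doPost() would call resolve(request.getMethod(), request.getHeader("X-HTTP-Method-Override")) and forward to doPut()/doDelete() accordingly.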
I should also mention that additional values in the pathInfo are used in Rest to indicate additional actions to take place against that object. For example, http://hostname/StampApp/stamp/563/catalogues with a method of GET would refer to a request to retrieve the catalogues for the stamp with the ID 563. In this situation, having a controller (such as that provided by a Restful framework) would definitely help in forwarding these to the correct service. Depending on the complexity of your application, you may be able to handle this internally within your servlet, such as in the else case mentioned above; however, for more than a few actions this can be complex, especially if catalogues (using the example above) can exist outside of the context of a stamp. That would mean you'd have to provide a servlet or some application that can process catalogues and find a way to tie the stamp servlet to the catalogue servlet. The worst case I can think of in my application would be trying to get all of the stamps that are in a country, filtered by an album in a stamp collection. This might look like:
http://hostname/StampApp/collection/56/album/25/country/76/stamps

As you can see, this is not quite as readable. Since I can also get stamps by album or simply by collection, I might be more prone to simply request the stamps for collection 56 and then take on the album/country ids as queryString data:

http://hostname/StampApp/collection/56/stamps?album=25&country=76

Not pure Rest, but Rest-like. If I truly did want to retain the Restful URI, I would likely write a controller which returned stamps, and if the pathInfo of the GET request for the collection servlet contained stamps in it, I would directly call it, passing the pathInfo, and then allow the controller to decide how to filter the URI (in my case I have a StampFilter object which accepts the three objects and calls the appropriate JPA Query based on the filter setup, so this would be quite easy for me to do).
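To make the pathInfo handling above concrete, the tokens can be pulled apart with plain string handling; a rough sketch under assumed names (PathInfoParser and its methods are invented for illustration):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of pathInfo handling for URIs such as
// "/56/stamps" (the pathInfo for http://hostname/StampApp/collection/56/stamps).
public class PathInfoParser {

    /** Split a servlet pathInfo into its segments, dropping the leading "/". */
    public static List<String> segments(String pathInfo) {
        if (pathInfo == null || pathInfo.isEmpty()) {
            return Collections.emptyList();
        }
        String trimmed = pathInfo.startsWith("/") ? pathInfo.substring(1) : pathInfo;
        return Arrays.asList(trimmed.split("/"));
    }

    /** The leading numeric ID, e.g. 56 for "/56/stamps". */
    public static long id(String pathInfo) {
        return Long.parseLong(segments(pathInfo).get(0));
    }

    /** The trailing action keyword, e.g. "stamps" for "/56/stamps", or null. */
    public static String action(String pathInfo) {
        List<String> parts = segments(pathInfo);
        return parts.size() > 1 ? parts.get(parts.size() - 1) : null;
    }
}
```

A servlet's doGet() could then branch on action(pathInfo) ("stamps", "albums", ...) and hand id(pathInfo) plus the queryString parameters to the filtering controller.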
Monday, September 29, 2008
Remote Assistance - Not always able to connect
ExtJS ComboBox - Act like HTML Select
- Provide a hiddenId option to the ComboBox constructor.
- Set the triggerAction to 'all'. Doing so essentially clears the filter and will treat the combobox like it is querying from the store separately.
- Ensure the mode is set to 'local'.
org.javad.CatalogueHelper = {
    getCatalogueSelection : function(id, options) {
        options = options || {}; // initialize if undefined
        var catalogues = new Ext.form.ComboBox(Ext.applyIf(options, {
            id: id,                          // Passed in id to create composite fields.
            fieldLabel: 'Catalogue',
            allowBlank: false,
            hiddenName: id + '-selected',
            name: id,
            mode: 'local',
            editable: false,
            valueField: 'id',                // from store, what to store in FORM
            displayField: 'displayString',   // Value to display in Select
            store: org.javad.CatalogueStore,
            hiddenId: id + '-value',
            width: 200,
            triggerAction: 'all',            // key to clear filter
            selectOnFocus: true
        }));
        return catalogues;
    }
};
Tuesday, August 26, 2008
JPA and Optional Associations
eclipselink.join-fetch for two one-to-many relationships. From looking at the resultant SQL, it became clear what the problem was. By adding these join-fetch statements, my query became more efficient (since I didn't have subsequent row-by-row lazy fetches later), but it also became invalid in some circumstances. In particular, the fetch join added some AND statements to the WHERE clause whereby the foreign key id was equal to the primary table id. However, if the many-side relationship is empty, this statement will not return any rows. I think the way around this would be to box the AND statement in a compound OR statement with an exists condition. Currently, to my knowledge, the JPA implementations do not support this, and I am going to research whether or not this is achievable by manually modifying the join-fetch statement.
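To see the failure concretely, here is a sketch of the two SQL shapes (the table and column names are illustrative, not the SQL EclipseLink actually generated). An outer join is the other standard way to preserve the parent rows, alongside the OR/exists rewrite mentioned above:

```sql
-- Join expressed in the WHERE clause: an ALBUM with no COUNTRY rows
-- produces no result rows at all.
SELECT a.*, c.*
FROM ALBUM a, COUNTRY c
WHERE c.ALBUM_ID = a.ID;

-- Outer-join form: the ALBUM row survives, with NULLs on the COUNTRY side.
SELECT a.*, c.*
FROM ALBUM a LEFT OUTER JOIN COUNTRY c ON c.ALBUM_ID = a.ID;
```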
Friday, August 22, 2008
Refactoring Followup - Strange Side Effect
As an aside, while looking at MOXy with EclipseLink, I noticed their tools to assist in providing a true WebServices implementation. This might be an area to look into adopting (in providing a true JAX-WS implementation).
Friday, August 15, 2008
Refactoring: Biting off more than you can chew?
- Never assume how long it will take. Invariably something will come up (and I have to admit the Olympics coverage may have pulled my attention away from eclipse on occasion).
- When I started refactoring, I relied on my unit tests and decided to make updates to the Swing client after the refactoring was completed. This was a mistake, as I often had to think back to what the update was rather than making the change immediately (I am thinking along the lines of repackaging, changing initialization code, etc.). Overall these changes tended to be pretty simplistic (change package X to package Y), but occasionally a DAO that I did not particularly like in the previous module was changed. On one occasion, the new DAO did not really fit and I had to rewrite some controller code (but at least it is better now).
- Part of my refactoring was to move my entities from being annotated to being orm.xml based. I should have done this first, without changing the data model. Changing the database schema along with moving the entities to orm.xml created a lot of thrashing to work through.
Tuesday, August 5, 2008
Certifications even for the experienced?
Recently I was flipping through a copy of the Web Component Developer prep book put out by Manning some years ago. It was a decent book, and as I flipped through the pages I found myself reading up on the technologies that I use on a daily basis and advise my teams on. In general, despite the maturity of many Web Component pieces (such as Servlets, TAGs and EL), I feel many developers really do not understand why they are using them, or when to use one technology (for example, TAGs) over another (such as Servlets). I proposed to the team that reading a book on such a subject matter might prove useful for their benefit. As a technology leader, I therefore felt it only appropriate that I study and become a Sun Certified Web Component Developer.
So this leads me to the conclusion that I suppose I need to engage in the duties of reading a book (for this current challenge I have chosen Head First Servlets & JSP from O'Reilly covering the 2nd edition), preparing myself and actually taking the exam. Based on my study habits, the best motivator I can think of is to schedule the exam now, that way I have to be prepared prior to the actual "writing" of the exam.
Sunday, July 27, 2008
Automatically recording Create/Update Timestamps with JPA
- Provide a database trigger to try and insert the timestamps automatically.
- Manually set them (or have each persistent service set them) before performing a persist() operation with an EntityManager.
- Use aspects to dynamically insert the timestamp.
- Provide an EntityListener which automatically sets the creation/modification timestamp at persist time.
Obviously the first solution is very database-specific and is not really tied to the JPA code. The second solution is likely to be error prone and easily missed. The final solution is the best way to handle this. Daniel Pfeifer provides an excellent walkthrough of this technique in his blog here. I have a few comments on this. First off, the concept of an entity listener does not follow the normal "implement this interface" convention. An entity listener is any POJO class which contains one or more methods in the format:
public void someMethod( Object obj )
It should be noted that there are javax.persistence annotations for each of the JPA lifecycle states. In the case of create timestamps, annotating a method with @PrePersist will allow it to be used to provide the creation timestamp. For the modify timestamp, annotating a method with @PreUpdate will cause it to be called when persisting an entity which was previously persisted. The entity listener can be registered either in the orm.xml file (as described by Daniel Pfeifer) or on the Entity class itself using the annotation @EntityListeners( class ... ). Personally I prefer this technique, as it allows me to programmatically tie a class to its behavior (such as storing timestamps on create/update) without having an external configuration. Using an external configuration across several modules and unit tests quickly becomes muddled, in that you forget to update the file under test, etc. It also increases the developer's awareness of this association, and to be candid, it is unlikely that I will really want to swap out the persistence timestamp handling at package time with a different solution. Since the annotation is applied to an abstract @MappedSuperclass-annotated class, any implementing classes will automatically inherit this behavior. I might change my stance on this approach in the future, but at least for now this seems to be the right way to approach this.
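To make the listener itself concrete, here is a minimal sketch; the Timestampable interface and the setter names are my own assumptions for illustration, not code from Daniel Pfeifer's walkthrough or my actual implementation:

```java
import java.util.Date;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

// Hypothetical callback interface implemented by the timestamped entities.
interface Timestampable {
    void setCreateTimestamp(Date d);
    void setModifyTimestamp(Date d);
}

// POJO entity listener: no interface to implement, just annotated methods.
public class TimestampListener {

    @PrePersist
    public void onCreate(Object entity) {
        if (entity instanceof Timestampable) {
            Date now = new Date();
            ((Timestampable) entity).setCreateTimestamp(now);
            ((Timestampable) entity).setModifyTimestamp(now);
        }
    }

    @PreUpdate
    public void onUpdate(Object entity) {
        if (entity instanceof Timestampable) {
            ((Timestampable) entity).setModifyTimestamp(new Date());
        }
    }
}
```

It would then be registered on the mapped superclass with @EntityListeners(TimestampListener.class), so every subclass entity picks up the behavior.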
Subversion/Bugzilla and Eclipse
Since I was connecting to a bugzilla database which I am allowing anonymous view access to, the bugtraq:url property I used was http://<hostname>/bugzilla/show_bug.cgi?id=%BUGID%. Doing so meant that when I check in my files and provide a Bug ID, the subversion history will have a record of it, which will be provided as a link to the information page of the bug in eclipse. Pretty neat stuff!
Sunday, July 13, 2008
HSQLDB/EclipseLink Bug Filed
EclipseLink Startup Lag
SecurableObjectHolder, which forced the JCEEncryptor to be initialized only upon creating the new instance. In EclipseLink, rather than initializing on first use, they are pre-initializing the JCEEncryptor through the SecurableObjectHolder and then decrypting the password. Another difference between the EclipseLink and TopLink Essentials encryptors is that with EclipseLink they are creating a separate cipher for encryption and decryption. Having looked at the code, this makes sense from a performance-scaling perspective. Having separate encryptors/decryptors means that you do not need to reinitialize them for each encryption or decryption. Of course, on startup that means the instantiation of not one cipher (for decrypting the password) but two, which does account for about a 4000 ms* difference in the profile runs. This, interestingly enough, is also the difference in application startup between running EclipseLink and TopLink Essentials.
* Since I am profiling in Netbeans with All Classes the performance numbers themselves are quite poor. What is more interesting is the ~ 18% performance degradation on startup this causes.
Saturday, July 12, 2008
EclipseLink - Initial Impressions
- It was chosen by Sun Microsystems to serve as the reference implementation for Java Persistence 2.0.
- The JPA development community has essentially switched from TopLink JPA right over to EclipseLink.
- EclipseLink is bringing further capabilities with support for MOXy (JAXB), SDO and OSGI to name a few.
There are several good articles on the tool itself from the EclipseLink home page.
I downloaded the most recent release from the Eclipse website, and set about setting up my applications to use it. Generally speaking, I had tried to make only API calls to the javax.persistence APIs, thus I had very few package dependency changes. Since EclipseLink is based on TopLink, virtually all the classes from TopLink appear in EclipseLink under a different naming convention: basically oracle.toplink.essentials became org.eclipse.persistence. Since I have a nice level of unit testing around my services, I was able to quickly identify the places that were failing. In particular, I had some Query Hints that I was applying if the provider was the TopLink provider. I had to replicate this functionality for EclipseLink code paths (fortunately it was only in a few places). I think the biggest impact was on the Upgrade Tooling, in which the SQLSchemaUpdateUtility had package dependencies on TopLink Essentials. Instead I rolled this into a StatementExecutor interface/implementation and used reflection to call the APIs. I did something similar for EclipseLink if I detected it as the persistence provider.
One observation I have noticed is that EclipseLink takes significantly longer to "initialize" than TopLink Essentials. This is most noticeable with the Unit Tests and HSQLDB, where the execution time is approximately ~5.0 seconds different. I brought up my JavaSE application in Netbeans 6.1 using the Profiler, and there is a significant time lag building and initializing the EntityManagerFactory. I am not convinced yet that there isn't a setting I am missing which is causing this lag. For example, launching my JavaSE application (which will query to see if there are any upgrades needed, query for all collections and countries, display the Swing UI etc.), almost 56% of the total startup time was spent in the deploy() method of the EntityManagerSetupImpl.
EclipseLink provides a nice compact javax.persistence_1.0.0.jar file which represents all of the J2EE javax.persistence classes required to compile. This is great for application development, and means you can provide your JPA provider at runtime (along with the persistence.xml configuration). Of course, this assumes you are using non-compile dependencies for things like QueryHints etc.
The documentation for EclipseLink seems a little disorganized, but overall there is a lot of information available through the EclipseLink Wiki User Guide.
I am looking forward to using some of the query features (such as fetches for foreign key objects) and it will take some time to really take advantage of the broader features being offered.
Commentary for my HSQLDB and Uniqueness Post
A method could be added to org.eclipse.persistence.internal.databaseaccess.DatabasePlatform named boolean shouldUniqueKeyBeHandledAsConstraint(), and then the code I mentioned earlier could be genericized further, with the HSQLPlatform class overriding the aforementioned method to return true.
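The proposal above can be sketched with minimal stand-in classes. To be clear, these are not the real EclipseLink classes; they only illustrate how the two HSQL-specific patches from the earlier post could collapse into a single platform query:

```java
// Stand-in for org.eclipse.persistence.internal.databaseaccess.DatabasePlatform.
class DatabasePlatform {
    // Platforms that cannot express UNIQUE inline would override this.
    boolean shouldUniqueKeyBeHandledAsConstraint() {
        return false;
    }
}

// Stand-in for the HSQL-specific platform class.
class HSQLPlatform extends DatabasePlatform {
    @Override
    boolean shouldUniqueKeyBeHandledAsConstraint() {
        return true; // emit ALTER TABLE ... ADD CONSTRAINT instead of inline UNIQUE
    }
}

class FieldDefinitionSketch {
    // Returns the column DDL fragment for a unique field on the given platform.
    static String uniqueDdl(DatabasePlatform platform) {
        return platform.shouldUniqueKeyBeHandledAsConstraint() ? "" : " UNIQUE";
    }
}
```

With this shape, neither the field-definition code nor the table generator needs an isHSQL() check; each platform declares its own capability.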
Saturday, July 5, 2008
HSQLDB and Toplink : Uniqueness Constraints
I have been trying to define a UNIQUE constraint in the @Column definition of a JPA entity bean and have it handled properly by the TopLink JPA. I was actually surprised that even with build 40 of TopLink Essentials v2.1 this was still an issue. The problem was that if you defined unique=true in your JPA column annotation, TopLink Essentials would insert the UNIQUE keyword in the create-table routine. This would break on HSQLDB, which does not support this keyword during column descriptor creation. The challenge was to fool TopLink into handling the unique attributes like constraints, which it would add after the table was created with an ALTER TABLE sequence. The route I have chosen for now, to satisfy my unit-testing needs, was to provide replacement TopLink Essentials class files ahead of the TopLink Essentials JAR file used by my application. My applications all run with the approved TopLink distributable, while my unit tests run with the instrumented files. There were two small changes I had to make:
(1) Modified oracle.toplink.essentials.tools.schemaframework.FieldDefinition to not write out the unique field keyword if the database is HSQL. From line 168 of the appendDBString method:

    if (isUnique() && !session.getPlatform().isHSQL()) {
        session.getPlatform().printFieldUnique(writer, shouldPrintFieldIdentityClause);
    }
(2) Modified oracle.toplink.essentials.tools.schemaframework.DefaultTableGenerator to add the unique constraints if the database is HSQL, on line 253 of initTableSchema():

    if (dbField.isUnique() && databasePlatform.isHSQL()) {
        tblDef.addUniqueKeyConstraint(dbTbl.getName(), dbField.getName());
    }
Currently I have not posted these updates to the Glassfish project. I am not certain this is the ideal way to achieve this, but from a unit-testing perspective I appear to be off to the races. If you are interested in these changes, please contact me and I'll look into getting them submitted to Glassfish.
Thursday, July 3, 2008
JPA Unit Testing
My first problem was the toplink.target-database property missing from the persistence.xml file. This was outlined in a useful blog post, TopLink JPA and HSQLDB Quirk.
Even after making these changes, however, I still could not get TopLink to create the tables properly. It turned out I had several entity beans whose name fields were defined as unique=true. This caused the UNIQUE keyword to be written into the CREATE TABLE statements by TopLink, which is invalid syntax for the HSQLDB database. After removing this JPA constraint from the affected objects, I was able to create the tables and run my tests successfully. I also had some minor refactoring to do in some SQL utilities to leverage the persistence-unit configuration, but I was very impressed with the speed.
Overall, my test suite went from executing in approximately eighteen seconds down to just four. While eighteen seconds may not seem like a long time, it was long enough to disrupt my work efficiency. I decided to retain my MySQL persistence unit (for occasional "live" DB testing), and have now configured two test targets in Eclipse that take an org.javad.jpa.serviceName environment variable to switch between the hsqldb and toplink-test persistence units.
The final activity I have left is to determine a way to reinsert the unique constraints in my entity beans without having the SQL generated for HSQLDB. There are a few threads out there, so I should be able to come up with something.
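One avenue worth exploring, though I have not verified it against TopLink Essentials' DDL generation, is declaring uniqueness at the table level with JPA's @Table(uniqueConstraints=...) rather than unique=true on the column. Table-level constraints are typically emitted as separate constraint statements rather than inline column keywords, which is exactly what HSQLDB needs. The table name here is hypothetical:

```java
import javax.persistence.Entity;
import javax.persistence.Table;
import javax.persistence.UniqueConstraint;

// Hypothetical mapping: uniqueness declared on the table instead of the column.
// Whether TopLink Essentials emits this as an ALTER TABLE (HSQLDB-friendly)
// rather than an inline UNIQUE keyword is what would need testing.
@Entity
@Table(name = "COUNTRY",
       uniqueConstraints = @UniqueConstraint(columnNames = "name"))
public class Country {
    // ... id, name and other fields as before
}
```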
Finally, here is my persistence unit configuration for the HSQLDB database:
<persistence-unit name="hsqldb" transaction-type="RESOURCE_LOCAL">
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    <class>org.javad.stamp.model.Album</class>
    <class>org.javad.stamp.model.CatalogueNumberReference</class>
    <class>org.javad.stamp.model.Category</class>
    <class>org.javad.stamp.model.Country</class>
    <class>org.javad.stamp.model.Stamp</class>
    <class>org.javad.stamp.model.StampCollection</class>
    <class>org.javad.model.ClassVersion</class>
    <class>org.javad.services.TestEntityWithIdentity</class>
    <properties>
        <property name="toplink.jdbc.user" value="sa"/>
        <property name="toplink.jdbc.password" value=""/>
        <!-- <property name="toplink.logging.level" value="FINEST"/> -->
        <property name="toplink.jdbc.url" value="jdbc:hsqldb:mem:."/>
        <property name="toplink.jdbc.driver" value="org.hsqldb.jdbcDriver"/>
        <property name="toplink.ddl-generation" value="create-tables"/>
        <property name="toplink.target-database" value="HSQL"/>
    </properties>
</persistence-unit>
Monday, June 9, 2008
The Tale of Two Tomcats
It turns out the tomcat-users.xml that existed in my Tomcat installation location was being completely ignored in favor of the Eclipse configuration file. Once I discovered this, I was able to quickly get BASIC authentication to work (which is fine for testing purposes). Below is an example of how this shows up in the Eclipse Package Explorer:
Sunday, June 8, 2008
MySQL with Servlets - Poor uptime
Last packet sent to the server was 3 ms ago.
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2579)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2867)
    at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1616)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1708)
    at com.mysql.jdbc.Connection.execSQL(Connection.java:3255)
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1293)
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1428)
This led me to research whether this could be a MySQL issue. It turns out that MySQL closes connections after eight hours of inactivity, leaving the pool full of stale connections. The suggested autoReconnect property apparently will not work under most circumstances; this is covered in section 26.4.5.3.4 of the MySQL reference manual.
The solution? It has been suggested that a small daemon thread which wakes up every hour and executes a trivial query should be sufficient to keep the connections open. I have not implemented this yet, but it seems reasonable. In my case I'll probably tie it to one of my servlets in its init() method.
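A minimal sketch of that keep-alive thread, assuming the servlet's init() passes in a Runnable that executes a cheap query such as "SELECT 1" against the pool. The class and method names here are my own, not from any library:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

// Periodically runs a "ping" task so pooled MySQL connections never sit idle
// long enough for the server to close them.
class ConnectionKeepAlive {
    private final ScheduledExecutorService scheduler;

    ConnectionKeepAlive() {
        this.scheduler = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "mysql-keepalive");
                t.setDaemon(true); // daemon thread: never blocks Tomcat shutdown
                return t;
            }
        });
    }

    // Runs the ping task repeatedly at the given period (e.g. 1 hour).
    void start(Runnable ping, long period, TimeUnit unit) {
        scheduler.scheduleAtFixedRate(ping, period, period, unit);
    }

    void stop() {
        scheduler.shutdownNow();
    }
}
```

In a servlet, init() would call start(...) with a Runnable that borrows a connection, executes the query in a try/finally, and returns it; destroy() would call stop().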
Saturday, June 7, 2008
Visual Studio/Cradling Emulators and Your Device
- Cradle the emulator through the Emulator Manager. This, of course, will not be detected by ActiveSync.
- Go to ActiveSync and click on File->Connection Settings...
- When the dialog appears, click on the Connect... button to bring up the connect dialog, then click Next, which starts the polling step. Cancel the polling after a second.
- You will see the Connection Settings... dialog become unresponsive (all grey; I suspect they are doing this work on the UI thread, an interesting observation), and after about 5 seconds you will hear the familiar "beep" and the emulator will become cradled.
This process has worked for me when switching between my device (an HTC-8900) and my smartphone emulator. After performing it more than twenty times, however, it may no longer work, in which case either restarting ActiveSync (killing and restarting the process) or rebooting the computer may be required. Typically, though, it works fine for a normal development session of a few hours where you are mostly using the emulator and only occasionally downloading to your device to ensure it still works.
Friday, May 30, 2008
MS Bugs - You mean they exist?
    ...
    SqlCeResultSet resultSet = _sqlController.getResultSet(ResultSetType.COUNTRIES);
    if (resultSet.HasRows) {
        foreach (SqlCeUpdatableRecord record in resultSet) {
            // do something with the result
        }
    }
However, every time it hit the foreach statement the program would terminate. I suspected an infinite loop or some other issue. It turns out there is indeed a bug in .NET 3.5 which causes the ResultSet to enter an infinite loop when GetEnumerator() is called. An interesting article that outlines this is located on Jim Wilson's blog. Fortunately this issue only cost me a few minutes, and I was glad to see that this was a Microsoft issue and not my beginner C# programming skills at fault.
Friday, May 23, 2008
JPA Identity Interger/Long or String?
The simple answer is yes, but not by applying the JAXB annotations to the persisted property. Instead, you need to create a proxy method that converts your PK (primary key) into a string representation, and tag this method with the @XmlID annotation; to be used by JAXB, the annotated property must be a String. Let's look at a simple code example.
    import javax.xml.bind.annotation.XmlID;
    import javax.xml.bind.annotation.XmlRootElement;
    import javax.xml.bind.annotation.XmlTransient;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Transient;

    @XmlRootElement
    @Entity
    public class PersistedObject {

        @Id
        @GeneratedValue(strategy=GenerationType.TABLE, generator="CUST_GEN")
        @XmlTransient // we are not going to write out the id
        private Long id = null;

        @XmlTransient
        @Transient // this is not an entity-managed attribute
        private String identityString = null;

        @XmlTransient
        public Long getId() {
            return id;
        }

        public void setId(Long id) {
            this.id = id;
        }

        @XmlID
        @XmlAttribute(name="id")
        public String getIdentityString() {
            return (id != null) ? id.toString() : "0";
        }
    }
In this manner, we can denote our JPA identity with the data type that makes sense (either a Long or an Integer) yet allow for easy XML serialization through the @XmlID annotation on the getIdentityString() method. This is certainly not ideal; I would have preferred to put the annotation on the getId() method directly, but JAXB requires the @XmlID property to be a String.
Unfortunately for me, I only thought of this after converting my persistent beans, services and unit tests over to String ids. Fortunately SCM tools (Subversion in this case) come to the rescue, and I can easily back out my changes.
Monday, May 19, 2008
Smartphone Connectivity in Visual Studio 2008
An Update (05/23/2008): Having used the emulator a little more, I think the key point that is often lost is the usage of the "cradle" option to dock the emulator with ActiveSync. Once you get in the habit of doing this per development session, you will find it is much easier. Also, I have not had very good results of having both my Smartphone and the Emulator cradled simultaneous and recommend using one or the other. This appears to be a pretty common issue.
Tuesday, May 13, 2008
Flexjson limitations
    @Entity
    public class Stamp implements Serializable {
        private Long id;
        private String description;
        private Country country;
        // ... other attributes and methods
    }

    @Entity
    public class Country implements Serializable {
        private Long id;
        private String name;
        // ...
    }
    Writer out = // ... some writer, like a PrintWriter
    Collection<Stamp> stamps = stampService.getAll();
    JSONSerializer serializer = new JSONSerializer();
    serializer = serializer.include("id", "description", "country.id").exclude("*");
    for (Stamp s : stamps) {
        out.write(serializer.serialize(s));
    }
While this works, if your object has many properties and object relationships, setting up the include and exclude parameters gets tedious. It also means you either (a) need an introspective tool to read this from your beans or (b) must provide a handler for each bean to set up the includes and excludes properly. I personally have great faith in the Open Source community, and look forward to the next version of Flexjson handling this situation more cleanly with a Transformer (Transformers today only handle Strings, primitives and dates). Until then, I suppose I'll have to come up with a solution tied to my object model.
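One shape that "solution tied to my object model" could take is having each bean declare its own include paths, which a small helper then hands to Flexjson. This is purely a sketch; the JsonSerializable interface, its method, and the helper are hypothetical names of mine, not Flexjson API:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Each serializable bean declares the property paths it wants in its JSON.
interface JsonSerializable {
    List<String> jsonIncludes(); // e.g. "country.id" for a nested reference
}

// Simplified Stamp bean carrying its own serialization hints.
class Stamp implements JsonSerializable {
    Long id;
    String description;

    public List<String> jsonIncludes() {
        return Arrays.asList("id", "description", "country.id");
    }
}

class JsonIncludeHelper {
    // Flattens a bean's declared paths (deduplicated, order preserved) into
    // the array a serializer's include(...) call would accept.
    static String[] includesFor(JsonSerializable bean) {
        Set<String> paths = new LinkedHashSet<String>(bean.jsonIncludes());
        return paths.toArray(new String[0]);
    }
}
```

The serialization code then becomes generic: serializer.include(JsonIncludeHelper.includesFor(bean)).exclude("*"), with no per-bean wiring at the call site.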
Saturday, May 10, 2008
JSON Annotating of Java Beans
Thursday, May 8, 2008
The Engineers Approach to Baby Crying
Saturday, May 3, 2008
Olympus and Poor Usability - Lesson 1
Friday, May 2, 2008
JAXB and the nasty @XmlID
- At most one field or property in a class can be annotated with @XmlID.
- The JavaBean property's type must be java.lang.String.
- The only other mapping annotations that can be used with @XmlID are @XmlElement and @XmlAttribute.
    @Entity
    @XmlRootElement
    public class Company implements Serializable {
        @Id
        @Column(name="company_id")
        @GeneratedValue
        @XmlAttribute
        @XmlID
        private String id;
        // ... setters, getters and other methods.
    }

    @Entity
    @XmlRootElement
    public class Employee implements Serializable {
        @Id
        @GeneratedValue
        @XmlAttribute
        @XmlID
        private String id;

        @JoinColumn(name="COMPANY_REF", referencedColumnName="company_id")
        @ManyToOne(optional=false)
        @XmlIDREF
        private Company worksfor;
        // ... setters and other methods
    }
Now if we marshal an Employee with an id of "50" who works for a company with an id of "20", the resulting XML would look something like the following:
<employee id="50">
    <worksfor>20</worksfor>
</employee>
Thursday, May 1, 2008
Eclipse, Toplink, JPA and a Lost Evening
INFO: The configured persistenceUnitName is: MileageTracker
[TopLink Config]: 2008.05.01 10:23:15.703--ServerSession(2165595)--Thread(Thread[main,5,main])--The alias name for the entity class [class org.javad.mileage.model.Vehicle] is being defaulted to: Vehicle.
javax.persistence.PersistenceException: No Persistence provider for EntityManager named MileageTracker: The following providers:
oracle.toplink.essentials.PersistenceProvider
oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider
Returned null to createEntityManagerFactory.
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
    ...
Tuesday, April 29, 2008
GWT and JPA with Servlets
I am doing some investigation of GWT and in writing an application, I wanted to integrate the servlet end of my application with a JPA services oriented architecture (either through the servlet itself, or a standalone JPA service). I have written a few JPA applications and there are several things I like about JPA:
- There is a nice model(bean)/service view of the world.
- Defining your persistence behavior on your JavaBeans "feels" right.
- Takes care of all the ugly connection/pooling stuff for you.
- You do not need to be a service "guru" (personally I prefer the business layer and presentation layer) to get a working application with persistent storage.
While there are certainly disadvantages to JPA, I wanted to leverage my JPA knowledge, tools and services in a GWT application using RPC servlets. The nice thing about RPC servlets is that you can code your servlet in a very DSL-like manner. (I was going to look into REST and Restlets but decided GWT was enough to learn for now.) Getting JPA and RPC servlets working together proved rather difficult at first, and I almost abandoned the approach in favor of a pure servlet "service" approach delivering JSON or XML objects and then using HTTPRequests to get the responses from GWT.
My first attempt was to put the servlet(s) that referenced my JPA services outside of the GWT client packages and run them within the Tomcat hosted by GWT. This proved problematic, since GWT bundles an earlier version of the Xerces library which is not compatible with the persistence.xml xsi schema. (I also wasn't convinced this would work anyway, since I am not sure whether the embedded Tomcat runs with a 1.4-compliant JDK, which would prevent JPA annotations.) So I needed another approach. I had read that the GWT hosted browser could run without the embedded Tomcat, allowing you to work and debug within the hosted browser while using a standard out-of-the-box Tomcat service. This appealed to me a lot, since I can easily set up a dynamic web project in Eclipse to host my servlet/service and still work in the natural hosted browser of GWT. After a little tweaking I got this to work in Eclipse, and so far, while not as clean as a pure GWT hosted-mode environment, I am able to do everything I need to do.
Application Description
I am going to write an application which allows me to store user profiles (website profiles, not Windows profiles) in a database with encrypted passwords. If you are like me, after signing up for a few sites you can never remember your user ids and passwords. The application will be called ProfileManager and it will be written in GWT as a web application, using JPA to access the database through a ProfileService.
GWT Application Setup
I created an Eclipse project for my GWT application using the projectCreator and applicationCreator scripts provided by GWT with the -eclipse flag. This provides you with the basic scaffolding necessary for a GWT application. For the purposes of this blog, the package path of my module is org.javad.profile.gwt.ProfileManager. Executing the ProfileManager.launch script launches the GWT toolkit development shell and my application. I am not going to go into more detail here, as these procedures are well documented in texts and websites.
Step 1: Create the Service Interface
My service interface lives under the package org.javad.profile.gwt.client.rpc. The interface has to extend the Google interface RemoteService. For simplicity, this interface defines only a single method, getAll(), which looks like the following:
package org.javad.profile.gwt.client.rpc;
import java.util.List;
import com.google.gwt.user.client.rpc.RemoteService;
public interface RPCProfileService extends RemoteService {
public List getAll( );
}
As more functionality is added, I will add the signatures to the service interface. Along with the interface, I need an Asynchronous interface which is defined in the same package:
package org.javad.profile.gwt.client.rpc;
import com.google.gwt.user.client.rpc.AsyncCallback;
public interface RPCProfileServiceAsync {
public void getAll( AsyncCallback callback );
}
Step 2: Create your Serializable Bean
Since JPA uses annotations, we unfortunately have to translate the JPA POJO to a GWT-serializable DTO. One product you might want to look at is HiberObjects. For this project, I created a simplified model of my JPA POJO called "ProfileBean" under org.javad.profile.gwt.client.model. Since this overview does not use the ProfileBean directly, I am not going to say any more on it, other than you'd need it for a full GWT application.
Step 3: Create an Externalized JAR
In order to implement the service interface (and retrieve ProfileBean objects) you need to bundle these into an external JAR to associate to your web project. Within eclipse you can do this with the File->Export ...->JAR File functionality. This JAR forms the externalized view of our service which we'll need to implement within the web project.
Step 4: Modify the .launch Script
When you created your project, the GWT toolkit generated a ProfileManager.launch script which you can use to launch your application. The problem is, this launch script will launch the Toolkit Development shell with an embedded Tomcat, will attempt to connect on port 8888 (the GWT default) and will not include a web-application root name, which you will need if you are deploying with a web project.
The GWTShell command can take some arguments which we'll use to adjust this. Editing the ProfileManager.launch file, you need to change the value of the stringAttribute "org.eclipse.jdt.launching.PROGRAM_ARGUMENTS":
- First, we need to tell the shell not to launch an embedded Tomcat. This is done by specifying the -noserver option.
- The port needs to be specified. In my case, my Tomcat is running on port 8080 which can be defined by specifying the -port 8080 option.
- Finally, we need to change the application which is launched by the application script by prepending the web-application name in front of the module's HTML file.
An example of this line from my ProfileManager.launch script looks like the following:
<stringAttribute key="org.eclipse.jdt.launching.PROGRAM_ARGUMENTS" value="-noserver -port 8080 -out www Profiles/org.javad.profile.gwt.ProfileManager/ProfileManager.html"/>
Now executing the script will open the Toolkit Shell and attempt to launch the GWT application with the correct port number and web-application name. A Tomcat instance will not be started by the Toolkit Shell. The navigator window will of course not be able to connect to your application (since we have not hooked it up yet), so your window should look something like this:
Step 5: Make a call to the Service
Before we go on to the servlet project, it will be helpful to verify that the connection to the service works from GWT once the servlet is ready. To test this, I modified the onModuleLoad() method of the application to simply dispatch a call to the RPC service. Obviously, if you are going to use a DAO/Controller pattern this would be abstracted, but this is simply a test to confirm you are on the right path.
...
RPCProfileServiceAsync service = (RPCProfileServiceAsync)GWT.create(RPCProfileService.class);
ServiceDefTarget endpoint = (ServiceDefTarget) service;
String moduleRelativeURL = GWT.getModuleBaseURL() + "servlet/ProfileServlet";
endpoint.setServiceEntryPoint(moduleRelativeURL);
AsyncCallback callback = new AsyncCallback() {
public void onSuccess(Object result) {
System.out.println("in callback");
}
public void onFailure(Throwable caught) {
caught.printStackTrace();
}
};
service.getAll(callback);
If we were to launch the application now, it would fail since there would be no response from the service. However, when we complete the next section, we should see the "in callback" message in the console of the GWTShell.
Servlet Project

For the servlet project, I am using Eclipse Europa with WTP 2.0. This includes Dali, which allows you to easily define your JPA POJOs using the built-in editor.

Step 1: Create the Project

Create the project within Eclipse using "File->New->Project..." and select the dynamic web project under the Web project types. Enter a project name (this will be the default web-application name); I chose "Profiles". For the Project Facets step, make sure you choose the "Java Persistence" facet. This will allow you to manage your JPA objects.
Step 2: Add the GWT RPC library

When we exported the RPC library from our GWT project (see Step 3 above), we created the contract the servlet needs to obey. We now need to import this JAR into the web project as a referenced library. After adding it as a referenced library, we also need to ensure that it is copied to the server deployment location. This is done by selecting the Properties of our project and selecting the necessary JARs under the J2EE Module Dependencies option. I also included the TopLink JPA, MySQL (connector library) and gwt-servlet.jar, as can be seen from the image below:

Step 3: Create your RPC Servlet
To implement our servlet, I created a class GWTProfileServlet which extends RemoteServiceServlet and implements our RPCProfileService. This is located in the org.javad.profile.servlet package under the src variant of my web project.
package org.javad.profile.servlet;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.javad.profile.gwt.client.rpc.RPCProfileService;
import org.javad.profile.model.Profile;
import org.javad.profile.service.ProfileService;
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
public class GWTProfileServlet extends RemoteServiceServlet implements RPCProfileService {
@Override
public List getAll() {
ProfileService service = ProfileService.getInstance();
Collection profiles = service.getAll();
System.out.println("debug: got all profiles" + profiles);
return new ArrayList();
}
}
In the example above, I make a call to the ProfileService which is a JPA Enabled service for managing profile objects from the datastore. You could also add your JPA persistence code directly to the servlet here. I am also printing out a debug message just to show that I am getting the output in the servlet.
Step 4: Create the servlet mapping
Since we want the GWTProfileServlet mapped under the GWT application, you need to define a servlet-mapping in the web project's web.xml that includes the GWT module name. For our application, the servlet path is appended to the module path, making the url-pattern as follows:
<servlet>
<description></description>
<display-name>RPCProfileServlet</display-name>
<servlet-name>RPCProfileServlet</servlet-name>
<servlet-class>org.javad.profile.servlet.GWTProfileServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>RPCProfileServlet</servlet-name>
<url-pattern>/org.javad.profile.gwt.ProfileManager/servlet/ProfileServlet</url-pattern>
</servlet-mapping>
Step 5: Copy over the base GWT Application
In order for the web application to properly serve the GWT application to the embedded browser, we need to provide some files to Tomcat.
First, compile the GWT application that you wrote in the first section above using ProfileManager-compile.cmd through the External Tools in Eclipse.
Create a folder under the www-root named "org.javad.profile.gwt.ProfileManager". Into this folder copy the following files from the www-root folder of your GWT project:
ProfileManager.html
org.javad.profile.gwt.ProfileManager.nocache.js
hosted.html
gwt.js
Now you should be able to start your application server, and launch the GWT Toolkit browser and debug the application (both the servlet and the GWT application) using the eclipse debugger.
Monday, April 28, 2008
Failed Servers in Eclipse
Saturday, April 5, 2008
Generated Unit Tests
- The process of manually creating tests forces the developer to think about their code: how they would test it, whether they have covered all the conditions, and so on. Generating tests for them essentially takes away this process, losing that valuable design, test, code, refactor cycle. In my estimation, this thought process accounts for 25-50% of the benefit one achieves by writing unit tests.
- Tests become executable documentation for the code under test. Generated tests can never know the intent of the test (absent a brilliant generator that could use Javadoc and other design deliverables to derive test conditions, which to my knowledge does not exist). They only know that a method takes N arguments and returns a value of Y (or throws exception Z); smarter tools like AgitarOne can additionally make some assertions about what is modified during method invocation.
- Generation of tests from existing code assumes the code as written is correct. Therein lies a problem; as I mention in the previous point, without some sort of language that can derive proofs, I do not see how test generation could make this assertion.
- Generation violates one of the key development practices: test code should be maintained and written with the same care, style and detail as production code.
- One simple question: have you ever come back to a failed test after six months to see why it is failing, only to wade through a hundred lines of mock code followed by 30 assertions while trying to figure out why your code change broke it? Enough said.
In no way am I advocating the generation of unit tests. It is a very novel concept, but I think it removes a crucial component of good software development practice. Automation tends to lead to dependence, and dependence can lead to ignorance. From my viewpoint, generating tests does not inspire one to follow good software development practices, but to ignore them behind a safety net. As one of my colleagues said, "I'd rather have one good test that is comprehensible than 20 tests I cannot read."