Tuesday, December 29, 2015

NodeJS and Promises with MySQL Connector - beware of the unreleased connection

In my NodeJS application I am using the nice MySQL library node-mysql. I have leaned heavily on Q promises for managing the asynchronous nature of the code rather than using raw callbacks. By using a connection pool, I can reuse connections without having to re-negotiate with the database on every query. My general design is to get a connection from the pool, do whatever I need to do with it, and release it. I do not share a connection with another class/service except in the case of "in-transaction" pre/post operations. For example, on update() I have a postUpdateAdditions() function that takes the connection as an argument. The rule is that it cannot release/manage the connection, as it is within a transaction.
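As a rough sketch, the acquire/use/release pattern looks like the following (a minimal illustration using native Promises and a stubbed pool in place of node-mysql and Q; the helper name withConnection is my own, not part of either library):

```javascript
// Minimal sketch of the pool acquire/use/release pattern.
// "pool" stands in for node-mysql's createPool() result; only the
// shape matters here, not the exact driver API.
function withConnection(pool, work) {
    return pool.getConnection().then(function (connection) {
        return work(connection).then(function (result) {
            connection.release(); // release on success...
            return result;
        }, function (err) {
            connection.release(); // ...and on failure, so nothing leaks
            throw err;
        });
    });
}
```

Routing every caller through a helper like this means the release happens in exactly one place, which makes the leaks described below much harder to introduce.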

I was recently seeing some strange behavior running my integration tests, whereby the pool would run out of connections. These show up as timeouts or deadlocks in the system. After spending hours hunting down the problem I discovered two places where I was not properly releasing the connection (and thus I was leaking connections). In general this is not easy to detect, so I even added some logic to my connection manager class to track new and released connections (and to try and flag when they get out of whack).
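The tracking logic amounts to a counter that is bumped on acquire and decremented on release; something along these lines (the class and method names here are hypothetical, not my actual connection manager):

```javascript
// Hypothetical leak detector: counts outstanding connections so a leak
// shows up as a count that never returns to zero.
function ConnectionTracker() {
    this.open = 0;
}
ConnectionTracker.prototype.acquired = function () {
    this.open++;
};
ConnectionTracker.prototype.released = function () {
    if (this.open === 0) {
        throw new Error('released more connections than were acquired');
    }
    this.open--;
};
// After a test run (or periodically), verify everything was returned.
ConnectionTracker.prototype.assertBalanced = function () {
    if (this.open !== 0) {
        throw new Error(this.open + ' connection(s) were never released');
    }
};
```

Calling assertBalanced() at the end of each integration test turns a silent leak into an immediate failure at the test that caused it.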

I came across a strange situation with promises that I didn't think would happen, but it must be a timing-related issue.

I had some code like the following:

var defer = q.defer();
connection.query(query_string, function (err, rows) {
    if( err ) {
       defer.reject(err); // rejection will rollback and release in caller
    }
    // note: no return after the reject, so execution falls through
    // do another operation on connection and return to caller
    defer.resolve(somedata);
});

return defer.promise;
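The subtlety is that reject() does not stop the callback: execution falls through and resolve() is called as well, and on the resolved path the caller never rolls back or releases the connection. Adding a return after the reject closes the hole. Here is a self-contained version of the fix, with a stub query function standing in for connection.query and a hand-rolled deferred in place of q.defer():

```javascript
// Stub standing in for connection.query so this runs standalone.
function query(queryString, callback) {
    if (queryString === 'bad sql') {
        callback(new Error('query failed'));
    } else {
        callback(null, ['row1', 'row2']);
    }
}

function runQuery(queryString) {
    // hand-rolled deferred with the same shape as q.defer()
    var defer = {};
    defer.promise = new Promise(function (resolve, reject) {
        defer.resolve = resolve;
        defer.reject = reject;
    });
    query(queryString, function (err, rows) {
        if (err) {
            defer.reject(err); // rejection will rollback and release in caller
            return;            // the missing piece: never reach resolve()
        }
        defer.resolve(rows);
    });
    return defer.promise;
}
```

With the early return, exactly one of reject/resolve fires, so the caller's rollback-and-release path is the only one taken on error.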


Saturday, December 19, 2015

Recently I had to reinstall my operating system and chose Windows 10. I had an issue with my NodeJS 3.x environment and switched to NodeJS 5.x. In doing so, I ran into issues with node-gyp again.

I am not sure of the order in which I installed everything, or whether the order even matters, but I had to install both the "Visual Studio 2015 Community Edition" and the "Visual Studio C++ Build Tools". Together these solved the various compilation issues I was having.

Saturday, October 3, 2015

Installing libraries with node-gyp and NodeJS 4.1+ on Windows 8+

I have noticed that some node modules compile native code when installed on Windows. This may be because they detect a compiler environment, or because their install requires compilation against native libraries. Either way, you may inevitably run into node-gyp trying to compile. Assuming you have the correct tool chain (see the [node-gyp github site](https://github.com/nodejs/node-gyp)) you may still get an error during compile that looks like this:

C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.Cpp.Platform.targets(64,5): error MSB8020: The build tools for Visual Studio 2012 (Platform Toolset = 'v110') cannot be found. To build using the v110 build tools, please install Visual Studio 2012 build tools. Alternatively, you may upgrade to the current Visual Studio tools by selecting the Project menu or right-click the solution, and then selecting "Upgrade Solution...". [D:\aurelia-binding-issue-178\node_modules\browser-sync\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\utf-8-validate\build\validation.vcxproj]
gyp ERR! build error
gyp ERR! stack Error: `C:\Program Files (x86)\MSBuild\12.0\bin\msbuild.exe` failed with exit code: 1
gyp ERR! stack at ChildProcess.onExit (D:\Java\nodejs\node_modules\npm\node_modules\node-gyp\lib\build.js:270:23)
gyp ERR! stack at emitTwo (events.js:87:13)
gyp ERR! stack at ChildProcess.emit (events.js:172:7)
gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
gyp ERR! System Windows_NT 6.3.9600
gyp ERR! command "D:\\java\\nodejs\\node.exe" "D:\\Java\\nodejs\\node_modules\\npm\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild"
gyp ERR! cwd D:\aurelia-binding-issue-178\node_modules\browser-sync\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\utf-8-validate
gyp ERR! node -v v4.1.1
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok

This simply indicates that it cannot find the 2012 build tools. However, installing those tools will not help. Why? Because Node 4.1.1 is compiled against the 2013 toolset. Therefore you need to pass a flag to npm install so that it knows to use the 2013 version: --msvs_version=2013

This means you will now always have to run installs with the following syntax: npm install --msvs_version=2013
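To avoid typing the flag on every install, the node-gyp README also documents persisting it via npm config set msvs_version 2013 --global, which amounts to the following entry in your .npmrc:

```
msvs_version=2013
```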

Update

Monday, October 20, 2014

Webstorm supports Open Source projects

Since all of my open source projects are turning to HTML-5/JS based architectures, I started looking into platforms for developing stamp-webservices 4.0, which will be on the NodeJS platform. I am also supporting ng-scrolling-table, which is AngularJS based. I had previously used Netbeans 8.0.X for developing ng-scrolling-table; however the lack of NodeJS debugging and support for many NodeJS capabilities forced me to look elsewhere. I tried Eclipse again, but as usual with Eclipse, it either works great or not at all. For me it was the latter. I had been using MS Visual Studio 2013, however I found that platform rather "annoying" (to say the least: lots of odd key sequences, awkward scrolling in files, and slower than I'd like for pure JS tools).

This led me to look at Webstorm. I had always avoided it because it seemed like just one more IDE tool, was a commercial product, and I wanted to support the Netbeans/Eclipse initiatives. But after one evening of using the trial version I was impressed! Not only was I able to do full HTML-5 debugging in it, I was able to launch my NodeJS server, debug content, have it recognize when node was running and ask to restart (when switching between run and debug modes), and execute all of my Mocha integration tests from the IDE with full stack trace link-through. I was sold. I also noticed that they offer a program to support Open Source projects (both of mine qualify) and provide a free license key to such projects. Superb, and I may now consider discussing with my manager obtaining a license for work-related activities. Nice to see a company supporting the open source community (which all these companies use internally).

Wednesday, August 27, 2014

Playing flac files in Windows Media Player (WMP)

If you are like me, you don't have a lot of tools for playing video and music files. You use a few simple tools and keep it simple. In my case I primarily use WMP (Windows Media Player) on Windows 8.X and sometimes VLC Player (esp. on my Linux systems).

If you try to play .flac files in WMP, they will not play because the codec is not available. No worries, as there is a great project I found hosted on xiph.org that provides these codecs for the Windows environment.

Saturday, October 12, 2013

Writing Selenium Tests in a more DSL like way

I noted in a previous blog post that I was starting to write tests in a more fluent, DSL-like manner. I thought I would illustrate this. Here is an example of a test that does the following (assuming you are logged in on the home screen):

  1. From the navigator (think of it as a button bar), click "Create wantlist"; this will launch a create wantlist editor and return its solvent.
  2. Fill in the fields and check for certain conditions (such as whether the form is valid or not).
  3. Hit the "save" button.

I do not show the additional steps of verifying the item that got created in the table.

public void create() {
    Navigation nav = new Navigation(config);
    WantlistEditor editor = nav.create(Navigation.CreateAction.WANTLIST);
    String id = generateUnique("");
    editor.invalid()
            .set(factory.get(StampAttribute.Country, Country.BASUTOLAND))
            .set(factory.get(StampAttribute.Denomination, "2d"))
            .set(factory.get(StampAttribute.Description, "scarlet (\"Tail\" flaw)"))
            .set(factory.get(StampAttribute.Catalogue, Catalogue.STANLEY_GIBBONS))
            .set(factory.get(StampAttribute.CatalogueCondition, Condition.MINT_NG))
            .set(factory.get(StampAttribute.CatalogueNumber, "193a-" + id))
            .set(factory.get(StampAttribute.CatalogueValue, 2.25))
            .valid()
            .save()
            .success();
}



One thing to point out is that when I call a method like valid() or invalid(), I am not returning a boolean from the function (in the past it would have been an isValid() style check). Instead, I am coding the test as I expect it to behave, and if it does not behave in this manner it will fail at that point. Likewise, a method like success() is programmed to look for the closure of the editor as marking a successful create.

As mentioned earlier, I am using an attribute factory to get the attribute. An attribute is a nifty class I created which holds the value that was set (so I can verify it), states what the input type is (selection, checkbox, text etc.), and can return a selenium By object that I can use for selection.

One large disadvantage of the chaining approach is exceptions. Unless your methods report errors well, it is difficult to trace which line causes the error. This is a downside of DSL interfaces and means you need to think harder about erroneous conditions and how best to log/print them to the console/log files.

Selecting ChosenJS values with Selenium

I had previously blogged about selecting GXT selections with selenium, and figured it might be worthwhile to post what I am doing in my new AngularJS app, which uses ChosenJS as my selection widget of choice (I wrapped this in a simple directive).

For my selenium tests, I try to write most of the code in DSL format in an object termed a Solvent.  We use this term at my place of employment; I think it is a good term and have continued to use it.  It allows me to encapsulate the underlying implementation while presenting a functional level to the selenium tests.  For form controls, I use a generic Attribute class that, along with an AttributeHandler, knows how to process any attribute type (for example, checkboxes, textfields, textareas, selections, date pickers etc.).  Thus we come to the "code" of how I handle ChosenJS selections.

First off, let's look at what the generated HTML looks like for the AngularJS control (this represents a drop-down for my stamp conditions):


<select class="condition-selector ng-pristine ng-valid ng-valid-required" name="condition" data-jq-chosen data-options="conditions" data-placeholder="Select a condition" data-ng-model="model.activeCatalogueNumber.condition" data-ng-required="true" data-ng-change="validate()" tabindex="-1" required="required" style="display: none;">
 <option value="? string:0 ?"></option>
 <!-- ngRepeat: cond in conditions -->
 <option data-ng-repeat="cond in conditions" value="0" class="ng-scope ng-binding">Mint</option>
 <option data-ng-repeat="cond in conditions" value="1" class="ng-scope ng-binding">Mint (NH)</option>
 <option data-ng-repeat="cond in conditions" value="2" class="ng-scope ng-binding">Used</option>
 <option data-ng-repeat="cond in conditions" value="3" class="ng-scope ng-binding">Cancel to Order</option>
 <option data-ng-repeat="cond in conditions" value="4" class="ng-scope ng-binding">Mint No Gum</option>
</select>


Mostly this is Angular, although you may notice the data-jq-chosen and data-options attributes, which are instructions for my directive (one indicates it should turn this drop-down into a ChosenJS control, and the other that it should watch/bind to the conditions variable on the controller). If you are not using AngularJS this probably doesn't mean too much to you.

If ChosenJS were not present, we could simply operate on this HTML, use an element selector like By.name("condition"), click it, and choose the appropriate <option> tag matching the desired value. But when ChosenJS is present, it adds additional HTML into the DOM. Specifically, for the above example we have this:

<div class="chosen-container chosen-container-single chosen-container-single-nosearch" style="width: 121px;" title="">
 <a class="chosen-single chosen-single-with-deselect" tabindex="-1"><span>Mint</span><abbr class="search-choice-close"></abbr></a>
 <div class="chosen-drop">
  <div class="chosen-search"><input type="text" autocomplete="off" readonly="" tabindex="21"></div>
  <ul class="chosen-results">
   <li class="active-result result-selected ng-scope ng-binding" data-option-array-index="1" style="">Mint</li>
   <li class="active-result ng-scope ng-binding" data-option-array-index="2" style="">Mint (NH)</li>
   <li class="active-result ng-scope ng-binding" data-option-array-index="3" style="">Used</li>
   <li class="active-result ng-scope ng-binding" data-option-array-index="4" style="">Cancel to Order</li>
   <li class="active-result ng-scope ng-binding" data-option-array-index="5" style="">Mint No Gum</li>
  </ul>
 </div>
</div>


And it is this fragment that we need to interact with (since the chosen-container floats on top of the select DOM element). This ChosenJS DOM will always appear as a peer to the select DOM element. In order to get this to work in Selenium we need to:

  1. Click the chosen-container to invoke the drop-down (this is unpopulated until shown, for performance reasons).
  2. Click the appropriate li element.

I do this with some code in my AttributeHandler: when I do a switch statement on the control type, the logic I have for the select control (which in my case is always ChosenJS) looks like the following:

case Select:
    elm = parent.findElement(attr.asBy());
    String chosen = "../div[contains(@class,'chosen-container')]";
    elm.findElement(SolventHelper.ngWait(By.xpath(chosen + "/a"))).click();
    elm.findElement(SolventHelper.ngWait(By.xpath(chosen + "/div[contains(@class,'chosen-drop')]/ul/li[.=\"" + attr.getValue() + "\"]"))).click();
    assertEquals(attr.getValue(), elm.findElement(SolventHelper.ngWait(By.xpath(chosen + "/a"))).getText());
    break;


In the code fragment above, the parent is passed to the AttributeHandler as a WebElement. This is the container where the form element is located; it could be a fieldset, a panel div, a popup window etc. It is just a starting point so I can find the appropriate element in close proximity. The ngWait() method comes from the Protractor project (obviously it is a Java version of it) and simply waits for Angular to complete its digest (the fluent angular project uses this and you can see the source here).

You may also notice I do an assertion check just to ensure that the value got set on the link. I may ultimately remove this.

But wait, I see you are using xpaths... that is so last year... You are right. CSS selectors are definitely a better way to go. Except... there is no CSS selector for a parent. I have found I am starting to use a mix of CSS selectors and XPaths. I suppose I could have obtained a WebElement for the "chosen" variable and then simply used CSS selectors from that. Overall, what is nice about this is that I have yet to run into the timing problems I had with GXT and EXT-JS selectors. Also, the li tags will contain ALL the values in a scrollable window, so selenium will select the right value even if it is not on the screen.

I have been a little slow putting together a regression suite in Selenium for my redesigned Stamp Web application written in AngularJS, but I want to upgrade to 1.2.0 and, in order to do so, wanted to have about 15-20 critical test cases covered first. So far my experience has been very positive and the number of timing issues is almost negligible (the only issue I have had is that I maximize the application on launch, and sometimes the app will start the login before it is maximized, which causes some issues). Other than that, I have yet to use a single pause() method, which is really nice.




Thursday, October 3, 2013

Arquillian with Glassfish 4 - early trials

With the release of Glassfish 4 and J2EE version 7, I decided to work at upgrading my stamp web services application to the platform. An important part of my project is my 580-odd tests, which include approximately 300 or so Arquillian tests. Since my application takes advantage of container-managed transaction contexts and persistence providers, it is not sufficient for me to simply test these in an out-of-container context. After all, how else can I verify that listeners or cascading relationships clean up correctly on a delete operation, preventing orphans?

Problem with Latest Arquillian


In making the shift to Glassfish 4, I also upgraded the versions of Arquillian and ShrinkWrap I was using and ran into what was essentially classpath hell. I figured my adventures were noteworthy enough to blog about.

First off, once I had it all working, I ran into a bug under Arquillian 1.1.1.Final. The exception was

WELD-001456 Argument resolvedBean must not be null

It turns out there is an existing bug filed for this. In order to work around it, I needed to drop back to Arquillian 1.0.3.Final. Hopefully it will get fixed soon and I can move up.

Updates to pom.xml


So, moving on to what I changed (that is why you came here, right?), let's start with the pom.xml.

  1. Since J2EE 7 uses a fairly recent version of Jersey for JAX-RS compliance, I changed my included Jersey dependencies from compile to provided scope. (The exception was jersey-client, used in my tests, which I might not need anymore with the latest JSR specs.) I tied the version to the one in Glassfish 4 for compatibility (1.17).
  2. I didn't think I would need eclipselink when testing with embedded glassfish, but this turned out to be needed. For this I used the 2.5.0 version, matching Glassfish 4. Additionally, since I do not have a compile-time dependency, I changed the scope from provided to runtime.

            <dependency>
                <groupId>org.eclipse.persistence</groupId>
                <artifactId>eclipselink</artifactId>
                <version>${eclipselink.version}</version>
                <exclusions>
                    <exclusion>
                        <artifactId>commonj.sdo</artifactId>
                        <groupId>commonj.sdo</groupId>
                    </exclusion>
                </exclusions>
                <scope>runtime</scope>
            </dependency>

  3. I was previously including the dependency org.eclipse.persistence.moxy to support MOXy in my application, but since this is now the default provider for Glassfish 4 JAXB processing (for JSON), I was able to remove it.
  4. For the glassfish-embedded-all artifact, I switched to use 4.0.
  5. Finally, in the surefire-plugin configuration, I had some classpath dependency exclusions. I had to add CDI to this, or else Arquillian was bombing out (one of the JARs was bringing in an incompatible version of cdi-api). I found this using the netbeans maven plugin and its graphical dependency display, which really helps. For reference, here are the exclusions:
    
    <classpathDependencyExcludes>
      <classpathDependencyExcludes>javax.servlet:servlet-api</classpathDependencyExcludes>
      <classpathDependencyExcludes>org.apache.felix:javax.servlet</classpathDependencyExcludes>
      <classpathDependencyExcludes>javax:javaee-web-api</classpathDependencyExcludes>
      <classpathDependencyExcludes>javax.enterprise:cdi-api</classpathDependencyExcludes>
      <classpathDependencyExcludes>org.jboss.spec:jboss-javaee-web-6.0</classpathDependencyExcludes>
    </classpathDependencyExcludes>
    
  6. Updated the version of ShrinkWrap to 2.0.0.


I would provide the full pom.xml, but it is quite large (currently 508 lines), so I only included these snippets (also, pygments.org is down so I cannot get nicely formatted code).

Summary


Overall, this was a little more painful than I expected, although most of the pain came from the changes to the ShrinkWrap and Arquillian versions that I thought I needed for GF4. What compounded this was the sneaky, inappropriate version of CDI that was causing all the dependency injection to fail.

 

Friday, November 9, 2012

J2EE Application with MOXy

As part of refactoring my existing J2EE application using JAX-RS, I wanted to try to reduce the custom JAXB code in place. Previously I was using Jackson and Jersey for JSON serialization using the "Mapped" notation. This involved creating my own JAXBContext object, and I was required to create some special mapping processing so that I could map an attribute as either an array, a non-string, or another value.

Since I was going to use the latest version of eclipselink with the refactoring (currently EclipseLink 2.4.1), I decided to check out MOXy, which I had always had my eye on. My goal was to avoid the mapping approach, which was inherently buggy since it required me to remember to add any field to a property file for proper serialization, and I could not serialize the same named field differently for different types (i.e. field X was always of type Y). MOXy seemed like it might be able to address these concerns.

To leverage MOXy in your environment, you need to include it as a provider. In my case I included it in my JAX-RS Application class and set some configuration properties on it. Here is an example of my current application (note I am using package scanning for JAX-RS resources/providers in Jersey):


public class WebApplication extends PackagesResourceConfig {

 @SuppressWarnings("unused")
 private static final Logger logger = Logger.getLogger(WebApplication.class.getName());

 public WebApplication() {
  super("org.javad.web.providers","org.javad.preferences.model.resources","org.javad.stamp.model.resources");
 }

 @Override
 public Set<Object> getSingletons() {
  MOXyJsonProvider moxyJsonProvider = new MOXyJsonProvider();

  moxyJsonProvider.setAttributePrefix("");
  moxyJsonProvider.setFormattedOutput(true);
  moxyJsonProvider.setIncludeRoot(false);
  moxyJsonProvider.setMarshalEmptyCollections(true);
  moxyJsonProvider.setValueWrapper("$");

  Set<Object> set = new HashSet<Object>();
  set.add(moxyJsonProvider);
  return set;
 }
}

You can read more on the MOXy settings here.

You then need to reference your Application in your web.xml as part of your JAX-RS resource mapping:
<init-param>
    <param-name>javax.ws.rs.Application</param-name>
    <param-value>org.javad.web.WebApplication</param-value>
</init-param>
There are other ways to provide the MOXy configuration, but this is the one I chose, and it seemed to fit well for my environment (since I was using a PackagesResourceConfig from Jersey for finding my resources).

Since several of my JPA entities included a foreign reference to another JPA entity, I wanted only the ID of the foreign entity to be serialized. Previously I had been using the @XmlIDRef to drive this for serialization, but I had difficulty using this with MOXy. Instead, I needed to declare an XML Adapter to do the serialization and deserialization of the field. This is done using the @XmlJavaTypeAdapter annotation. For my album entity, the foreign key reference to the stamp collection entity looks like this:
@XmlJavaTypeAdapter(StampCollectionRefAdapter.class)
@XmlElement(name = StampFormConstants.STAMP_COLLECTION_REF )
@ManyToOne(optional=false,fetch=FetchType.EAGER)
@JoinColumn(name="COLLECTION_ID",nullable=false)
private StampCollection collection;
In the StampCollectionRefAdapter I transfer the StampCollection entity to an ID and vice versa (when deserializing from a JSON posting). One issue I ran into was that I could not inject the StampCollectionService into this class. This is because these classes are initialized by MOXy (it would appear) prior to the CDI and EntityManagerFactory initialization, so I needed to obtain the EntityManagerFactory that was used in my initialization servlet listener (this sets up a few configurations in my environment). I had tried declaring this class as a @Stateless EJB, but did not have any luck (likely because it is constructed explicitly). I may look into this further later, but for now the only way I could get access to the service to transfer the ID back to the physical entity was to leverage the entity manager factory from the initialization listener through a static context. (Note: this might fail if I tried to perform any transactions against it, but it should work for normal queries.)
public class StampCollectionRefAdapter extends XmlAdapter<StampCollectionRefAdapter.StampCollectionRefType, StampCollection> {

 public StampCollectionRefAdapter() {
  super();
 }
 
 @Override
 public StampCollection unmarshal(StampCollectionRefType v) throws Exception {
  StampCollection collection = null;
  EntityManagerFactory emf = SessionInitializer.getEntityManagerFactory();
  if( emf != null ) {
   EntityManager em = emf.createEntityManager();
   if( em != null ) {
    collection = em.find(StampCollection.class, v.id);
   }
  }
  
  return collection;
 }

 @Override
 public StampCollectionRefType marshal(StampCollection v) throws Exception {
  StampCollectionRefType t = new StampCollectionRefType();
  t.id = v.getId().longValue();
  return t;
 }

 public static class StampCollectionRefType {
  @XmlValue
  public long id;
 }
}

I am having a small issue with arrays of entities. I have a serializable class which is generic and has two fields:
  • total - representing the total number of objects (not necessarily the items returned)
  • items - the List of entities to be returned
The first oddity was that I could not define a getItems() method on the abstract class; instead I had to declare it on the concrete class that implements the generic type. The reason is that I want my array of items to be named after the plural of the actual items contained within it. This requires me to declare the collection similar to the following:
@XmlRootElement(name="list")
@XmlAccessorType(XmlAccessType.PROPERTY)
public class AlbumModelList extends AbstractModelList<Album> {
 @Override
 @XmlElement(name="albums")
 public List<Album> getItems() {
  return items;
 }
}
The resulting JSON will thus look as I intended:
{
    "total": 5,
    "albums": [
        {
            "id": "2000",
            "name": "Test Collection X",
            "countryRefs": [],
            "stampCollectionRef": 1
        },
        {
            "id": "1",
            "name": "Test Collection x2",
            "countryRefs": [],
            "stampCollectionRef": 1
        },
        {
            "id": "2",
            "name": "Test Collection x4",
            "countryRefs": [],
            "stampCollectionRef": 1
        },
        {
            "id": "3",
            "name": "Test Collection x5",
            "countryRefs": [],
            "stampCollectionRef": 1
        },
        {
            "id": "4",
            "name": "Test Collection x6",
            "countryRefs": [],
            "stampCollectionRef": 1
        }
    ]
}
But if I use a content type of XML (which I am not using very often), the array does not show up as an <albums/> element that contains <album/> nodes, but instead looks like the following:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<list>
    <total>5</total>
    <albums id="2000">
        <name>Test Collection X</name>
        <stampCollectionRef>1</stampCollectionRef>
    </albums>
    <albums id="1">
        <name>Test Collection x2</name>
        <stampCollectionRef>1</stampCollectionRef>
    </albums>
    <albums id="2">
        <name>Test Collection x4</name>
        <stampCollectionRef>1</stampCollectionRef>
    </albums>
    <albums id="3">
        <name>Test Collection x5</name>
        <stampCollectionRef>1</stampCollectionRef>
    </albums>
    <albums id="4">
        <name>Test Collection x6</name>
        <stampCollectionRef>1</stampCollectionRef>
    </albums>
</list>


I am not sure what I am missing here, but I need to address it at some point, along with modifying my entity IDs from wrapper types to primitives.




EclipseLink logging with Arquillian in Eclipse

I have been doing some experimentation with Arquillian for testing my refactored J2EE application in embedded glassfish. I really like it so far, but I ran into an issue where my test-persistence.xml file did not seem to provide anything more than SEVERE level logging from eclipselink. I had tried setting the property
<property name="eclipselink.logging.logger" value="DefaultLogger"/>

to no avail. I also set this value to JavaLogger without results. I finally stumbled across an answer while looking into changing the embedded glassfish http port, which is to use the following value:

<property name="eclipselink.logging.logger" value="org.eclipse.persistence.logging.DefaultSessionLog"/>

This allows me to see the SQL statements (assuming you set the other logging properties) in both Eclipse and through maven command-line test execution.
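For reference, the main "other logging property" I mean is the level; the logger alone does not lower the threshold, so to actually see SQL statements the persistence unit also needs something like:

```xml
<property name="eclipselink.logging.level" value="FINE"/>
```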