Thursday, December 01, 2011

Ad-Hockery functional testing : UI Elements and Selenium experience

This post is a response to Rob's post on functional testing; it just became too long to post as a comment...

At my job, we've used the UI Elements approach pretty extensively, though on a new project we're now moving away from it.

Conceptually, I like the idea that the location logic sits in the same tier as the elements being located. The support in Selenium IDE (S-IDE) is also quite impressive (e.g. code completion, documentation), and the built-in support for unit testing the locators is a nice touch.

Now, the downsides....

  • UI Elements development / future : although it is technically a part of Selenium, in practice UI Elements support doesn't actually evolve in lockstep with it. On a conference call/presentation w/ one of Selenium's maintainers a few months ago, when I asked about UI Elements, he indicated that they might be moved out of the core (he didn't seem particularly fond of them; I almost had the feeling he was tempted to just drop the feature). There isn't much of a community using them in the first place, so if something breaks it can remain broken for a while until someone notices (when that happened to us, the turnaround was pretty quick, although I'm not sure whether that's typical or just because the author of UI Elements for Selenium was a former employee of ours).

  • The skills gap : the fact that the UI element definitions live outside the main "test case authoring" environment (which in our case is Groovy) makes them another distinct skill that team members (often QA folks) need to acquire. When it comes to Javascript, being a user is easy; being an author who can sling Javascript and write nifty unit tests (typically not a QA skillset) inside of the UI elements is not. At least in our case it led to a bunch of not-very-well-maintained UI Elements, w/ most of the unit tests commented out and the knowledge of writing UI Elements within the QA team considered almost a Dark Art.

  • Development cadence issues : although the usage of UI elements in a test case seems very sexy, the development cadence for the UI Elements themselves is kinda slow (this could be my own ignorance as well). Basically, if you wanted to enhance a UI element, you would write some Javascript to set up the UI Element definitions and then close S-IDE and open it up again. If your unit test failed, you would get a bunch of popups about the failing tests, the UI Elements would be disabled, and you're back to editing the Javascript file until the unit test passes. Only after that can you try using the UI element in S-IDE or in your test case. Finally, if you were using selenium-server to run your "real" test cases (like we did), you need to restart Selenium server (RC) so that it can pick up the updated UI Elements.

  • UI Element organization : this could be an artifact of our own failings w/ UI Elements and not necessarily a problem w/ the framework itself; however, the lack of a well-defined approach to structuring them has led us into a massive set of pagesets containing UI elements for a whole bunch of different pages. Thus, when you try to pick a UI element there isn't a good organizing principle ("Oh, yeah, I'm on the Login page, lemme see what I can locate there") - instead we have something along the lines of "I'm in feature 'foo', let me see which one of the several hundred UI elements in fooFeaturePageSet might fit".

Although the Page Object approach is not a silver bullet, it addresses a bunch of the issues above:
  • Skillset - test authors use the same skills to write the abstraction (page objects) that they use to write the test cases
  • A clear organizing principle - people get it, pages have stuff in them, putting the location logic for an element on a page inside of the corresponding page object makes sense. Then, using that is just a method call
  • Deployment / Cadence issues - there is no longer the impedance mismatch of "I want to run this test case - did I re-deploy the UI elements it depends on?". You just run the test; the testing framework recompiles the modified page objects and the test case runs - no need to do anything w/ Selenium server or anything else.
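To make the skillset point concrete, here is a minimal sketch of the pattern - all the names are invented, and Browser here is just a stand-in for whatever actually drives the browser (not the real Selenium API). The point is that each page object owns its locators, and a test case is just method calls:

```scala
// Stand-in for the browser-driving API (hypothetical, for illustration only)
trait Browser {
  def typeText(locator: String, text: String): Unit
  def click(locator: String): Unit
}

class LoginPage(browser: Browser) {
  // the location logic lives with the page it belongs to
  private val usernameField = "id=username"
  private val passwordField = "id=password"
  private val loginButton   = "css=button.login"

  // a test case just calls this - no locator knowledge needed
  def logIn(user: String, password: String): Unit = {
    browser.typeText(usernameField, user)
    browser.typeText(passwordField, password)
    browser.click(loginButton)
  }
}
```

With something like this in place, reorganizing or fixing locators is an ordinary refactoring in the same language the tests are written in, and the page object itself can be unit tested against a fake Browser.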
Anyway, just my 2c on the subject.

Wednesday, November 30, 2011

My Scala (and Tapestry 5) experience

Since a couple of coworkers asked me about Scala in the last few weeks, I thought this might be interesting to read through (something that emerged over the last few days).


I have obviously not used Scala to the extent that the guy describes, but I can attest to how annoying the constant translation between Java and Scala is (especially since my small app was using Tapestry 5, which is obviously a Java framework). The conversion between the two was made worse (at least for me, a total Scala newb) by the following:

  • Classes with the same names but completely or subtly different usages and intents. The Scala library reuses class names from the Java library (java.lang.Long and scala.Long), the default imports pull them in, and there are some magical conversions that occur between the eponymous types. On a few occasions I was totally baffled about something not working, only to find in the end that the code was getting the wrong type. Thus, in a bunch of my classes, I ended up explicitly having to import the Java classes that the framework expected to work with, e.g.:
         import java.util.{List => JList}
         import org.slf4j.Logger
         import scala.collection.JavaConversions._

         class FooPage {
             private var catPieces: JList[ArtPiece] = _

             // explicitly declaring the return types as the Java types
             def onPassivate(): JList[String] = {
                 // and using the Scala-provided conversions
                 seqAsJavaList(List(category, subCategory))
             }
         }
    and then explicitly use the Java-specific types. A similar but different situation exists with java.util.List and the Scala List classes (e.g. scala.collection.immutable.List): although they have the same name, they have a completely different purpose (the Scala List is not intended to be created and manipulated like the Java list); the closest Scala equivalent of the Java list is the recommended ListBuffer.
  • Null handling - because I was interacting w/ a Java framework, there was an expectation that nulls are OK, and in different places the framework expects methods to return nulls in order to behave in certain ways. Scala goes for the whole Option pattern (where you aren't supposed to use nulls at all) and has some conversions (that I obviously don't fully understand) between null and these types. However, because of the interaction w/ the Java framework, I had to learn how to deal with both. It kinda sucked.
  • Tapestry 5 and Scala interactions - because Tapestry 5 pushes the envelope on being a Java framework, w/ a whole bunch of annotation processing, class transformations, etc., in some cases there were clashes between the T5 approach and Scala. In some respects, Tapestry 5 manages to be a respectable and succinct Java framework by adding a whole bunch of metaprogramming features, which, when used with Scala, make the Scala code less attractive, e.g.:
     • Page properties that would otherwise just be private fields in regular Tapestry 5 now have to be declared as private fields and explicitly initialized. If you didn't declare them as private, T5 would complain (pages can't have non-private members, as they are managed by T5), e.g.:
         class Foo {
             private var logger: Logger = _
             private var pm: PersistenceManager = _
         }
     • Sometimes the T5 and Scala approaches seemed to clash in ways that make things complicated. For example, in the persistent classes I often annotated the fields w/ @BeanProperty (so that Scala generates proper getters/setters for those fields):
         import scala.reflect.BeanProperty
         import javax.jdo.annotations.Persistent

         class PersistentFoo {
             @Persistent
             @BeanProperty
             var title = ""
         }
       Yet, when I accidentally did the same for some page properties, the application would start failing at weird points (on application reload with Tapestry's live class reloading) until I replaced the approach in pages w/ Tapestry's @Property annotation (although they're supposed to do the same thing, it's quirky w/ @BeanProperty):
         import org.apache.tapestry5.annotations.Property

         class FooPage {
             @Property
             private var category: String = _
         }
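As an aside on the conversions bullet above: the JavaConversions machinery from that era has since been deprecated, and in current Scala the same interop lives in scala.jdk.CollectionConverters. A minimal sketch of both the collection conversion and the Option-around-null idiom (InteropSketch and its method names are made up for illustration):

```scala
import scala.jdk.CollectionConverters._ // modern replacement for JavaConversions

object InteropSketch {
  // Scala List -> java.util.List, what seqAsJavaList did back then
  def toJava(xs: List[String]): java.util.List[String] = xs.asJava

  // wrapping a possibly-null value coming out of a Java API in an Option
  def orDefault(raw: String): String = Option(raw).getOrElse("(none)")
}
```

Option(x) is null-safe (it yields None for null), which is the usual bridge when a Java framework hands you nulls.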

When I was working on the app, a few times I had to just stop for a day because I couldn't figure out how to do something massively simple (e.g. how to succinctly join a list of Strings into a comma-separated string - stuff that would have taken me 30 seconds in Java and 2 seconds in Groovy, while the proposed Scala solutions seemed like massive overkill). I originally started out wanting to write some tests in Scala for this app, because I thought, "wouldn't it be nice to have something a little more flexible and less verbose than Java, but that still has nice static typing?". Later I decided to try the whole Scala+T5 approach, and I have to admit I was pretty mad at myself when I would get stuck.
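(For the record, the string join I was after does turn out to be a one-liner once you stumble onto mkString:)

```scala
// joining a list of Strings into a comma-separated String
val joined = List("modern", "abstract", "art").mkString(", ")
// → "modern, abstract, art"
```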

Obviously, many of my problems described above were due to my own weak Scala-foo (e.g. I had read through at least 2-3 books in order to be brave enough to try this, only to learn that until I try things hands-on, they don't stick too well), and other issues were due to the interaction w/ the specific Java framework that I chose (Tapestry 5). Yet, in some ways, the experience was somewhat disappointing - having worked w/ Groovy for the last few years, there is a massive difference in the approaches of the two languages. Groovy would often sacrifice some "internal beauty" in order to make a Java developer's life sweet and pleasant, e.g.:

  • Joining a list of strings
       ["a", "b", "c"].join(", ")
  • String formatting using $ inside of strings
       "Blah blah $fooVar"
  • Null-safe dereference
       foo?.bar
... whereas Scala somehow gets stuck in an ideological mode instead.

One part of my setup that worked very well, and that I enjoyed quite a bit, was the combination of continuous compilation and Tapestry's Live Class Reloading. Whereas for prior pure-Java Tapestry projects I had to rely on IDE magic to do some compile-on-save so that Tapestry could reload the changed classes, w/ the Scala setup it was much nicer. I set up a Maven project w/ the Scala Maven plugin and then kick off the scala:cc goal to make it compile the changed page classes into my project. Thus, I had a completely IDE-independent setup that gave me a live-reloading experience on par with (and possibly beyond) the reloading experience of Grails.
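For reference, the relevant bit of the pom looked roughly like this (a sketch from memory - the plugin coordinates and configuration may well differ across plugin versions, so treat it as a starting point rather than a recipe):

```xml
<plugin>
  <groupId>org.scala-tools</groupId>
  <artifactId>maven-scala-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With that in place, `mvn scala:cc` sits in a terminal watching the sources and recompiling on every change, which is what Tapestry's class reloading picks up.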

In the end, after I managed to work through some of the issues described above, it ended up being a pretty reasonable setup and I was able to make pretty decent progress in getting the app out the door (for my wife's birthday). At the same time, I wasn't really able to leverage any cool Scala features that would magically boost my productivity or make the codebase significantly cleaner or smaller (in some respects, the Scala-based code feels more verbose because of all the conversions and casting into Java types). I feel that if I knew more about Scala and was more knowledgeable about Tapestry internals, I might be able to write a Tapestry 5 - Scala adapter layer that would plug into some of T5's extension points to make Scala work more naturally with Tapestry (e.g. working w/ Scala lists in views, different handling of null values, etc.). As a learning experience, I learned a lot - both things that were interesting and useful (a bit of functional programming, Java/Scala integration) and some things that I really didn't want to know that much about (how Scala and T5 munge the Java classes to make things tick).

In any event, my advice to people who'd like to try this kind of integration is to allow yourself plenty of time for learning and experimentation w/ Scala, and not to give up too early (as I was almost ready to do on a few occasions). Fanf's blog has a few entries and a project on GitHub that are an excellent starting point.

Wednesday, June 15, 2011

Grails, Web Flows, and redirecting out

If you read the Grails Web Flow documentation it all seems pretty straightforward - start/end states, transitions, actions, events - it's all good. However, just like any other technology that seems like magic, there is always some kind of a catch once you start using it.

One of the little 'gotchas' that I ran into was how to properly complete the flow. Reading the documentation, it would seem easy - at the end of your wizard/flow, you just redirect to a different controller+action and it's all good. It all makes sense - oftentimes the wizard walks through multiple steps, collects some information, and when it's all done (you save your brand new Foo), you can just redirect to the details page for Foo (e.g. /foo/show/1).

Well, you'd think it would be that easy. Not so fast...

The Grails Web Flow documentation is kinda deceptive like that. It shows you a simplistic example that works; however, when you try to do something more realistic, you start getting into trouble. The example from the docs looks like this:

  def fooFlow = {
       startState {
             on("next").to "endState"
       }

       endState {
             redirect(controller: 'foo', action: 'bar')
       }
  }

The catch is that in their examples, the redirect is to a static URL that doesn't take any parameters. The problem comes up when, in the end state, you try to pass in some parameter from the flow, which is often the case: at the end of the flow you want to display the details page, e.g. redirect(controller:'foo', action:'bar', id:flow.fooId). The problem manifests itself in a weird way - the web flow stores a particular value of the flow property (e.g. flow.fooId) under conditions that I couldn't figure out, and even though your current wizard might have stored a particular value in the current flow, for whatever reason it ends up redirecting to a value stored from a previous flow. So the wizard 'kinda' worked, in that it redirected to a details page at the end of the wizard, but a large percentage of the time it would redirect to the wrong details page. From what I could gather, the issue is that in the end state the redirect cannot use any values from the flow, session, or flash, and as a result uses some cached value (possibly from the first flow execution).

The solution to this (which is somewhere on the Grails mailing lists) is as follows: add an explicit empty "end" state (including an empty end.gsp to match the end state name), and issue the redirect from the penultimate state that transitions into it, e.g.:

   def fooFlow = {
        startState {
             on("next").to "beforeEnd"
        }

        beforeEnd {
              action {
                   redirect(controller: 'foo', action: 'bar', id: flow.fooId)
              }
              on("success").to "end"
        }

        end {
           /** note that this requires an end.gsp to be present
               in the flow's view subdirectory, but it never gets
               rendered after the redirect **/
        }
   }

Now, with this trick at hand, the end.gsp never gets rendered, and client browsers do get redirected to the details pages you want to display, outside of your web flow.

As a more Web Flow centric alternative, you could always store the relevant object (Foo) inside the flow and display any relevant details about the object in the end state (end.gsp)

Sunday, April 03, 2011

NetBeans Database Explorer API and databases in NetBeans

At the recent NetBeans Platform Certified Training organized by Visitrend, we were discussing how to work w/ the built-in database functionality. NetBeans ships w/ pretty decent database/SQL functionality out of the box - you can connect to any JDBC-compliant database, add your drivers, sling queries, edit the results. It even has code completion for SQL queries - up until SQL Server 2005 it provided better support for SQL code authoring than the built-in management tools (and it is still better than the standard MySQL console).

Now, it turns out that for data-driven applications, it is a pretty common occurrence that the users need to connect to a database. Sometimes it makes sense to hide the details of the database; at the same time, when you're dealing with sophisticated users who have intimate knowledge of the underlying database schema and need to be able to deal with the underlying data, hiding the fact that they're dealing with a database just doesn't make sense. In our case, we have a team of QA Engineers who need to be able to look into all aspects of the database behind the application, so the best a tool can do is make the setup of, and access to, the database as easy and transparent as possible.

Thus, to solve this problem, my users need the following:
1. Make sure that the IDE has the proper drivers set up to access our test databases
2. Easy setup of the database connection with the details for a specific project/system under test

Automatically setting up JDBC driver

Unfortunately, Microsoft's SQL Server driver is not one of the JDBC drivers that ship with the IDE. A new user could just navigate to the "Drivers" node in the "Services" top component and walk through the wizard to register a new driver. This could certainly be an extra step in the list of "setup instructions" - but why should a user have to remember to do that if we can do it in a module? Thus, the first hurdle to overcome is to have a module that automatically registers the JDBC driver in the IDE:

1. First, we need to provide the MSSQL JDBC driver.

For that, I created a new Library Wrapper module. I wanted to mention this because for whatever reason when I tried providing the JDBC driver and the XML registration below in the same module, it failed to find the JDBC driver.

In the general case, for a pure JDBC driver, this should be enough. However, in order to support Windows authentication, the JDBC driver needs to have a dll available on the path. In order to support jars that need native libraries, the native library needs to be placed in release/modules/lib, as indicated in the Modules API Javadoc.

2. Create a second module for the actual driver registration.

Add an XML descriptor for registering new drivers (named SQLServer2008Driver.xml). For MSSQL it looks like this:

<?xml version='1.0'?>
<!DOCTYPE driver PUBLIC '-//NetBeans//DTD JDBC Driver 1.1//EN' 'http://www.netbeans.org/dtds/jdbc-driver-1_1.dtd'>
<driver>
  <name value='SQLServer2008'/>
  <display-name value='Microsoft SQL Server 2008'/>
  <class value='com.microsoft.sqlserver.jdbc.SQLServerDriver'/>
  <urls>
      <url value="nbinst:/modules/ext/sqljdbc4.jar"/>
  </urls>
</driver>
I am not entirely sure of the semantics of the nbinst: prefix (it appears to resolve paths relative to the NetBeans installation/cluster directories, so the jar is found wherever the module ends up installed), but this works.

3. Add an entry into the layer.xml file of the second module to register the driver XML, along these lines (the Databases/JDBCDrivers folder is where the Database Explorer picks up driver definitions):

<folder name="Databases">
    <folder name="JDBCDrivers">
        <file name="SQLServer2008Driver.xml" url="SQLServer2008Driver.xml"/>
    </folder>
</folder>
Unfortunately, in the case of the MSSQL JDBC driver, just adding the dll to the module doesn't cut it - the SQL Server driver apparently depends on other DLLs, so sqljdbc_auth.dll actually needs to be in c:\windows\system32. Thus, bundling the dll with the module is not enough by itself; it needs to be copied into the windows\system32 directory. To do that, register a module installer:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.openide.filesystems.FileUtil;
import org.openide.modules.ModuleInstall;

public class MssqlDriverInstaller extends ModuleInstall {

    @Override
    public void restored() {
        File mssqlJdbcDll = new File(System.getenv("windir"), "system32\\sqljdbc_auth.dll");
        boolean foundDll = mssqlJdbcDll.exists();
        if (!foundDll) {
            FileOutputStream system32MssqlDll = null;
            InputStream bundledDll = null;
            try {
                system32MssqlDll = new FileOutputStream(mssqlJdbcDll);
                bundledDll = MssqlDriverInstaller.class.getResourceAsStream("sqljdbc_auth.dll");
                System.out.println("Copying sqljdbc_auth.dll to windows system32");
                FileUtil.copy(bundledDll, system32MssqlDll);
            } catch (IOException ex) {
                Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
            } finally {
                try {
                    if (system32MssqlDll != null) {
                        system32MssqlDll.close();
                    }
                } catch (IOException ex) {
                    Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
                }
                try {
                    if (bundledDll != null) {
                        bundledDll.close();
                    }
                } catch (IOException ex) {
                    Logger.getLogger(MssqlDriverInstaller.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    }
}

Automatically connecting to the database

Now, the last step is to automate the creation of the database connection node, since we can now be sure that the IDE has the right driver to connect to the database (the APIs used below are from the Database Explorer API, org.netbeans.api.db.explorer):

public void createProjectConnection(DatabaseConfig dbc) {

        DatabaseConfig dbConfig = dbc;
        if (dbConfig == null) {
            dbConfig = DatabaseConfig.getDefault();
        }

        JDBCDriver sqlSrvDrv = findSqlServerDriver();

        if (sqlSrvDrv != null) {
            try {
                DatabaseConnection dbConn = createDbConnection(dbConfig, sqlSrvDrv);

                final ConnectionManager connMgr = ConnectionManager.getDefault();
                DatabaseConnection foundConn = findSameConnection(dbConn);
                if (foundConn == null) {
                    foundConn = dbConn;
                    // register the brand-new connection with the Database Explorer
                    connMgr.addConnection(dbConn);
                }

                final DatabaseConnection dbConn2 = foundConn;
                RequestProcessor.getDefault().post(new Runnable() {

                    public void run() {
                        try {
                            connMgr.connect(dbConn2);
                        } catch (DatabaseException ex) {
                            Logger.getLogger(Ats3ProjectDataService.class.getName()).log(Level.SEVERE, "Failed to connect to database", ex);
                        }
                    }
                });
            } catch (DatabaseException ex) {
                Logger.getLogger(Ats3ProjectDataService.class.getName()).log(Level.SEVERE, "Failed to connect to database", ex);
            }
        }
    }

    private DatabaseConnection createDbConnection(DatabaseConfig dbConfig, JDBCDriver sqlSrvDrv) {
        DatabaseConnection dbConn;
        String url;
        String userDb = dbConfig.getName();
        if (userDb != null) {
            if (userDb.contains("${username}")) {
                userDb = userDb.replace("${username}", System.getProperty("user.name"));
            }
        } else {
            userDb = System.getProperty("user.name") + "_sb_rc";
        }
        if (!dbConfig.getUseSqlAuth()) {
            url = String.format("jdbc:sqlserver://%s:1433;databaseName=%s;integratedSecurity=true", dbConfig.getServer(), userDb);
            dbConn = DatabaseConnection.create(sqlSrvDrv, url, "", "dbo", "", true);
        } else {
            url = String.format("jdbc:sqlserver://%s:1433;databaseName=%s", dbConfig.getServer(), userDb);
            dbConn = DatabaseConnection.create(sqlSrvDrv, url, dbConfig.getUser(), "dbo", dbConfig.getPassword(), true);
        }
        return dbConn;
    }

    private JDBCDriver findSqlServerDriver() {
        JDBCDriver sqlSrvDrv = null;
        // we know that there should be at least one as this module registers it
        JDBCDriver[] drivers = JDBCDriverManager.getDefault().getDrivers("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        for (JDBCDriver drv : drivers) {
            if ("SQLServer2008".equals(drv.getName())) {
                sqlSrvDrv = drv;
                break;
            }
        }
        return sqlSrvDrv;
    }

    private DatabaseConnection findSameConnection(DatabaseConnection dbConn) {
        DatabaseConnection foundConn = null;
        ConnectionManager connMgr = ConnectionManager.getDefault();
        for (DatabaseConnection dbc : connMgr.getConnections()) {
            if (dbc.getDatabaseURL().equals(dbConn.getDatabaseURL())) {
                foundConn = dbc;
                break;
            }
        }
        return foundConn;
    }