Tapestry Training -- From The Source

Let me help you get your team up to speed in Tapestry ... fast. Visit howardlewisship.com for details on training, mentoring and support!

Friday, May 28, 2004

HiveMind home page updated

I've put up the updated HiveMind home page, which now has the Forrest look to it, replacing the old Maven look.

Forrest has its own issues, but at least those issues are localized to documentation, mostly navigation. For example, I just could not get the tabbing thing working; I'd like each of the modules (hivemind and hivemind.lib) to be its own tab, but after struggling for a long time, I've decided to punt. Forrest falls very far short on simplicity, consistency, efficiency and especially feedback ... but at least I can get pretty much the results I want, the way I want them, which I could not manage using Maven. The Maven guys blame Jelly but it's all the same to me!

Despite some pointers to other sets of Ant build tools, I've continued to develop my own home brew stuff, and it's fitting my needs quite well. Even when the rest of HiveMind goes into beta, the build scripts will still be alpha for a while.

Thursday, May 27, 2004

Tapestry Test Assist

A frequent criticism of Tapestry, from the point of view of the Test Driven Development crowd, is that Tapestry is too hard to test ... because all your classes are abstract.

As a stop-gap measure, I've finally gotten around to creating Tapestry Test Assist. This is a simple class that can be used inside test suites to instantiate abstract Tapestry pages and components (or any other abstract class, for that matter).

Like Tapestry itself, the AbstractInstantiator will create new fields and methods in the subclass. Unlike Tapestry, it isn't driven by an external specification; it just finds each property that is abstract (i.e., has an abstract getter and/or setter method) and implements the property in a subclass, with a field and pair of accessor methods. Unlike Tapestry, these accessors are very simple, with no hooks into Tapestry persistent page property logic ... and that's fine for testing.
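To make that concrete, here's an illustrative sketch (the class names are made up for the example; this isn't the actual Test Assist code) of an abstract property and the kind of subclass the instantiator effectively generates:

public abstract class ExamplePage
{
    public abstract String getUserName();

    public abstract void setUserName(String userName);
}

// Conceptually, the generated subclass amounts to nothing more than this:
// a plain field and a pair of accessors, with no persistence hooks.
class ExamplePageSubclass extends ExamplePage
{
    private String _userName;

    public String getUserName()
    {
        return _userName;
    }

    public void setUserName(String userName)
    {
        _userName = userName;
    }
}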

The source code is available as a zipped-up Eclipse workspace. The easiest thing is just to copy the couple of source files into your own test suite. Eventually, this will be part of the actual Tapestry distribution.

Tapestry in Action on java.net

Coincidentally, java.net is also running a discussion of Tapestry in Action. I don't think they are giving away a copy of the book. I'll be monitoring this forum as well as the JavaRanch. The java.net discussion runs for a month.

Examples from NFJS Denver

I finally remembered to upload the examples from my presentations at NFJS. This is a ZIP file of the Eclipse project. Expect this to change over time as I add more examples for different situations (I also use this application with clients).

Tuesday, May 25, 2004

JavaRanch Radio - Giveaway of "Tapestry In Action"

Just a note: JavaRanch Radio is running a giveaway of "Tapestry In Action".

I'm monitoring the forum and answering questions. Any additional help and postings by the Tapestry "faithful" would be most welcome.

Monday, May 24, 2004

Tapestry at NFJS Denver

Just got back from the Denver No Fluff Just Stuff, where I gave two Tapestry presentations and a HiveMind presentation. Matt Raible attended the basic Tapestry session and was impressed with Tapestry.

I didn't stick to my presentation at all; before the session, I took my finished examples application, gutted the two pages (Login and AddAddress) back down to plain HTML, and either trimmed or removed the page specifications and the Java classes. I then put them back together, live, during the session.

Most people really liked this: they saw how quickly the exception report page gets you to fix problems, and just how little goes into the HTML (and the page specification and the Java class, for that matter). One evaluation claimed that "watching someone code is boring", but that's the exception to the rule ... the clueful people could see how easily Tapestry would fit into their development cycle, which is the whole point.

Also got to demo some great features in Spindle while I was at it. I did have some stumbling points ... mostly the same problem getting my Dell laptop to work with Jay's projectors (I can't synchronize my screen to the projected view, so I have to code while staring up at the screen). Matt also suggested creating templates for the code, rather than laboriously typing in everything, and that's a great idea ... I'll just create templates for each method I'll add to each of the classes.

Had a little fun on the expert panel ... David Geary was pretty cantankerous about JSF vs. Tapestry, and brought out that tired line about "It's a Standard". Results Not Standards folks! More comments on this subject later ...

My other two sessions were underattended ... right now, competing against sessions on Spring and Groovy is a non-starter. In fact, for the Tapestry components session, we just gathered around my laptop, which was a lot of fun.

I'm going to be retooling my presentations and hopefully will have a Tapestry and Hibernate session ready soon. In addition, we will probably combine the two Tapestry sessions together into a "Tapestry Kickstart" double (three hour) session.

Friday, May 21, 2004

JAM -- Another alternative to Maven

I've been learning a lot about Ant 1.6 features, but others may have beaten me to the punch: JAM seems to do all the things I'm planning and, of course, it already exists. Need to check it out a bit more carefully, but if it's less work ... it's less work!

Moving Away from Maven

I've gotten some comments asking why I'm moving away from Maven.

I started to use Maven initially as part of the HiveMind experiment. Fundamentally, I liked two specific features:

  • Automatic downloading of dependencies
  • Generation of the project documentation

Maven does those two things pretty well, though the documentation part has a large number of bugs that makes keeping the documentation up to date problematic.

Anyway, measured by my Four Principles, Maven falls short:

  • Simplicity - this isn't even on the horizon in Maven-land. If you deviate even a tiny bit from Maven's one-true-path, you are lost! The complexity of plugins, class-loaders, XML documents that are Jelly programs in disguise, lack of documentation, etc., etc. means that doing something trivial can take hours of guesswork. Some parts of this "release candidate" have obviously not even been given a cursory test. Certainly the internals of most plugins are a maze of Ant, Jelly, properties and such, seemingly without end.

    This could be somewhat addressed by documentation, but there is a pitiful amount and what there is, is out of date. Understanding multi-project (the whole point of Maven you would think) is a total challenge, addressed only by endless experimentation.

  • Consistency - I guess this is hard to gauge; do you write your own plugin? Write ad-hoc Jelly script? Write and use an Ant task?
  • Efficiency - Maven is sluggish, chews memory, and tends to repeat operations needlessly. I could comment on the volume of downloads that occur (for plugins, and for libraries the plugins depend on), but that actually isn't an issue after you run Maven the first time. However, for HiveMind, I've taken to leaving the room while I perform my dist build ... and it's only two small projects.

    One concrete example: when using Maven, I could only get my unit tests to work by turning fork on. With Ant, I'm able to run the unit tests without forking. That's a huge difference.

  • Feedback - Can you say "NullPointerException"?

What I've accomplished in two days using Ant 1.6 will serve me well for HiveMind and for Tapestry ... and beyond. I think we'll be able to get a significant amount of Maven's functionality in a small, finite, understandable package, and be able to use it on the vast majority of pure Java projects.

For example, here's the build.xml for the framework:

<project name="HiveMind Framework" default="jar">

	<property name="jar.name" value="hivemind"/>
	<property name="javadoc.package" value="org.apache.hivemind.*"/>

	<property name="root.dir" value=".."/>
	<import file="${root.dir}/common/jar-module.xml"/>
	<import file="${common.dir}/javacc.xml"/>								
					
	<target name="compile">
		<ibiblio-dependency jar="commons-logging-1.0.3.jar" group-id="commons-logging"/>
		<ibiblio-dependency jar="javassist-2.6.jar" group-id="jboss"/>
		<ibiblio-dependency jar="werkz-1.0-beta-10.jar" group-id="werkz"/>
		<ibiblio-dependency jar="servletapi-2.3.jar" group-id="servletapi"/>				
		<ibiblio-dependency jar="oro-2.0.6.jar" group-id="oro"/>
		<ibiblio-dependency jar="log4j-1.2.7.jar" group-id="log4j"/>
				
		<ibiblio-dependency jar="easymock-1.1.jar" group-id="easymock" use="test"/>
			
		<run-javacc input="${javacc.src.dir}/SimpleDataLanguage.jj" package-path="org/apache/hivemind/sdl/parser"/>
		
		<default-compile/>
	</target>

</project>

And here are the project.xml, project.properties, and maven.xml.

I hate to bash the Maven project ... but what I see in Maven is a good, simple, core idea that's spiralled down the wrong path by trying to be everything to everyone. That's a lesson I'm taking to heart as I build something that suits my needs better.

Thursday, May 20, 2004

Maven-like downloads for Ant

One of the key features of Maven that I like is that it will download dependencies for you automatically.

I'm just starting to convert HiveMind from Maven back to Ant, and this was the hard part to replicate. I want to download a file only if it doesn't exist locally (or is out of date), compute the MD5 sum while it downloads, and compare that to an MD5 sum stored on the server.
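The MD5-while-downloading part, at least, is easy to sketch with nothing but the JDK; this is just an illustration of the idea, not the actual Grabber code:

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.security.DigestInputStream;
import java.security.MessageDigest;

public class DownloadSketch
{
    /**
     * Copies the URL content to the target file, computing the MD5 digest
     * as the bytes stream through; returns the digest as a hex string,
     * suitable for comparison against the server's .md5 file.
     */
    public static String downloadWithMD5(URL source, File target) throws Exception
    {
        MessageDigest digest = MessageDigest.getInstance("MD5");

        InputStream in = new DigestInputStream(source.openStream(), digest);
        OutputStream out = new FileOutputStream(target);

        try
        {
            byte[] buffer = new byte[4096];
            int length;

            // Every byte copied also flows through the digest.
            while ((length = in.read(buffer)) != -1)
                out.write(buffer, 0, length);
        }
        finally
        {
            in.close();
            out.close();
        }

        // Convert the raw digest to the hex form used in .md5 files.
        byte[] raw = digest.digest();
        StringBuffer hex = new StringBuffer();

        for (int i = 0; i < raw.length; i++)
        {
            String b = Integer.toHexString(raw[i] & 0xff);

            if (b.length() == 1)
                hex.append('0');

            hex.append(b);
        }

        return hex.toString();
    }
}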

There was an existing project, greedo, that may have done some or all of that ... but it has stalled in that "nearly-done" open source state so many projects reach. No activity in the last nine months. Broken home page. No documentation. I got it to build, but I had to hack their broken Ant build files. Also, I think the <macrodef> features of Ant 1.6 trump a lot of the functionality in greedo, and I wanted more flexibility with respect to where I get files and how they are stored.

Anyway, I took a peek at the existing Get task of Ant and created a Grabber task. That's a start, and it will be necessary in order to build HiveMind in the future. For the moment, it is available at http://howardlewisship.com/downloads/AntGrab.zip. This includes source and a JAR, ant-grabber.jar, that must be placed in ANT_HOME/lib.

A request has come in to discuss how it is used. Now, Grabber is super-alpha, but here's a portion of my Ant-based build environment to demonstrate how it is used:

	<available classname="org.apache.ant.grabber.Grabber" property="grabber-task-available"/>
	<fail unless="grabber-task-available" message="Grab task (from ant-grabber.jar) not on Ant classpath."/>
	
	<taskdef classname="org.apache.ant.grabber.Grabber" name="grabber"/>

	<!-- macro for downloading a JAR from maven's repository on ibiblio. -->
	
	<macrodef name="download-from-ibiblio">
		<attribute name="jar" description="The name of the JAR to download."/>
		<attribute name="group-id" description="The Maven group-id containing the JAR."/>
		
		<sequential>
			<mkdir dir="${external.lib.dir}"/>

			<grabber
				dest="${external.lib.dir}/@{jar}"
				src="${maven.ibiblio.url}/@{group-id}/jars/@{jar}" 
				md5="${maven.ibiblio.url}/@{group-id}/jars/@{jar}.md5"
				/>


		</sequential>
	</macrodef>

Later I use the macro as follows:

		<download-from-ibiblio jar="commons-logging-1.0.3.jar" group-id="commons-logging"/>
		<download-from-ibiblio jar="javassist-2.6.jar" group-id="jboss"/>
		<download-from-ibiblio jar="xml-apis-1.0.b2.jar" group-id="xml-apis"/>
		<download-from-ibiblio jar="servletapi-2.3.jar" group-id="servletapi"/>
		<download-from-ibiblio jar="werkz-1.0-beta-10.jar" group-id="werkz"/>
		<download-from-ibiblio jar="oro-2.0.6.jar" group-id="oro"/>
		<download-from-ibiblio jar="easymock-1.1.jar" group-id="easymock"/>
		<download-from-ibiblio jar="log4j-1.2.7.jar" group-id="log4j"/>

Wednesday, May 19, 2004

Comments enabled for the blog

I've enabled comments on my blog ... not exactly sure how Blogger implements comments, so we'll see what happens! I think you need to be registered with Blogger to post (I'd really rather not see my blog filled up with penis-enlargement ads).

Why separate bin and src distributions?

Something struck me as I was preparing the latest HiveMind release just now: why do we in the open-source world bother with separating the binary and source distributions?

Take HiveMind. The binary distribution follows standard procedure: it includes all sorts of documentation. Because of the use of Maven, the documentation set is out of control, but even so, what we have is a 281KB (uncompressed) JAR distributed inside 16,526KB (uncompressed) of documentation. Meanwhile, the source code is just another 1,257KB (uncompressed).

The binary distributions are 3.1MB/1.5MB (.zip vs. .tar.gz) and the source distributions are 556KB/229KB. In other words, adding the source to the binary distribution would not be particularly noticeable ... just an additional second or two at broadband speeds.

If I had my say (which, come to think of it, I largely do) I would produce a combined binary/src distribution and have the documentation as the add-on. A combined binary/source distribution would be approximately 50%/100% larger (since the JAR file is already itself compressed). If you assume that most people download the binaries and source together but largely read the documentation on-line (at least until they get serious about a package) ... then a combined bin/src distro is a win.

Certainly when I've used other packages, I've wasted a lot of time unpacking the binary distribution, using the jar, then having to get the source jar and connect it up inside Eclipse so I could actually debug code that uses the library.

This approach would be better for slow connection users as well; they would get what they need to work (the binary and the source) and could cherry pick the documentation they need from a live web site. Certainly, anyone serious about a package would want the full documentation on their own hard drive ... but why pay that cost just to take a peek? Distributing binaries with (full) documentation makes every user pay that download cost ... or keeps some users from bothering to evaluate the package at all.

It's open-source. The point is to buck tradition and think for ourselves.

HiveMind 1.0-alpha-5

I've just tagged the release, and will have downloads available shortly.

Lots of cool stuff between alpha-4 and alpha-5.

  • Simple Data Language
  • Improved HiveDoc
  • Initializable interface is gone, replaced with an initialize-method attribute on the construct element passed to BuilderFactory
  • Some minor renames and refactorings ... more work to separate the "public face" (in org.apache.hivemind) from internals that user code shouldn't care about
  • Ability to define service models via hivemind.ServiceModels configuration point
  • Ability to define translators via hivemind.Translators configuration point
  • hivemind.Startup extension point for executing code when Registry is constructed
  • hivemind.EagerLoad extension point for forcing services to be instantiated early
  • Registry.cleanupThread() as a convenience for invoking ThreadEventNotifier

I believe HiveMind is ready to go forward; I would like to see a short beta period and a ramp up to GA release. During that period I hope to devote some time to converting from Maven to Ant and Forrest. Fixing up various link errors in the Javadoc would be nice as well. Generally, documentation is in excellent shape.

HiveMind will currently do everything I need it to do for Tapestry 3.1. That's *my* standard.

There are still a few debates out there; I've seen people strongly pro and con on SDL, XML, and scripting ... but nobody's stepped up to the plate to do any work or even provide a really solid proposal. I'm still firmly in the declarative camp (vs. procedural, i.e., no scripting) and vastly prefer SDL syntax to XML syntax.

Learning to love EasyMock

I've finally started using EasyMock with the HiveMind testing ... that's a huge amount of power in a tiny, little package!

If you haven't heard of this, the idea is that you can create mock implementations of services easily. First you create your control, and obtain the mock from it.

Next, you "train" your mock object, by invoking methods on it. The mock object and the control work together to remember the order of methods you invoke, and the argument values passed in. You use the control to specify return values.

Finally, you use the control to replay the mock, and then test it like a real object.

Here's an example for the HiveMind suite:

    public void testSetModuleRule()
    {
        MockControl control = MockControl.createStrictControl(SchemaProcessor.class);
        SchemaProcessor p = (SchemaProcessor) control.getMock();

        Module m = new ModuleImpl();
        Target t = new Target();

        p.peek();
        control.setReturnValue(t);

        p.getContributingModule();
        control.setReturnValue(m);

        control.replay();

        SetModuleRule rule = new SetModuleRule();

        rule.setPropertyName("module");

        rule.begin(p, null);

        assertSame(m, t.getModule());

        control.verify();
    }

Here I'm testing a SetModuleRule, which is dependent on the SchemaProcessor. SchemaProcessors are complex to create, and tied into the whole framework ... a lot of work for the two methods that the SetModuleRule will invoke on it!

This is great stuff, because it lets me easily mock up parts of the framework that are normally pretty inaccessible. Some of my tests use two or three mock/control pairs. This is still a big improvement over my existing approach, which is to feed a HiveMind module descriptor into the framework and test that it does the right thing. That's important, but it's more of an integration test than a unit test ... it can be hard to tell precisely what failed.

Monday, May 17, 2004

HiveMind -- ready for beta?

I've done some moderately involved refactorings of HiveMind lately and, in my opinion, everything is just about in place for HiveMind to go beta. I want to clean up some stuff in the Registry, RegistryInternal, Module triad of interfaces, and that will allow me to add a configuration point for eagerly (instead of lazily) initializing services. But once that's in place, I think it's finally time to move from rapidly adding features to finding gaps and fixing any holes. I don't think there are going to be too many (famous last words), but really, I've been writing tests and keeping documentation 95% up to date right on through the process. I want a stable HiveMind so that I can get more work done on Tapestry 3.1.

HiveMind work

Squeezed around the edges of my work in Germany, I got a bunch of work on HiveMind done. I've been doing a bit of refactoring, moving code around and splitting the Registry interface into two interfaces (Registry and RegistryInternal).

I changed <configuration-point> and <service-point> to not take a <schema> (or <parameters-schema>) element, but instead have schema-id and parameters-schema-id attributes. I then made <schema> top-level only, and made its id attribute required. I found that a bit ponderous, though, and made changes to allow <schema> inside <configuration-point> (without an id attribute, and likewise for <service-point>/<parameters-schema>). So you can do it "in place" or "top level" but not mix the two.

Some big improvements to HiveDoc as well. The new HiveDoc splits the documentation across more files; separate files for each top-level schema, each service-point and each configuration-point, as well as for each module. Much less cluttered.

I also did an experiment; I copied and pasted the hivemind.sdl descriptor 26 times (as a.hivemind.sdl, b.hivemind.sdl, etc.) to see how well the XSLT would cope with a fairly large input. The combined registry.xml (built by reading and combining all the descriptors) was half a megabyte, but the generation of HTML was still under five seconds (to generate about 2.5 MB of HTML).

Since I was on the road, only some of this has been checked in. I'm in the middle of adding some more AOP-lite functionality: the ability to choose, with the LoggingInterceptor, which methods get logged. It'll look something like:

interceptor (service-id=hivemind.LoggingInterceptor)
{
  include (method="get*")
  exclude (method="*(foo.bar.Baz,int)")
  exclude (method="set*(2)")
  include (method="set*")
  exclude (method="*")
}

This will cause all methods with names starting with "get" to be logged, as well as most methods starting with "set". Methods with certain parameters, or a certain number of parameters, will be excluded.

Back from Germany

Just back from a quick visit to Germany and startext, an IT shop that is getting heavily into Tapestry. They brought me out for 3 1/2 days of training and mentoring and it was a blast. We went through my available presentations quickly, but the fun started with live coding ... by them and by me. They learned a lot about Tapestry and I learned a lot about teaching Tapestry ... such as, dive right into the code as fast as possible!

We hit a lot of subjects quickly, getting right into things like creating new components and generating JavaScript dynamically. They had some interesting requirements, such as having disabled text fields submit anyway (we had to hook the form's onsubmit event handler to re-enable the fields just before the form submitted). Then things got even wackier when we tried to combine that with ValidFields using client-side validation, and a drop-down list that forced a page refresh (and caused a second drop-down list to update to a different set of values). In fact, some of the stuff I learned can be rolled into Tapestry 3.1.

I felt bad that I didn't have a lot of time to study up on Tree and Table; as it turns out, they really liked seeing me puzzle it out as I went, and they picked up some tips from me about how to do it themselves. All in all, a successful trip, but getting to Bonn and back (via train and jet) was brutal (and took longer than I had power in my iPod).

Now I'm home for a couple of days to try and resynchronize my internal clock, then off to Denver.

By Request: How Line Precise Error Reporting is implemented

Hi Howard,

I read your blog and I'd like to make a request.  I'd really like to read 
more technical details on techniques you might use to get line precise 
error reporting.

This is one of the best features of tapestry and if you can pass on some of 
the details of how to go about it and get people excited about doing it 
themselves that could only be a good thing.  This is typically an area 
where most other open source projects fail badly.  The normal error 
reporting seems to be the null pointer exception.

Regards,


Glen Stampoultzis
gstamp@iinet.net.au
http://www.jroller.com/page/gstamp

I agree with all of this, but it's not just open source projects which fall flat in this area ... and this is a vitally important area: Feedback, one of my four key aspects of a useful framework (along with Simplicity, Efficiency and Consistency). Without good feedback, the developer will be faced with a challenging, time-consuming puzzle every time something goes wrong in the framework code. It's not enough to say "garbage in, garbage out" ... if the framework makes getting the job done harder, or even just makes it seem harder, then it won't get used, regardless of what other benefits it provides.

Line precise error reporting is not magic, but it is a cross-cutting concern (Dion is thinking about how to make it an aspect), so it touches a lot of code.

It starts with the Resource interface, which is an abstraction around files: files stored on the file system, at a URL, or within the classpath. Tapestry extends Resource further, adding the concept of a file within a web application context. The Location interface builds on this, combining a resource with lineNumber and columnNumber properties.

The XML, HTML and SDL parsers used by both HiveMind and Tapestry carefully track the location (in most cases, by making use of the SAX Locator to figure out where in a file the parser currently is).
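In SAX terms, that tracking amounts to capturing the Locator the parser hands you and snapshotting it whenever you create an object. Here's a bare-bones sketch (the recordLocation() method is a made-up placeholder, not HiveMind or Tapestry code):

import org.xml.sax.Attributes;
import org.xml.sax.Locator;
import org.xml.sax.helpers.DefaultHandler;

public class LocationTrackingHandler extends DefaultHandler
{
    private Locator _locator;

    public void setDocumentLocator(Locator locator)
    {
        // The SAX parser invokes this before parsing begins.
        _locator = locator;
    }

    public void startElement(String uri, String localName, String qName, Attributes attributes)
    {
        // At this moment, the Locator reports where in the document the
        // current element's start tag was seen.
        recordLocation(qName, _locator.getLineNumber(), _locator.getColumnNumber());
    }

    private void recordLocation(String element, int line, int column)
    {
        System.out.println("<" + element + "> at line " + line + ", column " + column);
    }
}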

All the various descriptor (in HiveMind) and specification (in Tapestry) objects implement the Locatable interface (having a readable location property), or even the LocationHolder interface (having a writable location property), typically by extending from BaseLocatable. As the parsers create these objects, they are tagged with the current location provided by the parser.
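For orientation, here's the rough shape of those interfaces as just described; the actual HiveMind signatures may differ in detail (Resource, in particular, has more to it than shown here):

interface Resource
{
    // An abstraction around a file: on the file system, at a URL, on the
    // classpath or (in Tapestry) within the web application context.
}

interface Location
{
    Resource getResource();

    int getLineNumber();

    int getColumnNumber();
}

interface Locatable
{
    Location getLocation();
}

interface LocationHolder extends Locatable
{
    void setLocation(Location location);
}

// BaseLocatable is then little more than a field plus the accessor pair.
class BaseLocatable implements LocationHolder
{
    private Location _location;

    public Location getLocation()
    {
        return _location;
    }

    public void setLocation(Location location)
    {
        _location = location;
    }
}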

Later, runtime objects (services and such in HiveMind, components and such in Tapestry) are created from the descriptor/specification objects. The runtime objects also implement LocationHolder, and the location of the descriptor object is copied into the location property of the runtime object.

(You can see how my naming has been evolving. Descriptor is a better term; I don't remember where "specification" came from, but it's now entrenched in Tapestry terminology.)

The next piece of the puzzle is that exceptions need to have a location as well! When an exception occurs in a runtime object, the runtime object throws an ApplicationRuntimeException that includes the correct location.

There are a couple of utility methods on the HiveMind class used to help determine what the correct location is:

    /**
     * Selects the first {@link Location} in an array of objects.
     * Skips over nulls.  The objects may be instances of
     * Location or {@link Locatable}.  May return null
     * if no Location can be found. 
     */

    public static Location findLocation(Object[] locations)
    {
        for (int i = 0; i < locations.length; i++)
        {
            Object location = locations[i];

            Location result = getLocation(location);

            if (result != null)
                return result;

        }

        return null;
    }

    /**
     * Extracts a location from an object, checking to see if it
     * implements {@link Location} or {@link Locatable}.
     * 
     * @return the Location, or null if it can't be found
     */
    public static Location getLocation(Object object)
    {
        if (object == null)
            return null;

        if (object instanceof Location)
            return (Location) object;

        if (object instanceof Locatable)
        {
            Locatable locatable = (Locatable) object;

            return locatable.getLocation();
        }

        return null;
    }

The findLocation() method is particularly handy for the ApplicationRuntimeException class, since it may want to draw the location from an explicit constructor parameter, from a nested exception, or from an arbitrary "component" associated with the exception:

    public ApplicationRuntimeException(
        String message,
        Object component,
        Location location,
        Throwable rootCause)
    {
        super(message);

        _rootCause = rootCause;
        _component = component;

        _location = HiveMind.findLocation(new Object[] { location, rootCause, component });
    }

That's pretty much all there is to it ... but it's all for naught if all this location information is not presented to the user. The location will generally "bubble up" to the top level exception, but you still want to be able to see that information. Tapestry's exception report page does a great job of this, as it displays the properties of each exception (including the location property), and then tunnels down the stack of nested exceptions (this is actually encapsulated inside the ExceptionAnalyzer class).
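The "tunneling down" part can be sketched in just a few lines; this is not the actual ExceptionAnalyzer, and it assumes the exceptions expose their location via Locatable and their nested exception via the standard getCause() (the package names are assumed as well):

import org.apache.hivemind.Locatable;
import org.apache.hivemind.Location;

public class LocationReportSketch
{
    public static void report(Throwable t)
    {
        while (t != null)
        {
            System.err.println(t.getClass().getName() + ": " + t.getMessage());

            // Report the location, if the exception carries one.
            if (t instanceof Locatable)
            {
                Location location = ((Locatable) t).getLocation();

                if (location != null)
                    System.err.println("  location: " + location);
            }

            t = t.getCause();
        }
    }
}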

Line precise error reporting isn't the end all of Feedback. Malcolm Edgar, a one-time Tapestry committer, has been working on his own web framework (I believe for internal use at his company) ... it goes one step further, actually displaying the content of his equivalent to an HTML template and highlighting the line that's in error. That's raising the bar, but perhaps Tapestry will catch up to that some day.

Further, simply reporting locations isn't enough. If I pass a null value into a method that doesn't allow null, I want to see a detailed exception (You must supply a non-null value for parameter 'action'.) rather than a NullPointerException. A detailed exception gives me, the developer, a head start on actually fixing the problem. Explicit checking along these lines means that the location that's actually reported will be more accurate as well, especially considering that there's no way to attach a location to a NullPointerException.
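The check itself is trivial to write; here's a minimal sketch (the class and method are made up, and IllegalArgumentException stands in for whatever the framework prefers ... HiveMind would typically throw ApplicationRuntimeException, possibly with a location attached):

public class ActionHolder
{
    private Object _action;

    public void setAction(Object action)
    {
        if (action == null)
            throw new IllegalArgumentException(
                "You must supply a non-null value for parameter 'action'.");

        _action = action;
    }
}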

Monday, May 10, 2004

Thinking about Flex

I was very impressed by the Flex presentation at TheServerSide.

Flex is a rich client toolkit. Flash and ActionScript on the client. Java and XML on the server. RMI or Web Services in between. The client contains all the state, the server is wonderfully stateless.

I first learned a bit about Flex from Matt Horn, who works for Macromedia; they hired a bunch of J2EE developers in one of their acquisitions, and none of them took to Flash. Flex is their take on how to have their cake and eat it too. It's another XML-based user interface scripting language but seems to have a lot more going on inside, especially with respect to data binding and client-server communication.

Here's a good intro to it: Building Rich Internet Applications with Macromedia Flex: A Flash Perspective.

I've been browsing the docs and playing with the samples. It was easy enough to set up inside Eclipse; I just created a new project, created a content folder, and expanded the flex.war into the content folder; this gives you a web.xml and the necessary libraries and such. Next, I used Geoff's Jetty Launcher to serve up the context folder and could start creating the .mxml files. Flex uses servlet filters to convert .mxml files into .swf (Flash movies) in much the same way that .jsp files are converted into compiled Java classes.

Running this on my laptop (Dell Inspiron 8200, 512MB RAM, 2GHz Pentium 4) generally worked well, yet many key presses had a noticeable (though minute) delay ... but at the same time, there was also a good amount of fading, zooming, and sliding images. It's as if user input in Flash is given lower priority. Anyway, it was all still quite impressive and I expect to do more experimentation.

The documentation I've read so far was very good. Rich hypertext and PDF and lots of detail and examples. If I can get my head around the data binding and communication to the server, I could probably put stuff together right now. The default skin is clean and simple. The default components do a better, easier job of creating simple, clean interfaces than Swing or AWT (though the XUL variants there probably help). Again, it isn't just Flash and ActionScript and XML ... the data binding was given significant thought (I'll be able to tell if that thought was worthwhile at some point soon).

My initial reservations:

  • All your client-side logic is written in ActionScript (really, ECMAScript). Unit testing this is going to be at least as much of a challenge as unit testing Tapestry pages.
  • There's some form of debugger for ActionScript, but it doesn't look integrated ... it looks like it might be command-line oriented.
  • Applications are ultimately monolithic; you can break your .mxml files into smaller pieces, and create components as well (a clever use of XML namespaces), but there's just the one application object. How long does the initial creation of the .swf file take for complex apps? How much of that .swf must be downloaded before the user sees the initial page? How will teams of developers work together?

So why is the Tapestry guy interested in this stuff? Because HTML is, ultimately, a dead end. Tapestry squeezes the most out of HTML that can be done, but I firmly believe that in three to five years, some form of rich client will supplant HTML for the zero-delivery-cost web applications that are currently created using Tapestry, JSP, ASP or whatnot. I predict that a lot of stuff now done exclusively with HTML will be done using a mix of HTML and Flex (or whatever client-side technology emerges, should Flex fail). This could be good news for Tapestry ... the sites I envision dominating the market will consist of "boutique" HTML (HTML created by non-Java developers), and Tapestry shines at integrating that kind of HTML into a dynamic application. The HTML parts of applications will be much more focused on readable documentation (news sites, some form of community sites, blogs and the like) where the ability to print and bookmark is important. The kind of applications currently shoe-horned into the HTML world (help desks, all kinds of corporate infrastructure, CRUD applications) will be easier and better using a rich-client alternative.

At TSS, people were dismissive because of licensing costs ($25K per server, give or take). Well, licensing costs come down (think WebObjects, which went from $50,000 to $750). And software licensing is the smallest piece of the development cost compared to developer time and hardware.

More of concern is the single-vendor aspect. This is anathema in the Java world ... and there's the looming possibility of Microsoft buying Macromedia and killing Java support within it. I don't know what the solution to this is ... perhaps Macromedia needs to open-source Flex with a mixed GPL/proprietary license like Sleepycat's ... for-profit users pay Macromedia, non-profit don't. Alternately, seed flexible APIs into the product to ensure that third-parties will be able to provide Java support regardless of ownership of Macromedia and Flex.

In any case, the concept is exciting, regardless of which vendor finally makes it all work. Factoring out the presentation layer from web-deployed applications would be a great thing in differentiating J2EE from .Net. A stateless server-side (mated to a richly stateful client-side) means that simple, efficient solutions (based on HiveMind, Spring, and Hibernate) will have their power multiplied.

A short break between TSS and Germany

So I'm back from TheServerSide Symposium and have just enough time to catch my breath before heading out to a multi-day engagement in Germany.

I was "in character" even before I arrived in Las Vegas, kibitzing with a group of developers across the aisle from me on the flight in.

My sessions, as well as a "TSS tech talk" video shoot, went pretty well. The HiveMind presentation still went a bit roughly; I'm beginning to think that, up against the Goliath of Spring, my little David had better differentiate itself quickly ... the distributed configuration (for both data and services) is the key distinguishing feature, and a solution that mixes HiveMind with Spring is a likely winner.

Much more interest in the Tapestry presentation, which is the advanced one covering component creation. Again, this is really an area where Tapestry differentiates itself most strongly from the other similar frameworks (if such things exist). I did a bit more live presentation, which was a chance to show and discuss Spindle and the great Tapestry exception reporting page (and line precise error reporting).

Both sessions filled the small room I was in; I counted about 55 attendees in each session. The Tapestry session might have overflowed that room had it not been up against the very contentious "What's in EJB 3.0?" talk.

I thought the sessions I attended were very good. The keynotes were, alas, ignorable (except for the Flex presentation, which rocked). Most presenters were quite good; I think Rod Johnson did a very good job and was quite gracious towards me personally and towards HiveMind, to the point of quoting my HiveMind presentation during his "J2EE without EJB" session. We also talked frequently between sessions; overall, we respect that our frameworks address different needs and that combining them should be made as painless as possible. Meanwhile, I'm jealous that he's in a position to use his framework in production, something that identifies problems and limitations fast. It really underscores how I was languishing at WebCT; I need to get involved in some real work on a real project and be in the technical architecture driver's seat again; it's been too long.

I talked to so many people over the course of a few days and lost track of it all quickly ... I have to start jotting down notes on business cards. I know I promised copies of the book to Jason Carreira and Kito Mann, anybody else had better send me a reminder! Also, a lot of people are really pushing for Tapestry to support the Portal API somehow, someway.

People have been talking themselves blue over everything that went on and I don't have any additional, deep insights to add. Supposed "thought leaders" (like myself) have (publicly and privately) questioned the status quo for quite a while, challenging the usefulness of EJBs, the practicality of separating the layers so profoundly, and identifying the needless complexity as a platform-threatening problem (Rod has examples of major banks throwing away $20 million investments in Java and J2EE in favor of .Net). This philosophy has now gone completely mainstream. One has to question the relevance of EJB 3.0 at this time (and more so when it is ready for release). A phrase I came up with while talking to Rod is "Results Not Standards". Tapestry users are getting great results, even though Tapestry is not a standard (though it is compliant with the useful and reasonable parts of the J2EE standard). Likewise, WebWork, Spring, Hibernate (which is trying to become a standard) and so forth.

Tuesday, May 04, 2004

SDL, Testing, and the Scripting Debate

I'm really loving the effort I invested in SDL; it's very clean, very useful stuff. I'm beginning to productize the Tapestry test framework, and I'm targeting SDL, not XML, as the language for the scripts. Background: for Tapestry 3.0, I developed a "mock unit test" suite, more of an integration suite, where I simulate a servlet container around Tapestry pages and components. There's no HTTP involved, but none of the Tapestry objects know that ... they see the Servlet API and, for Tapestry's needs, it works just like the real thing.

Each script consists of a few definitions, and a series of sequential requests. Each request passes up query parameters, and makes assertions about the result of the request: mostly in terms of asserting text in the output (sometimes in the form of regular expressions).

However, the tests are pretty ugly; because a lot of the assertions are looking for HTML, there are lots of CDATA sections. The execution and parsing code is all twisted together and built on top of JDom. Line-precise error reporting came later, so it can be a challenge to find where, inside a test script, a failure occurred. In addition, all the code is inside the junit folder, not part of the Tapestry framework itself, so it can only be used by Tapestry itself.

I'm currently starting to rebuild this support for use by Tapestry and by end-user applications. I'm building a better parser, using SDL as the script language, and building tests for the testing framework itself as I go (Who watches the Watchmen? Who tests the Testers?). Lots of work up front, but it will easily pay for itself when we start adding more complicated tests for some of the 3.1 features ... I also expect it to run much faster.

Meanwhile, the debate about replacing XML and SDL in HiveMind with scripting rages on. I don't see the advantage to the scripting approaches ... they're all more verbose than the equivalent SDL. It's more code to do the same thing that you'd normally do by referencing builder factories. It won't document as HiveDoc, it raises many issues about multithreading. It adds unwanted dependencies to the HiveMind core framework. No one has made a compelling argument ... certainly not compelling enough for me to spend any time on that when I have other priorities, and so far, nobody else is checking code into the HiveMind CVS. So ... we have a fairly active HiveMind community but I'm still the only developer ... do I like this, or not?

Monday, May 03, 2004

Goodbye, Digester!

Ah, the evolution of XML parsing in Tapestry. Tapestry is very much driven by validated XML files (for page and component specifications, application specifications, and library specifications). In the earliest days, Tapestry was tied directly to Xerces. Later, it switched over to JAXP. I had reams of code that would walk the DOM tree and construct the specification objects from the XML.

As a nod to efficiency, I switched over in 3.0 to use Digester, but that's caused a lot of grief in its own right. It seems like the version Tapestry uses was always in conflict with whatever version was in use by the servlet container, especially Tomcat.

Meanwhile, Digester drags along some of its own dependencies, jakarta-collections and jakarta-beanutils. More JAR hell, keeping all those JARs and versions straight.

No more; I replaced Digester with an ad-hoc parser derived from (and sharing code with) the HiveMind module deployment descriptor parser. It uses a stack to track the objects being constructed (that's borrowed from Digester), but uses a simple case statement, and some coding discipline, to recognize new elements and process them. I haven't done any timings, but comparing this code to the Digester code leads me to think that this will have a substantial edge ... which will be even more important once Tapestry supports reloading of page templates and specifications.

In addition, inside the monolithic SpecificationParser class, it's a lot clearer what's going on. The old code had to create some number (usually three, sometimes six) of Digester rule objects for each Digester pattern (patterns are matched against elements on Digester's stack to determine which rules fire). The new code is almost entirely just private methods: beginState() methods that decide what state to enter based on the current element, enterState() methods that create new specification objects, push them onto the stack, and change to a new parser state, and endState() methods invoked when a close tag is found, to finalize created objects, pop them off the stack, and return the parser state to its earlier value.
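Here's a schematic sketch of that pattern; the element names, states and specification classes below are made up for illustration, and the real SpecificationParser is, of course, considerably larger:

import java.util.ArrayList;
import java.util.List;
import java.util.Stack;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class StateMachineParserSketch extends DefaultHandler
{
    private static final int STATE_INITIAL = 0;
    private static final int STATE_PAGE = 1;
    private static final int STATE_COMPONENT = 2;

    private int _state = STATE_INITIAL;

    // Each frame remembers the object being built and the state to restore at its close tag.
    private final Stack _stack = new Stack();

    public void startElement(String uri, String localName, String qName, Attributes attributes)
    {
        // The "case statement": what is legal next depends on the current state.
        // (In the real parser, these begin methods delegate to separate enterXxx() methods.)
        switch (_state)
        {
            case STATE_INITIAL:
                beginInitial(qName, attributes);
                break;

            case STATE_PAGE:
                beginPage(qName, attributes);
                break;

            default:
                throw new RuntimeException("Unexpected element <" + qName + ">");
        }
    }

    public void endElement(String uri, String localName, String qName)
    {
        // Finalize the current object, pop it, and restore the earlier state.
        Frame frame = (Frame) _stack.pop();
        _state = frame._priorState;
    }

    private void beginInitial(String element, Attributes attributes)
    {
        if (!element.equals("page-specification"))
            throw new RuntimeException("Unexpected element <" + element + ">");

        push(new PageSpec(attributes.getValue("class")), STATE_PAGE);
    }

    private void beginPage(String element, Attributes attributes)
    {
        if (!element.equals("component"))
            throw new RuntimeException("Unexpected element <" + element + ">");

        PageSpec page = (PageSpec) ((Frame) _stack.peek())._object;
        ComponentSpec component = new ComponentSpec(attributes.getValue("type"));
        page.addComponent(component);

        push(component, STATE_COMPONENT);
    }

    private void push(Object object, int newState)
    {
        _stack.push(new Frame(object, _state));
        _state = newState;
    }

    private static class Frame
    {
        final Object _object;
        final int _priorState;

        Frame(Object object, int priorState) { _object = object; _priorState = priorState; }
    }

    // Stand-ins for the real specification objects.
    private static class PageSpec
    {
        private final String _className;
        private final List _components = new ArrayList();

        PageSpec(String className) { _className = className; }

        void addComponent(ComponentSpec component) { _components.add(component); }
    }

    private static class ComponentSpec
    {
        private final String _type;

        ComponentSpec(String type) { _type = type; }
    }
}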

Next up: the JDom-based parser for the Tapestry mock unit test suite. First, I want to use SDL, not XML, for these scripts. Second, I want them to run much, much faster, and I suspect that a lot of time is being spent in JDom. Third, I want thrown assertion exceptions to have line-precise error reporting.

And fourth ... part of Tapestry 3.1 will be to productize this approach to testing Tapestry applications.

I'm the Seven of Clubs

On the just published Who's Who in Enterprise Java List (by The Middleware Company), I'm in the Clubs ("Pot Pourii") category, as the Seven of Clubs. If they do this again, I'd love to be recognized in the Hearts ("Contribution") category, along with Gavin King, Rod Johnson, Craig McClanahan and many others.

It's interesting just how many names on the list I don't recognize!

Someday, I'll have to find another picture of myself that I like ... this one was taken in 1999 atop Mt. Haleakala, Maui, Hawaii.

Sunday, May 02, 2004

Introduction to Jakarta Tapestry

An interesting link from Object Computing, Inc.: Introduction to Jakarta Tapestry.

The author, Rob Smith, is very positive on Tapestry ... he finds it fun! That should be the fifth goal of Tapestry (after simplicity, efficiency, consistency and feedback).