Thursday, December 27, 2007

JRuby on Rails - Book review

So what do you put in a book on JRuby? After all, for the programmer JRuby is 'just' another Ruby implementation. The question must have come up when Ola Bini decided to write this book. The result is materialized in Practical JRuby on Rails Web 2.0 Projects - Bringing Ruby on Rails to the Java Platform. The title is well chosen and also answers the question: JRuby on Rails is a practical book; it guides you through the implementation of several Rails sites and along the way it shows you neat tricks that are only possible in JRuby.

Contents
The first half of the book is all about JRuby on Rails. If you already know Rails, it is funny to consistently see jruby instead of ruby, but otherwise this part is boring for the initiated. If you do not know Rails and like to learn from examples, this part is well worth your read.

Right after that is an interesting part for all of us who want to mix Java and Ruby.

Later in the book, Ola goes through the trouble of explaining how to use JRuby with all the less interesting things in Java: session beans, message beans, JMX, XML processing and SOAP. I have been in the Java business for 7 years and have luckily been able to avoid these for most of the time. A chapter on the integration provided by Spring would have been a very nice replacement for these subjects.

Somewhat hidden among this material, the book again shows its practicality and goes through all the options for deploying JRuby on Rails applications.

The book ends with some convenient JRuby specific references.

Conclusion
I strongly recommend this book when you are a Java programmer (of any skill level) who knows some Ruby and wants to start working with (J)Ruby on Rails. If you know Rails well but little Java, and you want to start with JRuby on Rails, the book is probably too heavy and will teach you only a few useful things. Do not start on this book with zero Ruby knowledge. Though appendix A helps, for seriously learning Ruby the old Pickaxe (dead tree version) is an excellent first read.

Sunday, December 16, 2007

WebBeans, the JSF cover up

At JavaPolis I saw a very good presentation on WebBeans by Bob Lee. Bob did a great job of explaining the concepts while thankfully still giving lots of code examples. WebBeans provides a way to use ordinary Java beans in the JSF environment. You do this with annotations.

Praise for WebBeans, and its main predecessor Seam; they really make developing JSF applications simpler. In particular, the conversation scope is a brilliant invention. But how simple does it get? Well, let's see. For every aspect that can be managed there is an annotation. In one of the presentation examples I counted about 5 lines of ordinary Java code plus at least 12 annotations! Now, there are ways to group annotations, but you do this by introducing more annotations! So instead of having to deal with JSF stuff, you now have to cook annotation soup.
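To give a feel for that soup, here is a deliberately made-up sketch (the annotation names are mine, not the real WebBeans API): one line of real logic carrying three class-level annotations, countable via reflection.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationSoup {
    // Made-up annotations, for illustration only; not the WebBeans API.
    @Retention(RetentionPolicy.RUNTIME) @interface Component {}
    @Retention(RetentionPolicy.RUNTIME) @interface ConversationScoped {}
    @Retention(RetentionPolicy.RUNTIME) @interface Named {}

    // One line of real logic, three annotations just to wire the bean in.
    @Component @ConversationScoped @Named
    static class CheckoutBean {
        String total() { return "42.00"; }
    }

    public static void main(String[] args) {
        // Count the class-level annotations via reflection.
        System.out.println(CheckoutBean.class.getAnnotations().length);
    }
}
```

And this sketch does not even count the annotations you would need on the fields and methods.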

I have always hated JSP programming, and now that I see that even talented people like Bob Lee and Gavin King are wrestling to fix JSF, I finally know why. The problem is that the whole idea of having contexts to store page data is flawed. It breaks all kinds of encapsulation rules and breaks the beloved type safety.

So what are the alternatives? Actually there are a few very nice web frameworks that do not use contexts. The one I am most familiar with is Wicket. Another often-named contender is Tapestry, but of course there are many more. Wicket, like Swing, uses a component tree to build pages. Each component is completely responsible for its own markup (HTML) and data (the model). No encapsulation rules are broken. Another option is GWT, not exactly a web framework, but nevertheless useful for making componentized web applications.

The demise of Java, long live the JVM!

The Java language is dying. Most people do not realize this yet, but it is inevitable. What do I mean by dying? Why do I think it is dying, and why is there lots of hope?

Dead languages
So what do I mean by dead? First of all, languages do not die just like that. There are still people programming in Cobol, and likewise, there are still people who can read ancient Greek fluently. But just like Cobol and every other programming language, Java will at some point be left to the dinosaur programmers. I think that Java is currently at its peak (or soon will be), and will go downhill from here. It will take some time, but the signs are there.

The signs
First of all, there is the rise of the dynamic languages. They have been around for quite some time, but all of a sudden there was Ruby on Rails. Many Java programmers bit the bullet and switched, to be rewarded with up to a tenfold productivity boost. There are those who say this can be attributed to dynamic typing. I believe this is not the case. I think the boost is possible because the Java language is quite weak compared to Ruby; the same problem can be expressed in far fewer lines of Ruby than lines of Java. This has nothing to do with typing, but everything to do with how the language is structured.

Secondly, there is the way Java is extended over time. The extensions have been done with great care. This has paid off: Java is still mostly a clean and simple language. Even with the tiny changes, each new version brought its problems. The largest change so far has been the introduction of generics. I see generics as an improvement, but as I wrote earlier, they can be a pain to use.
And now there is the closures debate. Some people (like James Gosling) want to have them at any price. Others (like Joshua Bloch) say that the complexity budget has been used up and that Java is not ready for big changes like closures. I agree with the latter.

Despite the care with which Java was and is being extended, the possibilities for doing so are reaching zero rapidly.

There is hope!
If there is one gem coming out of the Java world, it is the Java Virtual Machine. Each release of the JVM has brought performance improvements, and this is likely to continue for the next versions. The JVM is still climbing and is nowhere near its peak. You can see this in the many languages that now run on the JVM: everything from PHP, Lisp and Cobol to Ruby and Python. Some of these (in particular Ruby) are very well supported by several IDEs.

The Java language does not need closures or other more advanced stuff; there are many languages on the JVM that already provide those things: Ruby, Python, Lisp and Scala.

Yes, now there is Scala! Scala is a language for both the JVM and the CLR. Scala's syntax is as concise as Ruby's. It provides a very smooth transition from the Java language, but nevertheless is a complete functional language while at the same time staying purely object oriented and statically typed. And, as I learned at Javapolis, its library is already excellent, and even better: without any particular optimizations Scala programs outperform Java programs on the same JVM! What is not to like about Scala?

Disclaimer
Many statements in this article are based on personal observations and anecdotal evidence.

Update 2007-12-24: Despite the tone of this article, I am still writing Java and expect to do so for quite a bit longer. And note: I am having fun doing it!

Thursday, November 22, 2007

Double-Checked Locking found in JVM

I just found an implementation of the broken double-checked locking idiom in the Java 6 runtime library!

Here is a snippet of the offending code:

package java.lang;

import java.util.Random;

public final class Math {

    private static Random randomNumberGenerator;

    private static synchronized void initRNG() {
        if (randomNumberGenerator == null)
            randomNumberGenerator = new Random();
    }

    public static double random() {
        if (randomNumberGenerator == null)
            initRNG();
        return randomNumberGenerator.nextDouble();
    }
}

Amazingly, it used to work correctly, but the extra synchronization was removed in Java 1.3. You can track the progress of this bug in report 6470700 and report 6633229.
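For reference, here is a hedged sketch of one safe alternative (the class name SafeMath is mine, not the JDK's): the initialization-on-demand holder idiom, which lets the JVM's class-initialization guarantees replace the hand-rolled check.

```java
import java.util.Random;

public final class SafeMath {

    // Initialization-on-demand holder: the JVM guarantees that Holder is
    // initialized exactly once, on first use, with a proper happens-before
    // edge, so no explicit synchronization or null check is needed.
    private static class Holder {
        static final Random RNG = new Random();
    }

    public static double random() {
        return Holder.RNG.nextDouble();
    }
}
```

The other textbook fix is to declare the field volatile, but the holder idiom avoids even the volatile read on most code paths.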

Friday, November 16, 2007

Howto extend LDAP in java with JLDAP

LDAP is a protocol that is wonderfully extensible. You can augment existing messages by adding 'controls', and you can define complete new messages. Extensions are identified by a universal OID, so that even code that does not know about an extension can still work properly. For this, each extension has a criticality flag to indicate whether the receiver may ignore unknown extensions. As a bonus, the content of controls and messages is all defined by a common syntax (ASN.1) and a common encoding (BER with restrictions).

Writing your own controls and messages is rather under-documented. In addition, not all LDAP libraries support all kinds of messages. In this small howto I show how to implement custom controls and messages, based on my experiences with implementing an RFC4533 (synchronization) client. I used JLDAP as it is the only Java library I could find that supports intermediate response messages, a requirement for RFC4533. And before you ask: no, sorry, I can not open source the results.

In case you actually want to start with JLDAP, you can get a lot of knowledge from the examples that are provided by Novell. You can find them through the JLDAP site. The javadoc is also useful at times.

Decoding a control

Let's take a look at the SyncDoneControl from RFC4533. The control's OID is 1.3.6.1.4.1.4203.1.9.1.3 and its content value is defined with ASN.1 as:

syncDoneValue ::= SEQUENCE {
    cookie         OCTET STRING OPTIONAL,
    refreshDeletes BOOLEAN DEFAULT FALSE
}

Read this as: the value is a sequence that contains 2 other values. The first, named cookie, is optional and has binary content. The second, named refreshDeletes, has boolean content. The default of refreshDeletes is false. See RFC4533 for the semantics.

Let's map this to Java. All controls must have the same constructor signature so that JLDAP can instantiate them. We'll start with:

public class SyncDoneControl extends LDAPControl {

    public static final String OID = "1.3.6.1.4.1.4203.1.9.1.3";

    private byte[] cookie;
    private boolean refreshDeletes;

    // add getters for cookie and refreshDeletes

    public SyncDoneControl(String oid, boolean critical, byte[] value) {
        super(oid, critical, value);
        ...see below
    }
}

The byte array value contains the BER-encoded value of the control. The restrictions LDAP puts on the BER encoding mean that optional values and values that are equal to the default value must be omitted. In other words: when there is no cookie (allowed because it is declared OPTIONAL) and refreshDeletes is FALSE (which is the default), constructor argument value is null! Just to be robust we'll check for the empty array as well:

if (value == null || value.length == 0) {
    cookie = null;
    refreshDeletes = false;
} else {
    ...see below
}

If it is not null/empty, we'll use the decoder as provided by JLDAP to decode the bytes. As the ASN.1 value is defined to start with a SEQUENCE (one of the native ASN.1 types), the LBERDecoder will instantiate an object of type ASN1Sequence:

ASN1Sequence asn1 = (ASN1Sequence) new LBERDecoder().decode(value);
The decoder can decode all native ASN.1 types. These native types are called "universal". Other important universal types are BOOLEAN, OCTET STRING, CHOICE and SET. Type information is available on every ASN1Object through the ASN1Object#getIdentifier() method.

We can examine the sequence further by calling the ASN1Sequence#size() and ASN1Sequence#get(int) methods. Again, we must take into account that each element may be omitted. You can do this by examining the type of ASN.1 value you get out of the sequence. First, extract a value from the sequence:

ASN1Object asn1Obj = asn1.get(0);
When this is the cookie, the value must be from type-class UNIVERSAL, with type OCTET STRING:
boolean isCookie = asn1Obj.getIdentifier().getASN1Class() == ASN1Identifier.UNIVERSAL
        && asn1Obj.getIdentifier().getTag() == ASN1OctetString.TAG;
If it is, we can safely cast the object to an ASN1OctetString and extract the cookie:
cookie = ((ASN1OctetString) asn1Obj).byteValue();

We can do the same for the value refreshDeletes and JLDAP class ASN1Boolean. After we have moved this very verbose code to the utility class Asn1Util (an exercise for the reader) we'll get the following code:

ASN1Sequence asn1 = (ASN1Sequence) new LBERDecoder().decode(value);
for (int i = 0; i < asn1.size(); i++) {
    ASN1Object asnSeqObj = asn1.get(i);
    if (i == 0 && Asn1Util.isOctetString(asnSeqObj)) {
        cookie = Asn1Util.getByteValue(asnSeqObj);
    } else if (i == (cookie == null ? 0 : 1) && Asn1Util.isBoolean(asnSeqObj)) {
        refreshDeletes = Asn1Util.getBooleanValue(asnSeqObj);
    } else {
        throw new IllegalArgumentException(
                "Parse error at index " + i + ", parsing: " + asnSeqObj);
    }
}
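The Asn1Util left as an exercise above could look roughly like this. It is a sketch against JLDAP's com.novell.ldap.asn1 classes; in particular I am assuming ASN1Boolean exposes a TAG constant and a booleanValue() accessor analogous to ASN1OctetString, so verify the names against the javadoc.

```java
import com.novell.ldap.asn1.ASN1Boolean;
import com.novell.ldap.asn1.ASN1Identifier;
import com.novell.ldap.asn1.ASN1Object;
import com.novell.ldap.asn1.ASN1OctetString;

// One possible shape for the Asn1Util helpers used in the snippets.
public final class Asn1Util {

    private Asn1Util() {}

    // True when the object is a UNIVERSAL value with the given tag.
    private static boolean isUniversal(ASN1Object obj, int tag) {
        ASN1Identifier id = obj.getIdentifier();
        return id.getASN1Class() == ASN1Identifier.UNIVERSAL && id.getTag() == tag;
    }

    public static boolean isOctetString(ASN1Object obj) {
        return isUniversal(obj, ASN1OctetString.TAG);
    }

    public static boolean isBoolean(ASN1Object obj) {
        return isUniversal(obj, ASN1Boolean.TAG);
    }

    public static byte[] getByteValue(ASN1Object obj) {
        return ((ASN1OctetString) obj).byteValue();
    }

    public static boolean getBooleanValue(ASN1Object obj) {
        return ((ASN1Boolean) obj).booleanValue();
    }
}
```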

Tada! Your first JLDAP extension. All we have to do is make JLDAP aware of the extension and it will be parsed automatically when the control is present in a received LDAP message.

LDAPControl.register(SyncDoneControl.OID, SyncDoneControl.class);

One small warning: when there is an exception in the control's constructor, JLDAP will silently ignore your class and do its default thing.
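To see the BER omission rules from this section in isolation, here is a minimal, hand-rolled sketch (plain JDK, not the JLDAP encoder; only single-byte lengths are supported, and the class name is made up) that encodes a syncDoneValue the same way: the OPTIONAL cookie and the DEFAULT FALSE boolean are simply left out.

```java
import java.io.ByteArrayOutputStream;

// Minimal BER sketch for syncDoneValue: SEQUENCE is tag 0x30, OCTET STRING
// is 0x04, BOOLEAN is 0x01. Real encoders also handle multi-byte lengths.
public final class SyncDoneValueCodec {

    public static byte[] encode(byte[] cookie, boolean refreshDeletes) {
        ByteArrayOutputStream content = new ByteArrayOutputStream();
        if (cookie != null) {                  // OPTIONAL: omit when absent
            content.write(0x04);               // OCTET STRING tag
            content.write(cookie.length);      // assumes length < 128
            content.write(cookie, 0, cookie.length);
        }
        if (refreshDeletes) {                  // DEFAULT FALSE: omit when false
            content.write(0x01);               // BOOLEAN tag
            content.write(1);
            content.write(0xFF);               // BER TRUE
        }
        byte[] c = content.toByteArray();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x30);                       // SEQUENCE tag
        out.write(c.length);
        out.write(c, 0, c.length);
        return out.toByteArray();
    }
}
```

With both fields at their defaults this yields just the two-byte empty SEQUENCE; whether a server then sends that empty value or no value at all is exactly the null-versus-empty ambiguity the constructor above guards against.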

Encoding a control

To start a sync operation, one must add a SyncRequestControl to the search constraints. Here is the ASN.1 definition of the control value:

syncRequestValue ::= SEQUENCE {
    mode ENUMERATED {
        refreshOnly       (1),
        refreshAndPersist (3)
    },
    cookie     OCTET STRING OPTIONAL,
    reloadHint BOOLEAN DEFAULT FALSE
}

First the ASN.1 enumeration is translated into a Java enumeration:

public enum SyncRequestMode {
    REFRESH_ONLY, REFRESH_AND_PERSIST
}

Then we'll start the control with:

public class SyncRequestControl extends LDAPControl {

    public static final String OID = "1.3.6.1.4.1.4203.1.9.1.1";

    private SyncRequestMode mode;
    private byte[] cookie;
    private boolean reloadHint = false;

As we will construct this control ourselves, and not JLDAP, we can give it any constructor we like. For example:

public SyncRequestControl(SyncRequestMode mode, byte[] cookie, boolean reloadHint)
        throws IOException {
    super(OID, true, null);
    this.mode = mode;
    this.cookie = cookie;
    this.reloadHint = reloadHint;
    setValue(encodedValue());
}

In the last line we set the BER encoded value. Here is a complete implementation of the encode method. Note how we follow the ASN.1 definition, but skip optional values and values that have the default value.

private byte[] encodedValue() throws IOException {
    ASN1Sequence asn1 = new ASN1Sequence();
    asn1.add(new ASN1Enumerated(mode == SyncRequestMode.REFRESH_ONLY ? 1 : 3));
    if (cookie != null) {
        asn1.add(new ASN1OctetString(cookie));
    }
    if (reloadHint) {
        asn1.add(new ASN1Boolean(reloadHint));
    }
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    new LBEREncoder().encode(asn1, baos);
    return baos.toByteArray();
}
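For completeness, here is roughly how such a control would be attached to a search. This is a sketch assuming JLDAP's LDAPSearchConstraints.setControls API; the host, bind DN and password are placeholders.

```java
import com.novell.ldap.LDAPConnection;
import com.novell.ldap.LDAPSearchConstraints;

public class SyncSearchExample {

    public static void main(String[] args) throws Exception {
        // Placeholder host and credentials.
        LDAPConnection conn = new LDAPConnection();
        conn.connect("ldap.example.com", LDAPConnection.DEFAULT_PORT);
        conn.bind(LDAPConnection.LDAP_V3, "cn=admin,dc=example,dc=com",
                "secret".getBytes("UTF8"));

        // Attach the sync request control to the search constraints.
        LDAPSearchConstraints cons = new LDAPSearchConstraints();
        cons.setControls(new SyncRequestControl(
                SyncRequestMode.REFRESH_AND_PERSIST, null, false));

        conn.search("dc=example,dc=com", LDAPConnection.SCOPE_SUB,
                "(objectClass=*)", null, false, cons);
    }
}
```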

A more complex example: decoding a message

The JLDAP BER decoder can only decode ASN.1 universal types. As soon as you define your own types, you must help the decoder. Let's look at the decoding of the SyncInfoMessage to see how this works. The value of SyncInfoMessage is defined with the following ASN.1:

syncInfoValue ::= CHOICE {
    newcookie      [0] OCTET STRING,
    refreshDelete  [1] SEQUENCE {
        cookie      OCTET STRING OPTIONAL,
        refreshDone BOOLEAN DEFAULT TRUE
    },
    refreshPresent [2] SEQUENCE {
        cookie      OCTET STRING OPTIONAL,
        refreshDone BOOLEAN DEFAULT TRUE
    },
    syncIdSet      [3] SEQUENCE {
        cookie         OCTET STRING OPTIONAL,
        refreshDeletes BOOLEAN DEFAULT FALSE,
        syncUUIDs      SET OF OCTET STRING (SIZE(16))
    }
}

The ASN.1 defines that the value can have one of four forms. We'll represent the chosen form with a Java enumeration. By defining the enum values in order, we can exploit the fact that the ordinal of each enum value corresponds to the tag (defined between brackets []).
public static enum SyncInfoMessageChoiceType {
    // Note: order is important
    NEW_COOKIE, REFRESH_DELETE, REFRESH_PRESENT, SYNC_ID_SET
}
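Since the mapping leans on ordinals, an out-of-range tag from a buggy server would surface as an ArrayIndexOutOfBoundsException deep in the decoding code. A hedged variant (renamed SyncInfoChoice here to avoid clashing with the class above) makes that failure explicit:

```java
public enum SyncInfoChoice {
    // Order matters: ordinal == ASN.1 tag [0]..[3]
    NEW_COOKIE, REFRESH_DELETE, REFRESH_PRESENT, SYNC_ID_SET;

    // values()[tag] would throw ArrayIndexOutOfBoundsException on an unknown
    // tag; a range check turns that into a clearer protocol error.
    public static SyncInfoChoice fromTag(int tag) {
        SyncInfoChoice[] all = values();
        if (tag < 0 || tag >= all.length) {
            throw new IllegalArgumentException("Unknown syncInfoValue tag: " + tag);
        }
        return all[tag];
    }
}
```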

As SyncInfoMessage is an intermediate response, we'll start the message implementation as:

public class SyncInfoMessage extends LDAPIntermediateResponse {

    public static final String OID = "1.3.6.1.4.1.4203.1.9.1.4";

    private SyncInfoMessageChoiceType syncInfoMessageChoiceType;
    private byte[] cookie;
    private Boolean refreshDone;
    private Boolean refreshDeletes;
    private List<byte[]> syncUuids;

    // add getters ...

Not all fields will always get a value. For example, the field syncUuids will only be set when syncInfoMessageChoiceType == SYNC_ID_SET. This is the simplest implementation, and the user of this class must know about the CHOICE type anyway.

Intermediate messages must always have the same constructor, so that JLDAP can construct them for us:

public SyncInfoMessage(RfcLDAPMessage message) {
    super(message);
    ...

The choice is represented by an instance of type ASN1Tagged. The identifier of the tag indicates the choice. Instantiate the field syncInfoMessageChoiceType like so:

ASN1Tagged asn1Choice = (ASN1Tagged) new LBERDecoder().decode(getValue());
int tag = asn1Choice.getIdentifier().getTag();
syncInfoMessageChoiceType = SyncInfoMessageChoiceType.values()[tag];

Now comes the tricky part. As JLDAP has no clue about the ASN.1 definition, it does not know about the choice, and it can not decode any further. What we can do is get the contents of asn1Choice as an OCTET STRING, get its byte array, and decode that again with the JLDAP decoder.

So for most choice types we need to decode the tag's contents to a SEQUENCE. Here is a utility method we can add to the AsnUtil class:

public static ASN1Sequence parseContentAsSequence(ASN1Tagged asn1Choice)
        throws IOException {
    ASN1OctetString taggedValue = (ASN1OctetString) asn1Choice.taggedValue();
    byte[] taggedContent = taggedValue.byteValue();
    return new ASN1Sequence(new LBERDecoder(),
            new ByteArrayInputStream(taggedContent), taggedContent.length);
}

With this tool we'll decode the choice refreshPresent. Notice how we decode the contents of the tag, and how refreshDone is set to its default value when we did not see it in the sequence.

if (syncInfoMessageChoiceType == SyncInfoMessageChoiceType.REFRESH_PRESENT) {
    ASN1Sequence asn1Seq = Asn1Util.parseContentAsSequence(asn1Choice);
    for (int i = 0; i < asn1Seq.size(); i++) {
        ASN1Object asnSeqObj = asn1Seq.get(i);
        if (i == 0 && Asn1Util.isOctetString(asnSeqObj)) {
            cookie = Asn1Util.getByteValue(asnSeqObj);
        } else if ((i == (cookie == null ? 0 : 1)) && Asn1Util.isBoolean(asnSeqObj)) {
            refreshDone = Asn1Util.getBooleanValue(asnSeqObj);
        } else {
            throw new RuntimeException("Parse error");
        }
    }
    if (refreshDone == null) {
        refreshDone = Boolean.TRUE;
    }
}

When the choice is newCookie, things are a bit simpler. The content of asn1Choice is already an OCTET STRING, so we can use it directly:

if (syncInfoMessageChoiceType == SyncInfoMessageChoiceType.NEW_COOKIE) {
    ASN1OctetString taggedValue = (ASN1OctetString) asn1Choice.taggedValue();
    cookie = taggedValue.byteValue();
}

Again, we have to make JLDAP aware of the new message. It will then be parsed automatically when the message is present in a received LDAP response:

LDAPIntermediateResponse.register(SyncInfoMessage.OID, SyncInfoMessage.class);

Conclusion

In this article I showed how to get started with extending JLDAP. The examples are functional but not always complete. Nor do they always follow best practices (for example, I would normally not declare so many exceptions). I shared some of the pitfalls you will encounter when encoding and decoding messages and controls.

Monday, October 29, 2007

Getting started with Ruby on Ubuntu 7.10

Update 20100922: Apparently this article is so out of date that I even get e-mail asking me to take this article down (or replace it).

Therefore: please do not use this article; in fact, don't even rely on the Ubuntu packages, as they are not all maintained properly.

One approach that may be fine (I did not try, don't blame me), is on http://krainboltgreene.github.com/l/3/. Good luck!

So now I have a new server and a new 10 Mbit/s internet connection. It's time for some applications! Here are my experiences with installing Rails, Radiant and Camping on a fresh Ubuntu 7.10 server installation.

First of all, if your server is running in a closet somewhere (like mine) or, worse, in a remote data center, you need to stop apt-get asking for the installation CD. Edit /etc/apt/sources.list and comment out the line that refers to the installation cdrom.

The next step of course is to install Ruby and Ruby Gems:

sudo apt-get install ruby rubygems ruby1.8-dev

Without ruby1.8-dev you can not do much, so don't forget it!

Where would you be without irb? The default install however does not link it!

sudo ln -s /usr/bin/irb1.8 /usr/bin/irb

Any serious Ruby web application uses Mongrel. However, Mongrel compiles some of its stuff during the install. So you first need to get the compilation tools:
sudo apt-get install make gcc libc6

Mongrel is a trusted enterprise application nowadays, so you can install it with a certificate:

wget http://rubyforge.org/frs/download.php/25325/mongrel-public_cert.pem
gem cert --add mongrel-public_cert.pem
rm mongrel-public_cert.pem
gem install mongrel --include-dependencies -P HighSecurity
Select the highest version followed by (ruby).

Luckily, installing Rails is still as simple as:

sudo gem install rails --include-dependencies

If you want Camping, you'll first need to install SQLite:

sudo apt-get install libsqlite3 libsqlite3-dev

Now you can:

sudo gem install camping --source http://code.whytheluckystiff.net
sudo gem install camping-omnibus --source http://code.whytheluckystiff.net

Installing Radiant is as simple as:

sudo gem install radiant

For some reason the gem executables do not get added to the path. Make sure they do by adding /var/lib/gems/1.8/bin to your path. For example by adding the following line at the end of /etc/bash.bashrc: export PATH="$PATH:/var/lib/gems/1.8/bin".

The favorite database amongst Railers is of course MySQL. I assume you selected LAMP during the Ubuntu server installation, which means you already have MySQL installed. To use MySQL with ActiveRecord, it is best to install the MySQL native adapter. Before you can install that, you first need to install all the compile stuff from above and some more:

sudo apt-get install libmysqlclient15-dev
sudo gem install mysql

Again, for the gem select the highest version number followed by (ruby).

Have fun!

Wednesday, October 24, 2007

How to really test RAM, or My search for system stability

I recently bought a Mac because my previous system kept crashing. It would just beep, and reboot without a visible cause. Since I still wanted to use the old system for my Linux firewall, I needed to find out what the culprit was.

Over time I found out that:
- Reboots started intermittently after installing new memory. However, Memtest86 reported no problems whatsoever.
- Over time the reboots occurred more and more often.
- The problem occurred more often during heavy disk activity, sometimes after only 2 minutes. I could not even finish a long Cygwin install session.
- According to SpeedFan my processor heated up quite a bit (up to 65ºC); after putting a bit of heat sink paste between the CPU and the heat sink, that problem was solved.
- The same SpeedFan reported that my harddisk reached high temperatures as well, hitting 50ºC and still rising when the system went down. Some searches taught me that this is high but acceptable. A full copy of the harddisk (as a USB drive) did not give any problems.
- Replacing the power unit did not help.

When I had moved everything to another motherboard, an interesting thing happened: once, just once out of many reboots, I got a memory failure. Gotcha!

I finally was able to pinpoint the faulty RAM module using an old memory test from Doug Ledford.

Since the script as shown can not be used as is, here is what I did to make it work on Ubuntu 7.10:
- Download a Linux kernel from kernel.org (we are not going to compile a kernel, we just need a large zip file): wget http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.23.1.tar.bz2
- Transform it to a gzipped tar:
bunzip2 linux-2.6.23.1.tar.bz2
gzip linux-2.6.23.1.tar
cp linux-2.6.23.1.tar.gz /tmp

- Download the adapted memtest.sh. My changes auto-detect files named linux-*.tar.gz and use the file name to predict the name of the root folder in the tar.
- One by one place a memory module in your computer and run memtest.sh for each configuration.

The original memtest.sh site has more information on how the script works and why Memtest86 is actually useless. The point is that a modern CPU alone can not put enough load on your memory; with concurrent DMA transfers more errors are detected.

Tuesday, October 23, 2007

Help! Looking for a Java LDAP client library!

I often read that Java is very mature and that you can find Java libraries for everything. Wrong!

I have spent at least 5 full working days investigating how to implement the LDAP synchronization protocol supported by the OpenLDAP server (RFC4533) in our Java product. Here are the libraries I investigated. None of them support RFC4533 out of the box.

JNDI
Sun's implementation is alright for most things, and now that the JVM is being open sourced, you can actually see the com.sun classes you need to program with (BerEncoder and BerDecoder) for new LDAP controls. Unfortunately, JNDI is not actively developed anymore. As far as I can see, the required LDAP Intermediate Response Message (RFC4511, the most recent definition of LDAP) is not supported. I have no idea how to add this to JNDI either.

OpenDS
Another Sun initiative, the open source directory server. The code shows support for the LDAP Intermediate Response Message; however, all code is written from a server perspective. I did not see how this code could be used in a client.

ApacheDS
The code of this directory server has clearly separated code that could be used by both client and server. However, again, no support for the Intermediate Response Message.

JLDAP
I could not download the sources as our firewall does not allow CVS to go through. I'll investigate later. I doubt that Intermediate Response Messages are supported, as there is not much development going on here. I think Novell's resources are all tied to their newer products.

So, still no go. What should I do? Any synchronization supported by OpenLDAP will do. Help!

Update 2007-10-23: It seems that JLDAP has support for Intermediate Response Messages after all! Meanwhile a friendly colleague at another location has downloaded the code for me.

Wednesday, October 10, 2007

Announcement: Version 99 Does Not Exist

Last update: 2011-12-19

Update 2011-08-15: DNS for Version 99 is offline.

In a previous post I announced no-commons-logging (a.k.a. commons-logging version 99.0-does-not-exist). After a request to add no-commons-logging-api I immediately realized that this can be generalized. So here it is: Version 99 Does Not Exist.

Features
Version 99 Does Not Exist emulates a Maven 2 repository and serves empty jars for any valid artifact that has version number 99.0-does-not-exist. It also generates poms, metadata files and of course the appropriate hashes.

For example the following links will give an empty jar, its pom and the maven metadata for commons-logging.

Why?
To get rid of dependencies that Maven 2 insists on putting on your classpath (like commons-logging when you want to use jcl-over-slf4j).

How do you use it?
First of all: if you were using no-commons-logging before, you do not need to change anything! Version 99 Does Not Exist is fully backward compatible with no-commons-logging. Otherwise, read on.

In your pom.xml declare the following 2 things: 1) the Version 99 Does Not Exist repository, and 2) for each jar that you get but do not want, declare a dependency with version 99.0-does-not-exist.

So, for example, if you do not want to be bothered with commons-logging, include the following in your pom.xml:

<repositories>
    <repository>
        <id>Version99</id>
        <name>Version 99 Does Not Exist Maven repository</name>
        <layout>default</layout>
        <url>http://no-commons-logging.zapto.org/mvn2</url>
    </repository>
</repositories>
<dependencies>
    <!-- get empty jar instead of commons-logging -->
    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>99.0-does-not-exist</version>
    </dependency>
</dependencies>
When?
Right now! As I received questions about how stable this service will be, I hereby promise to keep this Maven repository on-line for at least 5 years (or until No-IP stops offering their free DNS service). Read on if you want to run Version 99 Does Not Exist yourself.

How?
Version 99 Does Not Exist is implemented in a single file as a Camping application.

Where is the code?
If you want to run it yourself you'll need the following: Version 99 Does Not Exist download (rb file, 4Kb, MIT license), Ruby, Ruby Gems and Camping. Version 99 Does Not Exist is started with a simple camping version99.rb. Good camping!

Update 2007-11-06: The empty jar is no longer completely empty, as 'mvn site' failed on it. Thanks to Stefan Fußenegger for the report.

Update 2007-12-14: The repository was off line for a day or so because I goofed while switching internet providers. Now it is not only working again, but reachability is better than ever: 10 Mbit/s up and down!

Update 2008-02-09: Version 1.2: another update to the empty jar. Sasha Ovsankin reported that the compiler could not open it. The jar now contains a valid manifest. Thanks Sasha!

Update 2009-05-01: During my move from Amsterdam to Haarlem the server has been off line for a day or so. I am now on ADSL so my internet connection is a lot slower. If anyone want to run a mirror, I am happy to set up a rotating DNS.

Update 2009-07-24: Version 1.3: artifacts with a groupid that contain a dot are now supported. Éric Vigeant, thanks for the bug report!

Update 2009-10-17: Version 2.0: Éric Vigeant's problems were still not over. I have now removed metadata support; this will hopefully make some proxies behave better.

Update 2011-08-15: DNS for Version 99 is offline.

Update 2011-12-19: I just found an alternative with almost the same name: version99.qos.ch. It is a static maven repository with a limited number of version 99 jars.

Monday, October 8, 2007

Watch where you place those Camping constants!

Finally got that bastard Camping application to work properly. I kept getting an ERROR: wrong number of arguments (0 for 3) on the first request. Subsequent requests would sometimes succeed, sometimes not. The funny thing was, the get method was not even called yet!

With deep gratitude to Ruby guru Remco van 't Veer, who found out that you can not use constants in a Camping controller (actually, I already knew it from a previous Camping app, but I had stupidly forgotten). Move the constant up to the main module and you're good to go.

Wrong

Camping.goes :Nowhere

module Nowhere
  module Controllers
    MY_CONSTANT = 'wrong' # Camping going nowhere
  end
end

Correct
Camping.goes :ForPresident

module ForPresident
  MY_CONSTANT = 'fine'

  module Controllers
  end
end

See my next post to see what this is all about.

Monday, October 1, 2007

Camping: optional group in regular expression

Recently I finished a Camping application where I wanted to handle URIs with an optional part. Like so:
class Jar < R '/(.*)\.jar(\.sha1|\.md5)?'
  def get(jarname, hash) # does not work
    ...
  end
end
Unfortunately this does not work. (Does this look familiar? See my next post!) In the end it turned out to be as simple as:
class Jar < R '/(.*)\.jar(\.sha1|\.md5)?'
  def get(jarname, hash = nil)
    ...
  end
end
At RailsconfEurope Manfred taught me another trick:
n = 4

class DigitParty < R "/" + ("(\\d)?" * n)
  def get(*args)
    args.length  # -> 4
    args[0]      # -> first digit or nil
    args[n - 1]  # -> last digit or nil
  end
end
This matches anything from 0 to n digits, where a group is created for each digit. Using * in the parameter list makes the get method work for any value of n. For URIs with fewer than n digits, the corresponding array elements are nil.

Wednesday, September 5, 2007

Multi user mac?

Well who would have thought? I have become a Mac user. After seeing Windows Vista on my mom's computer, I decided I'd had enough of MS. And I love it! No noise, very useful software included, very quick to start and stop, and my girlfriend and I are simply always logged in. Switching takes 2 clicks and 2 seconds. Then an idea struck me. I still have a spare monitor, and attaching a second keyboard and mouse is not so hard either: why can we not work simultaneously? Unfortunately this is not supported. But why not? My question of the month: why is the Mac still a single-user system?

Wednesday, August 1, 2007

Having fun with Mule

Mule, a very nice open source ESB implementation, kept me busy in a not-so-nice way for 2 days.

In the Mule configuration you can inject properties that are created by another container, in my case Spring. You do so with the <container-property> element. However, it seemed that these properties were not set on the services that need them. A full day of debugging Mule configuration parsing code gave no results. The properties were correctly loaded from the Spring container, but somehow they were not used.

When we found out that another deployment unit of our project did work correctly, we started comparing the configurations. It turned out that the configuration was not the problem at all. The problem was a rogue Spring bean.

In my setup Mule is started by Spring. From the Spring configuration you simply instantiate Mule with some bootstrap code as follows:

<bean id="muleManager" class="org.mule.extras.spring.config.SpringMuleBootstrap">
  <property name="configResources" >
    <list>
      <value>classpath:mule-config.xml</value>
    </list>
  </property>
</bean>

Unfortunately, one of the other beans created by Spring instantiates a MuleClient in its constructor. The new MuleClient starts a Mule core (the MuleManager) if it cannot find a running one. I guess you see the problem: a bit later the bootstrap code creates another MuleManager, but by then it is too late and somehow all the container properties get ignored.

Update 2007-08-01: More container property problems: the first transformer gets injected nicely. Somehow another instance is used later on. Not injected with container properties of course.

Thursday, July 19, 2007

No more commons-logging

Update 2007-10-10: No-commons-logging has been superseded by (backward compatible) Version 99 Does Not Exist.

Disclaimer
COMMONS-LOGGING VERSION 99.0-does-not-exist IS NOT IN ANY WAY AFFILIATED WITH THE ORIGINAL DEVELOPERS OF COMMONS-LOGGING NOR WITH APACHE.

Why no-commons-logging?
If you are using Maven you'll know it is practically impossible to move away from commons-logging (with its class-loading problems) and migrate to SLF4J. About every second pom declares a dependency on commons-logging. Unfortunately Maven does not provide an easy way to exclude a certain package throughout your project. You have to exclude commons-logging on each and every dependency that drags it in (including transitive dependencies).
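For comparison, here is what the per-dependency exclusion dance looks like; you would have to repeat this block for every dependency that pulls in commons-logging (the Spring artifact and version are just an example):

```xml
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring</artifactId>
    <version>2.0.6</version>
    <exclusions>
        <!-- repeat this exclusion on every offending dependency -->
        <exclusion>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```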

No-commons-logging is a Maven hack that allows you to exclude commons-logging from your application with a single piece of configuration.

How does no-commons-logging work?
No-commons-logging is a Maven2 package that mimics a commons-logging package with a high version number, but without any actual java code in the jar. This trick works because Maven allows you to specify a specific version for a dependency, and that version will then be used regardless of other dependency specifications.

Update 2007-07-22: Added package to mimic commons-logging-api as requested by Olivier Lamy.

How do you use no-commons-logging?
In your pom.xml include the following piece of xml:

<repositories>
    <repository>
        <id>no-commons-logging</id>
        <name>No-commons-logging Maven Repository</name>
        <layout>default</layout>
        <url>http://no-commons-logging.zapto.org/mvn2</url>
    </repository>
</repositories>
<dependencies>
    <!-- use no-commons-logging -->
    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>99.0-does-not-exist</version>
    </dependency>
    <!-- no-commons-logging-api, if you need it -->
    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging-api</artifactId>
        <version>99.0-does-not-exist</version>
    </dependency>
    <!-- the slf4j commons-logging replacement -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>jcl104-over-slf4j</artifactId>
        <version>1.4.2</version>
    </dependency>
    <!-- the other slf4j jars -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>1.4.2</version>
    </dependency>
    <!-- using log4j as backend -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.4.2</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
</dependencies>

Disclaimer
COMMONS-LOGGING VERSION 99.0-does-not-exist IS NOT IN ANY WAY AFFILIATED WITH THE ORIGINAL DEVELOPERS OF COMMONS-LOGGING NOR WITH APACHE.

Friday, July 13, 2007

(Pre-)announcement: transport-smtpin, an embedded e-mail server for Mule

I am proud to (pre-)announce the open source project: transport-smtpin. Transport-smtpin acts as an embedded SMTP e-mail server for the Mule ESB implementation. Under the covers, transport-smtpin uses the SubEthaSMTP Mail Server (finally, a freaking Java SMTP implementation).

I'll post a download location as soon as I am convinced the implementation is stable (should not take long). Please let me know if you are interested. It will be released under something like the Mozilla Public License Version 1.1.

Update 2007-07-16: This code was sponsored by Uzorg BV.

Update 2007-07-17: A project proposal was made on MuleForge.

Update 2007-8-12: The project has been accepted by MuleForge: mule-transport-smtpin project page. There is no code there yet. It seems to work for me, but not all features are thoroughly tested, nevertheless I'll try to upload the code this week so that more people can play with it.

A bit of history
Transport-smtpin came about when I was searching for a way to receive e-mails directly. In Dutch health care, a widely used standard (mis)uses e-mail as a request-response protocol: the OZIS standard. The actual messages are in an attachment and are defined by one of many Edifact variants.

In the system I am currently working on, there is actually a person waiting for the answer in such an OZIS request-response cycle. So receiving the responses by polling a POP server would incur unacceptable overhead.

In an earlier attempt to get e-mail directly we used Apache James. Our mailet installed in James passes received e-mails immediately to our ESB with a Hessian web service. Though writing a mailet is quite easy, this set-up has some serious disadvantages:

  • Installing and configuring James for this task is not easy nor straightforward.
  • In a cluster, we need to either install one James per cluster node, or devise some way to redirect e-mails to the right cluster node.
In an attempt to get rid of James I stumbled upon SubEthaSMTP Mail Server. SubEthaSMTP is a quite easy to use SMTP server implementation.

As we are using Mule, it would be neat if SubEthaSMTP could be configured as a transport. It took me 3 days (mostly to read Mule documentation), but the result is there: my very first open source project.

Friday, July 6, 2007

Making a site within 1 hour

Pardon my little pause on this blog. I just got a son: Milo. Feel free to look at Milo's website. I built it with Radiant. It took me, including the first content, less than 1 hour!

Thursday, May 24, 2007

Wicket for BSCs, part II

So you are in a Big Slow Company (BSC) and want to switch to Wicket. I wrote about this before, but today's e-mail from Mark Stock on the Wicket user list backs up the arguments with some hard figures. Here is an excerpt with some key phrases:
In my experience, it's taken me about two weeks to get up to speed on Wicket. ..... Now, the prototype that I have so far would have probably taken me at least two weeks in Struts 1 and I already know Struts 1 very well. The difference in productivity between the two frameworks is pretty dramatic.
So? What are you waiting for!

Friday, May 11, 2007

Comparing XML in a JUnit test

Today I tried to compare 2 XML documents in a JUnit test. One was created with Altova's MapForce, the other was the result of a new XmlBeans document (BTW, both are nice products). Notice that these XML documents use a slightly different notation for the main namespace:

Document one:

<?xml version="1.0" encoding="UTF-8"?>
<Message xmlns="http://www.a.nl/a10.xsd"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="C:/longpath/a10.xsd">
  <MessageHeader>....

Document two:

<?xml version="1.0" encoding="UTF-8"?>
<a:Message xmlns:a="http://www.a.nl/a10.xsd">
  <a:MessageHeader>....

Here is what I tried:

1. org.w3c.dom.Document.equals. Well, that goes nowhere: DOM does not override Object.equals, so it just compares object identity.

2. org.dom4j.Document.equals. Same.

3. XMLUnit's XMLAssert.assertXMLEqual. Bummer: it works alright, but it says that Message and a:Message are different, and they are not (they're in the same namespace!).

4. Juxy's XMLComparator.assertXMLEquals. No go, same result.

5. I took a short look at the site of XSLTunit. It says that XSLTunit is a proof of concept. Furthermore, it is targeted at XSLT testing. So I decided to skip it.

6. Reading a bit closer I noticed that XMLUnit 1.0, released in April 2003 (wow, that's old), has a follow-up: XMLUnit 1.1beta1, released in April 2007 (wow, that's new). The website says they fixed the namespace issue. Unfortunately they didn't (yet, I hope).

7. The final solution: with some String.replace calls, I just removed the namespace declarations and the schema location from the documents. XMLUnit 1.0 now works nicely with very good diff messages.
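As a sanity check on point 3: a plain namespace-aware DOM parse (JAXP only, none of the libraries above) confirms that Message and a:Message denote the same element, since only the prefix differs. A minimal sketch; the class name is made up:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class NamespaceCheck {

    // Parse an XML string and return its root element, namespace-aware.
    public static Element root(String xml) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true);
            return factory.newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)))
                    .getDocumentElement();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Two elements denote the same element when local name and namespace URI
    // match; the prefix (or use of the default namespace) is irrelevant.
    public static boolean sameElement(Element a, Element b) {
        return a.getLocalName().equals(b.getLocalName())
                && a.getNamespaceURI().equals(b.getNamespaceURI());
    }

    public static void main(String[] args) {
        Element one = root("<Message xmlns='http://www.a.nl/a10.xsd'/>");
        Element two = root("<a:Message xmlns:a='http://www.a.nl/a10.xsd'/>");
        System.out.println(sameElement(one, two)); // prints "true"
    }
}
```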

Update 2007-05-24 I was quite wrong. XMLUnit does notice the differences in namespace usage (and puts a message in the exception), but it does not fail until it sees a real difference. The real difference turned out to be whitespace. By adding the code below, the differences disappear.

XMLUnit.setControlParser("org.apache.xerces.jaxp.DocumentBuilderFactoryImpl");
XMLUnit.setTestParser("org.apache.xerces.jaxp.DocumentBuilderFactoryImpl");
XMLUnit.setSAXParserFactory("org.apache.xerces.jaxp.SAXParserFactoryImpl");
XMLUnit.setTransformerFactory("org.apache.xalan.processor.TransformerFactoryImpl");
XMLUnit.setIgnoreWhitespace(true);

Tuesday, May 8, 2007

Firewall gone crazy!

Today I saw the solution to one of my weirdest problems ever. We had been searching for about a whole week why our JEE application would suddenly stop. The problem was that it took at least an hour to reproduce it (if at all), and even then we could find next to nothing in the logs. We only started to see the light when we noticed the difference between environments where the error was reproducible and where it was not: the firewall.

Turns out the firewall between the application server and the database server would stop all traffic on the JDBC connection after an hour of idling, without actually killing the connection. WebLogic, and all other applications we tried, totally freak out when this happens. What on earth were the creators of that firewall thinking? When you kill a connection, then kill it! Do not just stop the data flow!

Well anyway, the solution? We now let Weblogic 'test' the connection every 10 minutes. Apparently, the firewall is happier when there is some traffic now and then.

Saturday, May 5, 2007

Agile Web Development with Rails, 2nd edition - Book review

About time for another book review: Agile Web Development with Rails, 2nd edition, written (mostly) by Dave Thomas.

The book shows you all facets of developing a Rails application. This is actually a big improvement over the first edition. That one did not even talk about migrations. There have been a lot of changes in Rails, and these are reflected in the second edition.

Just like the first edition, the second edition is well written and has the same structure: the first chapters walk you through creating a new application, the latter chapters provide more insight into the separate building blocks of Rails. Unfortunately the book never really gets to the bottom of things (though it's pretty good in the Active Record area); there is simply too much to cover. So this book is probably not a good buy if you are already an experienced Rails developer and are prepared to find information about new Rails features on the internet.

So although it took me a couple of months to read this book from cover to cover, my conclusion on this book is short: this is a must-have for any new Rails developer.

Friday, May 4, 2007

Aslak Hellesøy to speak on RubyEnRails 2007!

I just heard that Aslak Hellesøy will speak at the RubyEnRails 2007 conference! If you want to know more about Behaviour Driven Development (BDD), or would love to hear more about RSpec, come to the conference!

Wednesday, May 2, 2007

RubyEnRails 2007 site is live! (Dutch)

Starting today you can sign up for the RubyEnRails day 2007! On the website http://2007.rubyenrails.nl you can register with your OpenID.

The speakers section on the page is not complete yet. The list of speakers will be extended as soon as possible. There will be several presentations on, among other things, RESTful development, Behaviour Driven Development, Radiant and the Camping framework.

So are you a Ruby on Rails specialist, or simply interested in Ruby or Ruby on Rails? Sign up now! Participation is, by the way, completely free!

Tuesday, May 1, 2007

Wicket article translated to French

My article Backward compatible AJAX development with Wicket has just been translated to French by ZedroS! Nice work Joseph!

New layout

I liked the little dots, but they were also a little too psychedelic. With 9 days to go for my blog's first anniversary, I decided it was time for a new layout: Tictac blue.

Thursday, April 26, 2007

Spring integration test hacking

Update 2007-05-01: You probably cannot follow this article if you have not worked with Spring tests yet. The key to this article is at the end, where I show how to call an injected object even though it has been proxied by Spring.
A colleague (Levi Hoogenberg) showed me a working example of integration tests with Spring. The key is to make a JUnit test that inherits from AbstractAnnotationAwareTransactionalTests (they like long names at Spring). Simply override getConfigLocations() and a complete Spring context will be loaded. Any setters in your test class will automatically be called with matching beans from the context (auto-wiring). In addition, you can execute some SQL to initialize a database (for example by calling executeSqlScript() in onSetUpBeforeTransaction()). Each test gets a fresh view of the filled database and runs in a fresh transaction.

My main Spring configuration (in spring.xml) sets up a context with all the services and their Hibernate backend. The MySQL database connection is set up in a separate file (spring-db.xml). The test, however, does not use MySQL but an in-memory H2 database, so that it can easily be run from Continuum. This is configured in yet another file (spring-test.xml).
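To give an idea, a spring-test.xml along these lines could swap in the test database (the bean id dataSource and the H2 settings are assumptions for illustration, not the actual project files):

```xml
<beans>
    <!-- Overrides the MySQL dataSource from spring-db.xml for tests -->
    <bean id="dataSource"
          class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="org.h2.Driver"/>
        <property name="url" value="jdbc:h2:mem:testdb"/>
        <property name="username" value="sa"/>
        <property name="password" value=""/>
    </bean>
</beans>
```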

A big advantage of this setup is that it allows you to test most of the real application wiring. Disadvantage is that it uses Hibernate against H2 and not the MySQL target database. I am not too worried about this, I am not using advanced Hibernate features and I do not have to test Hibernate!

So I was happily writing tests until I found out that one integration test went much too far. The service I was trying to test (let's call it AbcService) did more than just save something in the database; it also called another service to schedule a task. In a real integration test I would need to assert that the task was scheduled. I realized that I was actually misusing the integration test to also do a unit test on AbcService. Instead of writing a proper unit test, I decided to leave it at that.

So how was I going to assert that the scheduler service was called? Here are my attempts:

Solution 1: Test specific config files
Since you cannot override just a small part of the configuration, this solution requires you to duplicate a lot of configuration files. Furthermore, you lose the ability to test the actual application configuration files. As soon as I realized this I gave up on the idea.

Solution 2: Override the configured service by changing the setter in the test class
The test class (AbcServiceImplTest) has a setter to inject the service under test like so:

public void setAbcService(AbcService abcService) {
    this.abcService = abcService;
}
Pretty standard. So my idea was to override the used scheduler service like this:
public void setAbcService(AbcService abcService) {
    this.abcService = abcService;
    // Override the scheduler service with a mock
    SchedulerService mockSchedulerService = ...
    ((AbcServiceImpl) abcService).setSchedulerService(
        mockSchedulerService); // Cast fails!
}
Unfortunately, the passed-in abcService is not the real thing. It has been proxied by Spring to add transaction support. The proxy that Spring uses (the standard JDK proxy) can only be cast to the implemented interfaces, not to an actual class.
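This cast failure is inherent to JDK dynamic proxies: a proxy implements only the listed interfaces and is never an instance of the target class. A self-contained illustration (the names are made up, no Spring involved):

```java
import java.lang.reflect.Proxy;

public class ProxyCastDemo {

    public interface Greeter {
        String greet();
    }

    public static class GreeterImpl implements Greeter {
        public String greet() { return "hello"; }
    }

    // Wrap a target in a pass-through JDK proxy, similar in spirit to what
    // Spring does when it adds transaction support.
    public static Greeter proxyFor(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> method.invoke(target, args));
    }

    public static void main(String[] args) {
        Greeter proxied = proxyFor(new GreeterImpl());
        System.out.println(proxied.greet());                // calls pass through: "hello"
        System.out.println(proxied instanceof Greeter);     // true
        System.out.println(proxied instanceof GreeterImpl); // false: a cast would fail
    }
}
```

This is also why Spring recommends programming against interfaces when its JDK-proxy-based AOP is in play.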

After a lot of searching and looking with the debugger, I finally found a solution. Be warned: this is a big hack. Do try this at home, but don't complain when it suddenly fails.

public void setAbcService(AbcService abcService) {
    this.abcService = abcService;
    // Override the scheduler service with a mock
    SchedulerService mockSchedulerService = ...
    InvocationHandler invocationHandler = Proxy.getInvocationHandler(abcService);
    try {
        invocationHandler.invoke(
            abcService,
            AbcServiceImpl.class.getMethod(
                "setSchedulerService", new Class[] {SchedulerService.class}),
            new Object[] {mockSchedulerService});
    } catch (Throwable e) {
        fail("setSchedulerService failed");
    }
}
Incredible, isn't it?

Friday, March 23, 2007

Ruby and Rails conference update

The preparations for Ruby and Rails 2007 in Amsterdam are progressing quite well. We are proud to have Dr. Nic fly in to give a presentation. Also the rest of the conference is already well filled with many speakers. Mind you, the official announcement has not been done yet!

We are now only looking for more people who want to present a quickie. A quickie is a presentation of exactly 5 minutes. You could use it, for example, to show some cool stuff. So if you are around in June and you made something new, or saw a nice new feature somewhere, feel free to e-mail us at danny at rubyenrails dot nl.

Update 2007-04-23: Unfortunately Simon Willison canceled. He felt too much out of place on a Ruby and Rails conference.

Thursday, March 8, 2007

Turmoil in Wicket land

The impossible task faced by the Wicket team is finally coming to an end. For months and months work was done on 2 branches (1.3 and 2.0), with a third in maintenance mode (1.2). Three branches is really too much for a group of volunteers. For the uninitiated: the 1.3 and 2.0 branches differ in one big way: in 2.0 every component constructor takes one extra argument. Another large difference is that 2.0 supports generic models. For the rest, most new features and fixes were added to both 1.3 and 2.0.

So what are the options? It seems that the 'constructor change' will be dropped. This means that the 2.0 branch will be abandoned. All other 2.0 only features will be ported to 1.3. Users that use the 2.0 beta code from svn will have to migrate to 1.3.

Is this good news? I think so! If this turns out to be true, the excellent work of the Wicketeers will be much more focused again. Furthermore, all the good new features will be available for Wicket users of all branches!

Update 2007-03-19 And yes, it did turn out to be true:

I think it is time to close the vote and count our blessings. [...]

7 +1 binding votes, [...] 4 abstainees, and no non-binding votes.

This wraps it: we will drop the constructor change and migrate all features of trunk to branch 1.x in 2 releases: everything non-Java 5 goes into 1.3 and java 5 specifics into 1.4

Martijn

Wednesday, March 7, 2007

Pre-announcement Ruby En Rails 2007 (Dutch)

Soon it will be that time again: a new Ruby En Rails day. Last year it was a very successful, sunny and sociable day, attended by many Rails and Ruby enthusiasts.

For this year we are again organizing a day. We are hard at work finding a location (probably Amsterdam) and taking stock of which speakers to invite. It will probably all take place on May 31 (in the end it became June 7).

If you have good ideas or requests regarding speakers for this day, surf to Ruby en Rails and leave something in the comments. You can also send an e-mail to danny at rubyenrails.nl.

We are looking for speakers for:

  • Presentations of 45-50 minutes;
  • Lightning Quickies™ of 5-15 minutes.
Possible subjects:
  • Handy Ruby libraries (gems)
  • Rails plugins
  • Interesting real-world applications of Rails
  • Tips & tricks, nice finds, and so on
Update 2007-03-20 The date has since been fixed at June 7. Ruby en Rails 2007 will be held in the building of the Hogeschool van Amsterdam, near Amstel station.

Tuesday, March 6, 2007

Mule patch accepted

Another product that accepted my patch. Till today I was proud to have patches in the following products:
- Xml2Db (specification for an aspect of the API, no code, long time ago)
- Struts (something with generating session ids, not long ago enough)
- Wicket (MixedUrlEncoding, other patches pending)
- Spring (a small documentation error)

Today this list is extended with:
- Mule (MailMessageAdapter potentially adds "null" string)

These were all small things, but I am still proud I was able to move open source further :)

For the future: I wrote a Camping application to manage a virtual e-mail domain for Postfix. My plan is to convert this to a Radiant extension and make it available under the MIT license.

Wicket to the rescue!

My previous site (for the municipality of Amsterdam) is barely finished, and I have already convinced my next client (where I work as a consultant) that Wicket is more suited for them than Spring MVC. Life is sweet! Spring MVC has a lot of Spring quality, but the programming model is just not what I want to work with. Wicket to the rescue!

Tuesday, January 30, 2007

More Ubuntu experiences - continued and discontinued

Che promised and delivered. Almost.

The boot time was indeed fast. Che had compiled the wacom-linux kernel module so the Wacom tablet worked fine. Even dual head worked, well almost...

Problem 1: The screens were in the wrong order. The second screen stands to the left of the main screen. No problem: just one word changed in xorg.conf, restart and done.

Problem 2: The Wacom tablet driver still thinks that the second screen is to the right of the main screen. No idea how to solve this. Che thought this would require a code change and recompilation of the wacom-linux kernel module.

Problem 3: The external screen gets the wrong resolution. Whatever we tried, we could not get it to run at 1680x1050, the optimal Dell 2005FPW resolution. Bummer.

So again, after 2 additional hours of hacking, I am back with MS Windows. Perhaps, if time permits, I'll erase the Ubuntu partition and try familiar SuSE this week.

Note: fear not if you have one of the newer Dell Latitude D620s with nVidia graphics. Dual head works a lot better on those than on my older D620 with Intel graphics.

Friday, January 26, 2007

More Ubuntu experiences :(

Having little to do for a couple of days, I decided it was time to move to a real OS on my work laptop, a Dell Latitude D620. One of my colleagues very enthusiastically recommended Ubuntu. Having read a whole lot of other enthusiastic stories about Ubuntu, it was time to give it another try. This time it was to be Ubuntu 6.10.

Stupid me.

First of all, GParted just gave me an "Error" while resizing my Windows NTFS partition. That was it, just "Error", no details whatsoever. About an hour later I had downloaded the latest GParted ISO and tried again. Luckily this did work.

Second problem was the line "BUG: soft lockup detected on CPU#0!" during startup of Ubuntu. After this the system froze up. Nowhere to go from there. My enthusiastic colleague did a Google search and found out that the wireless driver crashes when the hardware kill switch for the radio is on. Bummer. I flipped the switch, rebooted, and Ubuntu now started. It did take a good 2 minutes longer, just to find out there was no wireless network to access. (Why do you think the switch was off?)

Third problem was the mouse. Well, actually I do not have a mouse but a very nice Wacom Graphire3 tablet. The tablet kind of worked (I could move the mouse pointer) but you could not always reach the edges of the screen. You know, like the place where the menu is located. This really annoyed the hell out of me. Luckily Firefox was already installed, so I searched the internet for solutions. I found several, and none of them worked. There was one guy with the same setup as mine who had worked on it for 3 days! After 4 hours of messing around I found a workaround: setting the mouse acceleration a lot higher.

Today I gave it another try. The older Latitude D620s have a really bad completely sucking low contrast LCD screen that is totally unfit for anything serious. So I decided to configure my kick-ass 20" wide-screen Dell 2005FPW. I found a post on how to configure dual head but to no avail. Boy what a bummer.

Now, I am back on Windows XP and Cygwin. Don't get me wrong, I am a big Linux fan, running it for ages already, and usually avoid MS programs (even on Windows). I liked the looks of Ubuntu and I still do. But as long as it does not run with my hardware, I will not use it.

Sunday, January 7, 2007

Backward compatible AJAX development with Wicket

(Also published in Dutch and French, French translation by ZedroS).

Despite the general acceptance of AJAX, there is still some reticence in certain sectors. The rich interaction provided by AJAX cannot be used by people who, because of a handicap, have to use a browser that does not support Javascript or CSS. For a sector like the government it is not acceptable to exclude these people, and I believe this is an attitude that should be practiced more often.

Many developers today can or will only create web applications that work solely in Internet Explorer. Building an AJAX application that also works without Javascript is simply too much for many companies. Still, a number of techniques are known to do just that. A general approach is given by Unobtrusive Javascript, but this article presents a different, more flexible approach that uses Wicket.

Wicket is one of the few fully component-based web frameworks. In such a web framework one combines components into larger components until you have a web page. The advantage of components is that it is a lot more comfortable to develop and modify components separately. With a page-oriented web framework one must usually develop the whole page at once. Struts, for example, prescribes that you first collect all information for the whole page before a JSP renders the (whole) page.

Composing Wicket components (a Wicket introduction)

One creates a component in Wicket by writing an HTML fragment (the template) and Java code that couples more components to the template. Creation and coupling of components happen during the construction phase. During the render phase the components can add to or change the fragment, or even replace it completely.

Let's look at an example. Here is an HTML template and the associated Java code:

<h1 wicket:id="title">_Template title</h1>
add(new Label("title", "The real title"));
The Label component is coupled to an h1 element. Label will put the real title in the HTML template during the render phase. The result is:
<h1>The real title</h1>

Composing components is just as simple. Suppose we want to use a title with a subtitle in many places. We will create a component for that, a Panel to be precise:

<wicket:panel>
  <h1 wicket:id="title">_Template title</h1>
  <h2 wicket:id="subtitle">_Template subtitle</h2>
</wicket:panel>
class TitlePanel extends Panel {
  public TitlePanel(String id, String title, String subtitle) {
    super(id);
    add(new Label("title", title));
    add(new Label("subtitle", subtitle));
  }
}
The panel can now be used (for example in the template of another panel) with:
<span wicket:id="titlepanel"></span>
add(new TitlePanel(
  "titlepanel", "The real title", "with a subtitle"));

Linking between pages is done with the Link component:

<a href="#" wicket:id="detaillink">Book details</a>
add(new Link("detaillink") {
  public void onClick() {
    setResponsePage(new DetailPage(bookId));
  } });
The Link component will put a Wicket-generated href attribute on the a element during the render phase. When the link is clicked, the Wicket servlet calls the onClick method. In this example the response page is changed to a page that is constructed on the spot (pages are of course also components). After this the response page is rendered and sent to the browser. If the onClick method were left empty, the response page would not have changed and the current page would be rendered again.

Dynamic pages in Wicket

Links are not only for jumping to other pages. In Wicket it is just as easy to change a part of the page by replacing a component by another one. Lets extend the example a bit:

final BookDetailPanel bookDetailPanel = ...;
add(bookDetailPanel);
add(new Link("detaillink") {
  public void onClick() {
    bookDetailPanel.replaceWith(
      new BookDetailPanel(bookId));
  } });
Clicking the link leads to a change in the current page. After this the current page is rendered again, and another book is displayed. Note that exceptionally little code is needed. In many other web frameworks, all information for the complete page must be collected again.

The observant reader will have noticed that replacing a piece of a page is a trick that is nowadays mostly done with AJAX. In the example, however, we have not used a single line of Javascript. Since the whole page is sent to the browser again and again, let's change the example a bit more:

final Component bookDetailPanel = ...;
bookDetailPanel.setOutputMarkupId(true);
add(bookDetailPanel);
add(new AjaxFallbackLink("detaillink") {
  public void onClick(AjaxRequestTarget target) {
    Component newBookDetailPanel =
      new BookDetailPanel(bookId);
    newBookDetailPanel.setOutputMarkupId(true);
    bookDetailPanel.replaceWith(newBookDetailPanel);
    if (target != null) {
      target.addComponent(newBookDetailPanel);
    }
  }
});
During the render phase the AjaxFallbackLink component generates both an href and an onclick attribute on the coupled a element. Furthermore, it makes sure that a number of Wicket Javascript files are added to the HTML header. Depending on whether Javascript is supported, the browser will either request a normal URL or do an AJAX call. In both cases the Wicket servlet calls the onClick method. In the first case the argument of onClick is null and everything works exactly as in the previous example. When an AJAX call is done, Wicket renders only the components that were added to the AjaxRequestTarget, and the result is sent to the browser. On the browser side, the Wicket Javascript searches for the element that needs to be replaced by its id attribute. To make sure the id is set, setOutputMarkupId(true) is called.

With just a few lines of code we have created an AJAX application that even works in browsers without Javascript.

Conclusion

This article shows only a small piece of Wicket's AJAX capabilities. For example, it is fairly easy to let the user know an AJAX call is in progress, to execute some Javascript just before an AJAX call (for example to disable a submit button), or to validate a form input when it changes.

Wicket is not only a fantastic framework for easily building modern and maintainable web applications; it even makes it possible to do so in such a way that older and special browsers can deal with them.