Category Archives: java

Ranting about a rant

First I stumble upon a rant. I often read these and write a long comment about all the faults or misunderstandings that led to the outburst of hatred, but this time it turned out that the site hosting the article has a botched comment form: it tries to load a WYSIWYG editor, which in turn hides the original <textarea>. Sigh.



Waiting for Java 8: JSR-310, the threeten

For Java 7 the Fork/Join framework looked complete, but I guess no one was eager to use it while lambdas were still missing. Also interesting was the new filesystem API, which I'm yet to try out. I was really disappointed when JSR-310 was not included in Java 7; the current JDK time/date APIs are a shame. I thought I'd take a look at the status of JSR-310.


Maven release plugin and skipping tests

Shortblog: I never seem to remember this. I'd rather not have the maven release plugin run any tests. The plugin forks a separate Maven build, so a plain -DskipTests on the command line is not passed along; hand it over through the arguments property:

mvn release:prepare release:perform -Darguments="-DskipTests"


Asserting generic implementations in Java

A while back I ended up creating a simple interface like Predicate<T> as follows:

public interface Predicate<T> {
  boolean evaluate(T state);
}

Now when you create a bean that requires a predicate on some object, you’d code a setter for it:

package org.example.foos;

public class Foo {
  private Predicate<Bar> guardPredicate;

  public void operate(Bar bar) {
    if (!guardPredicate.evaluate(bar)) {
      throw new IllegalStateException();
    }
    // do stuff with bar
  }

  public void setGuardPredicate(Predicate<Bar> guardPredicate) {
    this.guardPredicate = guardPredicate;
  }
}

And embed this bean in your Spring ApplicationContext like:

<beans ...>
	<bean class="org.example.foos.Foo">
		<property name="guardPredicate">
			<bean class="SomeBarPredicate" />
		</property>
	</bean>
</beans>

From the name of “SomeBarPredicate” you’d expect it to implement Predicate<org.example.foos.Bar>, but there’s no guarantee on that; an implementation of Predicate<java.lang.String> could be passed in as well. There’s no automatic runtime checking because of type erasure.
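To see erasure concretely, here's a minimal JDK-only sketch (class and method names are mine): two differently parameterized lists share the exact same runtime class, so there is nothing left for the runtime to check.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {

	// After compilation the type parameters are gone; both lists are
	// plain ArrayList instances at runtime.
	public static boolean sameRuntimeClass() {
		List<String> strings = new ArrayList<String>();
		List<Integer> ints = new ArrayList<Integer>();
		return strings.getClass() == ints.getClass();
	}

	public static void main(String[] args) {
		System.out.println(sameRuntimeClass()); // prints "true"
	}
}
```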

Class metadata to the rescue!

Whenever your class implements or extends a generic interface or class, this data is recoverable through Class.getGenericInterfaces() or Class.getGenericSuperclass().
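For the simple case, where a named class directly implements the generic interface, a hand-rolled lookup might look like this (a sketch with my own names, not the post's code):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

public class GenericMetadataDemo {

	interface Predicate<T> {
		boolean evaluate(T state);
	}

	// A named class fixing T = String; anonymous and raw implementations
	// are exactly the cases where this gets painful.
	static class StringPredicate implements Predicate<String> {
		public boolean evaluate(String state) {
			return state != null;
		}
	}

	// Walk the directly implemented interfaces looking for Predicate<X>
	// and return X when it is a plain class.
	static Class<?> resolvePredicateArgument(Class<?> impl) {
		for (Type iface : impl.getGenericInterfaces()) {
			if (iface instanceof ParameterizedType) {
				ParameterizedType pt = (ParameterizedType) iface;
				if (pt.getRawType() == Predicate.class) {
					Type arg = pt.getActualTypeArguments()[0];
					if (arg instanceof Class) {
						return (Class<?>) arg;
					}
				}
			}
		}
		return null; // raw type, type variable, or declared further up the hierarchy
	}

	public static void main(String[] args) {
		System.out.println(resolvePredicateArgument(StringPredicate.class));
	}
}
```

This only handles direct implementation; resolving type variables through whole class hierarchies is where it gets truly painful.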

Getting the information out is rather painful though, and I'm not going to reiterate the examples out there. For one, please see Reflecting generics by Ian Robertson.

Luckily Spring 3.0 has a utility class for this: GenericTypeResolver.

For our example, the setter should be rewritten as:

  public void setGuardPredicate(Predicate<Bar> guardPredicate) {
	Class<?> param = GenericTypeResolver.resolveTypeArgument(guardPredicate.getClass(),
		Predicate.class);
	if (param != null // if null, it was unrecoverable, treat it as Class<Object>
		  && !param.isAssignableFrom(Bar.class)) {
		throw new IllegalArgumentException();
	}
	this.guardPredicate = guardPredicate;
  }

So let's test this with JUnit 4 and Spring testing utilities:

package org.example.foos;

import org.junit.Test;
import org.springframework.test.util.ReflectionTestUtils;

public class FooTest {

	private Foo foo = new Foo();

	@Test
	public void testSetGuardPredicate() {
		setGuardPredicateScenario(new Predicate<Bar>() {
			public boolean evaluate(Bar state) {
				return state.toString().contains("Bar");
			}
		});
	}

	@Test(expected = IllegalArgumentException.class)
	public void testSetGuardPredicateWithWrongTyped() {
		setGuardPredicateScenario(new Predicate<String>() {
			public boolean evaluate(String state) {
				return state != null && !state.isEmpty();
			}
		});
	}

	@Test
	public void testSetGuardPredicateWithUntyped() {
		setGuardPredicateScenario(new Predicate() {
			public boolean evaluate(Object state) {
				return state != null;
			}
		});
	}

	private void setGuardPredicateScenario(Object predicate) {
		ReflectionTestUtils.invokeSetterMethod(foo, "guardPredicate",
				predicate, Predicate.class);
		foo.operate(new Bar());
	}
}

Voilà, the tests pass. Be sure to run them without our assertions as well; the testSetGuardPredicateWithWrongTyped result then shows a not-so-nice ClassCastException:

java.lang.Exception: Unexpected exception, expected<java.lang.IllegalArgumentException> but was<java.lang.ClassCastException>
Caused by: java.lang.ClassCastException: org.example.foos.Bar cannot be cast to java.lang.String
	at org.example.foos.FooTest$2.evaluate(
	at org.example.foos.Foo.operate(
	at org.example.foos.FooTest.setGuardPredicateScenario(
	at org.example.foos.FooTest.testSetGuardPredicateWithWrongTyped(

With the assertions we catch that configuration problem earlier, during the context refresh stage.

Notes for registering bean defs at runtime

When registering bean definitions at runtime via BeanDefinitionRegistryPostProcessor or the like, always set AbstractBeanDefinition.setResourceDescription(String). It will be shown in the kilometre-long exception reports when everything fails, helping you debug. Even the following will tear less hair from your head:

private void registerFoo(String beanName, BeanDefinitionRegistry registry) {
  GenericBeanDefinition def = new GenericBeanDefinition();
  def.setResourceDescription(toString()); // or getClass().getName()
  registry.registerBeanDefinition(beanName, def);
}

The above will result in:

Exception in thread “main” org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [] for bean with name ‘foo’ defined in your-toString(); …

Instead of:

Exception in thread “main” org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [] for bean with name ‘foo’ defined in null; …

I first thought that it might be nicer to define beans in straight Java, but I've now realized it gets trickier (and less obvious); save yourself and just do it the plain XML way!

Customization of Spring beans

Time and time again I've explained to people (at work and elsewhere) how to do bean customization with the XML configuration.

The problem people seem to face (I know I did) is:

  • you have a “vanilla” or default configuration (what you ship to clients A, B, C)
  • you have a customer D, who would need/like things X, Y, Z be different

If you have coded Spring-style beans, you will have a lot of properties (at least setters) you can modify in them — let's call this scenario A. If you need a whole different implementation (say, a strategy pattern), you'd need to replace the definition in "vanilla" with a new one — scenario B.

Scenario B is more straightforward than scenario A. Basically what you’ll need to do is define the customized bean definition after the original (“vanilla”) definition. Remember that the two bean definitions (vanilla, customized) must have the same id attribute.

For web applications with web.xml this is achieved by including the vanilla.xml before customerspecific.xml.

Once you've done this, turning up the logging for org.springframework during the refresh phase of ConfigurableApplicationContext will show this happening; of course you should try this out in a unit test to convince yourself that it actually works. (I'll post a unit test example highlighting this solution once I figure out a proper way of including snippets in this wordpress.)

Scenario A requires the same loading/inclusion procedure as Scenario B; you must load the vanilla before customized bean definition. However in this scenario you’ll often (or at least later on in development) run into a problem with the customized and vanilla definition sharing some properties (or even all properties, if you have two implementations with the same setters). You shouldn’t be forced to copy+paste those common properties as that will be an instant configuration nightmare.

Better is to have three bean definitions:

  • id="abstractCustomizableBean", abstract="true", with all the common properties
  • id="actualCustomizableBean", parent="abstractCustomizableBean": the vanilla bean definition
  • id="actualCustomizableBean", parent="abstractCustomizableBean": the customer-specific bean definition, overriding the vanilla one
<!-- vanilla.xml: -->
  <bean id="abstractCustomizableBean" abstract="true" class="com.example.Foo">
    <description>Example of a simple parent-child customizable bean</description>
    <property name="bar" value="abbacd"/>
  </bean>

  <!-- concrete definition: note that this inherits all props from parent -->
  <bean id="actualCustomizableBean" parent="abstractCustomizableBean"/>

<!-- customerspecific.xml: -->
  <bean id="actualCustomizableBean" parent="abstractCustomizableBean">
    <!-- customer needs this message to have a meaning -->
    <property name="bar" value="ACMEBar" />
    <!-- they also required that the interval must be precisely 5 seconds -->
    <property name="interval" value="5" />
  </bean>
In the best case, your "abstractCustomizableBean" definition will contain the class attribute, all other required attributes, and most, if not all, of the properties for vanilla.xml's "actualCustomizableBean". That leaves vanilla.xml's "actualCustomizableBean" defining just the parent.

Then in the last definition you’ll only specify the customer specific properties and their respective values.

For added "customizability" you might want to keep vanilla's "actualCustomizableBean" definition right under "abstractCustomizableBean", so that your coders will quickly find the defaults as they grep for "actualCustomizableBean".

(As said above, I’m in the process of trying to figure how to add properly formatted code/xml in these posts, will update it later on.)

EDIT: Added at least some XML, as promised.

Managing Hibernate logical deletes

Recently we discovered a need to apply a "logical delete" to our business entities, to avoid screwing up history details and to leave the audit trail intact. "Logical delete" means marking a row deleted instead of issuing a physical SQL DELETE.

To implement logical delete on hibernate (3.3.2) you’ll basically need the following ingredients:

  • add @SQLDelete for your entity to update an existing column (property)
  • (optional) set up a filter to filter out all your logically deleted rows by default, and rig @Filter to your entity

After this your Session#delete(Object) will only update the field, and if you added the @Filter and apply it, your searches (note: not primary-key loads or gets) will not return the logically deleted rows.
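As a sketch, the entity-side ingredients could look like this (the table and column names are my own illustrative assumptions, not from our codebase):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

import org.hibernate.annotations.Filter;
import org.hibernate.annotations.FilterDef;
import org.hibernate.annotations.SQLDelete;

// Assumes an int "deleted" column on the order_row table.
@Entity
@Table(name = "order_row")
@SQLDelete(sql = "UPDATE order_row SET deleted = 1 WHERE id = ?")
@FilterDef(name = "notDeleted")
@Filter(name = "notDeleted", condition = "deleted = 0")
public class OrderRow {

	@Id
	private Long id;

	private boolean deleted;

	// getters and setters omitted
}
```

Note that the filter is off until you call session.enableFilter("notDeleted"), and it does not apply to loads or gets by primary key.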

If by now you have extensive unit tests, which test the behavior on your entity’s relations, you might have noticed that Collections owned by your entity are all deleted. This is because Hibernate does not know whether your @SQLDelete updated or deleted the row, and assumes that it was deleted.

To fix that, you need to roll your own CollectionPersister, most likely by extending
org.hibernate.persister.collection.BasicCollectionPersister. The important method here is #remove(Serializable, SessionImplementor) and it will be called:

  • when the owner entity has been deleted and org.hibernate.event.def.DefaultFlushEntityEventListener notices this
  • when the collection has become empty while it wasn’t previously

The latter point means that just overriding #remove(..) with an empty method will not allow collections using this persister to be cleared after they have been persisted with at least one entity.

To fix this you need to call super.remove(..) when the owner’s org.hibernate.engine.EntityEntry#getStatus() != org.hibernate.engine.Status.DELETED.

You can find the owner through SessionImplementor#getPersistenceContext()'s #getCollectionOwner(id, this) and get its EntityEntry through PersistenceContext#getEntry(Object).

Add @Persister(impl=YourCollectionPersister.class) to each of the collections you'd like to prevent from being deleted, and that's done.
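A sketch of the persister described above, assuming Hibernate 3.3 (the BasicCollectionPersister constructor signature varies between versions, so check yours):

```java
import java.io.Serializable;

import org.hibernate.cache.access.CollectionRegionAccessStrategy;
import org.hibernate.cfg.Configuration;
import org.hibernate.engine.EntityEntry;
import org.hibernate.engine.SessionFactoryImplementor;
import org.hibernate.engine.SessionImplementor;
import org.hibernate.engine.Status;
import org.hibernate.persister.collection.BasicCollectionPersister;

public class LogicalDeleteCollectionPersister extends BasicCollectionPersister {

	public LogicalDeleteCollectionPersister(org.hibernate.mapping.Collection collection,
			CollectionRegionAccessStrategy cacheAccessStrategy, Configuration cfg,
			SessionFactoryImplementor factory) {
		super(collection, cacheAccessStrategy, cfg, factory);
	}

	@Override
	public void remove(Serializable id, SessionImplementor session) {
		Object owner = session.getPersistenceContext().getCollectionOwner(id, this);
		EntityEntry ownerEntry = session.getPersistenceContext().getEntry(owner);
		// Skip the physical delete only when the owner itself was (logically)
		// deleted; a collection that merely became empty must still be cleared.
		if (ownerEntry == null || ownerEntry.getStatus() != Status.DELETED) {
			super.remove(id, session);
		}
	}
}
```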

If you’d like to see built-in logical delete support in hibernate go and share your opinions at HHH-6072.

Spring presentation notes

Usually I'd be sleeping at this hour, but I just couldn't, so I stumbled upon a Spring Security 3 presentation from SpringOne 2010 by Mike Wiesner. (Great presentation, by the way.)

Quick notes:

  • still a lot of focus on url-based control (surprise) — still cannot see how we could benefit from expressions in our app
  • UserContextService (why didn’t I think of that!! — 00:24:00) — should store only simple serializable token (like username) in Authentication and have UserContextService provide us with the User entity, perhaps the related entities, sids, stuff like that
  • more specific about roles and rights than with 2.0, which is very good — must’ve confused many
    • rights = business actions
  • @PreAuthorize for rules, not acl 00:54:00, with expressions … abstracting “what the permission is” behind “do this permission check” helps with deployment-time/customer customizations; acl is just one way to implement the check
    • DefaultMethodSecurityExpressionHandler
    • PermissionEvaluator 00:59:00
  • looks better and better with PermissionEvaluator
  • yet I’m guessing there’s nothing on database searches with permissions….
    • yep, @PostFilter, lets see if anyone asks the question what to do with 50000 entities
    • Mike Wiesner has good jokes
    • extends PermissionEvaluator to express itself in sql?
  • using groovy for evaluators, less code, more visibility, powerassert 01:08:00 (extremely nice assert error)
  • deleteable discussion 01:11:00 — good stuff; do not push security properties into entities
    • throw in a mix-in … with groovy…
    • basically groovy supports pushing some kind of map store as “mixins”
    • can access from (jsp) expressions — surprise
    • aha, introduce a SecureEntity interface, push it with @Transient using @AspectJ mixin — looks nice, requires compile/load time weaving
    • if we use hibernate session#load, loadable model wrapper to push values might be the best place, or rewrite loaded proxy to use service layer, push behaviour with annotation to service
    • remember to do service first, ui last
  • kerberos/spnego at 1:21:00
    • default on windows, thank god for no more ntlm
    • at the time of presentation at milestone2
    • missing mostly documentation, hardest part should be kerberos environment setup
  • spring security 3.1 at 01:24:00
    • want to have better/easier right mapping (?)
    • better Active Directory integration

And I also used my night on another great presentation by David Syer and Mark Fisher: Concurrent Distributed Applications with Spring.


  • first SEDA explanation I’ve understood, with the same old Cafe example
  • way too much JCIP repeating

exploring osgi, maven

I've started writing what will hopefully become a series of posts about actually building a Virgo app. The problem with GreenPages is that nowhere are you shown how to roll your own.

So what I aim to accomplish here is

  • a simple hello world app
  • built in eclipse w/ m2eclipse, sts
  • deployed from eclipse to virgo

Later on what I am most curious about is

  • managing online upgrades to existing deployment (as in, have a single deployment running at all times, do incremental upgrades on it)
  • eventually managing the deployment time user friendly
    • building safeguards for application states (views) where data loss would happen if we upgraded, for example, before the user submits a form
    • arranging so that safeguarded states cannot be entered after an upgrade preparation has been initiated
    • could it be possible to give users a read-only access to the application while the new instance is starting up? (google code style)

Hello world tutorial

Starting out, for the simplest hello world I found this rather old tutorial, which presents how to build the simplest bundle and deploy it in an Equinox instance; the original tutorial is by Aneesh Kumar KB.

I will not reiterate that (good) tutorial here; go read it and try it. For me, the activator class was quickly compiled and bundled up; my first ever OSGi bundle was ready, easily started and stopped. That was not so hard.

You should note that if you kill the java process (CTRL-C) and restart it (after having the bundle installed and started), the bundle will now start at launch. See the configuration directory under the directory you placed the OSGi runtime in. More about that in the runtime options manual.

Hello world, maven style

OK, so now let's do that with Maven. As far as I know there are two ways of doing OSGi with Maven:

  • Tycho — for what would seem like MANIFEST.MF first, pom.xml second development
  • maven-bundle-plugin — which does MANIFEST.MF generation from pom.xml

For me the obvious choice is the latter, the maven-bundle-plugin. Obvious because so far I've used Maven to build many released and unreleased software products, both standalone and webapps, and most importantly, I've finally realized how to use it effectively.

Create the Maven project (no Eclipse yet!). Project structure:

  • test-parent (groupId=osgi.test, artifactId=test-parent, packaging=pom)
    • hello-bundle (artifactId=hello-bundle, packaging=jar)

Set up the directory structure, throw the previous class in test-parent/hello-bundle/src/main/java/osgi/test and add the following plugin:

  <plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
      <instructions>
        <!--<Export-Package>we do not export</Export-Package>-->
        <!--<Private-Package>no privates either</Private-Package>-->
      </instructions>
    </configuration>
  </plugin>

When you launch Maven to compile it from the parent project, you'll find that of course we need to add the OSGi framework dependency. Now... how does one find a prepackaged, publicly downloadable Equinox as a maven2 dependency?

It took a while, but of course SpringSource has the bundle! As it's not so nice to include code in a wordpress blog, copy-paste the repository from FAQ question 8. The closest dependency to the current 3.6 was 3.6.1 (more copy-pasting).

For the scope of the dependency I'd guess "provided", as it does not need to be included in any wars we might decide to build. (Looking at how fast the bundle downloads, now might be a good time to invest some energy in a local Artifactory or Nexus instance.) Next up, compile again, and it works now!

Even package works! A quick look at the generated archive reveals that maven-bundle-plugin was not bothered by our configuration; there are no OSGi specific MANIFEST.MF entries. This is because we specified packaging=jar, whereas it should be packaging=bundle.

Now let's see what happens. At first it seems like Maven downloads all the plugins in the world, but it's finally ready. Again we have our jar in target/, and now it even has proper (I think) MANIFEST.MF entries. Now might be a good time to read up on how Bundle-SymbolicName is generated by the plugin (unless specified).

If you now attempt to launch the bundle and notice a missing dependency, Import-Package: osgi.test; version="0.0.0", it's because you mistyped the activator package name: maven-bundle-plugin thinks you need to import that package so the activator can be run from there.

So, there you have it. Too bad wordpress does not allow me to upload jars or tar.gz files. Please comment if you'd like to download the final files. Original copyright of the amazing tutorial belongs to Aneesh Kumar KB.

Next time I'll try to modify this into a HelloWorld servlet and try to get it up on Virgo.

Injecting “foreign beans” into a new ApplicationContext

While moving towards a fully OSGi-supported application, I've started to restrict my application modules by Spring context, to confine them almost as if they were OSGi modules. As we don't have any static lookups anywhere, this is practically OSGi without any package imports and with less class loader security; but we are getting there.

I need to give some services away to my modules; now, how do I do that? After quickly browsing through Pro Spring 2.5 and Spring Recipes and not finding anything, I started looking at the GenericApplicationContext API, after failing to access its BeanFactory before refreshing it.

There's a method addBeanFactoryPostProcessor() which lets you hook into the newly initialized ConfigurableListableBeanFactory and register beans through its various register* methods.

As in my case the dependencies I needed to publish were autowired into my context-building bean by its own ApplicationContext, adding them to any instantiated contexts was a breeze.
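A sketch of that wiring (Spring 3 era, so an anonymous class instead of a lambda; the "auditService" bean name and builder class are just my examples):

```java
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.support.GenericApplicationContext;

public class ModuleContextBuilder {

	// Hypothetical service the parent context autowired into this builder
	private final Object auditService;

	public ModuleContextBuilder(Object auditService) {
		this.auditService = auditService;
	}

	public GenericApplicationContext buildModuleContext() {
		GenericApplicationContext moduleContext = new GenericApplicationContext();
		// Runs after the module's bean factory exists but before any beans
		// are created, so module definitions can depend on "auditService".
		moduleContext.addBeanFactoryPostProcessor(new BeanFactoryPostProcessor() {
			public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory)
					throws BeansException {
				beanFactory.registerSingleton("auditService", auditService);
			}
		});
		// ... load the module's own bean definitions here ...
		moduleContext.refresh();
		return moduleContext;
	}
}
```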

Another way to accomplish almost the same would be to simply set the parent ApplicationContext on the new ApplicationContexts, but that would have allowed my modules to fetch other "non-specified" beans as well.

I'm not sure how ApplicationEvents would be propagated up if you set the parent application context, but as the GenericApplicationContext.addApplicationListener(..) documentation says that new application listeners only get events after the next refresh, those might not get propagated in both directions; most likely only from child context to parent context.

If you choose to specify which events can and cannot be transferred between the contexts, you might opt to add a custom ApplicationListener that propagates only the supported events.