Weblogic classloading

Getting a java.lang.NoSuchMethodError is usually the beginning of a great exploration of your platform, in this case WebLogic. The Javadoc says:

Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method. Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed.

What the heck is going on here? The libraries used are embedded in the final archive; I verified that! If in doubt, suspect the classloaders, those publicly known enemies of Java developers 🙂 Rule no. 1 says: “Verify your assumptions”. The fact that a class is in the archive doesn’t necessarily mean that it gets loaded. To verify it, simply pass the -verbose or -verbose:class argument to WebLogic’s JVM in startUp.sh/bin and you will get the origin of every loaded class.
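By the way, the origin of a single suspicious class can also be checked programmatically, without restarting the server; a minimal sketch (SomeFrameworkClass is a placeholder for whatever class you are investigating):

Class<?> clazz = SomeFrameworkClass.class; // placeholder for the class under investigation
System.out.println(clazz.getClassLoader());
// getCodeSource() may return null for classes loaded by the bootstrap classloader
System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());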

The class was loaded from WL_HOME/modules; how is that possible? To understand that, a general understanding of classloading is essential, and then of your J2EE server implementation, e.g. WebLogic, JBoss, etc. This post doesn’t pretend to expert-level knowledge of the topic, so I will rather stay with the general principles and reference the detailed documentation.

Java has several class loaders (bootstrap, extension, …); the important fact is that they work in a hierarchy (parent-child relationship) with a delegation scheme which says when to load a class and from where. The elementary Java delegation principle says: delegate finding classes and resources to your parent before searching your own classpath. Only if the parent cannot find it is the child allowed to load it. So far so good. To complicate matters a bit more, the Java servlet specification recommends looking at the child classloader before delegating to the parent (whether this recommendation was adopted you need to check in the documentation of the J2EE implementation you are using; as you can see, you know nothing based on those rules alone 🙂). So in my case of the WebLogic J2EE implementation:

As you can see, the system classloader is the parent of all the application’s classloaders; details can be found here. So how did the class get loaded from WL_HOME/modules? The framework library must be on the system classpath. But on the system classpath there is just weblogic.jar, not my framework library?
WebLogic 10, in order to improve modularity, placed components under WL_HOME/modules, and weblogic.jar now refers to these components in the modules directory from its manifest classpath. That means another version of the library sits on the system classloader, the parent of all the application classloaders, and thus the libraries included in the application archives will be ignored based on the delegation scheme. (That was probably the idea behind the child-first recommendation in the J2EE classloading delegation scheme.) However, WebLogic does offer another way to solve this case: so-called classloader filters/interceptors, defined in a WebLogic-specific deployment descriptor on either the ear level or the war level.
weblogic-application.xml:

<prefer-application-packages>
    <package-name>org.apache.log4j.*</package-name>
    <package-name>antlr.*</package-name>
</prefer-application-packages>

weblogic.xml:

<container-descriptor>
    <prefer-web-inf-classes>true</prefer-web-inf-classes>
</container-descriptor>
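To double-check which modules weblogic.jar actually drags onto the system classpath, its manifest classpath can be inspected directly; a sketch assuming the usual WL_HOME/server/lib location of weblogic.jar:

unzip -p $WL_HOME/server/lib/weblogic.jar META-INF/MANIFEST.MF

Look for the Class-Path attribute in the output (note that manifest lines wrap at 72 characters, so the list continues on the indented lines that follow).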

Java class version

From time to time you might need to know which version of the JVM the class files were compiled for, or to be more specific, what target was specified when running the javac compiler, as target specifies the VM version the classes are generated for. This can be specified in Maven as follows:

               <plugin>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                         <target>1.6</target>
                    </configuration>
               </plugin>

It is not rocket science, right? To find out the version the code was generated for, we use javap (the Java class file disassembler). The following line does the trick:

javap -verbose -classpath versiontest-1.0.jar cz.test.string.StringPlaying

Compiled from "StringPlaying.java"
public class cz.test.string.StringPlaying extends java.lang.Object
  SourceFile: "StringPlaying.java"
  minor version: 0
  major version: 50
  Constant pool:
const #1 = Method       #12.#28;        //  java/lang/Object."<init>":()V
const #2 = String       #29;            //  beekeeper
const #3 = Method       #30.#31;        //  java/lang/String.substring:(II)Ljava/lang/String;

The major version maps to the Java version based on the following table (taken from an Oracle blog):

Major version | Java version
45            | 1.1
46            | 1.2
47            | 1.3
48            | 1.4
49            | 5
50            | 6
51            | 7
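If javap is not at hand, the same information can be read directly from the class file header; a minimal sketch (every class file starts with the 0xCAFEBABE magic number, followed by the minor and major version):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersion {
    public static void main(String[] args) throws IOException {
        // usage: java ClassVersion StringPlaying.class
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        try {
            int magic = in.readInt();           // 0xCAFEBABE
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort(); // e.g. 50 = Java 6
            System.out.printf("magic: %x, version: %d.%d%n", magic, major, minor);
        } finally {
            in.close();
        }
    }
}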

Build Number

One of the most important things during the SDLC (apart from all the other stuff, for sure) is to keep control over the artifacts deployed to all environments at any given time. Lack of control leads to chaos and generates a lot of extra work for the team, degrading throughput, morale and motivation. No need to even mention that arguments among team members regarding deployed features or fixes definitely do not contribute well to the team spirit.
One common approach to mitigating this risk is generating a build number for every single build, fully automatically. Let’s take a look at how to accomplish this in a common project setup: a Maven project built on a build server, e.g. TeamCity. A sample web application follows.
The common place to store this kind of info is the MANIFEST.MF file. All kinds of archives have this file located at /META-INF/MANIFEST.MF. Various technologies like OSGi use this location for various metadata. Taking advantage of the maven-war-plugin, the content of MANIFEST.MF can be easily customized as follows (${xx} are Maven variables):
Manifest-Version: 1.0
Archiver-Version: Plexus Archiver
Created-By: Apache Maven
Built-By: ${user.name}
Build-Jdk: ${java.version}
Specification-Title: ${project.name}
Specification-Version: ${project.version}
Specification-Vendor: ${project.organization.name}
Implementation-Title: ${project.name}
Implementation-Version: ${project.version}
Implementation-Vendor-Id: ${project.groupId}
Implementation-Vendor: ${project.organization.name}

Setting this up in a Maven project pom file is pretty easy:

         <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
                            <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
                        </manifest>
                        <manifestEntries>
                            <Build-Number>${build.number}</Build-Number>
                        </manifestEntries>
                    </archive>
                </configuration>
            </plugin>
        </plugins>

Where the build.number variable gets supplied by the build server, in an arbitrary format; e.g. for the TeamCity build server:
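A sketch of one way to hand the number over: in TeamCity the %build.number% parameter reference expands to the current build number, so it can be passed in the additional command line parameters of the Maven build step:

mvn clean package -Dbuild.number=%build.number%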

The build number is visible on the build queue status page as well.
To access this build-specific information, a simple JSP page can be created.
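A minimal sketch of such a view (projectInfoView.jsp is a hypothetical name for the view resolved by the controller below; the EL expressions match the model attributes it sets):

<%@ page contentType="text/html;charset=UTF-8" %>
<html>
<body>
    <h2>Project info</h2>
    Built by: ${buildBy}<br/>
    Build JDK: ${buildJdk}<br/>
    Specification: ${specificationTitle} ${specificationVersion}<br/>
    Build number: ${buildNumber}<br/>
</body>
</html>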
The controller accessing this information using Spring MVC (a simplified example) can look like this:
@Controller
public class ProjectInfoController {

    @RequestMapping("/info")
    public ModelAndView getProjectInfo(HttpServletRequest request, HttpServletResponse response) throws IOException {

        ModelAndView modelAndView = new ModelAndView("projectInfoView");

        ServletContext servletContext = request.getSession().getServletContext();

        Properties properties = new Properties();
        InputStream is = servletContext.getResourceAsStream("/META-INF/MANIFEST.MF");
        try {
            properties.load(is);
        } finally {
            is.close(); // close the manifest stream
        }

        modelAndView.addObject("buildBy", properties.getProperty("Built-By"));
        modelAndView.addObject("buildJdk", properties.getProperty("Build-Jdk"));
        modelAndView.addObject("specificationVersion", properties.getProperty("Specification-Version"));
        modelAndView.addObject("specificationTitle", properties.getProperty("Specification-Title"));
        modelAndView.addObject("implementationVendor", properties.getProperty("Implementation-Vendor-Id"));
        modelAndView.addObject("buildNumber", properties.getProperty("Build-Number"));

        return modelAndView;
    }
}

Accessing MANIFEST.MF in a JAR file requires a different approach. The motivation here is taken from the Spring source code:

Package pkg = someClass.getPackage(); // "package" is a reserved word, so pick another variable name
String version = pkg.getImplementationVersion();

Building a JSP page or other presentation layer on top of this shouldn’t be a problem for anyone.

Prague Java Developer Day 2012 Highlights

The Java philosophy from the very beginning has been “compile once and run everywhere”, and this seems to be strengthened even more in the next version of Java. The key message for Java 8 is “write code once and run everywhere”, which implies blurring the edge between Java SE and Java ME. The move of Java towards smartphones, tablets, etc. is clear. The approach and its impact on language constructs are described briefly below, as presented at the conference. Nothing is cut in stone at the moment, but the main objective is clear.
Huge effort is being spent on a new Java modularization system which would reduce the amount of memory consumed by the JVM, reduce the size of final archives, etc. The solution should be backward compatible, with some open questions around the current organization of the JDK and its potential reorganization. The solution relies on creating new logical units composed of existing packages, classes, etc. Details can be found on the pages of Project Jigsaw.
JavaFX, as a rich client platform, went through a huge rewrite with version 2.0. It now supports full interoperability with the Java Swing library. The JavaFX Scene Builder was released for the major platforms.
Java 7 made the next step towards better parallelization with the fork-join framework, which helps you take advantage of multiple processors. Java 8 should move matters even further by embracing functional-style programming with lambda expressions (Project Lambda).
The last main feature presented for Java 8 was type annotations such as @Nullable, @NotNull, etc. This feature is highly desired by the community, as it allows better static code analysis. More info can be found here.
The aforementioned list is neither an exhaustive list of features nor a final list of enhancements in Java 8, but rather a plan.

BPMS in production environment

I couldn’t find a better topic than “BPMS (ActiveVOS 6.1) in production” to close up the whole series.
Although ActiveVOS is certainly a cool product, there is, as usual, space for future improvements. A production environment is something special and should be treated as such. If the production environment is down, there is simply no business. Empowering the business is the main objective of BPMS, isn’t it? So the technology should be ready to cope with those kinds of situations. To cut a long story short: every feature which supports maintainability, reliability, security and sustainability in day-to-day operations is highly appreciated.
During the development life-cycle it can be hard (especially in the early stages of development) to foresee how the system will be maintained, what the standard procedures will look like, etc. The goal is to keep the probability of a process, human or technical error as low as possible, while taking the ease of problem detection into consideration as well.
The following pieces of functionality were found to be highly desirable. Some of the issues can be avoided, or their impact at least lowered, at design time; for the rest, some development effort needs to be taken into account.
  • Different modes of the Console – there are no distinct modes for development and production environments. Such modes would come in handy when you need to grant operations access for their day-to-day routine without letting them modify all the server settings; for example, you may want to restrict the permissions to deploy new processes or to start and stop services.
  • Reliable failover – maybe this question belongs more on the infrastructure side. As a BPMS fully lives in a DB, the typical solution consists of cloning the production DB to a backup DB instance, which is started in case of a failure. But if some kind of inconsistency gets into the DB during the crash of the main instance, it is immediately replicated to the backup instance. Does it then make sense to start the backup instance at all?
  • Lack of data archiving procedures – the solution itself doesn’t offer any procedure for archiving completed processes, and because of legal restrictions specific to the business domain you are working in, you cannot simply delete them. As your DB grows in size, the response time of the BPMS grows as well, and you can easily get into trouble with your time-out policy; data growth of 200GB per month is feasible. You cannot simply work this problem out using advanced features of the underlying DB like partitioning, because you want processes which logically belong together to end up in one archive. You will struggle to find a partitioning criterion which is usable in practice and fulfills that requirement.
  • Process upgrade – one of the killer features. Migration of already running processes to an upgraded version works only in the case of small changes to the process. Moreover, what if your process consumes an external WS which lives completely on its own? What if someone enhances that service and modifies its interface? Yes, versioning of the interfaces comes to attention. Having a process upgrade feature without versioned interfaces is almost nonsense, or at least needs special attention while releasing. Even with versioned interfaces it is not applicable in all situations, e.g. sending a new data field whose presence in the system is not guaranteed. In large companies this feature is a must; otherwise it is hard to manage and coordinate the releases of all the connected applications.
  • Consider the product road map – this item actually belongs to the project planning phase, where we decide which technology to use. In some environments like banking, insurance, etc. there can be legal requirements to have every product in the production environment supported by the vendor. If the vendor’s release strategy is a new major version every half a year and the support scope is the current major version plus two major versions back, this can pose a problem for the product maintenance team during the product life cycle. Migration of all non-terminated processes may not be a trivial thing, and as such it represents an extra cost.

Testing BPMS component

I remember a discussion with one of our QA guys regarding BPMS testing that I want to share. I was asking QA for the requirements on the system and was curious what methodology was being used for this component. The answer I got, and will probably never forget, was: BPMS is a minor part of the system, hence we are not supposed to test it at all. The motivation behind this article is simply the fact that this approach wasn’t correct, and to provide some insight into what’s going on. There is no ambition to provide a complete methodology or best practices regarding the testing of a BPMS component; that is the role of skilled QA.
A BPMS is a solution for orchestrating your business services inside the house; simply put, it drives the workflow. A BPMS isn’t usually a decision maker. Decision-making rules are typically required to be flexible and subject to frequent changes; they should reflect business changes as quickly as possible. Because of that, it is not good practice to hard-code them into processes in the form of a “spaghetti code structure” (if-else structures nested several levels deep), which is error-prone and hard to maintain. Those are the reasons for having a separate component responsible for decision making – a BRE (business rule engine). So the QA task can be divided into the following main objectives for functional testing. Verify, for given input data:
  • is all the data necessary for making a decision present at the specified point? This can be difficult because of the large number of incoming paths to the decision point. Regardless of the execution path, you are verifying that all the needed data has been gathered in the system.
  • based on the decision results, are the steps actioned in the correct order? This is verification of the required business process.
  • are the fault recovery procedures working correctly? This means switching the system to fault recovery mode and verifying that the system stored all data correctly and completely.
For sure there can be more aspects, but those are the main ones. The main problem is that these aspects cannot be tested in isolation. By isolation I mean that you cannot use the standard methodologies (black box, white box, whatever it is) and point them somewhere in the system. A BPMS is a system component that has “memory”, which means you cannot simply divide the process arbitrarily into parts to be tested separately. Some systems may have something like “points of synchronization” (points where, regardless of the execution path, the system has a defined data set), but this depends on the design and hence isn’t mandatory.
Let’s have a look at the possibilities. The product itself offers a feature called BUnit, an alternative to JUnit in the Java world, which facilitates process unit testing. All invoke activities within the process are mocked, i.e. the XML reply is recorded. XML manipulation expressions and the gathering of data within the flow (aspect 1) can be tested this way by a correct choice of the recorded data, but the tests still take place under artificial conditions. Aspect 3, fault recovery, can be tested relatively easily with this approach, provided no awkward decisions were made during the design phase. The test analyst is the key role in this process. No need to talk about the documentation of the system itself. Unit testing of the BRE is a completely separate chapter, not discussed here.
Having verified the basic functionality of the building blocks (processes and subprocesses), we can continue with integration testing. Systems of this kind usually have a high degree of integration, so it is really handy to have all the back-end systems under your control. Reason no. 1: the system is data driven, so its behavior depends on the data in those systems. Reason no. 2: a BPMS has “memory” (it is stateful), so if you want to test from a certain point in the process, you have to bring the system to that point, repeatedly and in a well-defined way. The approach used in web application testing, modifying data in the DB to bring the order, application, etc. into a certain state, is not sufficient here. Having simulators of the real back-end systems has proved to be really good practice: you simply isolate your system, and the time to localize an error drops significantly. This way you can conduct integration testing of bigger functional blocks, up to end-to-end testing. There is no doubt that a high level of automation is a must.

K-V pairs to java bean re-map

From time to time you need to re-map key-value pairs to regular Java beans in your projects. One really nasty solution to this task is to do it manually. Yes, it works, but this approach is not flexible and moreover it is error-prone: the mapping of each field is hard-coded, and when adding, removing or modifying fields you have to correct all the hard-coded mappings, which is really awkward.
Another approach is to use a K-V-to-bean re-mapper, for example ObjectMapper from the Jackson JSON library; Commons BeanUtils offers some possibilities as well.
If for some reason you cannot use these libraries, e.g. a legal problem, or you simply don’t find an implementation which suits your needs, then it is time for your own implementation.
The following example implementation re-maps to primitives and enums from their string representation. Some highlights: Java 1.6 doesn’t offer any way to find the wrapper class for a primitive, and wrapping any problem (Exception) into a base RuntimeException is not a good approach, so in real usage it is suggested to change this; in the context of this example I think it’s fine.

import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class KVBeanRemaper {

    /** Mapping of primitive types to their wrapper classes (no JDK support for this in 1.6). */
    private static final Map<Class<?>, Class<?>> wrappers = new HashMap<Class<?>, Class<?>>();

    static {
        wrappers.put(byte.class, Byte.class);
        wrappers.put(short.class, Short.class);
        wrappers.put(int.class, Integer.class);
        wrappers.put(long.class, Long.class);
        wrappers.put(float.class, Float.class);
        wrappers.put(double.class, Double.class);
        wrappers.put(boolean.class, Boolean.class);
    }

    public static <T> T remap(Map<String, Object> keyValue, final Class<T> classMapTo) {

        final Set<String> dataToMap = new HashSet<String>(keyValue.keySet());

        T res;
        try {
            res = classMapTo.newInstance();

            for (Field f : classMapTo.getDeclaredFields()) {
                final String key = f.getName();
                if (!dataToMap.contains(key)) {
                    continue;
                }
                if (f.getType().isEnum()) {
                    // enums are re-mapped from their (case-insensitive) string representation
                    @SuppressWarnings({ "unchecked", "rawtypes" })
                    Object enumValue = Enum.valueOf((Class<Enum>) f.getType(),
                            keyValue.get(key).toString().toUpperCase());
                    findAccessMethod(true, f, classMapTo).invoke(res, enumValue);
                    dataToMap.remove(key);
                } else if (wrappers.containsKey(f.getType()) || f.getType() == String.class) {
                    Class<?> c = f.getType();
                    if (c.isPrimitive()) {
                        c = wrappers.get(c); // cast via the wrapper class
                    }
                    findAccessMethod(true, f, classMapTo).invoke(res, c.cast(keyValue.get(key)));
                    dataToMap.remove(key);
                }
            }
        } catch (Exception ex) {
            throw new RuntimeException("Error while remapping", ex);
        }
        if (!dataToMap.isEmpty()) {
            throw new RuntimeException("Complete fieldset hasn't been remapped");
        }
        return res;
    }

    private static Method findAccessMethod(boolean setter, final Field field, final Class<?> klazz)
            throws IntrospectionException {
        PropertyDescriptor pd = new PropertyDescriptor(field.getName(), klazz);
        return setter ? pd.getWriteMethod() : pd.getReadMethod();
    }
}
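
A minimal usage sketch (Employee and Position are hypothetical types invented for this illustration):

public enum Position { DEVELOPER, TESTER }

public class Employee {
    private String name;
    private int age;
    private Position position;

    // getters/setters are required, findAccessMethod looks them up via PropertyDescriptor
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
    public Position getPosition() { return position; }
    public void setPosition(Position position) { this.position = position; }
}

Map<String, Object> data = new HashMap<String, Object>();
data.put("name", "John");
data.put("age", 30);               // Integer value for the int field, cast via the wrapper
data.put("position", "developer"); // re-mapped to Position.DEVELOPER

Employee employee = KVBeanRemaper.remap(data, Employee.class);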