Thursday, December 22, 2016

The king is dead, long live the new iOS Java king!

When presenting Darwino, I'm often asked how we run Java natively on iOS and other platforms that are not known to be Java friendly. Well, we started this journey a few years ago, and the landscape of cross-platform Java has evolved a lot since then. Let me go through the history and explain where we stand now, as a new king has just been proclaimed. If you think the story is too long, then just jump to Multi OS Engine for the answer.

Back in the dark ages, the most promising solution was provided by Google, through a transpiler called J2ObjC. Basically, j2objc converts Java source code to Objective-C source code, which can later be compiled with Apple's Xcode like any native Objective-C application. Google is successfully using this technology internally for a set of projects, including popular ones like Inbox. With this, along with GWT, they claim that 70% of the code is shared between Android, iOS and the web. This is more than decent, but it has some issues:
  • It works at the source code level, while most libraries are currently provided as compiled binaries, through a Maven repository
  • The runtime libraries (i.e. the set of classes available at runtime) are a small subset of the standard JRE libraries. That limits what you can use in your application. Even if you pay attention to only use the least common denominator, it prevents you from using libraries that depend on the missing pieces
  • Objective-C (and Swift) memory management. While Java uses a garbage collector, Objective-C uses reference counting. I could write a whole article on the pros and cons of each of them, but the fact is that Java memory management does not translate properly to Objective-C, leading to hard-to-debug issues
  • Finally, it does not provide easy access to the native APIs like Cocoa, or even JNI (which we use to access SQLite, for example)
Ok, despite all the great things it carries, j2objc didn't fit with our objective of "write once, run everywhere".

Fortunately, a beta of a new product called RoboVM appeared soon after. It was not a transpiler but a real compiler that generates true native code from the Java classes. It removed most of the limitations of j2objc by bundling the Android runtime class libraries (minus the specific UI classes), giving access to the whole iOS API through a Java-to-native bridge, fully supporting JNI... With RoboVM, we migrated our existing Android applications to iOS in a few hours.
Another tool, called Migeran, was also emerging with a very similar value proposition. We moved our apps to Migeran as well. But, after talking to both teams, we decided to go with RoboVM, mostly because RoboVM was open source (well, partly). In the meantime, Migeran was acquired by Intel to extend their INDE offerings. They called it Multi OS Engine, aka MOE. That reinforced the validity of our choice.

But we knew that RoboVM was also a potential target for an acquisition. And what had to happen happened: Xamarin, the .NET based competitor, acquired it and changed the rules: raised the price, stopped delivering to the open source project... Without any inside knowledge of the acquisition terms, I believe that Xamarin's move to Java was not meant to make the Java community happier, but to shake up its main supporter: Microsoft. Which quickly reacted by acquiring Xamarin a couple of months later, and just killed RoboVM. Man, we are orphans!

One of the solutions was to restart from the open source version of RoboVM and contribute enhancements. Actually, there is a team of knowledgeable libGDX developers that cleaned up the code and maintains it. That was the best place to start from. We even contributed the Maven plug-in, thanks to Jesse. But the effort to make this project an enterprise grade tool is still big. It misses a decent debugger, support of LLVM and a few other things. Despite the value of the people behind it, it lacks the support of a real organization dedicating resources to its development.

But events are often chained. Another long-awaited event happened: Intel decided to release MOE as an open source project. More than that, they gave the leadership back to the original creator, Migeran! That's the difference between Intel and Microsoft: instead of just killing and erasing the project, they decided to give it for free to the community. Many thanks, Intel, you deserve the credit!

This changed the game for us. Instead of investing in the open source RoboVM, we decided to move to MOE. The switch was actually easy, as the two tools are close enough. We made sure with Migeran that what we need is part of the platform, for example Eclipse and Maven support. Clearly, this team is great and they got the features in very quickly. As a result, as of Darwino 1.5.0, we have moved to the latest MOE 1.2.x and it appears to be very stable. We now have the same experience we had with the commercial version of RoboVM, including a full debugger! Yes, you can debug the Java code you deployed to your device.
Moving forward, we'll support whatever comes from Migeran: 1.3 is currently in beta, while 2.0 will come with a bunch of great new features, in particular support of LLVM, the latest ART runtime, maybe native Windows support... Well, the dream is coming true.

Last but not least, we have a great relationship with Migeran and in particular its CTO, the mighty Gergely Kis. Man, this guy is really skilled and he knows what he's talking about. We had another opportunity to talk face to face last month, as he came to the first Darwino workshop in Germany. It turned out that we are very much aligned on the vision, so the future looks very promising. We also exchanged ideas on future projects, like bringing a Java based, fully portable React Native to mobile devices.

For more information about MOE, now and in the future, Gergely gave us a presentation during the Darwino workshop. The slides are now available on SlideShare.

If you are a Java developer who'd like to develop for iOS, then MOE is definitely what you're looking for. Start from the Multi OS Engine web site, install the studio and enjoy.

Tuesday, December 6, 2016

The very first Darwino workshop in Germany was a success!

It happened in Cologne, Germany, where 8 different companies joined us to better understand Darwino and get their hands dirty with the code. It reminded me of the first XPages workshop we did in Westford in 2008, before we launched the technology at Lotusphere. Same spirit, with highly motivated people ready to enhance their existing Domino applications! We had several IBM Champions, OpenNTF board members, and even the CTO of the great Multi-OS-Engine.

While the crowd learned about the technology, we learned a lot about the use cases and how the platform could be used. Moreover, as Darwino leverages a lot of technologies, we figured out that we should make the tool installation more seamless, in particular regarding the required mobile SDKs. We are working hard on this and we will provide a 1.6 release with even easier instructions.

For this event, we produced 250+ slides that we made available today on SlideShare. If you missed the workshop and want to know more about Darwino, or if you were there and want to go back to the slides, here is the link:

We introduced a Slack channel where we can all collaborate together. Please ask us if you want to join. We also introduced a new tag, F4Domino, that I used on SlideShare and Twitter: it means "Future For Domino", as this is exactly what Darwino provides to this platform and the community behind it. The lifeline for Domino customers and developers...

Last but not least, I would like to thank all the participants who made this event a success. We had a terrific time during the workshop, but also afterwards when we shared some great German meals and beers. Thanks also to IBM, who hosted us for these 3 days, and in particular Christian Holsing who made it possible.

Thursday, September 22, 2016

When your Domino app meets the cloud: IBM Connections or Microsoft Azure!

I'm proud to announce that we have leveled up our support of cloud integration. In previous posts, I showed how to deploy an application to IBM Bluemix or Microsoft Azure. Now, not only can you deploy to these environments, but you can also use their core services. In particular, you can seamlessly leverage their authentication capabilities and use their directory services.
Before I go further into the technical details, let's watch the nice, short video recorded by our propaganda minister, Mr John Tripp: Darwino Pluggable, Pervasive Authentication

Neat, right? With no coding, just a configuration change, one can switch from local LDAP, form-based authentication to cloud OAuth against IBM Connections Cloud or Microsoft Azure Active Directory! Again, with no code change.

To enable this magic, Darwino uses 2 main components:

  • A user directory service.
    This service gives generic access to all the directory functions: authentication, user data (profiles...), queries (user search, typeahead...) in a platform agnostic way. It can connect to virtually any directory, ranging from LDAP (Domino, Active Directory, Tivoli...) to IBM Connections or Microsoft Azure Active Directory (including the Azure Graph API). Plus, it allows the user information to be extended through the use of data providers. For example, the user information can come from the Microsoft Graph API, extended with information coming from LinkedIn!
    All of this from a single, comprehensive API. But even if the API is generic, you can still access some platform specific information like, for example, the IBM Connections profile payload as XML.
  • A JEE filter that provides the authentication service to the application.
    This filter uses the current directory configuration to authenticate the user, using the best method for the particular directory described above: Basic, Form, OAuth, SAML... Or it can just use what's coming from the web application server (SPNEGO, WebSphere VMM...). Finally, the filter makes available a User object representing the current user, with all its attributes. As mentioned earlier, the attributes can be an aggregation from multiple directories. Plus, it allows dynamic roles based on the current application and/or the current tenant.
    This object is used all over the platform for security purposes: access to services, database access control, workflow...
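To make the attribute aggregation idea concrete, here is a minimal sketch in plain Java. The names (`User`, `resolve`, the provider lambdas) are hypothetical and are not the actual Darwino API; the point is only the merge pattern, where each configured data provider contributes attributes for a user and later providers extend the earlier ones:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class UserAggregation {
    // A hypothetical User: an id plus the merged attribute map.
    record User(String id, Map<String, String> attributes) {}

    // Each provider maps a user id to a set of attributes (e.g. one
    // backed by the Microsoft Graph API, another by LinkedIn).
    @SafeVarargs
    static User resolve(String userId, Function<String, Map<String, String>>... providers) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Function<String, Map<String, String>> provider : providers) {
            // Later providers can extend (or override) earlier attributes.
            merged.putAll(provider.apply(userId));
        }
        return new User(userId, merged);
    }

    public static void main(String[] args) {
        Function<String, Map<String, String>> graph =
            id -> Map.of("mail", id + "@example.com", "displayName", "Jane Doe");
        Function<String, Map<String, String>> linkedIn =
            id -> Map.of("headline", "Java developer");
        User user = resolve("jdoe", graph, linkedIn);
        System.out.println(user.attributes());
    }
}
```

The real platform obviously does much more (caching, queries, platform-specific payloads), but the single-User-object-from-many-sources shape is the same.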
In conclusion, Darwino isolates you from the actual directories you want to use, by providing a consistent, generic API connecting to virtually any existing service, on the cloud or on-premises.

Tuesday, August 2, 2016

Speaking Java at MWLUG this year!

I'm honored that the MWLUG crew selected my session this year. Actually, when I look at all my talks in the past decade, they have been mostly product oriented, whether it was for IBM or more recently for Triloggroup.

But this one will be different. It is about using Java to write portable, high performance enterprise applications. I'll first dig into the Java platform (languages, runtimes, compilers...) to understand what it really is. Then I'll illustrate portable Java applications with real use cases using state of the art tools. I'll be targeting web, mobile and more generally IoT devices. If time permits, I might escape from enterprise software and invade the consumer market a little bit.

Java has been around for more than 20 years. During these years, it has evolved a lot to match new requirements, which makes it the #1 development platform.

Let's have fun together @MWLUG!

Wednesday, July 20, 2016

Darwino, and Java applications, on Microsoft Azure!

In a prior blog post, I showed how we deployed a Java based application to IBM Bluemix. But as you may know, the Darwino platform is pervasive and can run applications on many different application servers, while connecting to many different relational databases.
So let's deploy a Darwino app to Azure, using Tomcat and MS SQL Server.

Similarly to IBM Bluemix, Microsoft offers an Eclipse plugin but, unfortunately, I could not get it to work with large Eclipse workspaces. Let's deploy the application manually until this is fixed.

1- Create an account

My experience: if you plan to keep your application running more than 30 days, then forget about the trial subscription. I initially used it but was not able to convert it to a pay-as-you-go subscription when the trial expired without involving Microsoft support. I gave up, deleted all the already deployed assets and redeployed them with the proper subscription...

2- Create your Java application

For a basic WAR deployment, simply create an app and use the application settings, as explained here. Once done, you have an empty web site that you can already try:

3- Create a SQL Server database

Microsoft Azure features the latest SQL Server 2016, including the new JSON capabilities.
A database in Azure requires 2 resources:
  • A database server
  • A database instance, created within a server
The 2 resources can be created in a single step: when you create the database, it asks for an existing server or lets you create one dynamically.

You can test your new database from a remote machine using tools like DbVisualizer. To do so, you first need to enable access to your client IP address in the firewall. This is again an easy operation: from the database, click on the server URL then on Settings for the server and you have the firewall options. Finally, use the JDBC URL revealed by clicking 'Show database connection strings':
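For reference, the connection string follows the standard SQL Server JDBC URL format. Here is a small sketch that assembles it in Java; the server and database names are placeholders, and you should copy the exact string the Azure portal shows you rather than trust this template:

```java
public class AzureJdbc {
    // Assemble a JDBC URL in the general shape Azure's "Show database
    // connection strings" panel displays. Placeholder values; copy the
    // real string from the Azure portal for production use.
    static String jdbcUrl(String server, String database) {
        return "jdbc:sqlserver://" + server + ".database.windows.net:1433"
                + ";database=" + database
                + ";encrypt=true"
                + ";trustServerCertificate=false"
                + ";loginTimeout=30";
    }

    public static void main(String[] args) {
        String url = jdbcUrl("mydarwinosrv", "darwinodb");
        System.out.println(url);
        // With the Microsoft JDBC driver on the classpath, you would then do:
        // Connection con = DriverManager.getConnection(url, user, password);
    }
}
```

DbVisualizer accepts the same URL once the firewall rule for your client IP is in place.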

4- Accessing Tomcat using FTP

Contrary to IBM Bluemix, Azure lets you access your VM through FTP, which is very convenient. The connection URL and the user name are available in your application information panel.
Now, this deployment model only exposes the Tomcat HOME directory via FTP by default. But this is enough for deploying a WAR or consulting the log files.

Tip: a Darwino app uses some configuration files located outside of the WAR file. On Azure, we copy them to this home directory, and find the directory location in the code using:
String tomcat8 = System.getenv("AZURE_TOMCAT8_HOME");
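To keep the same code running both locally and on Azure, we can fall back to another location when that environment variable is absent. A minimal sketch, where the darwino-config folder name is just an example and not a fixed Darwino convention:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ConfigLocation {
    // Use the Tomcat home exposed by Azure when present, otherwise
    // fall back to the user's home directory for local development.
    static Path configDir(String azureTomcatHome) {
        String base = (azureTomcatHome != null && !azureTomcatHome.isEmpty())
                ? azureTomcatHome
                : System.getProperty("user.home");
        return Paths.get(base, "darwino-config"); // example folder name
    }

    public static void main(String[] args) {
        System.out.println(configDir(System.getenv("AZURE_TOMCAT8_HOME")));
    }
}
```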

5- Deploy your WAR file

Simply copy your WAR file to /site/wwwroot/webapps and Tomcat will automatically deploy it in a handful of seconds.

Now it is ready!

As you saw, deploying a Darwino application on Azure is easy. The whole process took less than 15 minutes. The Eclipse tools should make it even easier and faster, which I will try when the bug mentioned earlier is fixed.

Similarly to IBM Bluemix, I would advise developers to develop using a local Tomcat environment and then deploy the app to Azure when ready: the developer experience is far better this way.

Thursday, June 23, 2016

M2e log while debugging Eclipse plug-ins

This post is targeting Eclipse plug-in developers. If you are creating Eclipse plug-ins and debugging them using the PDE, you might have seen that recently the console is full of garbage coming from m2e (the Maven/Eclipse integration), like:
2015-03-09 21:38:43,210 [Worker-17] INFO o.e.m.c.i.l.LifecycleMappingFactory - Using org.eclipse.m2e.jdt.JarLifecycleMapping lifecycle mapping for MavenProject: com.darwino:dwo-jre-jdbc-pool-dbcp:0.0.1-SNAPSHOT @ C:\phildev\Workspace-luna\darwino-platform\darwino\parent-dwo-core\parent-dwo-jre\dwo-jre-jdbc-pool-dbcp\pom.xml.
That makes the console impractical, particularly because m2e is now bundled with Eclipse. Your own log gets completely diluted within this debug information.
I reported it to the m2e team a while ago, and the issue was identified as an SLF4J configuration problem in Eclipse. But as of Mars 4.5.2, the problem still exists. So I took Igor's suggestion and added the following snippet to one of my plug-ins' Activator:
public void start(BundleContext bundleContext) throws Exception {
	// Activate the m2e configuration bundle so it loads the logging options
	try {
		Bundle m2eLog = Platform.getBundle("org.eclipse.m2e.logback.configuration");
		if(m2eLog!=null && m2eLog.getState()!=Bundle.ACTIVE) {
			m2eLog.start();
		}
	} catch(BundleException ex) {
		// Ok, the bundle might take too much time to start
	}
}
That did the trick and I'm no longer seeing the m2e log. So add that to your own plug-in, at least during debug, and your experience will be back to normal.

Monday, May 16, 2016

IBM Bluemix - Eclipse Developer Experience

I'm an IDE guy, which means that I like to fill a dialog with some information and then some magic, that I understand, is done to achieve the desired results.

Recently, I wanted to deploy a demo app to IBM Bluemix. Of course, I quickly equipped my latest Eclipse environment with the Bluemix plugin and I was able to get my app running on the cloud in a matter of minutes. I like the clean Eclipse Server integration: it basically works like your local Tomcat, but with a remote cloud server. Pretty cool!

But wait, this first attempt was really basic. Now I need to add features, connect to a directory, configure a set of things, add shared libraries that are not in the WARs... Unfortunately, the Eclipse plugin does not let me act on the files that are not directly part of my web application. In particular, I cannot edit the server.xml file (the one carrying the configuration), nor can I add my own config files/libraries in the server directory. From the web UI, I can look at these deployed files, but cannot create/update/delete them. Gasp, not even an open FTP port for this.

Does that mean that I should fall back to the command line tools and leave my beloved Eclipse environment? Well, no. There is actually a good IDE experience once you discover it.

1- The deployment options

There are basically 3 options to deploy your code:
  1. The one that I initially used from Eclipse picks up a set of default configuration properties that you can barely change. Good for a first try, but not sufficient going forward.
  2. The second one is about publishing a server directory to Bluemix. In Liberty's vocabulary, a server directory contains both your applications and the related config files (server.xml), but not the core Liberty code. Moreover, the same Liberty installation can have multiple server directories, thus launching multiple environments at the same time. Deploying such a directory to Bluemix is pretty easy: you create a dir on your local file system, add your application(s) and config files, and push it. Well, several manual steps to do, with potential errors (e.g. in the server.xml content).
  3. The last option pushes the whole server to Bluemix, meaning the server code, the applications, and tutti quanti. This is called a packaged server.
Options 2 and 3 will certainly achieve what I'm looking for, but the doc implies using the command line tools.

2- Doing the job from Eclipse

The Liberty runtime on Bluemix is actually just a regular WebSphere Liberty server. So the first thing to do is to install the WebSphere Developer Tools. Once done, create a server in the "Servers" view. It will seamlessly download the runtime and get your server ready to go. Finally, add your J2EE application to the server and it is ready to run, locally.
Because it is a local server, you can access and modify the server.xml file, add your own config files... In my case, I named my Liberty server 'dwodemo' and I can access the local files in the following directory:

BTW, I get this directory location at runtime by calling System.getProperty("server.config.dir")

Now comes the magic part. Once you have your application(s) deployed to that server, you can right click the server and package it for Bluemix (assuming you already created a Bluemix server entry):

After answering a few basic questions, a server package will be created for you and deployed to Bluemix. I got my app running pretty quickly without having to deal with the command line. Yes! Moreover, I can now change any of these files, and run this action again to update Bluemix.

Such a configuration, with a local Liberty server, actually makes a lot of sense. First, because deploying an app to Bluemix (even just an update) takes time, while it is almost instant with a local server. The code/test developer experience is definitely a lot better with a local server. Then, even if remote debugging works with Bluemix, it can also be an area of performance frustration... Keep it for when you have a problem on Bluemix that you cannot reproduce locally.

With this configuration in place, I'm getting a good developer experience, with quick code/deploy/test cycles. IBM did a good job with the tools; I simply wish they made this information more prominent in the documentation. I had to read many articles and Stack Overflow Q&As to understand what I actually needed to do. I would have loved a Bluemix tutorial, "Bluemix and Liberty for Tomcat developers", that goes beyond a simple WAR deployment and talks about configuration, users, security...

Friday, May 13, 2016

WebSphere servlet resources in your Connections 5.5 extension

If you're deploying an application that executes in the Connections 5.5 WebSphere server, then you may face this issue. Starting with version 3.0 of the servlet spec, a servlet container allows you to package resources (HTML, JS, JSP...) within the application classpath, under META-INF/resources. Per the spec, these resources can be located in jar files deployed in WEB-INF/lib. We now heavily use this capability, as our components are packaged as self-contained jar files including both the Java code and the web resources.
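To make the layout concrete, here is a self-contained sketch that builds such a jar and reads a resource back from under the META-INF/resources prefix. The file names are made up for illustration; a Servlet 3.0+ container would serve the same entry at /js/widget.js once the jar sits in WEB-INF/lib:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class ResourceJarDemo {
    // Build a jar whose web resources live under META-INF/resources,
    // the layout the Servlet 3.0 spec defines for jars in WEB-INF/lib.
    static Path buildJar(Path dir) throws IOException {
        Path jar = dir.resolve("my-component.jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("META-INF/resources/js/widget.js"));
            out.write("alert('hello');".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        return jar;
    }

    // A compliant container serves ".../js/widget.js" from that entry;
    // here we just verify the entry is readable from the jar itself.
    static String readResource(Path jar, String path) throws IOException {
        try (JarFile jf = new JarFile(jar.toFile())) {
            JarEntry entry = jf.getJarEntry("META-INF/resources" + path);
            try (InputStream in = jf.getInputStream(entry)) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path jar = buildJar(Files.createTempDirectory("res-demo"));
        System.out.println(readResource(jar, "/js/widget.js"));
    }
}
```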

While this worked well in C4.5+, it stopped working with C5.5. Actually, the Java classes from the custom jar files are properly loaded, but none of the embedded resources are served by the web server. After some investigation, we found out that the way the jar file is created determines whether it works or not. Precisely, if you create the jar from Eclipse (export...) then it does *not* work. Same if you package it using the Windows built-in zip capability. If you recreate it using 7-Zip, then it starts working... There must be some subtle internal differences between these compressors, and the new WebSphere seems to be very sensitive to them. If you have the issue, you now have a workaround :-).

The problem has been reported to IBM, with all the pieces to easily reproduce it. A PMR was created several weeks ago, but so far no solution/fix has been provided. This is one of these cross-organization problems where ICS argues that it is a WebSphere problem, while the WebSphere team doesn't show a lot of interest.

Monday, January 11, 2016

If your widgets no longer show up properly in Connections 5.5...

This happened to us as we were validating ProjExec with Connections 5.5. As a project is related to a community, we have community based widgets that render ProjExec data within the community. But, as of 5.5, our widgets did not display anymore in the middle column:
However, they work when added to the right column. Damned, we didn't change anything, so it must be something introduced in C5.5. We looked at the existing IBM widget definitions and saw that they had an attribute called "themes" that might be related:
<widgetDef defId="ImportantBookmarks" description="importantBookmarksDescription" modes="view" themes="wpthemeNarrow wpthemeWide wpthemeThin wpthemeBanner" uniqueInstance="true" primaryWidget="false"/>

We tried that on our own widgets and they now work as expected! This seems to be a new "feature", as it is not (yet) documented in the Connections documentation. On the other hand, the IBM Portal documentation describes it.

I submitted a PMR to at least get the documentation updated and, in the best scenario, have backward compatibility maintained when this attribute is not present. I'm sure that our customers migrating to 5.5 will come to our support team with this issue.

Update: Mikkel provided a detailed post on the subject.