Thursday, August 21, 2014

Running Android apps for development purposes

The next major version of ProjExec is going to support mobile devices, and in particular Android. This means that developers need a way to test the application while developing, and the sales team needs a demo environment.
Like probably everyone else, we started with the emulator bundled with the Android SDK. It has several advantages:

  1. It comes out of the box with the Android SDK, for free
  2. It is heavily configurable, so you can choose from a list of existing devices to emulate, or create a custom one
  3. It can emulate several hardware architectures, including ARM and x86
  4. It behaves like a real device, including the side buttons

While this emulator technically works, it is barely usable at best. Most of the time it is painfully slow... The recent Intel HAXM, now bundled with the SDK, makes it faster because it runs x86 code natively. But even if it improves the experience, it is still slow.

Fine. To get a better experience, I bought a real device: a cheap 10" quad-core tablet from Amazon. I got a very good deal for it and, at that time, Android 4.2 was perfectly fine for development purposes. I've been pretty happy with it, once I got past the configuration steps. I forgot to mention that you need to install a USB driver on your development PC to communicate with the tablet. And none is offered by Matricom, or by other cheap manufacturers. I finally found a generic one that barely works: it does not see the tablet as a USB disk, but the Android Debug Bridge (adb) sees it properly. Good enough for dev purposes.

Now came Android 4.4, aka KitKat. I wouldn't normally look to upgrade, but this time is different. Its browser component is now based on Chrome. This is a game changer for developers because this component can be remotely debugged from another machine, directly from Chrome. See: Remote debugging on Android with Chrome. Wahoo, this works very well for hybrid apps using Apache Cordova. That makes a KitKat-based device really indispensable! But wait, my cheap tablet runs 4.2, with no update in the pipe. It suddenly became obsolete. I haven't opened it for 2 months now. If anyone knows how to upgrade it with a generic version of KitKat, please let me know. Otherwise, it will end up in the cemetery of obsolete devices, or in the hands of a kid in the neighborhood. By the way, I *hate* the way Android devices get OS upgrades. Google should learn from Microsoft, as I don't need to change my PC when I want to update Windows. But this is another debate.

Alright. Before spending more bucks on a recent KitKat-based device, I investigated alternative solutions.

One of them is provided by a French company called Genymotion. It is another emulator, based on Oracle's VirtualBox. It is much faster than the emulator from the SDK, and a lot of devices can be emulated. Plus, it emulates many sensors. But it has a cost that can be seen as prohibitive (>$400 per developer, per year). Hmm, I might buy a few licenses over time. But this can be complex if you hire freelancers; I'm not sure how the licensing model would work. Google should acquire them and get their emulator integrated into the SDK, similarly to what they did with Instantiations' WindowBuilder. That would really help the Android community.

A recent announcement also caught my attention: the port of Android 4.4 to x86 is now officially available. In short, you can run Android as your main desktop PC OS. This is fun, but I don't see why I would do that on my machines. Except that if it can run on a PC, it can also be installed in a virtual environment. I couldn't wait to try it on VMware. I found many blog posts and videos describing the steps. It took me less than 30 minutes to get it downloaded, installed, configured and running. It can be even faster if you run VirtualBox and download the preconfigured image (eeepc). And it is blazing fast. It feels faster than any real tablet I have tried so far, even the latest 12.2" one from Samsung. This is just brilliant! There are still missing bits, like the availability of VMware tools for Android, but this is just sugar on top. Someone will certainly find out how the FreeBSD ones can be installed.

Back to my development use case. Once the VM is running, you can configure the developer options the same way you would on a regular device. And then you can access it remotely by simply running one adb command (typically adb connect <device-ip>:5555). This is well explained here.

It does not fully replace the SDK emulator, or Genymotion, as it doesn't provide a phone-like experience, but it is great for core development and demos.

Let me know if you have the same experience, or have any trick to share. I can start with a few:
- If it goes to sleep, press the 'esc' key for more than a second to wake it up (see: wake up). But it is better to configure the display to never go to sleep, in the Android settings.
- The 'Power Off' button can be displayed by dragging down the top right bar (where the time is).
- Permanently changing the default resolution is done with a few edits, detailed here (easy!)
- Configure a fixed IP for your device; that makes adb easier to use.

Enjoy Android development!


Thursday, August 14, 2014

Connecting to the IBM social platform from a Domino Agent

We recently worked on a project that involved IBM Domino and IBM Connections. The idea was to 'replicate' some of the Connections data into an NSF, so users can access that data offline from the Notes client. Actually, the synchronization between Connections and the NSF happens on a Domino server, and the Notes clients use the regular Notes replication capability to get the data down. So far, it sounds simple.

To access Connections data, we obviously wanted to use the IBM Social Business Toolkit SDK. I said obviously because I was at the origin of this SDK, during my time at IBM :-). It really makes things simpler, by handling all the plumbing for you.

Now, the Connections/NSF synchronization task should be triggered on a scheduled basis. For this, Domino offers at least 3 options:

  • Use an agent
    This seems the obvious answer, although it is *not* the best-performing one, as the classes are loaded every time the agent is executed, then discarded. On the other hand, it takes less than a second to get it bootstrapped. For a task that only runs from time to time, this is not a big deal.
    The other issue is with the SBT SDK, as it had never been run in an agent, at least by IBM. And it failed when I tried it.
  • Use an Eclipse job within the HTTP server
    We explained this technique in a video. It is relatively simple to code, as it uses all the XPages capabilities (managed beans for the endpoints...)
    But it requires some extra permissions that must be set in the global Java security files, or bypassed through a custom plug-in. I implemented a version this way, but I found it too complex to install for this basic project.
  • Use DOTS
    Another great OpenNTF project from my friend David. But the installation complexity is even worse than the Eclipse job. Moreover, even though parts of the project are now in Domino 9, it is not supported by IBM. I anticipated an extra effort to justify this choice, so I decided not to go this route.

Ok, I finally decided to go back to the agent option, and fix what should be fixed in the SDK. That was actually simple. See: https://github.com/OpenNTF/SocialSDK/issues/1604.

With that fix in, here are the steps to call a Connections API from an agent:

1- Create a Java agent in Designer.
No kidding...

2- Import the SDK libraries into the agent
Import the Java archives to get something similar to this:
Note that I used an older version of the SDK, but I would advise you to use the most recent one.

3- Code your agent
If you have already used the SDK, you might know that it uses 'Endpoints' defined as managed beans. But managed beans do not exist in agents.
Fortunately, the SDK provides other ways to define these endpoints, including extension points. But, in my case, to make it as simple as possible, I just created one programmatically:
    BasicEndpoint e = new ConnectionsBasicEndpoint();
    e.setUrl(getConnectionsUrl());
    e.setUser(getGlobalUserName());
    e.setPassword(getGlobalUserPassword());
    // For HTTPS access
    e.setForceTrustSSLCertificate(true);
Other endpoint types (OAuth...) can be created the exact same way. In this example, I'm using basic authentication, with the URL, user name and password stored in a configuration document.
Note the use of setForceTrustSSLCertificate. This allows the SDK to call HTTPS URLs without having to install the SSL certificates on the Domino server. Not fully secure, but this is OK for development or within a known intranet environment.
Then you can create the service you need with the desired endpoint as a parameter:
    BasicEndpoint e = ...;
    CommunityService cs = new CommunityService(e);
    CommunityList communities = cs.getPublicCommunities();
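
To put all the pieces together, here is a minimal sketch of what the complete agent could look like for our 'replicate into an NSF' use case. This is not the actual ProjExec code: the 'Community' form and item names are hypothetical, and the configuration helpers at the bottom stand in for reading a real configuration document.

import lotus.domino.AgentBase;
import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.Session;

import com.ibm.sbt.services.client.connections.communities.Community;
import com.ibm.sbt.services.client.connections.communities.CommunityList;
import com.ibm.sbt.services.client.connections.communities.CommunityService;
import com.ibm.sbt.services.endpoints.BasicEndpoint;
import com.ibm.sbt.services.endpoints.ConnectionsBasicEndpoint;

public class JavaAgent extends AgentBase {

    public void NotesMain() {
        try {
            Session session = getSession();
            Database db = session.getAgentContext().getCurrentDatabase();

            // Endpoint created programmatically, as shown above
            BasicEndpoint endpoint = new ConnectionsBasicEndpoint();
            endpoint.setUrl(getConnectionsUrl());
            endpoint.setUser(getGlobalUserName());
            endpoint.setPassword(getGlobalUserPassword());
            endpoint.setForceTrustSSLCertificate(true);

            // Read the public communities and mirror them into the NSF
            CommunityService service = new CommunityService(endpoint);
            CommunityList communities = service.getPublicCommunities();
            for (Community community : communities) {
                Document doc = db.createDocument();
                doc.replaceItemValue("Form", "Community"); // hypothetical form name
                doc.replaceItemValue("Uuid", community.getCommunityUuid());
                doc.replaceItemValue("Title", community.getTitle());
                doc.save(true, false);
                doc.recycle();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // In the real agent, these values come from a configuration document
    private String getConnectionsUrl() { return "https://connections.example.com"; }
    private String getGlobalUserName() { return "syncuser"; }
    private String getGlobalUserPassword() { return "password"; }
}

A real synchronization would of course look up existing documents by UUID instead of always creating new ones, but that part is plain Domino code.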

For more information on the available services and APIs, have a look at the Playground. It contains many examples of working code.

4- Make sure that your agent can execute restricted operations (HTTP calls in this case) and that it is signed by a user with the appropriate rights in the server security document.

To tell the whole story, I also bundled the code into an OSGi plug-in and called it directly from XPages, in the HTTP task. That made it far easier to develop and debug, particularly with the Domino Debug plug-in, again from David.

If you also want to access the SDK from XPages, then you should install the OSGi plug-ins, following the standard procedure. The jar files in the agents are not visible from XPages.




Thursday, July 31, 2014

Filtering Connections Applications

Continuing the Connections integration series, I'm going to talk today about getting your application surfaced in the Connections Navbar, like we do for ProjExec:


Ok, there is no magic this time as this is documented as part of the Connections customization guide. There is a file called apps.jsp that needs to be customized with your own entries. Here is an example:
  Projects

      --%><tr><%--
          --%><th scope="row" class="lotusNowrap"><%--
            --%><img class="lconnSprite" src="/projexec/tgweb20/media/projexec_blue.gif" alt="" role="presentation"><%--
            --%><a href="/projexec/projexec/pe_main.xsp"><%--
               --%><strong>Projects</strong><%--
            --%></a><%--
         --%></th><%--
         --%><td class="lotusNowrap"><%--
            --%><a href="/projexec/projexec/pe_main.xsp?pepage=myprojects"><%--
               --%>My Projects<%--
            --%></a><%--
         --%></td><%--
         --%><td class="lotusNowrap lotusLastCell"><%--
            --%><a href="/projexec/projexec/pe_main.xsp?pepage=myportfolios"><%--
               --%>My Portfolios<%--
            --%></a><%--
         --%></td><%--
      --%></tr><%--

But wait, some customers asked us to hide these entries for users who are not entitled to ProjExec. Generally, these users are identified using one of their attributes in the enterprise directory, like a group membership, an organization or even a custom attribute.

Then comes the real thing. In ProjExec, we use J2EE roles to control access to the application, and we map these roles to LDAP queries (filters). But these roles are declared in our EAR, and are only valid in our own application context (WAR file). On the other hand, the NavBar executes within all the Connections application contexts, where our roles are simply unknown. As a result, we cannot use roles and role mapping for it.

Of course, we can use a direct call to LDAP using JNDI. But this has several drawbacks:
  • We need information to connect to the server: address, user/password...
  • We need to know the LDAP schema used by the actual directory (objectclass, attributes...)
  • We should implement a cache for efficiency, as this code will be executed every time the navigation bar is displayed
Fortunately, WebSphere exposes a generic directory API that hides the issues listed above. It is called Virtual Member Manager, a.k.a. VMM. Of course, the app server has to be properly configured; I highly recommend the use of Federated Repositories in this case.
To check if WAS is properly configured, go to "Users and Groups" in the admin console and verify that it can access both the users (in Manage Users) and the groups (in Manage Groups). Make sure that the org name is not duplicated and that the groups are properly retrieved (both issues can happen when the LDAP directory is a Domino server).
Example:


Ok, once WAS is set up, let's add the real code. In our case, we'd like to enable the ProjExec menu entries only if the current user belongs to the 'ProjExec' group.
We did that by creating a simple hasProjExecRole() method, and then wrapping the ProjExec menu items within a test (shown below). Note how the WSCredential API from WebSphere is used, independently of the actual directory server.

Voilà, here it is:

=== DO NOT CHANGE ===

   The <lc-ui:serviceLink /> tag can be used to generate links to any service defined in
   LotusConnections-config.xml.

--%>
<%!
   public boolean hasProjExecRole() {
      try {
         java.util.Set creds = com.ibm.websphere.security.auth.WSSubject.getCallerSubject().getPublicCredentials(com.ibm.websphere.security.cred.WSCredential.class);
         for(Object _c: creds) {
            com.ibm.websphere.security.cred.WSCredential c = (com.ibm.websphere.security.cred.WSCredential)_c;
            java.util.List groups = c.getGroupIds();
            if(groups.contains("ProjExec")) {
               return true;
            }
         }
      } catch(Exception ex) {
         ex.printStackTrace();
      }
      return false;
   }
%>

<div role="document"><table class="lotusLayout lotusNavMenuLarge" cellpadding="0" cellspacing="0"><%--

  Projects

   --%><c:if test="<%=hasProjExecRole()%>"><%--
      --%><tr><%--
          --%><th scope="row" class="lotusNowrap"><%--
            --%><img class="lconnSprite" src="/projexec/tgweb20/media/projexec_blue.gif" alt="" role="presentation"><%--
            --%><a href="/projexec/projexec/pe_main.xsp"><%--
               --%><strong>Projects</strong><%--
            --%></a><%--
         --%></th><%--
         --%><td class="lotusNowrap"><%--
            --%><a href="/projexec/projexec/pe_main.xsp?pepage=myprojects"><%--
               --%>My Projects<%--
            --%></a><%--
         --%></td><%--
         --%><td class="lotusNowrap lotusLastCell"><%--
            --%><a href="/projexec/projexec/pe_main.xsp?pepage=myportfolios"><%--
               --%>My Portfolios<%--
            --%></a><%--
         --%></td><%--
      --%></tr><%--
   --%></c:if><%--



Tuesday, June 17, 2014

Get your application integrated within IBM Connections

Since I moved to TrilogGroup, I got several requests on how we achieved the ProjExec integration with the IBM collaboration platforms, as it just looks like part of them. In particular with IBM Connections, where users feel like they never left Connections. We keep the same user experience as any other Connections application. This includes the same navigation bar, the same look and feel (a.k.a. OneUI), the same authentication mechanism... Some of the techniques being used are documented and supported by IBM, but others are just the result of our engineers' inventiveness.

Let me talk today about how the navigation bar can be integrated into a third-party application. IBM SmartCloud for Social Business provides an easy developer experience with a well documented, reusable component. There are multiple articles on the subject, so I won't expand on this. But what about IBM Connections on-premises? IBM does not provide a similar, reusable component. Nevertheless, ProjExec does this integration, as shown below. What is the magic behind it?


Let's talk about some solutions for your own applications.

Disclaimer: This is not documented by IBM, nor has it been communicated to me by IBM. Use the technique described below at your own risk!

One solution is to load, in the main application page, the same files and <script> tags as Connections, in the same order. But this strategy has some issues:
  • It is fragile. What if the next version of Connections adds, renames or removes tags or files? Your application has to be modified accordingly. And it also has to adapt to the different versions of Connections. This can quickly become a maintenance nightmare.
  • What about common files like Dojo? Should they come from your own application or from the Connections server? You can face some weird issues if you use the former.
  • The JavaScript files and the HTML + JS tags must be loaded and executed in the right order within the page. This is very important, otherwise the page will just break. Moreover, the use of third-party additions to Connections makes it even trickier. How can we guarantee that the right execution order is enforced?
But wait, there is an even smarter solution. The thinking is: what displays IBM Connections pages the best? Obviously, it is IBM Connections itself :-). So instead of loading Connections pieces within your application, just load your application within Connections. In short, get an existing Connections page and inject your own content in its body, replacing the existing one. That way the common page components, including the navigation bar, are properly loaded and executed.

To make it real, the first thing you have to do is to identify the simplest possible Connections page: a page with a simple static body you can easily replace with your own content. After browsing the different pages, it looks like the search page /search/web/jsp/toolsHomepage.jsp is a pretty good candidate. Note that this can be simplified if you deploy a specific empty page, with the headers, within the Connections WAR files. But that requires some Connections customization. If you want to use what already exists, then the search page is good enough.

Then you have the choice of doing the work on the server or the client side:

  • On the server side, a piece of code would call the Connections page, grab the HTML markup, remove the undesired pieces and inject its new content into the body. Once done, the ready-to-go page can be sent as a whole to the browser. I can truly imagine an XPages custom JSF ViewRoot renderer doing the job :-) A rough sketch of this idea follows the list below.
  • On the client side, it is a bit more complex, because you need to mash up the Connections page within your browser, then inject the different fragments at the right places, in the right order. But, on the other hand, it does not depend on any server-side technology.
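
To make the server-side option more concrete, here is a rough Java sketch of the idea, under a couple of assumptions on my side: the user's session cookies are forwarded so the navigation bar is personalized, and the main content area can be located with simple text markers. A real implementation should use a proper HTML parser and markers derived from the actual Connections markup; this is an illustration, not the ProjExec code.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectionsPageMashup {

    // Downloads the lightweight Connections page used as a template
    static String fetchTemplate(String connectionsUrl, String sessionCookies) throws Exception {
        URL url = new URL(connectionsUrl + "/search/web/jsp/toolsHomepage.jsp");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        // Forward the user's cookies so the page is rendered for the current user
        con.setRequestProperty("Cookie", sessionCookies);
        StringBuilder html = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(con.getInputStream(), "UTF-8"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                html.append(line).append('\n');
            }
        } finally {
            reader.close();
        }
        return html.toString();
    }

    // Replaces everything between the two markers with the application's own markup.
    // The marker values are assumptions; pick them by inspecting the real page.
    static String injectBody(String template, String startMarker, String endMarker, String appMarkup) {
        int start = template.indexOf(startMarker);
        int end = template.indexOf(endMarker, start);
        if (start < 0 || end < 0) {
            return template; // the page structure changed: fall back to the raw page
        }
        return template.substring(0, start) + appMarkup + template.substring(end + endMarker.length());
    }
}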

Here is a pseudo implementation of the client-side solution. First, you need to start with a basic HTML page similar to this:
<html>
<head>
  <script src="connections-integration.js"></script>
  <script src="<path to jquery>/jquery.js"></script>
  <script>/* inline JS #1 that grabs the Connections page */</script>
  <script>/* inline JS #2 that inserts the Connections header */</script>
   ... <!-- Your own header tags -->
</head>
<body>
  <script>/* inline JS #3 that inserts the Connections body */</script>
   ... <!-- Your own page body and footer -->
  <script>/* inline JS #4 that inserts the Connections footer */</script>
</body>
</html>

1- The first statement in the <head> tag is a reference to your custom JavaScript file (connections-integration.js), containing all the necessary code. It should be included first in the page, right after the <head> tag. Optionally, you can also include jQuery, as it makes all the DOM operations easier. The inline JavaScript tags (JS #1, 2, 3, 4) simply call functions defined in your custom JavaScript file.

2- The JS #1 code emits a request to the Connections server, grabs the page, parses the markup, splits it into 3 parts and saves the result into a global variable. The 3 parts are:
    • Head.
      It is a copy of the <head> tag from Connections, with some content removed. For example, the <title> is removed, as your app will obviously come with its own title. This is also true for all the unnecessary <meta> tags. A quick jQuery request does the trick.
    • Body.
      It contains the <body> tag coming from Connections, with the main content (#lotusmain) and the search bar (.lotustitle2) removed.
    • Footer.
      This is a specific part of the <body> tag. But it is an important one, as some components, like Connections Mail, inject their code at the bottom of the page for performance reasons. So this footer should be inserted at the bottom of your page. You can simply remove the <ul> tags if you don't want any Connections footer content to be displayed.
3- The other inline <script> tags (#2, #3, #4) insert the different parts at the right places in the page. To respect the execution order of Connections, these parts must be inserted and executed synchronously. This cannot be achieved with DOM insertion, as the script tags won't be loaded and executed synchronously. But, fortunately, the old, decried document.write() capability does the trick. The markup is converted into a string and then written on the fly to the document using this function.

There are other subtleties to make it work properly. For example, the 'log out' button in the toolbar redirects to the last page loaded by Connections. But a quick inspection of the URL shows that there is a logoutExitPage parameter pointing to the page to redirect to. This can be set by overriding the lconn.core.auth.getLogoutUrl function.

Although this technique allows a great integration of a third-party application with Connections, we would all prefer IBM to come up with a supported solution, as it provides for SmartCloud. Well, with some enhancements :-) I'll be happy to share my ideas with IBM. I'm sure you would too, so please join your voice to mine. In the meantime, I hope this solution will help you.

Here we go...

I know, it took me time, don't tell me. But here we go, I'm opening my own blog. I'll be focusing on IT technical articles, with, of course, an emphasis on IBM-related technologies. But not only...

Do all the questions have elementary answers? Certainly not! But still, information sharing through a blog can help everybody. First the readers, as they can grab some experience and knowledge, and share their own experience in the comments... Then the writers, as it forces them to formalize their thinking, and to evolve this thinking based on the feedback. Questions can then have obvious answers, and thus "Elementary, my dear Watson" makes sense...

As an irony of history, Watson was learning from Sherlock Holmes more than a century ago. Nowadays, we are all going to ask IBM Watson questions. Watson is now the one giving the answers, while still learning from us. I love the connection, even though the name actually comes from IBM's founder, T.J. Watson.