Sunday, November 12, 2017

Develop a Domino application using any modern tools

In my previous post, I talked about a ReactJS library of components that can be used to create a Notes-like UI. It allows developers to create modern web and mobile UIs in a flash. But while Domino Designer is a great tool for rapidly developing applications, it is unfortunately not designed to create such modern applications, mostly because:

  • It does not integrate with the modern toolchain required by these libraries: Maven, npm, Babel, webpack...
  • It has a very limited JavaScript editor that cannot compete with Atom or MS Visual Studio Code, not to mention support for the latest standards, like ES6.
  • Source control and continuous build/integration support is very limited
  • Well, it is Windows only

We've been thinking about these problems for a while and worked to provide a solution. Part of the problem comes from the fact that the NSF is both the development and the deployment container, with no third-party development tool on the market being able to deal with the NSF content. On-disk projects were an attempt to fix that issue, but they are for source control only.

With Darwino, we go a step further: we allow developers to create projects using the tools they love (Eclipse, IntelliJ IDEA, MS Visual Studio Code, Atom, ...), then use any of these tools during development and deploy the final result into an NSF.

OK, it's better to first see it in action:


In this example, I used the Darwino Studio to create a new ReactJS application within Eclipse, and I'm using MS Visual Studio Code to develop the client-side JavaScript. I can use source control, Maven and npm without any limitation, as I'm outside of Domino Designer. Moreover, the Darwino ReactJS library allows me to seamlessly call the Domino REST services, even though I'm running the webpack dev server. Ah, you might have noticed: I did all the development on a Mac...

When I build the project by running Maven, the relevant built files (JavaScript files, JAR files for the business logic...) are seamlessly deployed to the NSF, making the application instantly available from the Domino server. The source files are by default *not* deployed to the NSF, making it smaller and not triggering a build process when opened within Designer. Finally, your IP is also protected.

The magic behind that is a design element server installed right into Domino Designer, as part of the Darwino extension for Domino Designer. This server exposes some REST services on port 8800: list of projects, access to design elements, ... These services are called by a client, like the Darwino Studio in Eclipse, to deploy the runtime files when desired. Because it is all based on standard REST services, other clients, like pure JavaScript ones, can be developed as well. Seems easy, right? Well, the devil is, as always, in the implementation details, but fortunately this is all written using the Darwino runtime, which drastically simplifies the development.
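Because the protocol is plain HTTP + JSON, such a client can be a few lines of code. Here is a sketch (the endpoint path and response shape are hypothetical illustrations, not the documented Darwino API):

  // Hypothetical call to the design element server running on port 8800.
  // The "/projects" path is illustrative, not the actual endpoint name.
  async function listProjects(): Promise<unknown> {
    const resp = await fetch("http://localhost:8800/projects");
    if (!resp.ok) throw new Error(`Design server error: ${resp.status}`);
    return resp.json();
  }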

The cherry on the cake: the code for these services is isolated from Domino Designer itself, so it could be installed directly within a Domino server. This would remove the requirement of using Domino Designer for development, at least when not developing "classic" Notes applications. It could even enable a pure web-based Designer for citizen developers.

Let me know what you think about it!

Darwino Studio Screenshots

Configuring the Design Server

Configuring the project files to synchronize:

Thursday, November 9, 2017

Modern Notes-like UI using ReactJS

The world is evolving fast, and so are technologies. Today, and for a little while now, when we talk about building a new web UI or a hybrid mobile one, we think about using pure client technologies without markup generation on the server side. JSF, JSP and ASP.NET are being replaced by Angular, ReactJS, VueJS and services... I personally think this is a great evolution. But are these technologies easy enough to use? Can a developer be as productive as he/she is with XPages, for example? Well, the quick answer is no, at least not without some additions to these core libraries, in terms of components and tooling.

This is the problem we are currently tackling with Darwino: let's make it easy to consume these new JavaScript technologies without compromising their power. To that end, Darwino 2.0 features a brand new ReactJS component library as well as some new templates (or boilerplates, or pods, ...) that make the creation of a project easy.

The components should give you all the necessary UI constructs to create a form-based application. Of course, all of this is responsive and adapts to mobile devices. In short, everything you need to build Notes/Domino-like applications in minutes, but with pure standard ReactJS.
Curious? Have a look at a running instance here: http://playground.darwino.com:8080/contacts-react (use demo/demo to log in)

The applications should also be data source agnostic, in the sense that the same application should, almost seamlessly, connect to the Darwino JSON store, to a Notes/Domino backend, or even to another DB like the LiveGrid data store. With this architecture in mind, these applications can work online, connecting to any server data, but also offline, connecting to the locally replicated Darwino data on your mobile device.

There are different steps in this journey. Let's go through them.

Choosing the JS library/framework

There is nothing wrong with picking one of the major libraries currently available. In a previous post, I explained why we chose VueJS for ProjExec and how our developers quickly became productive using this library. It served that goal well.
Darwino has different goals. As a development tool, it should target the broadest set of use cases. This implies using a library with a bigger ecosystem, a large set of third-party additions and several rendering libraries (Bootstrap, Material Design...). For these reasons, we chose ReactJS, as it has been adopted by many vendors, including IBM and Microsoft. The latter is interesting, as we can quickly create apps with the Office look & feel, thanks to the Office UI Fabric.

ReactJS it is.

Choosing the right ReactJS components

ReactJS by itself is low-level: let's consider it as a powerful core engine that can be extended through components. For example, ReactJS has *no* built-in data binding capabilities like the ones you'll find in Angular or VueJS. You have to either implement them yourself or use a third-party library.
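To illustrate that point, here is what manual "data binding" looks like in plain React: the value lives in the component state and you wire the updates yourself (a minimal sketch in 2017-era React, not Darwino code):

  import * as React from "react";

  // React has no two-way binding: the component state holds the value
  // and the input is wired to it by hand.
  class NameEditor extends React.Component<{}, { name: string }> {
    state = { name: "" };
    render() {
      return (
        <input
          value={this.state.name}
          onChange={e => this.setState({ name: e.target.value })}
        />
      );
    }
  }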
To get a decent set of capabilities without reinventing the wheel, you have to pick libraries from one of the large JavaScript repositories. Nowadays, I would advise you to use npm (or possibly yarn) and leave tools like Bower to the historians.
Gasp, this is where the trouble comes in: the npm repository features a plethora of libraries designed for the same job. How would you pick one over the others? Number of downloads? Stars on GitHub? Recent activity? Feedback from other developers? Making the right choices takes time, as there are no obvious ones.

For Darwino, we chose the following:
  • react-redux
  • react-router
  • redux-form
  • react-intl
  • react-data-grid
  • react-bootstrap (or eqv for other UI targets)
  • ... and a few others

Project boilerplates

If, for you, building an app is about saving a design element in an NSF, then your life is good. The JavaScript world is way more complex! Even if JavaScript is an interpreted language, you'll quickly feel the need to run build tools to use the latest JS specifications, consume libraries, produce an optimized/compressed version of the code... Again, you'll have to make some choices between Grunt, Gulp, Webpack, Browserify... and many other tools. This is another time-consuming research project.
Once your choice is made, you have to configure the tools, generally using a JavaScript-based DSL (e.g. a piece of JavaScript code driving the tool). This is fairly complex, as the DSLs feature many options. To get people started, developers created templates, also called boilerplates. They contain the configuration files with default behaviors that you then have to customize. But again, many are available, with some clearly developed by people without a deep knowledge of the technology. Furthermore, they are generally tailored for a particular version of the tool and will not function properly with yours...
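To give an idea of what such a configuration DSL looks like, here is a stripped-down webpack configuration; the real boilerplates carry far more options (this sketch assumes webpack 2+ with babel-loader installed):

  // webpack.config.js -- plain JavaScript building a configuration object:
  // this code is the "DSL" that drives the tool.
  module.exports = {
    entry: "./src/index.js",
    output: {
      path: __dirname + "/dist",
      filename: "bundle.js"
    },
    module: {
      rules: [
        // Run ES6+ sources through Babel so older browsers can consume them.
        { test: /\.jsx?$/, exclude: /node_modules/, use: "babel-loader" }
      ]
    }
  };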
OK, we spent many days of try/fail/redo and we came up with our own templates. Well, it is even easier: go to the Darwino Studio, create a new project, select the target technology and voilà, you're ready to go. Pfuuuu...

Component library

As we'd like to make the developer experience as smooth as possible, we are providing a series of ready-to-use components covering most developer needs. What started as a demo library became a very useful set of components, covering:

  • Forms with document access (load, save, validation)
  • Data binding controls, including read-only mode, multiple values, pickers, rich text, attachments upload & download
  • Computed fields (yes, not that obvious with ReactJS! See the sketch after this list)
  • Data grids with Notes view-like behaviors: categorization, totals, response documents
  • Navigators, action bars, sub-forms
  • Dialogs, messages
  • Easy access to server side business logic using micro services
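About the computed fields mentioned above: React has no equivalent of a Notes computed field, but the same effect can be obtained by deriving the value at render time, so it is re-evaluated whenever its inputs change. A minimal sketch with hypothetical field names, not the actual Darwino components:

  import * as React from "react";

  // A "computed field": the total is derived from other fields on every
  // render instead of being stored anywhere.
  const OrderTotal = (props: { quantity: number; unitPrice: number }) => (
    <span>{(props.quantity * props.unitPrice).toFixed(2)}</span>
  );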

And more... But again, these components are just regular ReactJS components that any ReactJS developer can consume without fear. They do not carry any idiosyncrasies from the original Notes ones. They exist to create a Notes-like UI for web and mobile, not to replicate one-to-one what the Notes UI does.

Ah, I foresee a question: can you automatically convert an existing Notes UI to this technology? The answer is... yes! Interested?

Wednesday, October 18, 2017

Universal Data Replication

One of the Darwino pieces of code that I'm most proud of is the replication engine. While a majority of our customers see it as a Domino-to-JSON replication engine, it goes far beyond that. In reality, it can replicate between virtually any data sources.
It is a true two-way, multi-point replication engine, borrowing some ideas from IBM Domino but going beyond its venerable ancestor in multiple places.

The architecture

The main idea is that any data set can be represented as a set of decorated JSON objects. Decorations include metadata (e.g. data security, replication attributes...) and binary attachments. The latter are stored apart from the JSON itself for performance reasons.
As an example, a Domino document can be transformed into a JSON object where every Domino document item is converted to a JSON attribute in the object, along with the attachments.
A relational table can also be transformed into a set of JSON objects or arrays, one per row. Even more interesting, a set of rows coming from multiple tables can be grouped to create a consistent, atomic JSON object. This way, the replication granularity is not limited to a row; it can be a whole business object, even if the physical storage splits it into multiple records.
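For instance (a purely illustrative shape), an order whose lines live in a separate relational table could replicate as one atomic JSON object:

  {
    orderId: "S-1024",
    customer: "ACME",
    lines: [
      { product: "Widget", quantity: 10 },
      { product: "Gadget", quantity: 2 }
    ]
  }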

Darwino's engine is mostly made of two components:

  1. A universal replication engine
    This engine does *not* know about the data sources. It contains all the replication logic and accesses the data through a generic connector API.
  2. Connectors
    A connector encapsulates the physical access to the data, via an API. This API provides a "JSON" vision of the data that will be consumed by the engine.
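To convey the idea, a connector boils down to an interface of this kind. This is a hypothetical TypeScript sketch of the shape, not the actual Darwino connector API, which is Java and much richer:

  interface ChangeSet { ids: string[]; nextToken: string; }

  interface Connector {
    // Enumerate the documents changed since a given synchronization point.
    changedSince(token: string): Promise<ChangeSet>;
    // Read and write a document as its JSON representation.
    load(id: string): Promise<object>;
    save(id: string, doc: object): Promise<void>;
    // Report deletions when the source supports it (e.g. deletion stubs).
    deletedSince(token: string): Promise<string[]>;
  }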
As of today, the Darwino replication engine comes with ready-to-use connector implementations:

  • Darwino JSON store, running atop RDBMS
  • IBM Domino
  • FlowBuilder XML store, used by ProjExec
  • HTTP to wrap a local connector and use HTTP/HTTPS as the transport
Some other connectors have been started:
  • Pure RDBMS, used, for example, to replicate to DashDB
  • MongoDB
  • MS Sharepoint List

We are also waiting for a first beta release of IBM LiveGrid to replicate this data to your mobile devices and get a true offline experience.

Writing a basic connector is a pretty easy operation. Then, the connector can be enhanced incrementally to provide fine-tuned capabilities (better conflict detection...).

Darwino replication strengths


While the concept of replication feels easy to understand, there are many details that make Darwino unique so far:

Data transformation

The data can be transformed, in both directions, during the replication process. These transformations can happen at the basic field level (e.g. transforming a Domino name to its LDAP counterpart) or at a more global level, like grouping a set of Domino multi-value fields into a JSON array. This way, the replicated JSON is clean and does not carry any legacy tricks from the source platform.
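For instance (illustrative field names), the classic Domino pattern of parallel multi-value fields:

  PhoneType:   ["Home", "Office"]
  PhoneNumber: ["555-1234", "555-6789"]

can be replicated as a clean JSON array:

  phones: [
    { type: "Home", number: "555-1234" },
    { type: "Office", number: "555-6789" }
  ]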
The transformations can obviously be coded, in Java or Groovy, but Darwino already comes with a pretty large set of standard transformations you can apply to any data. Thus, setting up a transformation is mostly about scripting the predefined transformers, unless you have more advanced needs.
One of the benefits of data transformation is data normalization: you can ensure that the fields in a JSON document are converted to the expected data type. Domino is well known to have inconsistent fields across a whole database, as they evolved over time. Darwino can normalize them.

Selective replication

Sometimes you want to limit the data replicated to a target system. It can be because of storage requirements (ex: your mobile device can only handle a subset of the data), or security (ex: you only want some data to be available on the cloud).
Darwino handles selective replication by providing selection formulas based on the data. It also provides a unique feature called "Business Replication", where related documents can be grouped and replicated as a whole.
Selective replication poses a pretty complex problem: how do you propagate deletions when the data used to select the documents no longer exists, or how do you remove a document from the target when the formula no longer selects it? Well, we solved this with an efficient and reliable method that I'll be happy to talk about around a beer :-)

Transaction and error recovery

If the target database supports transactions, then the engine can take advantage of them. If the replication fails for any reason, it can roll back to the initial state and attempt another run later. But this is not enough: when you have a large data set to replicate, you don't want to restart from scratch when you have already replicated 98% of the data. For that purpose, the engine features commit points. If it fails at any time, it only restarts from the latest successful commit point.

Conflict detection

This is one of the areas where Darwino excels. It has multiple conflict detection systems, going way beyond Domino. A particular connector might only implement a subset of them, depending on the data source capabilities. If this happens, the engine degrades gracefully and deals with what is available.
The existing mechanisms include: the last modification date, a sequence ID (similar to Domino or some RDBMS), a change string carrying the update history, an update ID...
Once a conflict is detected, an API is called and the developer can choose to execute a predefined behavior (ignore, last wins, create a copy of the data set...) or do a custom resolution like a data merge.
Finally, Darwino has an interesting mechanism to deal with record deletions. If the source database has deletion stubs, it will effectively use them. Otherwise, it provides some helpers to find the deleted records, in particular within relational tables.

Performance

The whole Darwino replication engine is designed to handle very large datasets. The Domino connector is mostly written in C, thus calling the native C API. Not only does this allow critical features that are not available with the regular back-end classes, but it also provides high-performance data replication. On my laptop, it replicates ~400 complex documents/second, including attachments and rich text conversion. For simpler documents, with only basic data, it goes to the range of 1000-2000 a second! In many cases, you can get your existing NSF replicated in minutes.

Domino data accuracy

A replication engine only makes sense if it is high fidelity. From a Domino perspective, the Darwino replication should act like a regular Domino replication, maintaining the same data (sequence ID, creation and modification dates, list of authors...). We can do that thanks to the C API.
Darwino can replicate most of the Domino field types: string, number, name, rich text, MIME, date/time... with maximum precision. For example, it deals particularly well with date/time values and time zones.
Any Domino document can be replicated, including design elements or profile documents. If you choose to replicate the database ACL document, it will then be used by the Darwino runtime to apply the same security. Speaking of security, we also handle reader/author fields.

Aggregation

Data can be aggregated during replication: multiple sources of data can be replicated into one single target database, thus allowing a more global view of the whole data set. Think about it: you can replicate many NSFs into a single set of relational tables and then run global queries on top of these tables. Isn't that a dream? Darwino liberates your Domino data!


As you can see, the Darwino replication engine is a very complete engine, flexible enough to handle many data sources. Out of the box, it can replicate your databases as is. But it also gives you a lot of options to select and transform your data.

Thursday, June 15, 2017

The power of Domain Specific Languages

We are all used to configuration files, whether they are XML, JSON or simply text based. This is OK for simple configurations, but it falls short when the complexity increases. The biggest issue is that they cannot provide more than what they are designed for, basically static values.

Let's suppose, for example, that we have an XML configuration file exposing a property like:
  <visible>true</visible>
OK, simple enough. This property can be true or false. But what if I want a more dynamic value, like true if I'm in development mode and false if I'm in production? I'm stuck!

Some configuration systems allow expressions to be evaluated within the configuration text, like:
  <visible>${runtime.development}</visible>
But this is very specific to each configuration system, when it exists at all. Moreover, it is limited to value replacement. What if I want to define a set of properties based on a condition? Well, some rare systems provide an advanced syntax, like:
  <sys:if cond='runtime.development'>
    <prop1>val1</prop1>
    <prop2>val2</prop2>
  </sys:if>
Looks like the venerable XSL :-) This amounts to inventing a new, ugly programming language that nobody knows about. And it still has limitations, basically whatever the programmer provided.

Using programming constructs I already know would be far more appealing to me. What about:
  if(runtime.development) {
    <prop1>val1</prop1>
    <prop2>val2</prop2>
  }
or even better:
  if(runtime.development) {
    prop1 val1
    prop2 val2
  }
This is mixing and matching configuration statements with regular programming ones, which allows total flexibility! We just need a programming language that supports this kind of construction, including the use of a dedicated grammar to provide prop1 and prop2. The programming language, along with this specific, dedicated grammar, is what is called a Domain Specific Language, aka DSL.

In the Java world, Groovy is currently the clear leader. It is used in many systems, with books written on this very topic. Its parenthesis-less method call syntax and its closures (similar to Java 8 lambdas) make it particularly suited for the job.

There are several places where we use a Groovy-based DSL in Darwino. This is what allows, for example, a very flexible data replication configuration between IBM Domino and the Darwino JSON store. Not only do we provide many predefined data transformations, but we also let the developer insert custom code almost everywhere.

Recently, we introduced another DSL in our new micro application architecture. In particular, we added triggers and actions, for example to send a notification to mobile devices when a new document has been inserted in a particular database. Or any other action, on any other trigger...

Here is a mock example below. Though it seems trivial, it shows that code can be inserted to look up a user in the LDAP directory to find their mobile ID, or to calculate the URL of the page to display. It would have been hard to define this custom processing with a simple static configuration file.

DatabaseTrigger {
  database "MyDatabase"
  view "Workflows"
  handler { doc ->
    def approver = LookupUser doc["Approver"]
    def url = calculateDisplayUrl(doc)
    pushNotification {
      ids approver.mobileid
      page url
      text "Purchase Approval Pending"
    }
  }
}

Neat, concise and simple, isn't it?
In this example, pushNotification is a predefined action, but other action types are available (sendEmail, watsonWorkNotification...). Moreover, if Darwino does not provide a trigger or a notification you need, then it is very easy for a developer to inject a custom one into the DSL. Just develop it in Groovy or Java and register it. A DSL is very dynamic and extensible.

Saturday, March 11, 2017

Darwino as the IBM Domino reporting Graal(*)

Reports, dashboards, data analytics... have been the conundrum of IBM Notes/Domino since the beginning. Its proprietary data structure, the absence of standard APIs and its deficient query capability make them very difficult. Reporting has been ranked as one of the top needs for business applications. I know several business partners who created great Domino solutions but struggle with poor reporting capabilities.

Of course, some attempts were made to fix it: LEI, DB2NSF... all incomplete and now defunct. I personally tried a new approach with DomSQL, which, to be frank, I was proud of :-) But this was a first shot, more a POC than a product, although I know several customers using it in production. Unfortunately, despite the community interest, IBM didn't want to move it forward, so it is still pretty much in the state where I left it. On the positive side, it gave me a great low-level SQLite experience which is now leveraged in Darwino's mobile database! Learn, learn, learn; you'll always reuse it.

So we are back to where we were two decades ago, with no solution to this common business problem. Well, you anticipated it: Darwino can help! It is actually the first and most immediate benefit that our customers get when they start with Darwino.

One of Darwino's capabilities is data replication. It can replicate, two-way and in real time, any Domino data into its own document store. From a Domino standpoint, Darwino behaves as another Domino server, with high-fidelity replication. But the Darwino document store is a JSON-based store sitting on top of an RDBMS. As modern RDBMSes now understand JSON natively, this is a great fit. The whole replicated data set is then queryable by any tool that understands the RDBMS, with no limitation but the relational database itself. Yes, you can join, aggregate, union, calculate, select, sub-select... And, contrary to LEI, this is a real incremental replication that can be triggered when a data event occurs on the Domino server, keeping your data up to date at any time. No more nightly/weekly batch data copy, if you know what I mean.

From a technical standpoint, Darwino stores a JSON version of the Domino document within a column in a relational table. This is very flexible, as it does not require a custom table schema per data set, and there is no data loss (e.g. a field you forgot to define a column for...). Moreover, while the default field conversion between the two databases is one-to-one, it is actually fully customizable. The Darwino replication process can transform the data during the replication phase. This is achieved using a Groovy-based DSL, which allows the following:

  • Field types can be normalized (e.g. make sure that a field in the JSON document is of the expected data type). A common issue with Notes' inconsistent data types across documents, right?
  • Domino patterns/workarounds can also be normalized. For example, a set of multi-value fields describing a table can be transformed into a true JSON array
  • Computation can be done ahead of time to make the reporting easier (queries, calculated columns...)
  • ... well, your imagination
If you feel that reporting directly on the JSON column within the RDBMS is awkward, then you can create RDBMS views (standard or materialized) and then act on a table-like structure, as sketched below. This is what our partner, CMS, chose to do. Going even further, they are creating these relational views out of the Domino view design elements. An introduction to their product is available here. This is a great use of Darwino.
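As an illustration (PostgreSQL syntax, with made-up table and field names), such a view and a query on top of it could look like:

  -- Expose the JSON column as a classic, table-like structure
  CREATE VIEW contacts_v AS
    SELECT doc ->> 'firstName' AS first_name,
           doc ->> 'lastName'  AS last_name,
           doc ->> 'city'      AS city
    FROM contacts;

  -- Any SQL reporting tool can then run plain relational queries
  SELECT city, COUNT(*) AS contacts FROM contacts_v GROUP BY city;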

In a previous post, I spoke about JSQL, which is an easy way to query JSON data using SQL. Thinking forward, we could provide a JDBC driver to expose JSQL and allow any JDBC-compliant client to access the data. From a technical standpoint, we know how to do it: the above-mentioned DomSQL features one that we can adapt and reuse. This is not in our current plan, but I'd be interested to know if this is of any interest to you.

In summary, using Darwino as a Domino reporting enabler is a fast way to fulfill some crucial, missing Domino capabilities. It almost instantly makes the existing Domino applications more interesting to the business users. Of course, this is just a subset of what Darwino can bring to the table, but it is a quick, high-value one. Please give it a shot, or contact us so we can help you.

BTW, "Graal" is the French spelling for Holy Grail




Friday, March 3, 2017

Schemaless GraphQL

A few months ago, Facebook officially introduced a new technology called GraphQL. Rather than really being new, it is Facebook's internal graph query engine, made public and open source. It is starting to be widely used for exposing APIs. For example, IBM Watson Workspace makes use of it, and I heard that IBM Connections will use it as well.

In a nutshell, it allows powerful, tailored queries, including navigation between the data sources, in a single query. As a result, it minimizes the number of requests sent by the client to the server, which is a performance boost. Moreover, it proposes a single query format regardless of the data source. Basically, it means that you can query data from an RDBMS or an LDAP directory almost identically, while getting a consistent response format. All good!

To make the queries discoverable, it uses a schema to define the query parameters, the JSON result, as well as the graph traversal links. While this is appealing, it forces the server to implement these schemas before the queries can be emitted by the client. The whole data and navigation models have to be known and implemented up front. Hmm, not only does this slow down application prototyping, but it makes it hard to link micro-services that don't know about each other. It also does not work with truly dynamic services, like a JSON document store, where the document schema can be unknown. Crap, this is exactly what Darwino offers. So the question is: can we still use GraphQL for dynamic services, where the data schema can be static, dynamic or a mix of both? I saw a few posts on the subject but no solution so far.

Fortunately, GraphQL has the notion of an optional field alias. An alias renames a field, so it can appear multiple times in the same root. As a field can also have parameters, we can "tweak" the engine to implement dynamic field accessors. How? Simply by inverting the roles: the alias becomes the field name while the field is a dynamic, typed, JSON path evaluation on the source document.

Ok, let me give you an example. Suppose that we have the following document as a source:
{
  name: "MyProject",
  budget: {
    amount: 10000,
    currency: "USD"
  }
}
The dynamic query to extract the project name and the budget looks like:
{
  doc: Document(unid:"projectid", database:"project") {
    _unid,
    name: string(path:"$.name"),
    budget: number(path:"$.budget.amount")
  }
}
'string' and 'number' (as well as others, like boolean...) are typed pseudo field names that actually evaluate JSON path queries on the source document. And because the path is passed as an argument, it can be fully dynamic! We are using JSON path here, but any interpreted language could be used, including JavaScript. Note that the Document accessor is also implemented as a field with parameters; this triggers the actual load of the JSON document from the database.
How does that sound? You can find a real example dealing with the JSON store in the Darwino playground.

But this is not enough, as we also need to implement the links to navigate the graph. For example, we need to navigate to a JSON document by ID, that ID being a field in the current object. To make this possible, we introduce the notion of a computed parameter, using the ${....} syntax (remember JSF?).
The pseudo query below assumes that you have task documents that can be accessed using the parent project ID:
{
  doc: Document(unid:"projectid", database:"project") {
    _unid,
    name: string(path:"$.name"),
    budget: number(path:"$.budget.amount"),
    tasks: Documents(parent:"${_unid}", database:"tasks") {
      taskname: string(path:"$.name")
    }
  }
}
The drawback of this solution is that every property becomes a string, so it can accept a formula. Another solution would be to duplicate all the fields and have, for example, parent$ as a computed version of parent. As the whole library is under development, I'm not sure yet what's best, so any feedback is welcome!
Note that Documents returns a collection of documents, as opposed to Document, which only returns one.

Of course, this is a simple introduction to a schemaless implementation of GraphQL. There are more details, but it expands the GraphQL technology by making it more dynamic, while retaining the static parts. Look, for example, at this kind of snippet mixing static and dynamic attributes (an illustrative sketch in the spirit of the examples above):
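{
  doc: Document(unid:"projectid", database:"project") {
    _unid,
    name: string(path:"$.name"),
    owner: string(path:"$.owner.email")
  }
}
Here, _unid comes from the static part of the schema, while name and owner are evaluated dynamically through JSON paths.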
The early Darwino 2.0 code includes this capability in its Java-based implementation, released as open source on darwino.org. Any GraphQL object can implement a specific interface that will give it the dynamic capabilities (JSON path...) on top of its static fields. If you go back to the previous example, you'll see that the document UNID is a static field specific to the Darwino JSON document.

There is more to discover, but hold on: if our session is accepted at Engage 2017, Paul Withers and I will give more details on the implementation, along with an IBM Notes/Domino implementation!

Thursday, February 16, 2017

Get your apps integrated with IBM Connections, cloud and on-premises!

I've been using this blog to share some of the techniques we use in ProjExec to get tightly integrated with the Connections platform. I got a lot of feedback from developers who wanted to know more, so I'm taking it a step further: Jesse Gallagher and I will describe these techniques in a breakout session @Connect 2017!

DEV-1430 : IBM Connections Integration: Exploring the Long List of Options
Program : Development, Design and Tools
Topic : Enterprise collaboration
Session Type : Breakout Session
Date/Time : Wed, 22-Feb, 10:00 AM-10:45 AM
Location : Moscone West, Level 2 - Room 2006
Presenter(s) : Philippe Riand, Triloggroup; Jesse Gallagher, I Know Some Guys, LLC

Basically, we'll show, for both cloud and on-premises Connections:

  • how to get your user authenticated by Connections, using OAuth or SSO
  • how to call services on behalf of the current user
  • how to get your apps into the IBM Connections menu
  • how to integrate the navbar, including on premises!
  • how to share code common to community apps (cloud) and iWidgets (on premises)
Beyond live code and demos, we will release a reusable library as open source, with all the necessary code to get your own Java-based application integrated effortlessly. It will be available on darwino.org, a branch of OpenNTF.

To make that possible, IBM helped by providing a long-awaited ask. Wait, wait, wait: I'm not going to reveal that here, but an IBMer will be on stage to explain. Then again, I should not talk about this either; people have to come to the session!

See you @Connect, and don't miss this session if you want to be an IBM Connections developer hero!