RHPAM Service Task

Overview

POTENTIALLY DEPRECATED!!!
NOTE: The Service Task implementation in RHPAM is a base-level implementation to be in compliance with BPMN2. There is no guarantee this implementation will continue to work going forward. Custom Work Item Handlers are the preferred implementation for custom code/business logic.

Service Tasks in jBPM/RHPAM are implemented according to the BPMN2 standard. Service Tasks essentially allow the invocation of a Java Class and Method directly from the BPMN palette. Where a Custom Work Item Handler/Custom Task includes some metadata annotations and a slightly easier implementation for BPMN authors, Service Tasks are a little more technical but more straightforward to develop (no crazy annotations to worry about).

Here is a link to the jBPM Source Code for the ServiceTaskHandler to use as a reference. The code is kind of confusing because it also seems to hold the Web Service implementation, but the executeJavaWorkitem() method appears to be the main body of code for the Service Task implementation.

Service Task Limitations

  1. The target method can only accept one Parameter
    • Clearly a single parameter can be of any type, but from the BPMN Data Assignments, you can only pass one Data Input with the name “Parameter”.
  2. The target method can only return one value
    • The object returned by the method can only be mapped as a Data Output with the name “Result”.
    • Custom Work Item Handlers can return more than one result in the Result Map<String, Object>, but not a Service Task.
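
Since limitation #2 means a Service Task can only map one Data Output, a common workaround is to bundle everything into the single returned object. Here is a minimal, hypothetical sketch (the class and key names are made up for illustration) that returns a Map as the one “Result”:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service: works around the single-return limitation by
// bundling several values into the one object mapped as "Result" in BPMN.
public class OrderLookupService {
    public Map<String, Object> lookupOrder(String orderId) {
        Map<String, Object> result = new HashMap<>();
        result.put("orderId", orderId);
        result.put("status", "SHIPPED");   // made-up sample data
        result.put("itemCount", 3);
        return result; // the whole Map becomes the single "Result" Data Output
    }
}
```

The process can then pull individual entries out of the returned Map in a later Script task or expression.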

Write the Java Code

The first place to start is the Java code you want to call from your BPMN process flow. You can definitely pull in an existing JAR, but you’ll have to write a little custom code following the guidelines below.

An easy way to get started is to create a vanilla Maven Java project. This archetype is perfectly fine:

mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-simple -DarchetypeVersion=1.4

Running that command will prompt for GAV information for your project and maybe a few other parameters, then it will create a project folder so you’re ready to start coding. You can remove the “site” folder completely since it won’t be used.

Once you’ve got a generic Java project ready to code, add any required dependencies to your pom.xml. These can be public assets like Mongo libraries, or internal assets like from another project you already created and uploaded to a Maven Repo.

Then you’ll need to code the Class and Method that will be called by the Java Service Task from BPMN. The code below is a really simple Java Service Task implementation.

package com.thaxtonm.test;

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class TestService {

    public String greetUser(String name) {
        // Put your custom code here
        // This example just logs the input String and returns a String back
        System.out.println(name);
        return String.format("User %s greeted.", name);
    }
}
  • Make sure to add the @ApplicationScoped annotation to the public Class that will be defined as the Interface on the Service Task activity in your BPMN process flow.
    • You might have to pull in the javax-enterprise Maven dependency depending on your version of Java
    • Other dependent classes don’t seem to need this annotation
  • Remember that your target public method (greetUser in this example) can only accept a single Parameter (String name in this example)
    • The method can only return a single value, but it can be of any type (Object, Map, List, String, whatever)

Add whatever other classes and methods you need to implement your business logic. You can also add plenty of Unit Tests and such. When you’re ready, package up your application as a JAR; Maven > Install or Maven > Package seems fine.

Finally you’ll have to Upload your JAR to Business Central > Admin > Artifacts so it is available in the BC Maven Repo.

Configure the RHPAM Project

To get started, you must first add the Service Task Work Item Handler to your Project > Settings > Deployments > Work Item Handlers.

Name = Service Task
Value = new org.jbpm.process.workitem.bpmn2.ServiceTaskHandler(ksession, classLoader)
Resolver Type = MVEL

NOTE: Do not use the Custom Tasks > ServiceTask. This doesn’t seem to work for some reason. The steps above and below have been shown to work on Java 11 with RHPAM 7.11.

Unlike Custom Work Item Handlers/Custom Tasks that can be added to the palette, the Service Task activity is available by default since it’s a BPMN2 standard. The activity is available under Tasks > Service Task.

Dragging a Service Task to the BPMN palette and opening the Properties will display the fields below.

  • Implementation – use Java for the scenario where you want to call your own custom Java code
  • Interface – the fully-qualified-name (FQN) of the Java Class you want to invoke, so in the example from above this would be com.thaxtonm.test.TestService
  • Operation – the name of the method in the Java Class that will be called when the task is executed (“greetUser” in the example above)
  • Assignments – this is where it gets a little tricky…
    • You have to add two required Data Inputs:
      • Name = ParameterType
        Data Type = String
        Source = whatever variable or expression you want to use to provide the fully-qualified-name (FQN) of the input parameter to your target method. In the example above this would be the simple expression “java.lang.String” because the greetUser() method takes a single String parameter called “name”.
      • Name = Parameter
        Data Type = whatever matches your target method signature. You can use the basic Java types that come with your project (String, Boolean, Integer, etc.) just fine. Or you can select a class from your project dependencies.
        Source = whatever variable or expression you want to use as Input to your target method
    • Data Outputs – again, you only get one of these and it has to be called “Result”
      • Name = Result
        Data Type = whatever matches the return type of your target method
        Target = your process variable that will hold the return object
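
To tie the fields together: the Interface, Operation, ParameterType, and Parameter values presumably drive a reflective Java call. The following is only a rough sketch of that idea, NOT the actual jBPM ServiceTaskHandler code:

```java
import java.lang.reflect.Method;

// Rough sketch (not the real jBPM code) of how the Service Task fields
// could map to a reflective invocation.
public class ServiceTaskSketch {
    public static Object invoke(String interfaceFqn,     // "Interface" (FQN of the class)
                                String operation,        // "Operation" (method name)
                                String parameterTypeFqn, // "ParameterType" Data Input
                                Object parameter)        // "Parameter" Data Input
            throws Exception {
        Class<?> serviceClass = Class.forName(interfaceFqn);
        Class<?> paramType = Class.forName(parameterTypeFqn);
        Method method = serviceClass.getMethod(operation, paramType);
        Object instance = serviceClass.getDeclaredConstructor().newInstance();
        return method.invoke(instance, parameter); // return value -> "Result" Data Output
    }
}
```

Looking at it this way makes it clear why ParameterType has to be the FQN of the input parameter: something needs it to look up the exact method signature.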

And that’s it! Your BPMN flow is now wired to fire off your custom Java Code. Congratulations!

RHPAM Custom Work Item Handler

Here are some notes about creating Custom Work Item Handlers for Red Hat Process Automation Manager (RHPAM). These notes should work for jBPM/KIE BPM as well.

Custom Tasks and Custom Work Item Handlers allow you to implement activities that do more than Script tasks or the OOTB task types (like Email or Log or REST Request). NOTE: I think the terms “Custom Tasks” and “Custom Work Item Handlers” are essentially interchangeable…I can’t figure out why there are two ways to refer to this subject. I guess the Work Item Handler is the actual Java application (a JAR) that does the work, and a “Custom Task” is what it looks like once you add the Work Item Definition (WID) and such to your project. But I’ll just keep calling them Custom Work Item Handlers, or Custom WIH, for this post.

Getting Started

Though it may be tempting to start with cloning or copying an existing Custom WIH, I found that this can create some Maven POM issues that aren’t that easy to resolve. The best way to start is to use the JBPM Maven Archetype (like a template) to create a blank project and then add your customizations or paste any borrowed code.

Here’s the Windows command line to initialize an empty project based on the Custom WIH archetype:

mvn archetype:generate -DarchetypeGroupId=org.jbpm -DarchetypeArtifactId=jbpm-workitems-archetype -DarchetypeVersion=7.52.0.Final -Dversion=1.0.0-SNAPSHOT -DgroupId=com.mygroup -DartifactId=workitem-name -DclassPrefix=workitemClass

The archetype parameters (-DarchetypeGroupId and -DarchetypeArtifactId) should be hardcoded as shown, but if you need to, update the archetypeVersion to match whatever jBPM version you’re using (check for a Red Hat version if you’re on RHPAM – just make sure you are mapped to the Red Hat Maven Repo using the documentation instructions). The -Dversion, -DgroupId and -DartifactId will be the Maven GAV (group, artifact, version) data for your Custom WIH – update these to match your Custom WIH code. The GAV values will be used for dependency management in your RHPAM Project’s POM and the WID in your RHPAM Project. -DclassPrefix just puts this value at the start of the initial Class in your Custom WIH project.

Running that command will grab some files from your Maven Repo, ask a simple project initialization question, and create a directory where it is run to hold your Custom WIH project. You’ll have to press Y at one prompt to confirm the GAV parameters from the command. Then you’ll get two directories and a pom.xml file to get started.

It seems easiest to start up IntelliJ or VSCode or whatever your IDE-of-choice is and configure it to recognize the folder as a Java project with source in src > main > java and tests in src > test > java. The rest of this page will use IntelliJ but I’m not using anything IDE-specific, so if you see any major discrepancies, please feel free to comment below.

There should be a single pre-generated class file for your project and that is probably where you’ll want to start your actual development. It will be something like {WorkItemClassPrefix}WorkItemHandler from the archetype generation above.

It doesn’t seem like you need to make any major changes to the pom.xml that is generated except to add any dependencies for libraries that your Custom WIH will need, like JSON or JodaTime or Mongo or whatever. See this thread about how your Custom WIH dependencies could impact your runtime.

Describe the Custom WIH with the WID

The Work Item Definition (WID) controls how your code looks to RHPAM/jBPM. You can author the WID as its own file or just use the @Wid annotation that is already provided in the class generated from the archetype. The WID is important for controlling how your Custom WIH looks to someone in Business Central. I tried using the text file like the RHPAM docs say, but I could never get it to work, so the annotation seems fine.

Here is the @Wid for a new project from the archetype with some comments about each section.

Custom WIH Java Class File Annotation (notes on each section follow the code)
@Wid(widfile="WorkitemClassDefinitions.wid", name="WorkitemClassDefinitions",
        displayName="WorkitemClassDefinitions",
        defaultHandler="mvel: new com.mygroup.WorkitemClassDefinitionHandler()",
        documentation = "workitem-name/index.html",
        category = "workitem-name",
        icon = "WorkitemClassDefinitions.png",
        parameters={
            @WidParameter(name="SampleParam", required = true),
            @WidParameter(name="SampleParamTwo", required = true)
        },
        results={
            @WidResult(name="SampleResult")
        },
        mavenDepends={
            @WidMavenDepends(group="com.mygroup", artifact="workitem-name", version="1.0.0-SNAPSHOT")
        },
        serviceInfo = @WidService(category = "workitem-name", description = "${description}",
                keywords = "",
                action = @WidAction(title = "Do an exciting Custom WIH"),
                authinfo = @WidAuth(required = true, params = {"SampleParam", "SampleParamTwo"},
                        paramsdescription = {"SampleParam", "SampleParamTwo"},
                        referencesite = "referenceSiteURL")
        )
)
widfile – Maven seems to auto-generate this file which is essentially just everything from this annotation, but in its own text file.

name – the name of the Custom WIH

displayName – what will appear in Business Central as the name of the Custom Task

defaultHandler – what a user will have to put in Project Settings > Deployments > Work item handlers. If you want to require something at this point which will end up being used in the constructor for this class, use “\” to escape the input. So for example the ExecuteSQL WIH has this value: defaultHandler = “mvel: new org.jbpm.process.workitem.executesql.ExecuteSqlWorkItemHandler(\”dataSourceName\”)”, which means you have to supply the dataSourceName when you pull this WIH into your project as a dependency or else the WIH won’t initialize. NOTE: the defaultHandler in the @Wid section generated from the archetype is wrong, because the WIH is set up to require SampleParam and SampleParamTwo in the only Constructor, so I think those parameters should be in the Default Handler as well.

documentation – not really sure how a URL is used here…

category – This groups the Custom WIH on the BPMN authoring palette

icon – a picture for the Task/Activity box on the BPMN diagram

parameters – the Input parameters for your Custom WIH. You can use the “required” flag if you want, or leave it out so that the parameter is not required. You can also define a runtimeType like @WidParameter(name=”SampleParamThree”, runtimeType = “java.lang.Object”), which should be a fully qualified class name. This will help when a process author drops the Custom WIH into the BPMN flow.
NOTE: parameters appear to be entirely optional, so if your Custom WIH doesn’t need an input you don’t have to include this section.

results – the name of the object returned by the Custom WIH. You can also define the runtimeType just like SampleParamThree above.
NOTE: results also appear to be entirely optional, so if your Custom WIH doesn’t need to return anything, don’t include results.
In order to call the Workitem Manager completeWorkItem() method, you have to pass either null or a Map<String, Object>. The Map should have items for each Result parameter defined. So in this example, the Map<String, Object> “results” object will have an entry for “SampleResult” as the Key and, for this example, a String as the Value/Object. But it is perfectly fine to list multiple @WidResults and then use each of those @WidResult.name values as a “Key” in the Map<String, Object> result. Or not. See notes below for help with results.

mavenDepends – as far as I can tell, this is the GAV for the Custom WIH, so I’m assuming that if you update the VERSION in the Custom WIH pom, you’ll want to update here as well. I guess you can also add other dependencies that you’ve added to your Custom WIH pom…? I’m not sure what happens if you do or don’t.

serviceInfo – this also seems to control how the Custom WIH appears in Business Central, but will need to do some more testing.

description – notice that by default this points to a variable ${description}, but it doesn’t seem to be defined, so feel free to add description=”something here” up between documentation and category so the Custom WIH will get a description. I’m not sure how this is used, though.

@WidAction(title) – will show up after the Custom WIH name in Business Central, so it’s more of a headline about what the Custom WIH does, like “Send email” or “Send JMS Message” or “Call SOR API…”

@WidAuth – it is OK to leave authinfo=@WidAuth without using the () values. But if anything is added here, these fields will display when the Custom WIH is added as dependency to the Process Project using Settings > Custom Tasks > Install. I have no idea why this is considered “authentication”…
For a good example of a more complete serviceInfo section, the out-of-the-box work item handlers in the jBPM source code are a useful reference.

Code the actual Custom WIH

Make sure your class constructors match the defaultHandler from the WID and contain any required values for the constructor. Or leave it entirely blank – up to you. You can define more than one constructor as well. And you can also define any variables global to the class if you want, like String sampleParam, String sampleParamTwo from the archetype.
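
As a hypothetical example (the class and parameter names here are made up), a constructor shaped to match a defaultHandler of “mvel: new com.mygroup.MyWorkItemHandler(\”dataSourceName\”)” would look like:

```java
// Hypothetical handler class: the single-String constructor lines up with a
// WID defaultHandler of "mvel: new com.mygroup.MyWorkItemHandler(\"dataSourceName\")".
public class MyWorkItemHandler {
    private final String dataSourceName;

    public MyWorkItemHandler(String dataSourceName) {
        this.dataSourceName = dataSourceName; // value supplied in the deployment descriptor
    }

    public String getDataSourceName() {
        return dataSourceName;
    }
}
```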

Once you’re ready to write the real logic, most everything goes into the executeWorkItem() method. A few pointers:

  1. Wrapping everything in a try/catch seems pretty standard.
  2. Just like the Archetype does, use the RequiredParameterValidator.validate(this.getClass(), workItem) to make sure any WID Parameters marked as “required=true” were actually provided.
  3. To get the values of the Input Parameters from the BPMN, use the parameter name as a String for the workItem.getParameter() method and then cast the result to whatever you need for your code. If you defined the Parameter as a specific object in the WID, use that Class to cast because getParameter() just seems to return a generic object.
  4. Side note: the workItem object has a method to get the running Process Instance ID, which might be helpful: workItem.getProcessInstanceId()
  5. When your code is done you’ll have to call manager.completeWorkItem(workItem.getId(),results) to close out the Task. “results” is a required parameter for the method, but it seems like you can pass “null” if your Custom WIH doesn’t need to return anything (like this).
    • If your Custom WIH needs to return anything, the results parameter for completeWorkItem is always of type Map.
      • Each Key (the String) in the Map should correspond to whatever you’ve put in the results/@WidResult section. So if you have a @WidResult defined as “SampleResult”, your code will have to call results.put(“SampleResult”,{a}) and then put whatever your result object is as the second parameter (replace “{a}”). If you decide to use a runtimeType in your @WidResult section, make sure the Object in the Map is that same type.
      • NOTE: You can also “put” items into the results map that aren’t listed in results/@WidResult. They will still be available to the BPMN, but will have to be added and mapped manually by the BPMN author when your Custom WIH is added to the BPMN palette and the author assigns the Data Mapping.
  6. I guess leave the “abortWorkItem” method alone…or maybe do something like close a connection or something in a scenario where your Custom WIH is interrupted for whatever reason…?
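
Putting those pointers together, the executeWorkItem() flow looks roughly like the sketch below. WorkItem and WorkItemManager here are minimal stand-ins for the real org.kie.api.runtime.process interfaces (so the sketch is self-contained), and the parameter/result names come from the archetype sample:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for the real org.kie.api.runtime.process interfaces,
// just so the shape of the handler is visible in one self-contained sketch.
interface WorkItem {
    long getId();
    Object getParameter(String name);
}

interface WorkItemManager {
    void completeWorkItem(long id, Map<String, Object> results);
}

class SampleWorkItemHandler {
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        try {
            // Real handlers call RequiredParameterValidator.validate(this.getClass(), workItem) here.
            String sampleParam = (String) workItem.getParameter("SampleParam"); // cast from Object
            Map<String, Object> results = new HashMap<>();
            results.put("SampleResult", "Processed: " + sampleParam); // key matches the @WidResult name
            manager.completeWorkItem(workItem.getId(), results);      // close out the Task
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```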

Feel free to write any tests and such that you want. I’ll add some more notes on this later.

Build and Deploy and Integrate

It seems easiest to run the build from the command line in the project directory instead of using the IDE, but feel free to do this whatever way you want. From the command line, this seems to work best:

mvn clean package -Dmaven.test.skip=true

This command will do a ton of stuff but most importantly it creates a JAR in /target that you’ll want to upload to Business Central. It also creates a lot of other stuff that I have no idea how it’s used, so that will be for another day.

In order to fully integrate the Custom WIH in a Process Project, it seems like you need to follow all three (+1) of these steps from here:

  1. ADDING THE WORK ITEM HANDLER TO BUSINESS CENTRAL AS A CUSTOM TASK
    1. This is where you upload the mvn clean package JAR to BC
  2. INSTALLING THE CUSTOM TASK IN YOUR PROJECT – Custom Tasks
    • Project > Settings > Custom Tasks
      1. This should auto-generate the WID Asset for your process project once you click Save after Installing the Custom Task
  3. INSTALLING THE CUSTOM TASK IN YOUR PROJECT – Add GAV Dependency to your project’s pom.xml
    1. Use your Custom WIH’s GAV that is defined in the Custom WIH pom.xml. 
    2. I’m not really sure why you have to do this because BC makes it seem like it will add the dependency automatically, but it doesn’t at this point, so make sure it gets added under Project > Settings > Dependencies and that the Custom WIH GAV shows up in the Process App’s pom.xml.
  4. MAYBE: You might have to create an entry in Project > Settings > Deployments > Work Item Handlers with the @Wid name and @Wid defaultHandler values from your Custom WIH. It seems like this entry shows up automatically sometimes but not other times…not sure how this is related but just make sure you have the Work Item Handler defined here.

NOTE: It seems to help if you add the following plugin to your Process Application Project’s pom.xml. This helps ensure your Project is built with any required downstream dependencies from assets like your Custom WIH.

<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
</plugin>

Once you’ve saved all those Project-level changes, you should be able to see your Custom WIH in your BPMN palette. Congratulations! Drag your new Custom WIH into your BPMN flow and map it up. REMEMBER: for Data Assignments > Data Outputs, the “Name” should match a Key in the Map<String, Object> results parameter that your Custom WIH passes to completeWorkItem(). Because the Map value resolves to a generic Object, make sure you cast it before you try to do anything with it. You’ll want to do something like:

if (sqlResult != null) {
    java.util.List<String> lines = (java.util.List) sqlResult;
}

That will get you a List<String> of the SQL Results from the ExecuteSQL Work Item Handler. You know that “Result” parameter is a List<String> because you looked at the code and saw results.put(RESULT, processResults(resultSet)); where processResults() builds and returns a List<String>.
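
Since the Data Output arrives as a generic Object, a slightly safer pattern is to check the type before casting. A small hypothetical helper (the method name is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: narrow the generic Object handed back by a Data
// Output mapping (e.g. the ExecuteSQL "Result") without risking a
// ClassCastException or NullPointerException.
public class OutputCasting {
    @SuppressWarnings("unchecked")
    public static List<String> toLines(Object sqlResult) {
        if (sqlResult instanceof List) {
            return (List<String>) sqlResult;
        }
        return new ArrayList<>(); // null or unexpected type -> empty list
    }
}
```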

So hopefully these notes help anyone trying to develop a Custom Work Item Handler or Custom Task in RHPAM/jBPM.

Feel free to add a comment or send me a note if you have any feedback or questions.

Hopefully I’ll get around to Exception handling and such in a later post.

Kafka and Mongo on a Camel’s back

Disclaimer: remember, I don’t always know much of anything and certainly not everything about this stuff. These are just notes.

So for those who don’t know, Apache Kafka is an event streaming platform and MongoDB is a document-based NoSQL database. Apache Camel is an integration framework that probably does a lot of really cool stuff, but for me it really simplifies connecting systems together in Java code that is pretty easy to read and author. They are all very trendy these days, and for very good reasons.

Without going into the details of each product (there is a ton of knowledge available online about all three), I’ll set up a Use Case for Red Hat Process Automation Manager (RHPAM) which also applies to jBPM.

In an earlier post I mentioned the ability to set up RHPAM to automatically fire events to Kafka in certain process, case, and task scenarios. Setup information here. This is a pretty neat feature because now you can automatically fire events for your workflow applications and any consumer can take those events and handle them however they want: “pizza tracker” app, instance status updates, task alerts, partner service calls, etc.

  • Has a request been entered to wire money out of a bank account? Well make sure the online account balance app knows about the potential debit of funds before showing a customer their available balance.
  • Has a hotel booking request been cancelled? Let the reservation system pick up that status change and mark the room as available before the settlement activity finishes dealing with any held funds.
  • Did the purchase order get approved by the Senior Manager? Have the contract application start the new activity automatically while the approval process closes out.

So with minimal work on the BPM product side, other applications can get connected to your workflows. And of course you could also set up your process flows to send Kafka events at certain points or map Messages and Signals to Kafka topics…all great use cases as well. But in this scenario we’re setting up the automatic Kafka event emitters.

So how does Mongo apply? In this proof-of-concept (POC) we setup RHPAM to automatically drop process, task and case events directly to Kafka and then built a really small Spring-based Apache Camel (details below) application to consume the events off the Kafka topics and drop them into MongoDB collections. Now we’ve got an easily accessible (and ideally very fast) repository for process and task data (I’m going to ignore Case events for now since I’m not using it) for applications like a Work Management system (task list) or Process Search system (allow users to see requests and status information across processes). Why not have those systems just consume the topics directly? Well maybe they need to collect related process and task data across events and not persist anything themselves. Why not have applications call the KIE REST API? We don’t want to impact performance of running workflows. But most importantly: this was a good chance to mess around with Kafka, Mongo and Camel to learn some cool things.

So we have a process application publishing events to Kafka and we want to forward those over to MongoDB. Is there a Mongo feature to do this automatically? No idea. Did I Google Kafka and Mongo and end up on the Camel site? Probably.

*NOTE: There are various Sink and Source connectors for Mongo and Kafka and loads of things that would probably have worked in this scenario, but again it was just a learning opportunity.

Apache Camel is a really neat integration framework that essentially provides built-in connections to a lot of different components with a really simple language format (Domain Specific Language – DSL). I’m using the Java DSL for this, but the XML or Spring or whatever you want will probably work just as well.

By the way, here’s the code I wrote for this use case.

CkmApplication.java

This is the main Spring Boot application with no customizations from Spring Initializr except for the MongoClient bean that uses the application.properties > tmd.mongo.url to connect to the MongoDB. Obviously you’d want real security and such around this but the point is that the Bean is used for the Connection and for the Camel Route.

KafkaMongoRoute.java 

This is where the cool logic is…you’ll see the configure() method implemented in a pretty neat from –> to syntax. The Camel application is always running (see application.properties > camel.springboot.main-run-controller=true) so the Java DSL sets up the Kafka consumer using URL configuration in this example. This means the Kafka connection information (broker and such) is configured in-line in the route.

The route works by taking an event from the corresponding topics (application.properties > consumer.topic is a comma-delimited list of topics), logging some stuff, and then deciding which Mongo collection gets the event data. Notice how the Java DSL “to” lines use the tmdMongo bean name to identify the Mongo information.
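
The core routing decision – which Mongo collection gets an event based on the topic it came from – boils down to something like this sketch. The topic and collection names here are made up for illustration; the real route likely makes the same decision with Camel’s choice()/when() DSL:

```java
// Made-up topic and collection names, just to illustrate the topic ->
// collection decision the Camel route makes for each incoming event.
public class TopicRouter {
    public static String collectionFor(String topic) {
        switch (topic) {
            case "jbpm-processes-events": return "processEvents";
            case "jbpm-tasks-events":     return "taskEvents";
            case "jbpm-cases-events":     return "caseEvents";
            default:                      return "unroutedEvents";
        }
    }
}
```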

RHPAM/jBPM emits events using this Cloud Event Spec JSON which is essentially some predefined metadata and then a data parameter that is a JSON version of the process or task or case data. And since Mongo is just a document collection, we didn’t have to predefine any table structure or write any code to map from the RHPAM Kafka event JSON into Mongo. It’s just there and able to be queried using normal Mongo connections and applications and such.

camel-context.xml

The other important asset is the camel-context.xml file. Camel allows configurations in-line in the URL and also via XML or Spring configuration settings, like Spring beans. In this example you can see how the Camel Route is pointed to the KafkaMongoRoute class using the id of “myBuild”. Again, there are probably plenty of ways to do this but it was fun to explore the Java DSL, Spring Beans and the XML configuration.

So again, this was a really great learning opportunity with Camel and RHPAM and Mongo and Kafka. We’re already working on custom RHPAM Event Emitters instead of the OOTB emitter, mainly to implement concepts like DLQ (dead letter queue – when the topic isn’t good) or a back-up persistence layer for the events (probably an Oracle or maybe Mongo resource).

RHPAM, jBPM, Kogito…oh my

I’m taking a detour from IBM BAW/BPM for a bit and wanted to jot down some lessons learned while working on a Proof-of-Concept with Red Hat Process Automation Manager (RHPAM), which is also kind of just JBoss jBPM, but then also this Kogito KIE (“Knowledge Is Everything”) product…yeah, welcome to Open Source. We’ll call it RjK for this post.

For the most part, RjK is a full Business Process Management/Process Automation/Workflow solution. It offers design/authoring functionality in a web-based environment under the label “Business Central” (BC). In Business Central you can author process flows, rule assets, data models (like Business Objects), forms (like Coaches), and a few other asset types I haven’t used yet. RjK is much more Java than anything in the IBM BPM/BAW world: the Data Model asset is essentially a Java class file; the process assets are full BPMN-compliant XML files, etc.

RjK covers process, rules (pretty much the Drools product), and then this OptaPlanner feature that I haven’t looked at yet but seems like a cool way to create optimization solutions for things like “the best way to plan delivery trips” or “solving a Sudoku puzzle”. There are a ton of open source resources for RjK: examples in GitHub, posts on Stack Overflow, Medium.com posts, etc. All three products offer documentation and some of it is copy/paste from each other, but looking at examples is usually the best place to start.

RHPAM is Red Hat’s “hardened” version of jBPM with some Kogito/KIE functionality included. I think this means customers can pay for support and consulting for almost the latest version of jBPM/Kogito, but with a stamp of approval from Red Hat that the version is safe(r). From what I can tell, Kogito is the migration path for what was previously called jBPM. Kogito is putting a more cloud-friendly spin on the offering with things like Quarkus, Kubernetes, and GraalVM…way outside of my comfort zone, but still neat to tech-drop like that.

RjK applications are built into kJAR files which can run under the umbrella of a KIE server, or kJARs can be setup to run all by themselves, or even embedded in another application. There are a ton of API options for RjK, all the way from Business Central to KIE to the individual objects that actually execute the process and rules assets (REST APIs, Java APIs, other APIs I’m sure I don’t understand). It’s very compartmentalized and much more “micro-services” than IBM BPM.

This PoC is setting up one Business Central instance on a Kubernetes pod and then one KIE Server instance on a different Kubernetes pod. There is a shared Persistent Volume (PV) for the two pods: it is being used for the Maven Repo mentioned below. The PoC is essentially trying to mimic IBM BPM by using Business Central as a Process Center-type environment and KIE Server as a Process Server environment. Projects are deployed from Business Central to the KIE Server instance which is running other projects/containers at the same time. Business Central is connected to KIE Server so BC can manage and start instances and interact with tasks.

I’m also messing around with Spring, Apache Camel, Kafka, and Mongo for some supplemental functionality. I need to scrub the code and get it into GitHub and I’ll share some learning notes on those capabilities next.

Again, these are just raw notes from some setup and configuration work. Hopefully more to come:

  1. Connect Business Central (BC) to KIE/Process Server
    1. This older 7.0 article mentions a specific login-module that needs to be enabled on BC to authenticate to KIE, but the 7.12 page doesn’t mention it:
      <login-module code="org.kie.security.jaas.KieLoginModule" flag="optional" module="deployment.business-central.war"/>
    2. We had to add that element to the BC standalone*.xml in order to get BC Workbench to see process data on KIE
  2. Asset Path – The way BC and KIE store assets (aka code) can be a little confusing…and obviously I’m not 100% sure all of this is correct.
    1. BC uses an internal Git repository (.niogit) in a hidden directory to store and manage design-time assets (versioning, branching, etc.)
      1. There is a lot of read/write to this repo so it needs to be local to BC
    2. BC then uses Maven to pull code from .niogit and push built assets to a Maven Repository
      1. This repository can be local to BC only or external
        1. A local BC Maven Repo (local as in on the file system where BC is running) is accessible by the BC/maven2 REST endpoint, which has a special security configuration (we could not get this to work)
          1. NOTE: this might work if we add the login-module above to KIE standalone…but we’ll wait and try this later
        2. The external repo can be over HTTP (like Artifactory)
        3. Or a shared file location
          1. We chose to point BC maven to a Persistent Volume (PV) that KIE Maven could also reach
    3. At PAM Project deployment, BC sends container information to KIE (?) along with the build to the Maven Repo
      1. Totally not sure how this really works…KIE has a configuration reference to BC’s controller, so I’m not sure if KIE is pinging BC to get deployment info…or if BC pushes or signals KIE in some way…
    4. KIE goes to the same Maven repo to pull down the assets (the KJAR and pom) and builds (via Maven) the actual container that will run, keeping the build in its own local repo…? Again, I’m not sure at all how this works, but this PoC has .niogit on BC’s filesystem, a Repo for BC and a Repo for Maven, and it’s working.
  3. Events – It looks like there are three different event scenarios for PAM:
    1. Automatic Event Emitters
      1. This is a feature where jBPM will automatically send/publish a Process, Task, or Case event to Kafka using the built-in jbpm.event.emitters feature. This feature has to be enabled and configured in standalone-*.xml. See notes here.
    2. Custom Event Emitter
      1. This is where an activity in a BPMN flow is built to send/publish an event in a specific scenario in the process. This option requires a WorkItemHandler to be configured. Here.
    3. Basic Event Processing
      1. This is how to configure KIE to consume and produce Kafka events using the built-in Kafka server functionality. This only works with an app deployed to a full KIE Server that is already configured to deal with Kafka. This configuration setup is slightly different than the Automatic Event Emitters.
      2. See here.
      3. NOTE: if you’re setting up a Receive Message Event (Consume) in a BPMN asset, the event payload has to match the Message Start Event data mapping exactly, and it needs to be JSON in the form {"data":{ <JSON of input variable name:value pairs> }}. You can’t just use a plain string.
  4. Signals vs Messages
    1. I’m not really sure if this is correct based on the documentation, but apparently Messages are universal, which means you can’t target a specific Process Instance with a Message Event. Using the REST API or the Kafka connection to a Message Topic will fire the message on all active instances…?  This might be unique to the scenario of setting up a central KIE server to run multiple containers/deployment units.
    2. Signals use a specific Process Instance ID and can target only that instance.
  5. Signals
    1. When firing a signal via the REST API, the payload/body has to include the class name of the output variable defined for the Signal. So if a Signal is mapped to an output variable like this: {"MyObject":{"myName":"Sue Smith","myAge":25}}, that JSON has to be in the Body of the REST API call to fire the Signal, where "MyObject" is the class name of the Data Model for the Output variable. Using the name of the Output Variable doesn’t seem to work.

Some helpful links:

https://medium.com/capital-one-tech/using-machine-learning-and-open-source-bpm-in-a-reactive-microservices-architecture-96bb8dc9e962

https://snandaku.medium.com/integrating-red-hat-process-automation-manager-and-red-hat-amq-streams-on-openshift-in-4-steps-327aa2da7929

https://mswiderski.blogspot.com/2015/09/unified-kie-execution-server-part-3.html

Car Mileage

And now for something completely different, here’s a chart of price/gallon and miles/gallon for my 2005 Acura sedan (which has since moved on). I was looking for something in my Google Drive and stumbled upon this old sheet.

I have no idea why I tracked this data. I also don’t think it is 100% reliable: obviously I missed some stops and might have mistyped a few of the values. But nonetheless I think it has some interesting insights:

  1. Price/gallon does trend up and miles/gallon does trend down, which seems rational based on inflation and the age of the car.
  2. Average price/gallon: $3.22 (the car “required” premium so that is normally what I pumped)
  3. Min price/gallon: $1.69 (12/10/08 – Rock Hill, MO)
  4. Max price/gallon: $4.36 (6/28/08 – Lexington, VA)
  5. Average miles/gallon: 24.55
  6. Min miles/gallon: 17.20 (12/19/08 – St. Louis, MO) – probably bad commutes
  7. Max miles/gallon: 34.05 (9/5/09 – North Carolina) – road-trip from STL to an NC Beach
  8. Total money spent on gas: $7,392.99
  9. Total gallons: 2,308.9126

This data covers 2007 – 2013 which includes time in three different states: Virginia > Missouri > North Carolina. Whenever I learn more about Google Sheets I’ll try to throw in some shading for the states, but that might explain the drop around Fall ’08…of course the giant recession might have had something to do with that as well. The spikes in MPG might come from highway trips which tend to provide better gas mileage as opposed to stop-and-go commuting traffic. The EPA estimates for the car were “Premium Gasoline – 23 combined city/highway MPG / 20 city / 28 highway 4.3 gals/100 miles”, which is about what I was getting.

The Process of Process Automation?

Is there a way to define the process of building process automation applications (or Workflow applications if you’re confused about which is which)?

In other words: is there a way to articulate the process of doing what is done with a product like IBM BPM/BAW, but without actually talking about just one product… What if you want to create a training class (not sales) that explains how to use any BPMS/Process Automation Platform: Pega, Appian, IBM BPM, Camunda, whatever.

As a preface: I’m talking specifically about design and modeling for execution and implementation. I’m not talking about the ten thousand foot sales deck highlighting all the wonderful characteristics of BPM, BPMS, continuous improvement, easy integrations, rules, cloud-ready, built-in AI, why this platform is the right choice, etc. Let’s assume you or your leadership have bought the product already and it’s been installed or enabled or tunneled or whatever had to get done to get to the point where the product is ready to be used by a developer.

When it comes to business process automation, there has to be some kind of consistency when we get down to how to implement a process automation solution.

And yes, we can talk about BPMN and iGraphix and BlueWorks and then drive into SIPOCs and KPIs and whatever other acronym you have or whatever else you want to talk about when it comes to process modeling and discovery. But modeling for discovery and modeling for execution and implementation are two very different things.

We have to agree on that first before we continue.

No matter how much you document an HR Onboarding process (which coincidentally is every BPMS’ demo/default process, yet so many enterprises already have a massive HR solution, so do we ever really implement HR Onboarding in a BPMS?), is your end-automation solution just those three activities: Create Req, Review Req, Post Req?

  • You’ve got to data marshal something, right?
  • You probably won’t just display a giant empty form with Candidate Name and Address, right? You’ll want to pull from your job posting system or HR position data or organization structure. You won’t do that inside the Human activity, right? You’ll want to do that beforehand in a system activity…What does the integration look like?
  • And you might want to make integration calls between those high-level Human Activities…maybe communicate with users (email, alert, etc) or other systems (events, messages, etc)…
  • And then you have to handle exceptions in those human and system activities…
  • Implement escalation or expiration timers…
  • Handle events like CMIS or JMS
  • Oh, and just that small task of designing and organizing your business/data objects somehow…

Before you know it your high-level Create > Review > Post process flow is cluttered with intermediate nodes and exception activities and ad-hoc activities and gateways and nested objects upon objects and Boolean flags that only get used once…Where did that Create > Review > Post BPMN diagram go again? It must be around here somewhere…

So where do we start?

Obviously BPMN and discovery process models are the best place to start: you have to know what activities get you from start to finish for a given process. And you have to be on the same page as your business partners. Luckily the graphical nature of process diagrams makes this an easy win. So go ahead and import that BPMN diagram into your BPMS. It is not wasted work. But be open-minded to the fact that this flow can and should change. You won’t lose the concept of your process; you’re just going to add detail to make it executable.

So let’s make this first topic short and simple and start there: a process model.

  1. Find out who does what when
  2. Find out how the process starts and what constitutes an end-state
  3. Find out what data is needed and where it comes from

I know that seems really simple, but take this chance to gather the high-level requirements for your process. Start in Sprint 0 with just some modeling and generic object design. Always ask yourself and your team: “what are we trying to accomplish here?” Because as soon as you get into the weeds of REST calls and CSS layouts and Task Names and SLAs, you might forget that.

So after the process model, what else do you need?

I’m going to start this series with a list and adjust from there. I can’t answer this question with a single post, sorry. But we’ll see what we discover along the way.

  1. User Experience: how will users participate in the process? Are you building UX inside your BPM platform (if it supports things like IBM BPM Client-Side Human Services and Coaches) or will your application rely on External UX solutions that need to integrate with an API?
  2. Integrations: what other systems does your process application need for inputs and outputs? SoR? Messaging? What is your SOA environment like: do you have an API gateway or catalog or robust microservices? Do you need to onboard to some sort of security framework when connecting to other systems?
  3. Persistence: where does your application store data: RDBMS, noSQL? Does it need to be maintained or auditable (like WORM)? What is your process data half-life?
  4. Support & Performance: what is your throughput? Where do you log? How does Production Support work? Are your process instances recoverable?

Feel free to leave your thoughts and comments about how to tackle this first process modeling step in the Process Automation application lifecycle.

Agile and IBM BPM – Déjà vu

I’m sure plenty of software engineers have been in this position: riding the rough surf of Agile and Scrum knowledge waves. Weekly blast emails about a new Agile garage or a new Product Owner organization change. Oh wait, Spotify has a Guild? Let’s do it! Hire a Scrum Master – go, go, go! Roll out DevOps – now!

It can be overwhelming.

But don’t get me wrong: I’m incredibly excited about the opportunity and the change and the excitement about Agile.

But I do have something small that I find a little amusing…I pulled the above image directly from Scrum.org and it reminded me so much of this old slide from an IBM BPM sales presentation about iterative development. I no longer have an exact copy of the deck, but check out this image from Page 11 of a 2015 Redbook:

2015 IBM BPM Design Redbook

And I know: Agile and Scrum have been around for years, I get that. But in some organizations, even in 2021, it is all brand new. But as an IBM BPM team we’ve been practicing this iterative development pattern for years, even in Waterfall projects, right?

Check out this Bruce Silver article about IBM BPM’s predecessor (Lombardi TeamWorks) in 2006:

A second distinguishing characteristic is support for rapid iterative development. Other vendors frequently pay lip service to this implementation style, but Lombardi supports it concretely with the ability to instantly “play back” activities and process fragments even early in the modeling phase, creating default components as necessary that will be refined later on. Lombardi emphasizes a project methodology in which a simple version or fragment of the process is piloted very quickly, with additional features and richness layered on iteratively after that.

Bruce Silver 2006: https://cs.nyu.edu/~jcf/classes/CSCI-GA.2440-001_sp12/handouts/BPMS_-_Lombardi_01.pdf

“Play Backs” and “rapid iterative development” – how much more Agile can you be?

It’s so nice to see other application teams excited (well, excited and full of trepidations) about Agile. Where they traditionally waited around for complete technical specification documents and design documents, now they can start coding and testing at the same time the larger team is building complementary code and functionality for stories. Where they were heads-down in code until QA, they are heads-up with the team each day, refining stories, providing Dev feedback, engaging the business, validating acceptance criteria, etc.

It seems like such “old news” to BPM developers, but it’s very exciting to have more partners along for the ride going forward.

Another IBM BPM/BAW Date Hack

The team had a user story to determine the Thursday of the 2nd week of the first month of a quarter (so January, April, July and October), recognizing that a week might only contain one day of the month. This would be server-side code running in a BAW Service Flow asset.

January 2021 is a good example:

Notice that January 1 and 2 fall on Friday and Saturday but our user story considers that a valid “week” for the month. So the first Thursday of the 2nd “week” would be January 7.

We had existing code from a prior version of this requirement, where the user story was written as “the 2nd Thursday of the Month” – which would land on January 14, 2021. That code found the first occurrence of the weekday in the month (Thursday is getDay() == 4 here) using mod (%) 7, then moved that date of the month (1-31) out by 7 days until we landed on the correct 2nd occurrence: (2-1)*7.

With the new user story, we still determine the Thursday of the 2nd week (that code has been successfully tested), but then we check to see if that week is actually the 2nd week in the month. If not, we subtract 7 days and use that value.

Here’s the new code:

var c = new java.util.GregorianCalendar();
c.set(tw.local.targetDate.getFullYear(), tw.local.targetDate.getMonth(), tw.local.targetDate.getDate());
c.setMinimalDaysInFirstWeek(1);
var wk = c.get(java.util.Calendar.WEEK_OF_MONTH);

if (wk > 2) {
   // We're past the 2nd week, so move back one week (7 days)
   tw.local.targetDate.setDate(tw.local.targetDate.getDate() - 7);
}

We start by creating an instance of the Java GregorianCalendar (the regular Calendar option doesn’t seem to work in either BPM, Java 8, or Rhino, for some reason). And we set the calendar to the matching year/month/date of targetDate (this was previously defined as the 2nd Thursday of the first month in the quarter using the existing code).

Then we call setMinimalDaysInFirstWeek to define how many days we consider to be in the first week of the year (don’t worry, this carries over to the rest of the calendar year – October 2021 has the same setup as January 2021). In this case, just 1 day is considered a “week”.

Then we use the WEEK_OF_MONTH constant to determine which week of the month we’re in and decide if we can keep the current targetDate or set it back 7 days to the previous week (it will never be more than 3 weeks off because of the existing “2nd Thursday of the month” logic).
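If you want to sanity-check the rule outside of BAW, here’s a plain-JavaScript sketch of the whole “Thursday of the 2nd week” calculation with no Java interop. It assumes the same setup as setMinimalDaysInFirstWeek(1): weeks run Sunday through Saturday, and a partial week at the start of the month (even a single day) counts as week 1:

```javascript
// Plain-JS sketch: return the day-of-month of the Thursday in the 2nd "week"
// of a month, where a partial Sun-Sat week at the start counts as week 1.
function thursdayOfSecondWeek(year, month) { // month is 0-based, like Date
  var firstDow = new Date(year, month, 1).getDay(); // 0 = Sunday ... 6 = Saturday
  var firstThursday = 1 + ((4 - firstDow + 7) % 7); // Thursday is day 4
  // If the month starts on Friday or Saturday, that short Fri/Sat span is
  // week 1, so the first Thursday already falls in week 2; otherwise the
  // first Thursday sits in week 1 and we move out 7 days.
  return (firstDow === 5 || firstDow === 6) ? firstThursday : firstThursday + 7;
}

console.log(thursdayOfSecondWeek(2021, 0)); // January 2021 -> 7
```

January 2021 starts on a Friday, so the two-day Friday/Saturday span counts as the first “week” and the answer is January 7, matching the user story above.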

The GregorianCalendar object was helpful and being able to merge the Java and JavaScript was much easier than trying to transcribe into JavaScript or write something entirely from scratch.

The Java GregorianCalendar has a few other helpful methods, one of which is “roll”, but it behaves differently in Java 7 vs Java 8, so consider your environment if you want to experiment with this object.

The 12 Factors of IBM BPM

So I was taking some Agile training the other day and came across this great resource I had not seen before: https://12factor.net/

That list provides a methodology to help architect and design applications that primarily provide services, not necessarily UX or front-end solutions.

IBM BPM is a unique product because it can provide full-stack solutions with process flows and Coaches, but it can also function solely as a service application (think of a headless process app where client applications like UX interact with the BPM REST API).

So can any (or all) of those 12 Factors apply to IBM BPM process applications? Let’s take them one-by-one and give me a Match or Unmatch verdict.

  1. Codebase
    This one seems pretty easy: IBM BPM includes Process/Workflow Center as a central design-time resource and asset repository…but then again, this one is tough because IBM BPM doesn’t really integrate with common source-control systems, like Git or VSS or SVN. I’ve seen lightly-coupled integrations; for example: use an automated process to kick off the install ZIP export and store the file in Source Control, then deploy from there using another process. But Factor #1 mentions tracking…this is the tough part. Everyone knows how easy it is to track changes in most Source Control, especially something as simple as Git. Process/Workflow Center on the other hand isn’t very consistent with change tracking. I think Desktop PD was better than WebPD, by far. So maybe the time stamps and user IDs are hidden in the DB somewhere, but they aren’t always on the WebPD UI.
    Verdict: Partial Match
  2. Dependencies
    I read this as Toolkits, right? Toolkits are really, really helpful in IBM BPM and create easily reusable assets that can shave off tons of development time, especially if the TKs are properly managed (think dependency updates and dealing with nested TK references).
    Toolkits can also introduce a lot of headache if they constantly reference each other over and over.
    I’ve used the BP3 Dependency Checker for years (still running the 7.5 version) and it’s one of the most helpful ways to track TK relationships and avoid potential deployment issues with overlapping TK snapshots. I just wish the Process App and Snapshot dropdowns were alphabetized! 🙂
    Verdict: Match
  3. Config
    I see Environment Variables and Exposed Process Values as part of the Config, right? I think these two features of IBM BPM provide a massive amount of value to deployed process applications. Are they easy to maintain – that’s a different argument. But being able to configure run-time parameters via Process Admin is a great resource for processes that have fluid business rules.
    Verdict: Match
  4. Backing services
    I think this aligns with Toolkits…? Especially if you practice building toolkits for specific integrations. For example: you have an enterprise service that already exists for customer address information. There’s no need to build that SOAP or REST integration in each process app, so wrap the call in a Toolkit and manage the integration in a central location, then attach the resource to your app when and where you need it.
    Verdict: Match
  5. Build, release, run
    I think IBM BPM’s snapshot paradigm aligns well with the separation of these actions. We build the snapshot in Process/Workflow Center and deploy to a runtime Process Server. It’s not like we’re actively developing in a runtime environment. Now granted P/W Center has a Process Server inside of it, but that is mainly for unit testing and debugging.
    Verdict: Match
  6. Processes
    Though we’re building Process Automation Applications, I think the code and resulting process instances themselves are perfectly capable of standing alone as separate entities, but obviously they aren’t entirely “stateless”, especially since state persistence is one of the primary functions of a process/workflow solution. But couple this factor with IBM BPM running on WebSphere Application Server and the platform is executing each activity and service as its own thread in the WAS pool.
    Coming from another angle: are you designing your process at a level that can be re-used or incorporated in places other than its original target? For example: say you build a process application with Coaches to automate the hiring process: enter applicant details, route for review, post to SOR, etc. What if your company gets a new employee or HR portal but you want to re-use your existing process asset? Was your process designed to easily integrate without a Submit Coach? Or a Review Coach? Or a change to the SOR? It’s not that the process is stateless, but can your process application exist without some dependencies or easily adapt to changes in dependencies…?
    Verdict: Match
  7. Port binding
    I’m not sure this one is entirely relevant to an IBM BPM Process Application…Again, I think this is handled by WAS under the covers.
    Verdict: N/A
  8. Concurrency
    Another “covered by WAS” factor? But also a design consideration…are you designing your process assets so that you can add volume or functionality later without a complete re-write of the application? Can your process instances execute at the same time? Most process and workflow solutions probably have this requirement, but then there are others that need relationships between instances. I’ve personally never used this feature in IBM BPM but maybe there are some good use cases.
    Verdict: Match
  9. Disposability
    I’ll have to come back to this one…
    Verdict: N/A
  10. Dev/prod parity
    I think this one is easy if you isolate it to IBM BPM process applications. Especially if you have a pipeline that deploys your install ZIP to multiple non-prod regions (if you have them).
    The tough part comes when you introduce external dependencies, especially test data. A lot of that might be out of your control as a process application engineer. Sure you can mock data in P/W Center or maybe even in another non-prod region, but what if your service providers can’t do that or have different data sources for each region? This can make testing difficult, especially if your process logic needs to handle lots of different parameters from external integrations.
    Verdict: Match
  11. Logs
    I’m going to add a separate post for logs and log4j in IBM BPM, but I think this one is pretty easy to cover using log.* and the default WAS SystemOut. How you manage logs and when you use them are a larger topic for a longer post. Look for that coming soon.
    Verdict: Match
  12. Admin processes
    Again, you can use WAS and the Integrated Console or wsadmin for this factor, but don’t forget about the power of Process Admin. This runtime console is great for managing installed apps and has the utilities for ENVs and EPVs as well as some instrumentation/performance monitoring. That last point is sometimes really helpful or really frustrating, especially if you have a clustered ND setup with multiple servers. Unless you have a way to specifically hit a server, this tool can be tough to use with a load balancer or other web server in front of IBM BPM. But either way Process Admin is still an extremely helpful tool for managing your runtime environment.
    Verdict: Match

So can IBM BPM be a 12 Factor app? I think in most cases it is a great match. But there is also a lot of variability in how you design your process flows and services to make sure you meet some of the factors. Especially when it comes to how to integrate with other applications and how your process starts and ends.

Remember that modeling a process in discovery and building a process for execution are often (if not 100% of time) two different models. So take time to plan the automation model so that it can incorporate as many of these 12 Factors as possible.

JavaScript Notes

I ended up down a JavaScript wormhole the other day and found a few interesting sites I bookmarked and wanted to save the context. Remember – I’ve done a lot of IBM BPM development which uses ES5 and a very limited scope of JavaScript. There isn’t really a need for any of the functionality mentioned below, but it was still nice to learn about.

https://flaviocopes.com/javascript-iife/

Immediately-Invoked Function Expressions (IIFE) – I’m pretty sure I’ve seen this pattern before…essentially you’re defining a function that is invoked as soon as it is created. The context for this was a security wrapper that was enforcing strict mode.
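A minimal sketch of the pattern (my own example, not from the article) – the function body runs the moment it is defined, and strict mode stays scoped to the wrapper:

```javascript
// An IIFE: the anonymous function is invoked immediately after being defined,
// so its internals never leak into the enclosing scope.
var result = (function () {
  "use strict"; // strict mode applies only inside this wrapper
  var secret = 42; // invisible outside the IIFE
  return secret * 2;
})();

console.log(result); // 84
```

Outside the wrapper, `secret` doesn’t exist at all, which is the whole point of the pattern.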

https://stackoverflow.com/questions/5378559/including-javascript-in-svg

JavaScript in SVG – Sadly I had no idea JavaScript runs inside SVG markup. I actually didn’t know much about SVG markup aside from its normal use for pictures and animation. But that Stack Overflow post provides code for a neat little ball animation that can be copy/pasted into an .svg file and run locally, so it is a neat way to learn about JavaScript and SVG.

http://icyberchef.com/

iCyberChef – So I’m still not 100% sure what this site is used for…but it came up in a security article as well because apparently it was broken recently. I’ll follow up on this again later.

https://www.w3schools.com/jsref/prop_loc_hash.asp

Location Hash – So I’m not entirely sure of the use case for only looking at the anchor part of a URL, but I’m sure it’s out there…like this, which talks about a SPA scenario, and since IBM BPM isn’t really a SPA application, I’m just not that familiar with the pattern. But I’ll save this for later.
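For the record, the hash is just the “#…” fragment at the end of a URL, “#” included. A quick sketch using the standard URL class (available in browsers and Node) instead of window.location, with a made-up example URL:

```javascript
// location.hash on a page returns the anchor part of the current URL;
// the same fragment can be read off any URL string with the URL class.
var url = new URL("https://example.com/docs/page.html#section-3");
console.log(url.hash); // "#section-3"
```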