Excel Learning

Just a quick post with two silly pieces of Excel information I (sadly) didn’t know about:

https://www.contextures.com/xlDataVal02.html

https://www.contextures.com/xlExcelTable01.html

This came about while I was working on an Excel book that had been exported from a larger enterprise application. The book came with all this nice formatting and data validation and look-ups and hidden tabs and stuff like that.

Some of the columns had drop-downs that would change based on values in other columns, and I didn’t really know how that was being done until I looked up the =INDIRECT(targetCell) function. Essentially, INDIRECT takes the text value of the target cell and uses it wherever INDIRECT is called. In the example above and in my workbook, INDIRECT was used as the source for the data validation look-up drop-down.

For example: the C3 data validation list was pointed to something like =INDIRECT(B3), where B3 was something like “Fruit”. Elsewhere in the book was a named range called “Fruit”, with options like Apple, Orange, Pineapple, etc. This means that the “Fruit” named range was being used for the data validation look-up list.

If you changed B3 to “Vegetable”, a different set of options would appear in the data validation drop-down list, because there was a different named range called “Vegetable” with values like Potato, Celery, Lettuce, etc. Neat, right?
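To recap the setup (the names and cell addresses here are just the ones from the example above):

```text
Named range "Fruit":       Apple, Orange, Pineapple
Named range "Vegetable":   Potato, Celery, Lettuce
B3:                        drop-down with the values Fruit, Vegetable
C3 data validation list:   =INDIRECT(B3)   (shows whichever named range B3 names)
```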

Doing the Data Validation tutorial above also helped me figure out how Named Tables can be used in Excel, which pointed me to the 2nd link.

I’ve mostly been a data-dump-into-Excel and run Pivot-tables kind of user, so named ranges and tables and INDIRECT weren’t really functions I had to use before, but now that I know a little bit more about these capabilities, I’ll see if I can put them to use.

Pivotal Cloud Foundry

So I wanted to learn about Pivotal Cloud Foundry (PCF) and it turns out there is a free intro available:

https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry/introduction

What is PCF? I don’t really know yet. But I think PCF is what folks call a “Platform-as-a-Service” cloud offering. You can Google all the different cloud options (SaaS, PaaS, IaaS, etc.) but essentially PCF provides a platform to deploy applications you write in a lot of different languages (Java, .NET, Node.js, etc.). You get to focus on writing the application and PCF owns how your application runs. PCF can also connect your app to services like databases and monitoring tools automatically.

That entire stack of your app and everything that supports it can be managed using PCF tools: an admin console in your web browser or a command-line interface (CLI) on your local machine. You can start and stop apps, scale them, add and remove services, etc. And PCF isn’t really a specific cloud provider…it’s a framework you can take and put almost wherever you want, like Amazon’s cloud, somewhere in your enterprise or even your local PC.

So I ran through the intro linked above and have a few small notes:

  1. You have to set up a Pivotal Web Services (PWS) account first – but you need to do more than get a login name. You’ll have to validate your email, then actually log into PWS and set up a default Org (a way to group your apps). Once you do that, download the CLI, do the git stuff, and then “cf push” will work. “cf push” didn’t work for me until I set up that PWS Org.
  2. I had to download git for Windows and I used the Git GUI to clone the repository on the “Deploy the Sample App” page. I just created a local folder (C:\pivotal) and told Git GUI to clone the PWS Intro app into that folder.
  3. FYI – the app doesn’t really do that much…it essentially throws up some config type stuff using templates/index.html. You’re really just getting an overview of how to use PCF without any focus on the app in the demo/intro.
  4. I also had to run “cf push” a few times…apparently it tries to pick a random endpoint/URL (a “route”) for your application and I guess it kept picking random words that were already used by other users or apps. So I just kept hitting “cf push” and eventually it worked.
  5. Pretty much everything you’re doing in the CLI can be done in your PWS Admin Console. It was neat to type into the CLI and then switch over to the console and see all the changes taking place, like the new DB and the scaling changes.

So what’s next? I think I’ll work on a custom Java or Node.js app and get that deployed and see what happens. I’ll check on Eclipse and PCF/git connections as well.

(Update 1/17/19) I was able to also get PCFDev working. I had to update my VirtualBox client and run the install a couple of times but eventually I had PCF running in a virtual machine on my PC and was able to deploy the spring-music demo app, which was pretty cool.

Then I pushed spring-music to my free PCF cloud instance and I connected it to a ClearDB service (essentially MySQL). I was then able to download MySQL Workbench on my local machine and connect to my ClearDB instance – neat. I tried to connect spring-music to a Redis service in PCF but couldn’t get that working…will try that again later.

IBM Process Federation Server (PFS)

In an older role I helped work on a proof-of-concept (POC) for IBM Process Federation Server (PFS). At the time this was a really new offering from IBM in the BPM space. PFS is a WebSphere Liberty Profile and Elasticsearch-based source for BPM task data.

IBM BPM already provided a way to federate instance and task data between two different BPM systems running the same version: essentially you register the systems with each other and one API call for instance or task data can pull the data from both systems.

PFS introduced a way to provide this same functionality to systems not on the same version. So think about running an IBM BPM v8.0 platform that you want to upgrade to v8.6. You could do that in place, put your platform at risk and require a lot of testing; or you could stand up a v8.6 system and migrate apps one at a time to the new platform. But what if you don’t want your users to have to switch back and forth between the different Process Portals? This is where PFS comes into play: create a central source for task data that can be accessed by Process Portal (or any other work management UX using the PFS REST APIs, which look just like the Federated BPM REST APIs), and that source can serve both the v8.0 data and the v8.6 data. Yay – win win!

PFS was not a performance solve, per IBM. It was just a way to consolidate task data in one location; the fact that they chose Liberty and Elasticsearch is totally up to them…both are relatively lightweight, so I’m sure that had something to do with it. I’m pretty sure Liberty is on the horizon for the BPMN engine, so maybe this was their own little POC of BPM on Liberty.

Either way, when a BPM Task is created a bunch of stuff happens in BPM and the database, but most obviously a row is added to the LSW_TASK table. When someone logs into the Responsive Process Portal, a legacy* REST API call is made against BPM and that ends up hitting this LSW_TASK table (and a bunch of other tables) to determine if the user should see that task and be able to claim it and action on it. (*) By legacy I mean the existing IBM BPM REST API that has been with the product since Lombardi Teamworks 7.5.1.

A lot of other DB activity hits LSW_TASK, like when a task is created and actioned-on and claimed and even some instance-related events can hit LSW_TASK. The bottom line is that pretty much all IBM BPM developers and administrators realize that database latency should be as low as possible; the tech sales pitch was always “put BPM and its database right beside each other.”

Contrary to IBM’s standpoint, we started the PFS POC looking for a task source that displayed better performance than hitting the regular BPM REST API. In the POC environment the database resources were all centrally managed at the enterprise level. And though our DB and schema were in the same data center as our application servers, we still saw some latency between BPM and the DB. Attribute it to whatever you want…we were running Oracle 12c on Exadata as a two (2) node cluster. Do all the research you want – that’s a fast setup. But also recognize that we were one app in an incredibly large landscape of DB consumers, and most schemas were partitioned off with default settings and such. As the BPM team, we weren’t DBAs and we relied on the DBA teams to manage that for us, which was totally fine.

So we thought “hey, a noSQL file-based BPM task repository with an existing API – yay speed!”

I’m not going into the details of our POC, but in the end I learned a lot about how PFS works.

  1. First you have to build new tables in the BPM database for PFS information.
  2. Second you have to configure BPM to use those new tables. How, you might wonder?
    1. You essentially set up a trigger so that anytime BPM creates a task or does anything human-task related, a row is added to one of the PFS tables.
  3. PFS is configured to regularly ping those BPM-schema PFS tables over JDBC to look for changes in the data – essentially PFS is looking for new tasks or changes to existing tasks.
  4. Then PFS queries into the detailed LSW tables to collect the full task and business data.
  5. PFS indexes that data into its Elasticsearch cache and updates the BPM-schema PFS table to flag the row as already read.
  6. And this happens over and over as tasks are created and updated.

NOTE: PFS does not have its own SQL database except for one to hold Saved Searches, which are defined over a REST API operation and can be used to search the task index in Elasticsearch
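To make the cycle concrete, here’s a tiny Java sketch of steps 2–6, with the JDBC change-log table and the Elasticsearch index both faked as in-memory lists. All of the names here are made up – the real PFS tables and APIs look nothing this simple – but it shows the poll/read/index/flag loop:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the PFS poll/read/index/flag cycle.
// The real PFS reads BPM-schema change-log tables over JDBC and writes
// to Elasticsearch; both are simulated with in-memory lists here.
public class PfsPollerSketch {

    // One row in the (hypothetical) PFS change-log table
    static class ChangeLogRow {
        final String taskId;
        boolean consumed;   // the "already read" flag from step 5
        ChangeLogRow(String taskId) { this.taskId = taskId; }
    }

    public final List<ChangeLogRow> changeLog = new ArrayList<>(); // stands in for the PFS table
    public final List<String> searchIndex = new ArrayList<>();     // stands in for Elasticsearch

    // Step 2's trigger: BPM task activity appends a change-log row
    public void onTaskEvent(String taskId) {
        changeLog.add(new ChangeLogRow(taskId));
    }

    // Steps 3-5: poll for unconsumed rows, "index" them, flag them as read
    public int pollOnce() {
        int indexed = 0;
        for (ChangeLogRow row : changeLog) {
            if (!row.consumed) {
                // Step 4 would query the detailed LSW tables here for full task data
                searchIndex.add(row.taskId); // step 5: index into Elasticsearch
                row.consumed = true;         // step 5: flag row as already read
                indexed++;
            }
        }
        return indexed;
    }

    public static void main(String[] args) {
        PfsPollerSketch pfs = new PfsPollerSketch();
        pfs.onTaskEvent("TASK-1");
        pfs.onTaskEvent("TASK-2");
        System.out.println("indexed " + pfs.pollOnce() + " tasks"); // first poll picks up both
        System.out.println("indexed " + pfs.pollOnce() + " tasks"); // nothing new to index
    }
}
```

The part worth noticing is the extra DB traffic this implies: every task event writes a change-log row, and every poll cycle reads and updates those rows on top of the LSW queries.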

If you paid attention to that extremely simplified walk-through, you’d notice how many times I mention DB calls…so maybe you see the problem? In an environment where we already had DB latency, why would we want more activity on the DB just to outsource the task data?

The POC still continues and there is a chance the new Dynamic Event Framework (DEF) might end up playing a role in the solution. But either way – just like IBM said, we didn’t find PFS to be a performance solve.

IBM BPM – EPVs

IBM BPM provides a capability called Exposed Process Variables (EPVs). These assets allow you to use variables that can be changed at run-time without a new code deployment. IBM BPM has a basic change utility and audit tracker for these values, so you can use an out-of-the-box utility to see the current values, update them, and track when they were last changed and who changed them.

EPVs are great for parameters that might need to be changed frequently after a process application is deployed. I use EPVs a lot for things like threshold values or service integration parameters that are used for identification but not business logic.

Another helpful option is to define a comma-delimited list of strings to validate against a service response. For example, if we call a service that returns a list of securities like IBM, AMZN, BLK, F, GPS but we only care about IBM and BLK, we can store the entities we care about in an EPV and iterate through the response list to see if the current item is one of our EPV values.

Use the code below as an example for parsing a comma-delimited EPV into a native JS array and then comparing each array value to a local variable (tw.local.checkVar in this example).

// Get the EPV String
var currentEPV = String(tw.epv.EPVGroup.specificEPV);

if (currentEPV && currentEPV.length > 0) {
    // Split the EPV String into a native JS Array
    var currentEPVArray = currentEPV.split(/,/g);
    for (var i = 0; i < currentEPVArray.length; i++) {
        var curItem = currentEPVArray[i].trim();
        if (tw.local.checkVar == curItem) {
            // Apply whatever variable or logic check
            // you need against the current value
            tw.local.logicFlag = true;
        }
    }
}

JMS Learning

IBM BPM added some event capabilities they call DEF for Dynamic Event Framework. It looks like this functionality could replace Tracking Groups and Tracking Points in process assets. Essentially DEF provides a way to dump notifications and data out of the process to an external system.

IBM KC Article on DEF
https://www.ibm.com/support/knowledgecenter/en/SSFPJS_8.5.7/com.ibm.wbpm.admin.doc/topics/capturingevents.html

My current role has a potential business case to connect an IBM BPM application to a Solace message engine. IBM BPM would both drop messages to Solace and also be able to consume messages (or be called from Solace to send a message over).

When I started some research on Solace and connecting it to WebSphere Application Server I realized I didn’t have any practical experience with JMS so I set aside some time this week to hack through some JMS tutorials and see what I could learn. Here are my notes from this journey.

Sites that helped

Old School JMS Tutorial from IBM: https://www.ibm.com/developerworks/java/tutorials/j-jms/j-jms-updated.html

JMS in Liberty:
https://github.com/WASdev/sample.jms.server

JMS Overview:
https://www.javatpoint.com/jms-tutorial

So first I just needed to learn the lingo of JMS and all of those sites helped with that. Next thing was to get coding, so I had to download the latest version of Eclipse (64-bit) for Java EE Developers. Easy enough. I already have a 64-bit Java JDK so there wasn’t a need to download anything new from Oracle.

My plan was to take the code samples from Mr. Farrell’s IBM JMS overview and put them into a new project that would run on Liberty using the queues and topics set up by lauracowen’s demo. I like to combine things to help understand how each of them works, and to avoid the cookbook approach of just following directions without understanding what is happening.

I wanted to focus on Pub/Sub so I took Mr. Farrell’s code for TPublisher.java and TSubscriber.java into a new Eclipse project. I liked that his code prompted for TopicConnectionFactory and Topic names; laura’s sample had the values in the code. So I figured I could use laura’s server.xml for Liberty and just type in the values to Mr. Farrell’s app and all would be good.

Well, it turns out I couldn’t really figure out how to connect Mr. Farrell’s code to the Liberty JMS stuff in server.xml. I mean, I ran the code “On Server”, but I don’t really know if it had all the necessary context. It was failing at the JNDI look-up.

So I dropped that approach and went with all of lauracowen’s code. I didn’t use Git for Eclipse, so I just created a new project called jms11-JMSSample, copied all of her Java files and packages into Eclipse, used her server.xml and booted up Liberty.

I kept getting another JNDI failure. Ugh. It was here:

TopicConnectionFactory cf1 = (TopicConnectionFactory) new InitialContext().lookup("java:comp/env/jmsTCF");  

Again I couldn’t figure out what was the matter. The JNDI values seemed fine in server.xml:

	<jmsTopicConnectionFactory jndiName="jmsTCF"
		connectionManagerRef="ConMgr3" clientID="clientId1">
		<properties.wasJms />
	</jmsTopicConnectionFactory>

So what was the matter?

This is another case of not doing Java development frequently enough…I didn’t copy web.xml over. web.xml had the Java resource references that connected the code to the JNDI values. What’s interesting is that if I replaced the lookup value in the code with just “jmsTCF” it worked fine – yay! But it took me a bit more time to understand how web.xml fit into that flow and get it corrected. The web apps worked perfectly after that.
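For reference, the missing piece was a standard Java EE resource reference in web.xml, something along these lines (the exact entry in the sample may differ, but this is the shape that maps java:comp/env/jmsTCF onto the container resource):

```xml
<resource-ref>
    <res-ref-name>jmsTCF</res-ref-name>
    <res-type>javax.jms.TopicConnectionFactory</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
```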

My next task was to find some sort of tool that would let me look at the JMS queues and topics in Liberty. The sample web app had a little bit of info but I wanted to change it up and start dropping messages, not consume them, but be able to look at them somewhere else.

I found this application: JMSToolBox

It looked like a good solution so I downloaded the zip. Unfortunately, I couldn’t get this tool to talk to my Liberty server. I have the full WAS JARs that are required and the Liberty JAR as well, but I’m getting hung up on the SSL stuff. I don’t need SSL (everything is just local) but either Liberty or JMSToolBox is somehow forcing something over https instead of just http…I’ll keep trying to work on this, maybe try a different tool…or just try jConsole. I’ll post an update later.

But in the end I had a good refresher on Java development, Liberty and an overview of JMS. Now I get to keep hacking with the code and start writing my own messages and topics.

Then maybe I’ll move on to adding an Elasticsearch index as a source for the messages…?

Installing IBM Process Designer (Desktop)

For folks that still cling to the old ways, I’ve run into a lot of different scenarios when trying to install the Desktop IBM Process Designer.  I finally put together some notes for a way that works if you don’t have admin rights to your machine and the default install scripts aren’t working either.  Maybe this will help some folks that find it.

Navigate to Process Center (whatever your PC URL is…maybe something like):
https://pc_host:port/ProcessCenter

  1. Log on with your current Process Designer credentials
  2. Click on the “Download Process Designer” link in the right-hand navigation window or the pop-up window
  3. Save “IBM Process Designer.zip” to your local machine (~800 MB)
  4. Once downloaded, Extract the contents of the zip file
  5. If you don’t already have IBM Installation Manager installed:
    1. Of the extracted files, navigate to the IM64 folder
    2. Run the “userinst.exe” command to install IBM Installation Manager.
      NOTE: Be sure to install this program in a location accessible for your ID (eg – C:\Users\myid123\IBM\IM)
      You do not need to create the directory before running the installation
      The installation path must not exceed 40 characters.  The installation path should not contain spaces.
  6. Once the setup finishes it will ask you to restart Installation Manager
  7. Open IBM Installation Manager again (or let it re-open itself, whichever you need) and click File > Preferences and select Add Repository
  8. Navigate to the folder where you extracted the ZIP file and go into the IMPD85 folder and select the repository.config file
  9. Click OK on the Preferences window
  10. Back on the Installation Manager launch page select Install
  11. Installation Manager will read the repository preferences then the config file and present you with the option to install IBM Process Designer > Version 8.5.X
  12. Check the box and click Next to select an Installation Directory
    NOTE: If you already have a version of Process Designer installed, Installation Manager will warn you of an existing package.  
    Just click “Continue” to install Process Designer to a new Installation Manager package group.
  13. Click Next and set the Installation Directory
    NOTE: again, be sure the directory is in a path that is accessible to your ID (eg – C:\Users\myid123\IBM\PD85x)
    You do not need to create the directory before running the installation.  The installation path must not exceed 40 characters.  The installation path should not contain spaces.  It is helpful to add the full Process Designer version to the installation directory (such as /PD857).
  14. Wait for Installation Manager to complete the installation
  15. Navigate to your installation directory (eg – C:\Users\myid123\IBM\PD856) and open the “eclipse.ini” file in a text editor
  16. Edit the line for the Process Center URL to correspond to your appropriate Process Center:
    -Dcom.ibm.bpm.processcenter.url=[INSERT PROCESS CENTER URL]
  17. Save the “eclipse.ini” file

All done.  Now you can open Process Designer.

If you ever have the need to run multiple Desktop IBM Process Designer for more than one Process Center, use this:

Running the same version of Process Designer with multiple Process Centers

If you use more than one Process Center and the versions of each instance are the same, you can use one installation of Process Designer to access either Process Center.

  1. Install Process Designer from one of the Process Centers you use for BPM development
  2. Locate your Process Designer installation directory and find the eclipse.ini file
  3. Copy this file and add a suffix to designate the INI is for the other Process Center.  For example: eclipse_SANDBOX.ini
  4. Open the copy/renamed INI in a text editor and edit the line for the Process Center URL to correspond to your appropriate Process Center:
    -Dcom.ibm.bpm.processcenter.url=[INSERT 2ND PROCESS CENTER URL]
  5. Now find a shortcut to Process Designer (you can use the one in the Windows Start Menu)
  6. Rename the shortcut to include the name of the 2nd Process Center.  Like “IBM Process Designer Sandbox”
  7. Right-click on the 2nd shortcut and select Properties
  8. Update the Target field by adding --launcher.ini and the name of your 2nd eclipse.ini file.  For example: \path_to_PD\eclipse.exe --launcher.ini eclipse_SANDBOX.ini

Java notes

Just a few tidbits of information I reference pretty regularly for anything Java.

Java was the language of choice for all my CS classes in college, but my first few career roles had nothing to do with programming except for a contract job using the LAMP stack.  The IT side of my financial services roles was mostly .NET.

When I moved to Big Blue I had to refresh my Java knowledge, primarily for a project using IBM Operational Decision Manager (ODM).  The IBM ODM rules platform at the time had a couple of ways to create what it called a Business Object Model (BOM) which is essentially a vocabulary of nouns and verbs that can be used to author business rules. 

For example – you might have an object called “a Customer” with properties like “name” (String) and “age” (Integer) and “can purchase alcohol” (Boolean).  When you wanted to write a rule about that object you could simply write something like:

If the age of the customer is more than 21 then make it true that the customer can purchase alcohol.

The BOM was supported by an Executable Object Model (XOM), which could be sourced from an XML Schema or a library of Java classes.

It was easy enough in Eclipse to create a class with some parameters and Eclipse would automatically create the “getters” and “setters”.  Some of the real work came when you had to decide what functionality resided on the Java side as public or private methods and what functionality did you want to add into the BOM.  
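As a sketch, the Java side of that Customer example might be nothing more than a POJO like this (a hypothetical class of my own – the property names just mirror the BOM verbalization above, and these are exactly the getters and setters Eclipse will generate for you):

```java
// Hypothetical XOM class backing the "Customer" BOM example above.
// The BOM verbalization ("the age of the customer", "the customer can
// purchase alcohol") maps onto these getters and setters.
public class Customer {
    private String name;
    private int age;
    private boolean canPurchaseAlcohol;

    public Customer(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public boolean isCanPurchaseAlcohol() { return canPurchaseAlcohol; }
    public void setCanPurchaseAlcohol(boolean b) { this.canPurchaseAlcohol = b; }

    public static void main(String[] args) {
        // Roughly what the rule "if the age of the customer is more than 21
        // then make it true that the customer can purchase alcohol" executes
        Customer c = new Customer("Pat", 30);
        c.setCanPurchaseAlcohol(c.getAge() > 21);
        System.out.println(c.getName() + " can purchase alcohol: " + c.isCanPurchaseAlcohol());
    }
}
```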

But either way – I had to refresh my basic Java skills and some of these notes came in handy.

Creating Objects

  1. Declaration: a variable declaration associates a variable name with an object type.
  2. Instantiation: The new keyword is a Java operator that creates the object.
  3. Initialization: The new operator is followed by a call to a constructor, which initializes the new object.
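A minimal illustration of those three steps (Point is just a made-up example class):

```java
// Minimal illustration of the three steps of object creation.
public class CreationDemo {
    public static class Point {
        public int x, y;
        // Constructor: initializes the new object (step 3)
        public Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        Point origin;               // 1. Declaration: variable name + object type
        origin = new Point(0, 0);   // 2. Instantiation (new) + 3. Initialization (constructor)
        Point p = new Point(3, 4);  // all three steps on one line
        System.out.println(p.x + "," + p.y);
    }
}
```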

Interfaces (“implements”)
• public class UsingClass implements InterfaceClass
• The Interface class InterfaceClass defines a set of constants and empty method signatures
• The using class UsingClass has to actually define a method body (make the methods do something) for all methods defined in the Interface (in this example InterfaceClass)
• This creates a standard (like an API) for how to engage with the class
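A tiny made-up example of the pattern (the names are mine, not from any real API):

```java
// A minimal interface/implementation pair illustrating "implements".
public class InterfaceDemo {

    // The interface defines constants and empty method signatures
    public interface Greeter {
        String DEFAULT_NAME = "world"; // interface constants are implicitly static final
        String greet(String name);     // no method body here
    }

    // The implementing class must supply a body for every interface method
    public static class EnglishGreeter implements Greeter {
        public String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

    public static void main(String[] args) {
        Greeter g = new EnglishGreeter(); // callers code against the interface "API"
        System.out.println(g.greet(Greeter.DEFAULT_NAME));
    }
}
```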

Inheritance (“extends”)
• When you want to create a new class and there is already a class that includes some of the code that you want, you can derive your new class from the existing class. In doing this, you can reuse the fields and methods of the existing class without having to write (and debug!) them yourself.
• A subclass inherits all the members (fields, methods, and nested classes) from its superclass. Constructors are not members, so they are not inherited by subclasses, but the constructor of the superclass can be invoked from the subclass
• public class Bicycle {…..}
• public class MountainBike extends Bicycle {…}

• MountainBike has access to all of the variables and methods of Bicycle
• MountainBike can declare a method with the same name as one in Bicycle (which means MountainBike overrides the Bicycle method)
• super.methodName() calls the method of the superclass (the class that was extended)
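Sticking with the Bicycle example, here is a minimal sketch of inheriting a field, overriding a method, and calling back up with super (the field and method names are just examples):

```java
// Bicycle/MountainBike sketch of "extends", overriding, and super.
public class InheritanceDemo {

    public static class Bicycle {
        public int gears = 18; // inherited by any subclass
        public String describe() { return "bicycle with " + gears + " gears"; }
    }

    public static class MountainBike extends Bicycle {
        // Overrides the superclass method, but reuses it via super
        @Override
        public String describe() { return "mountain " + super.describe(); }
    }

    public static void main(String[] args) {
        MountainBike mb = new MountainBike();
        System.out.println(mb.gears);      // field inherited from Bicycle
        System.out.println(mb.describe()); // overridden method calling super
    }
}
```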

WebSphere Notes

Obviously the Big Blue stack I used was based heavily in Java and at the time that meant WebSphere Application Server Network Deployment (ND) Profile was the server of choice for IBM BPM and IBM ODM.

Since then Liberty has taken a much larger role, which is really nice to see.  But it seems like a lot of big enterprises still rely on traditional WAS ND, so I don’t think this knowledge is a complete waste.

When I got to my solutions role I knew very little about WAS apart from the fact that it was a Java application server, and that’s about it.  I had some experience with WebLogic and WAS Community Edition on my personal Oracle VMs, but nothing very specific and nothing like managing a network deployment environment.  Installing one Java EE app on WAS CE didn’t compare to deploying applications across a three node cell.  IBM BPM on WAS was a big introduction for me, and that made IBM ODM on WAS so much easier.

Here are some notes for IBM WebSphere Application Server ND/Traditional/Legacy (whatever you want to call it these days).  I’ll log another post with a few tiny notes on WAS Liberty Profile from a POC with IBM Process Federation Server.

I know this information is pretty basic for most folks, but for a guy brand-new to WAS ND, just understanding the layout and the components was extremely helpful.

WebSphere (Network Deployment)

  • Application Server – a Java application that runs other Java applications
  • Server – an entity that actually runs the Java EE application (one or more than one, depending on the configuration)
  • Node – for all intents and purposes it’s a server running enterprise Java applications
  • Cell – a group of nodes
  • Deployment Manager – a specific server instance responsible for managing a cell.  It essentially consolidates server management for all the nodes in the cell.  It communicates to each node via a Node Agent.  Most people interact with the Deployment Manager using the WAS Integrated Solutions Console – essentially a web-application that manages WebSphere.
  • Node Agent – an admin type server program that communicates with the Deployment Manager to localize the management tasks.

This is a great resource for more in-depth information about Traditional ND and Liberty Profile:

https://www.redbooks.ibm.com/redbooks/pdfs/sg248022.pdf

And one more link with a great diagram of WAS ND:

https://itdevworld.wordpress.com/2009/05/03/websphere-concepts-cell-node-cluster-server/

What are we trying to accomplish here?

In one of my first roles I started working as a production support business analyst for a brand new suite of applications that sat between sales channel front office users and back office operations users.  The applications went live for a merger and obviously total chaos ensued, so most of my early application life cycle experience was very rushed/put out fires/do what needs to be done.  

But eventually the application settled into maintenance mode and we started to add brand new capabilities to the suite.  I had a good time in this new business systems consultant role because I learned a lot about the application’s highs and lows, but also about the business needs for the application.  I had a lot of good experience when it came to what worked and what traditionally didn’t work and I liked having that perspective on these new projects.

Like I’ve mentioned before, applications start with requirements gathering and go on to more and more technical design assets.  This was back in 2006 or so and Agile or Iterative development hadn’t even been brought to the table at this employer; so we were very much waterfall.  We engaged IT early to help make sure we didn’t commit to something that was totally unobtainable, but for the most part the business drove these discussions (as they should).

But when you take stakeholders with years and years of experience using legacy processes or applications and try to talk about a new way of doing things, you might run into a lot of calls and meetings that get buried in the nitty-gritty details of some very specific topics.  Like “how big should this field be” or “should this section come first or second” or the horrible “this is how it used to work/this is how it’s always worked”.

My technical side would always love to get into the weeds and start problem solving code right then and there: “well if we put these values in a look-up table and expose an admin utility we can manage this logic without additional deployments” – things like that.

But after awhile I learned to sit back and let the experienced voices talk and just listen.  And I’ve got to give a lot of credit to my manager because he knew that if I wasn’t speaking out about the problem or topic right then and there, I was thinking ahead.  And he always took a second to pause and call me out, even though he already knew what I was going to ask…“What are we trying to accomplish here?”

I found myself on so many calls buried in the weeds and guts of a problem’s details that I always tried to bring at least myself, and hopefully the larger team, back up a hundred feet or higher to figure out if what we were arguing about really meant that much to the endgame.  It was about finding some perspective.

This was in no way a workaround to persistent stakeholders or stubborn management.  It was just a way to gut-check the meeting and make sure everyone was on the same page.  Sometimes it worked; sometimes the topic was pushed aside for more important concepts.  Other times it didn’t work at all.  Like I said, experience talks, and just because I didn’t see the higher purpose doesn’t mean it didn’t exist.  But sometimes just asking the question helped me and probably a few others reestablish themselves.

On the technical side now I can get buried in code and look for immediate solutions to problems from QA or UAT.  But I always try to take a step back and make sure that we’re working with a defect or change request that will make a difference in the long run.  This has been especially important in a large organization: change can come very, very slowly.  So the decisions we make and the applications we build now need to fulfill their goals because who knows when we’ll get a chance to address a missed need down the line.

IBM BPM – Date Diff Code

Here are a few blocks of sample code that I like to reference when dealing with dates in IBM BPM.

This first block (Date Difference Calculation Testing) helps you determine an integer number of days between two TWDate variables.

// Date Difference Calculation Testing
// Date1 + xNumberOfDays = Today
// xNumberOfDays = Today - Date1
 
tw.local.aDate = new TWDate(); //SETUP SAMPLE
tw.local.aDate.parse("05/15/2005","MM/dd/yyyy"); //SETUP SAMPLE
 
if (tw.local.aDate && tw.local.aDate < new TWDate()) {
    // Today...as in right now
    var tday = new Date();
    
    // convert to native Java Date
    var aDay = tw.local.aDate.toNativeDate(); 
    var one_day = 1000*60*60*24; // milliseconds in 1 day
    
    var tm = tday.getTime(); // milliseconds for date1
    var am = aDay.getTime(); // milliseconds for date2
    
    var diff = tm - am; // milliseconds difference
    
    tw.local.dateInt = diff; // this is actually a decimal
    
    // convert milliseconds to days
    var days = Math.round(Math.abs(diff/one_day)); 
    
    // Integer of number of days between aDate and today
    tw.local.dateIntTwo = days; 
}

This is a cool way to tell if a date variable is within a specific threshold.  My use case was that the application stored when it made a specific integration call and then if we came back and needed that call again we’d check to see when it was last invoked.  This could easily be done with caching or something fancier but for this scenario a simple “was the last call too long ago?” bit of logic was all that we needed.

// Assume the service needs to be called again
tw.local.refreshNeeded = true;
 
// aDate is the value of the last time the service was called
if (tw.local.aDate) {
    var lastCall = tw.local.aDate.toNativeDate();
    var now = new Date();
    // EPV to store threshold in seconds to consider time stamp stale
    var threshold = Number(tw.epv.refreshThreshold);
    var thresholdMS = threshold * 1000;
    var lcMS = lastCall.getTime();
    var nowMS = now.getTime();
    var diff = (nowMS - lcMS);
    if (diff <= thresholdMS) {
        tw.local.refreshNeeded = false;
    } 
}