DBAT Connector Customization Or How I Learned to Stop Worrying and Love the Groovy Script

I recently had a client who required OIM to manage access to a database application (or two). They needed both users and roles managed. To meet their needs, I embarked on the process of installing, configuring and, of course, customizing the Database Application Tables (DBAT) connector. As with many ICF connectors, installation had its own challenges.


The first step is to generate the connector. This is done by editing a configuration file with the data that determines how the connector works, and the process is fairly straightforward: edit the configuration file with the connector name, primary table name, role table name, the attributes you wish to provision and reconcile, and so on. Make sure to note which attribute is used to enable and disable a record, and which attribute is used to timestamp changes. You can put a lot or a little in this configuration file, but remember: the more you do up front, the more can be generated for you. For example, in the section for creating prepopulate adapters for your attributes, if you fail to list an attribute you intend to use, you can always add it later, but that will require the Design Console. It is much better to add it now.


 //Generate prepopulate adapters. PrePopulate adapters usage is ['CONNECTOR_ATTRIBUTE':'OIM USER ATTRIBUTE'] E.g., ['DISPLAY_NAME':'Display Name']
    prepopulate = ['USERID':'User Login', 'FIRSTNAME':'First Name', 'LASTNAME':'Last Name', 'EMAIL':'Mail Id', 'DESCRIPTION':'Description', 'SALARY':'Salary', 'JOININGDATE':'Join date']


There is also a section for specifying Groovy scripts that should run in place of various operations: create, update, delete, add role, and so on. If the relationship between your application's user table and its roles is straightforward, the OOTB ICF processes will serve. But for most of us, it's never that simple, and Groovy makes customization much easier here. There are some real advantages to using it. The customization happens on the fly: you can modify the scripts, upload them to the scripts folder, and run them without restarts or compiling and uploading jars. That makes testing much easier, since you can simply write a change, click save, and run the job again. One other thing I liked about the Groovy scripts is their size: a recon job in Java could run 400 or 500 lines, whereas my Groovy script was 98, though of course mileage may vary.
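To give a flavor of the kind of work these action scripts do, here is a minimal, illustrative sketch. In the real DBAT scripts, the JDBC connection and attribute values arrive as binding variables injected by the connector; `buildUpdate` below is a hypothetical helper, and the table and column names are made up for the example.

```groovy
// Illustrative only: builds a parameterized UPDATE statement from a map of
// attribute values, the kind of thing a DBAT "update" action script does
// before handing the SQL and bind values to the connector's JDBC connection.
def buildUpdate(String table, Map attrs, String keyColumn) {
    def cols = attrs.keySet().toList()
    // one "COL = ?" placeholder per attribute, in map order
    def setClause = cols.collect { "${it} = ?" }.join(", ")
    def sql = "UPDATE ${table} SET ${setClause} WHERE ${keyColumn} = ?".toString()
    [sql: sql, params: cols.collect { attrs[it] }]
}

def stmt = buildUpdate("APP_USERS", [FIRSTNAME: "Jane", SALARY: 1000], "USERID")
assert stmt.sql == "UPDATE APP_USERS SET FIRSTNAME = ?, SALARY = ? WHERE USERID = ?"
assert stmt.params == ["Jane", 1000]
```

Because this is just a script in the scripts folder, a change like adding a column to the map can be saved and re-run immediately, with no jar to rebuild.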

After spending some time in the Oracle docs for this connector, I found that, for the most part, the documentation is fairly comprehensive. At the very least, the sample Groovy scripts show what you need to do pretty well.


I did find a few pain points worth pointing out.

  1. Make sure the attribute types you set in the Groovy script match the attribute types in OIM. If it is an int on the account form, it must be an int in the Groovy script. This sounds obvious, but the real pain point isn't that you need to do it; it's that if you make a mistake, the error is buried, making it tough to tell what is happening. In my case, I was working on the reconciliation script, and a few attributes were stored in an unexpected format on the form itself (such as int where a string would have been expected). Several people were working on the connector, so the custom attribute was created with one type while the script set it with another. It wasn't a huge error, but what we found was that the attribute simply wasn't in the recon event, and it took some digging to find the error indicating the types were wrong.
  2. The other major pain point was related to these lines in the recon Groovy script…
role.setObjectClass(new ObjectClass("ROLE_TABLE_NAME")); // object class named after the role table
reconRecord.addAttribute(AttributeBuilder.build("ROLE_TABLE_NAME", (Object[]) roleEm)); // attribute name must use the same value

These lines are from the target user recon script; they reconcile role data and attach it to the recon record. For each role, we add an object class named "ROLE_TABLE_NAME", the table where the roles are stored. When adding the role attribute to the recon object, we again reference the table name the roles came from. This was not explicitly explained in the script samples: the sample has the line, but doesn't indicate why it was one value versus another. In the sample it was "USER_ROLE", which was, of course, both the table name and the alias. In the docs, I found only one line referencing it at all, in a section on how to name the roles in the provisioning and reconciliation lookups.

By trial and error, I confirmed that the "ROLE_TABLE_NAME" reference in the code needed to match what was in those lookups. So if my table were called "ROLES" or "USER_ROLES" or anything else, that is the name that would need to appear in my script.
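The type mismatch from the first pain point can be headed off by coercing values explicitly in the script before they reach the recon event. A minimal sketch, in plain Groovy (the field names are hypothetical, and the real script would hand these values to the connector rather than a local map):

```groovy
// Illustrative: coerce a value read from the target table into the type the
// OIM form field expects, so the recon attribute type matches the form.
def coerce(Object raw, Class target) {
    raw == null ? null : raw.asType(target)   // Groovy's built-in conversion
}

def reconAttrs = [:]
reconAttrs['SALARY'] = coerce("50000", Integer)   // form field stores an int
reconAttrs['USERID'] = coerce(42, String)         // form field stores a string
assert reconAttrs['SALARY'] instanceof Integer
assert reconAttrs['USERID'] instanceof String
```

Doing the conversion in one visible place also makes the failure obvious when the form and script disagree, instead of the attribute silently dropping out of the recon event.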


That is pretty much the worst of it. For the most part, deploying the connector is a relatively straightforward process. So, to end, here is a random Groovy fact.

In Groovy, method arity is not checked at compile time, so it is not always necessary to pass arguments when calling a parameterized method.

For example:

 def someMethod(int number1,String string1,….)

It is acceptable to write the call obj.someMethod(), and the script will still compile and load. At runtime, the method may balk (typically with a MissingMethodException) unless it is designed to handle missing arguments, for example by declaring default parameter values. If you do pass arguments, they must be compatible with the declared types. None of this would work in Java, where such a call fails to compile.
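The clean way to design a Groovy method that tolerates omitted arguments is default parameter values. A small sketch (the method and its names are hypothetical):

```groovy
// Trailing parameters with defaults may be omitted at the call site,
// so calls with zero, one, or two arguments all resolve.
def greet(String name = "world", int times = 1) {
    (["hello, ${name}".toString()] * times).join("; ")
}

assert greet() == "hello, world"                    // no arguments at all
assert greet("OIM") == "hello, OIM"                 // trailing parameter omitted
assert greet("OIM", 2) == "hello, OIM; hello, OIM"  // both supplied
```

Under the hood, Groovy generates an overload for each prefix of the parameter list, which is why the argument-less call succeeds where a Java call to a two-parameter method would not even compile.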