Thursday 7 December 2017

UKOUG Tech - PaaS & Development review


This year, I went to UKOUG Tech for the first time, as I got my paper about Oracle Process Cloud Service accepted. Looking at the agenda in advance, the content of the conference looked very interesting and I can already say that I wasn't disappointed. Not being particularly interested in the vast amount of database sessions, I decided to mainly focus on the Middleware and Development tracks, to see the latest developments on Oracle's PaaS offerings and the coolest new technology trends.

Even though many visitors to the conference are still working on on-premise projects, I couldn't find a single session about on-premise middleware. This is only logical, since SOA Suite, BPM Suite and Oracle Service Bus are lacking spectacular new features: it's all happening in the cloud. So, what exactly is happening in the cloud?

PaaS (Platform as a Service)

The main thing is that Oracle's iPaaS (Integration Platform as a Service) portfolio keeps growing stronger. These products are updated every three months, rapidly maturing and expanding in (occasionally overlapping) functionality. Oracle Mobile Cloud Service impressed me as a solid back end for mobile, offering APIs, offline synchronization, authorization and tons of other features that are really useful for the challenges that come with mobile (or multi-channel) integration. Oracle API Platform is growing stronger as well and makes us rethink our way of agile development. API first is the way to go, so we get feedback from consumers early, while we are still working on the actual implementation in the back end. Another advantage here is that the back end does not impact the API design much, so we can keep things clean and smart.

Moving further down the road, we see that Integration Cloud Service is turning more and more into a full-blown SOA platform and I was happy to present the Decision Models of Process Cloud Service myself. Once Dynamic Processes (Case Management) capabilities are released, I think we can say goodbye to BPM Suite, at least for new projects. Development in Process Cloud Service has become a smooth experience and the UI has improved dramatically since the product was launched in 2014.

Open Source & Development

But PaaS is not everything. We have seen an increasing interest in open source technology recently and even Oracle is embracing those products these days; they stand at the very heart of its cloud offerings. So, I had the opportunity to learn more about Docker, a key element in Oracle's many container-oriented cloud offerings; Kubernetes, for which Oracle will soon provide a managed platform; and Wercker, which can be used for continuous integration/continuous delivery of containerized microservices.

However, the star of the show was Apache Kafka. Brought to us with much grandeur by Robin Moffatt and Guido Schmutz, among others, Apache Kafka looks extremely promising not just for big data and streaming content, but basically for any event-driven style of architecture. Kafka can be used as an open source product, but you can also choose the Confluent Platform or Oracle's Event Hub Cloud Service. I believe that Kafka will be the cornerstone of modern integration architecture, powerfully delivering the promise that traditional SOA couldn't live up to. It's also perfect as the event hub between your microservices, so they can communicate with each other without dependencies.

All in all, I can say that it was a fantastic conference, with not just great content, but also great social activities. It was a great opportunity to catch up with my friends, meet new people, exchange ideas and attend my first Oracle ACE dinner. I hope to be back next year!

Wednesday 29 November 2017

Running SoapUI Test Cases with Maven

So, you have developed your software and you've done the right thing by creating your tests in SoapUI, and they're all running smoothly. Now it's time to take the next step: make sure that your tests can be run automatically, preferably on different environments, for example every night or after a deployment. This is a major improvement in your continuous integration and delivery efforts, but how can it be achieved? In this blog, I will show how it can be done with Maven, which in turn can be used from, for example, Bamboo, Jenkins or any other continuous integration tool.

With Maven, you basically just need a POM and a command to kick things off. In this example, we are using SoapUI 5.3.0, which is the latest open source version, Maven 3.5.2 and (since we need Java) JDK 1.8.0_131, but any other recent version will do as well.

Now, before we begin: if you use any JDK 1.8.x version, you need to copy a jar file named "jfxrt.jar" from ..\jdk1.8.0_131\jre\lib\ext into the ..\jdk1.8.0_131\jre\lib folder to make things work. If you skip this step, you will get a nasty error message and your test will not run with Maven. This obviously also applies to your continuous integration server.

Once you're set up, you will need to create a pom.xml file like the one below. You can place it in the same folder as your actual SoapUI test, but if you prefer to put it elsewhere, that's fine too. Just make sure to adjust the path to the SoapUI test in that case, and be aware of the differences between Windows and Linux environments (backward and forward slashes). This is why I prefer to put the POM in the same folder as the SoapUI project.
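A minimal POM could look like the sketch below. The project coordinates and the project/TestSuite/TestCase names are placeholders; replace them with your own:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Hypothetical coordinates; use whatever fits your organization -->
  <groupId>com.example</groupId>
  <artifactId>soapui-tests</artifactId>
  <version>1.0-SNAPSHOT</version>
  <build>
    <plugins>
      <plugin>
        <groupId>com.smartbear.soapui</groupId>
        <artifactId>soapui-maven-plugin</artifactId>
        <version>5.3.0</version>
        <configuration>
          <!-- 1. Location plus name of your SoapUI project -->
          <projectFile>MyService-soapui-project.xml</projectFile>
          <!-- 2. and 3. Optional: limit the run to one TestSuite/TestCase -->
          <testSuite>MyTestSuite</testSuite>
          <testCase>MyTestCase</testCase>
          <!-- 4. Override Custom Properties, e.g. for environment settings -->
          <projectProperties>
            <value>Env=${env}</value>
          </projectProperties>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```

The `${env}` placeholder is filled by the `-Denv=...` argument on the Maven command line, which is how the same POM can target different environments.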

Now there are several important elements here that you need to change for your own testing:
1. projectFile: the location plus name of your SoapUI project. Don't forget the .xml extension.
2. testSuite: the TestSuite that you want to be executed. If you leave it out, all TestSuites will be run.
3. testCase: the TestCase that you want to be executed. If you leave it out, all TestCases will be run within the specified TestSuite.
4. projectProperties: here you can manipulate the Custom Properties in your SoapUI project. Very useful for environment settings, for example.

Once you have the POM in place, you can navigate to its folder with the command line, PowerShell or whatever tool you use, and execute the following command:

mvn com.smartbear.soapui:soapui-maven-plugin:5.3.0:test -Denv=localhost

So, in the example above, I have chosen to test my local environment by passing the value "localhost" into my Custom Property called "Env".

You might see some harmless error messages now, but they will not stop the test from running adequately. If you want to get rid of these, navigate to C:\Users\[Your User]\.soapuios\plugins and remove all files from there.

Once you run the test project like this, it will go through the specified TestSuite and TestCase, performing all the TestSteps in those and reporting on all the assertions. In the end, you'll get a BUILD SUCCESS or BUILD FAILURE message, depending on whether the result of the test matches the expectations set in your assertions.

Now you are ready to use the same command from your continuous integration tooling, decide when it should be run and which environment should be triggered.

Tuesday 31 October 2017

REST API for Oracle Adaptive Case Management

For all of you who have been struggling with how to interact with your cases, there is good news: since 12c, Oracle provides a REST API for Adaptive Case Management (ACM).

Since the API is pretty much self-explanatory and fairly easy to use, I will not describe it in much detail (at least not right now, maybe later). However, I think that those of you who are struggling with the Java API or something custom-made will definitely find something much easier to use here, for both integration and testing. Since most blogs about the subject were written before this REST API became available, I thought it would be good to draw people's attention to it.

Wednesday 10 May 2017

Oracle Process Cloud Service - Decision Model Notation part 2

In my previous blog, I showed how to get started with Decision Model Notation (DMN) in Oracle Process Cloud Service and how to create a simple Decision Table. Picking up from there, we will now look into creating If-Then-Else rules, which should also be familiar to people who know Oracle's old Business Rules. We will also create a service and call it from a process.

Creating an If Then Else Decision

As Input, I have created a TotalAmount object, which is the total amount of a Sales Order. Based on this TotalAmount, we are going to calculate a Discount Price, for which I have created a DiscountPrice type to make the service interface a bit prettier than just 'output'. To create an If-Then-Else rule, just click the + button next to Decisions, enter a name and set the output type to string, number or any other type, in this case DiscountPrice.

Now Oracle will have created a rule for you, in which you only need to fill in the "if", "then" and "else". Since you've already decided on your output object, we will not use it in the expression, which is different from the old Oracle Business Rules. So just enter the value that you want for this object and you'll be fine. You can also create nested expressions, as shown below:
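As an illustration, a nested If-Then-Else calculating a DiscountPrice from a TotalAmount could look roughly like the FEEL-style sketch below (the thresholds and percentages are made up):

```
if TotalAmount >= 10000
then TotalAmount * 0.80
else (if TotalAmount >= 1000
      then TotalAmount * 0.90
      else TotalAmount)
```

Note that each additional condition is expressed by nesting another if-then-else inside the "else" branch.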

One thing that I don't like is that all the nesting needs to be done in the "else" part. I hope Oracle will acknowledge this and add a new "if" section (for example with indentation), where I can happily nest away in a more user-friendly manner. However, it works (use the Test feature to verify) and if you don't make things too complicated, it's mainly a minor display issue.

Calling an If Then Else Decision

Calling any Decision from a process is super easy. Just make sure to have a service created for your Decision and deploy it, so the process can find it. In Oracle Process Cloud's Process Composer, you can then select "Decision" as a system task, select the Decision Model that you want to use and the service within that Decision Model that you want to call:

From here, you can make your data associations and you're done. Obviously, a process is generally not as simple as this one, but using Decisions within processes is.

So that's the second part of this blog series. The third part will be an overview of other DMN functionality: Expression, Relation, List, Function and Context. I still think that we will mostly be using If/Then/Else and Decision Tables though, so for most use cases, the information in this blog and the previous one should provide you with a nice kickstart.

Monday 8 May 2017

Oracle Process Cloud Service - Decision Model Notation part 1

Recently, Oracle Process Cloud Service (PCS) has made another major step forward with the addition of a whole new way of dealing with business rules. This brand new Decision Model Notation (DMN) feature is developed separately from the actual processes and deployed as a microservice, so your decision models can be reused and everything is nicely loosely coupled. I like it.

What I also like about DMN is that it's much more (business) user friendly than the rule engine from Oracle SOA Suite, which was used until now. While that engine performed well and was somewhat agreeable for technical users, business users often got lost and left the business rule modelling to developers. With this new DMN feature, that is no longer necessary. Business users will be able to do much more, if not everything, themselves and actually enjoy the experience!

I've decided to write a series of blogs about the different types of decision models that can be created and how to use them. But first we need to turn it on.

Getting Started

When you're in the home screen, click on your login name in the top-right corner and choose 'Administration'. On the Admin page, go to 'UI Customization' and tick 'Enable DMN in Composer', then Save.

Now you're ready to go! You can go back to the home page now, click 'Develop Processes' and then under Create, you will have the option to create a New Decision Model. As you can see, these Decision Models are separate from your normal PCS applications, but you can call them from your processes. Once you have created your Decision Model, your interface will show you Services on the left, Input Data on the right and Decision Models in the middle.

Input Data

The objects on the left and right can be expanded. Here's a brief explanation of what they contain, starting with the right: Input Data. When you expand it, you see two options: Input Data and Type Definition. In Type Definitions, you can set up complex types for your input data, like lists or elements that contain attributes. Types can also refer to other types, and so on. It's pretty much a declarative way to make an XSD.

In Input Data, you can then refer to these Types, but also create strings, numbers and other basic stuff. In this case, I chose to make a SalesOrder, referring to the SalesOrderType I've created. This Input Data is obviously very important, because it contains the facts of your Decision Model and defines the interface of your Service.

Decision Table

So, now we know how to turn on DMN and how to create Input Data. Let's create a simple Decision Table to make it work. Just click the blue + next to Decisions and choose Decision Table on the right. Since my Decision Table will decide the approval level needed for the Sales Order, the output will be a simple string value.

Once the Decision Table has been created, you obviously need to fill it. You start with Enter Expression on the left and just type the first letters of your Input Data object. DMN will automatically suggest what you want and you can just select it. In this case, I wanted to have SalesOrder.TotalAmount as input, but oops... I forgot to add it to the Type! No problem, you can always modify your Types later and the changes can be used immediately. Now we can enter our rules, which is also very nice and declarative. It could look something like this:
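For example, a Decision Table deciding the approval level could look roughly like this sketch (the thresholds and approval levels are made up for illustration):

```
SalesOrder.TotalAmount    ApprovalLevel (output)
----------------------    ----------------------
< 1000                    "None"
[1000..10000)             "Manager"
>= 10000                  "Director"
```

Each row is a rule: the left column holds the condition on the input expression, the right column the string value that the Decision returns when that condition matches.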

Absolutely no coding is necessary! To test it, you can use the test button in the top-right corner (the blue play button) and the test interface is very easy to use. Once you're satisfied, save your work.

Creating a Service

Believe it or not, creating a Service is even easier. Expand the menu on the left, create a Service, give it a name and drag the Input Data and Decision onto the Input and Output fields. It will generate a REST Service and give you the URL, as well as the request and response payloads when you click on Payload. Obviously, you'll have to fill in the data yourself, but all the types are described, so it will be very easy to use. Now you're ready to deploy, but the runtime aspects and calling the Decision Model from a process we will save for a later blog.
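To give an idea of the shape of such a call, here is a hypothetical request/response pair; the URL comes from the Payload screen, and the field names and JSON structure shown here are assumptions for illustration, not the exact PCS payload format:

```
POST <URL shown on the Payload screen>

Request:
{ "SalesOrder": { "TotalAmount": 12500 } }

Response:
{ "ApprovalLevel": "Director" }
```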


Oracle has created a significant improvement in Decision Modelling with their DMN-based feature. It's easy to use, business friendly and allows for fast development. Further blogs will get into more complex rules, different types of rules, calling decision services from processes and the runtime environment.

Wednesday 29 March 2017

Why you want to become an Oracle SOA developer

With a mixture of surprise and amusement, I've read Mr. Sten Vesterli's statement below:

"You don’t want to become an Oracle SOA developer, for two reasons: SOA and Oracle."

Quite a powerful statement, so let's dive deeper into the two reasons mentioned above and explain why I strongly disagree with Mr. Vesterli.

First of all, he rightfully claims that SOA has over-promised and under-delivered for a decade. I share this feeling, but I do not believe in the bleak picture of the future of SOA that he is painting.

When you look at the Gartner hype cycle, it's clear that SOA is currently in the trough of disillusionment, which is a rather tough place to be. However, after this phase come the slope of enlightenment and the plateau of productivity, so is this really a good moment to stop being interested in SOA? I don't think so. We have made our mistakes, learned our lessons and now it's time for "SOA done right".

Of course, we can now jump into microservices, which are at the peak of inflated expectations and doomed to fail in their own way, with all the downsides and requirements that they have. Or we can adjust our perspective on SOA and modernize our architectures: take elements from microservices, don't run from one extreme to the other, and find your way to Domain Driven SOA Design, in which I strongly believe. We can and will do this right if we can for once stop thinking in black and white.

So, having covered SOA, let's talk about Oracle. It is indeed true that Oracle is making a strong move to the cloud, which I embrace and support. Now, according to Mr. Vesterli, an Oracle SOA developer will apparently only work on-premise and will not be working with Oracle Integration Cloud Service (ICS). I think this is not true, and I can use myself as an example, having done projects with both Oracle SOA Suite and ICS, happily using my skills both on-premise and in the cloud with Oracle's rapidly expanding PaaS portfolio.

Apart from that, SOA Suite on-premise and Oracle SOA Cloud Service are exactly the same from a developer's point of view: they are the same products, requiring the same type of code, developed in the same JDeveloper. And while ICS is very good for integrating SaaS applications with each other, it's not going to be the cornerstone of anyone's enterprise architecture anytime soon. Therefore, SOA will remain relevant for a long time, and when you know SOA, you can easily learn ICS as well. The other way around will be significantly harder!

So, I'd like to conclude by turning Mr. Vesterli's statement around: now is a great and exciting time to become an Oracle SOA developer.

Tuesday 10 January 2017

Quick tech tip: using MDS Events in BPM

How to use MDS Events in your BPM processes?

With choreography gaining importance compared to orchestration, events will start playing an increasingly important role. In Oracle SOA Suite, it makes sense to put the related event definitions into MDS, so they can be used by multiple processes and services.

However, when you have your event in MDS and you try to use it in your BPM process, you will not be able to access it. You'll find that your BPM process will only look into the contents of the project that it's in and you can't use external events. Now here's the trick to still make it happen:
  1. Copy the event definition that you want to use into your project;
  2. Let your BPM process use the local event at design time;
  3. Open your composite.xml in source mode and change the import location of the event definition to the oramds: location;
  4. Remove the local copy;
  5. Deploy & enjoy!
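Step 3 could look something like the fragment below in composite.xml; the namespace and paths are hypothetical, and the exact import attributes may vary per SOA Suite version, so treat this as a sketch rather than copy-paste material:

```xml
<!-- Before (step 1): importing the local copy of the event definition -->
<import importType="edl" namespace="http://example.com/events"
        location="events/SalesOrderEvents.edl"/>

<!-- After (step 3): the same import, now pointing at the shared definition in MDS -->
<import importType="edl" namespace="http://example.com/events"
        location="oramds:/apps/common/events/SalesOrderEvents.edl"/>
```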

Monday 9 January 2017

Domain Driven CDM

How to handle your Canonical Data Model (CDM) in a SOA environment?

As described by my colleague Emiel Paasschens, it can be highly beneficial to use a Canonical Data Model (CDM) when you're going for a SOA approach. To avoid repeating what he already wrote, I refer you to his article if you want to know what a CDM is and how it can be helpful.

However, what that article doesn't describe is how to set up your CDM in a flexible yet reusable way that allows for maximal business flexibility. This article is intended to fill that gap.

Problems occurring with a poorly set up CDM

Messing up your CDM is a great foundation for complete and utter failure of your SOA/integration project, so you might want to avoid this. Having done many different SOA projects over the years, I've experienced different ways of (not) dealing with a CDM with different results. Not dealing with a CDM is out of scope for this article, as I think it belongs more to the microservices domain, so let's talk about different ways to manage the CDM.

1. One CDM to rule them all

If you want to go extremely hardcore on reusability, this is the one for you. You'll have exactly one version of your CDM, which is used by all your services. Obviously, this will lead to a lot of problems, because any breaking change will force you to also change many services and your interfaces will generally not be very precise. For example, I've worked at a governmental department, where they have a "Person" type. This Person type contained all the elements that a Person could possibly have from all kinds of different perspectives, making it rather epic, hard to transform and generally too broad for any interface definition. Of course, you can create different subtypes, but you'll still have the breaking changes issue and once you let a type become too epic, it will be almost impossible to break it down later.

2. Versioned CDM

So, having read point 1, we're going to add some versioning to the CDM. This means that services can use different versions of it, so that breaking changes have less impact. However, those services will need to be upgraded to the latest CDM version sooner or later, and you might also end up having to do a lot of transformations between services, because their CDM versions do not match. So, while this is significantly more manageable than the CDM from point 1, you'll still be perfectly able to run into a mess.

3. Design Time CDM

Having learned the harsh lessons from points 1 and 2, I've been in two projects where they decided that using a runtime CDM is too troublesome, and the CDM was only used at design time. This was done by copying the necessary CDM types to service-specific XSDs, each service having its own namespace, and then allowing some minor tweaks. While this makes your services resistant to CDM changes, it requires a lot of transformations, so adding an optional element can easily lead to quite some work, bringing down productivity dramatically. I think it's too harsh a way of countering the problems that occur with a runtime enterprise CDM.

Solution: domain driven CDM

Since none of the three options mentioned above appealed to me, during a project at a financial company I decided to take a different approach and break down the enterprise-wide CDM. Instead, I created a CDM exclusively for the domain that I was working on. In this case, reusing the Person example, a Person can have different definitions in different domains, because a person from an HR point of view has different attributes than a person from a pension point of view.

So, let each domain have its own definitions and namespace, and you eliminate dependencies between domains. You still have dependencies within the domain, but I don't see this as a bad thing. When dealing with an Employment domain, it's extremely unlikely that you make a change to the Person type that's applicable to some service operations (e.g. UpdateEmployment), but not to others (e.g. GetEmployment). So, things that change for the same reasons should be able to reuse the same types. On top of that, you can also version the Domain Driven CDM to make it more robust. You'll still end up with some transformations between different domains, but when your domain exposes APIs or services in a smart way, this effort should be limited and manageable. Adding an optional element will also be easy in this case, because within the domain you can work with Assigns (BPEL) or Data Associations (BPM) at a higher object level, instead of having to rely on transformations all the time.
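To make the idea concrete, here's a sketch of what two domain-specific Person definitions could look like, each in its own namespace; the namespaces and elements are purely illustrative:

```xml
<!-- Person as the HR domain sees it -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/cdm/hr/v1">
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="JobTitle" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- Person as the Pension domain sees it -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/cdm/pension/v1">
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="DateOfBirth" type="xs:date"/>
      <xs:element name="AccruedBenefit" type="xs:decimal"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```

Within each domain, all services reuse the same type; a change to the Pension view of a Person never ripples into the HR domain.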

General design consideration

Apart from the choices you make in how you handle your CDM, it is strongly recommended to avoid breaking changes as much as possible, since they're always going to be painful. Therefore, think hard about your design standards before you start, and consider for each element within your types whether it might become an array in the future. These things should help you reap the benefits of the CDM without feeling too much of the pain that can come with it.

Wednesday 4 January 2017

What can we learn from the Microservices movement?

If you’re into integration, SOA or web services, you’ve probably heard the term Microservices fairly often lately. Is applying Microservices architecture the one-size-fits-all solution that can replace the traditional one-size-fits-all SOA solution that doesn’t fit anymore? Of course not, because the world isn’t just black and white and both architectural concepts have their pros and cons. However, I think we can learn from the Microservices movement to improve and modernize our traditional SOA systems.


Microservices vs traditional SOA?

As tradition dictates, you can't write an article about Microservices without explaining what Microservices are: think of small, independently deployable services, each responsible for exactly one capability.


So, now that we have a clear understanding of Microservices, let's also get a clear understanding of what I call "traditional SOA". Traditional SOA, according to me, is a monolithic, semi-loosely coupled way of doing web services, where all services are deployed on a single platform (e.g. WebLogic Server), where services have a lot of dependencies, where scaling is an all-or-nothing effort and where most of the services are both stateful and synchronous. Typically, a canonical data model is used and services share data stores. Obviously, there is much more to it, but you don't need me to repeat whatever Thomas Erl and countless other authors have already written.


Flexibility vs Reusability

Quite often, I sense the vibe that traditional SOA is old and irrelevant, while Microservices architecture is the only way forward. This is nonsense. In the end, what it comes down to is that traditional SOA relies heavily on reusability, while Microservices architecture relies heavily on flexibility. So, the main question in your reference architecture should be how much flexibility you need and how much reusability you are willing to give up for it. If you're working on a disruptive new system that needs to adapt to changes rapidly, you have a different use case from traditional ERP or CRM integrations, where changes only occur occasionally and sudden scaling demands are not likely. So, before you go and embrace Microservices, think about what kind of business you're actually in; and even if you have a need for Microservices, you probably won't need them for all your integration efforts. I also strongly recommend using common sense in this matter, because dogmatic belief never leads to anything good. Basically, ask yourself what kind of system your service is for: a system of stability or a system of innovation?


What can traditional SOA learn from Microservices?

Let’s just say that you are not Netflix or Union Pacific and a traditional SOA approach prevails for your needs. Then you should still occasionally re-evaluate your SOA standards. Ideas that may have been great in 2011 might just have reached their expiry date, while new innovations can be analyzed to improve your architecture. I think that the traditional SOA approach can learn from the Microservices movement, because it gives a fresh perspective on things and makes us re-evaluate things we’ve been taking for granted. Even if your SOA implementation is successful and you hardly encounter any problems, establishing that you’re still on the right path can be useful! I’ve been doing SOA for a long time now and in many different projects I’m finding the same problems. How can we learn from Microservices to handle those problems?

Challenge #1: Scaling

Since everything is deployed on the same platform, scaling whatever needs more horsepower means scaling everything. With multitenancy in WebLogic Server 12.2.1, it's possible to create domain partitions to overcome such problems. Try splitting up your SOA layer into different domains of services that scale together, possibly do the same with the underlying database, add work managers to those domains, and you are already winning a lot. For example, I am currently involved in a project that deals with courthouse cases. Some cases are short running and happen often, while others are rarer, but longer running. Putting these into different domains gives you the flexibility to treat them differently, although dealing with underlying reusable services can still be a challenge. Those reusable services could be in their own domain, but you could also go as far as deploying them to multiple domains, or even having two different versions of such a service. Once again, it depends on the balance between flexibility and reusability.

Challenge #2: Performance

"Our services don't perform!" is something that I hear too often. Obviously, here you need to look at the platform too: what's the purging strategy, are we applying patches, is the configuration right? Tons of improvements can be made on that side, but pointing and blaming is no longer cool in the DevOps world, so we also need to look in the mirror: are we doing what we can to make our services perform well? Here we can apply some Microservices patterns: for example, if you are integrating between SaaS applications or between BPM processes and a relational database, do you really need stateful services in between? Think about the minimum of metadata that you need for each service and act on it: maybe use Service Bus instead of a Mediator, look at the use of each component in the service layering and get rid of redundant capturing of state. It will make your services run much faster. Try to be as stateless as you can be without getting into trouble.

Challenge #3: Dependencies

Dependencies are real killers of success. While Microservices are fully responsible for their own well-being, traditional SOA services tend to aggregate, orchestrate and use shared resources. Aggregation is inevitable, but for orchestration there are some nice alternatives. For example, I've worked on a project where we had two BPM layers: one for end-to-end business processes that do orchestration and one for domain-bounded processes that do the actual processing. Instead of letting the top layer do orchestration through web services, you could also make it more loosely coupled through JMS or Events, so the entire building doesn't collapse when one floor burns down. At least consider it.

Another major dependency is the canonical data model. Having a general CDM that affects all services can be extremely inflexible and leads to unclear service definitions. I prefer a domain driven CDM approach (also see Challenge #1): make a CDM for each logical domain and have internal reuse of the objects (same namespace), while between domains there can be differences. For example, I have an Employment domain and a Person domain: in this case, a change in Person without relevance for the Employment domain will have no impact. This has the downside of needing some transformations, but at least you have a reasonable balance between flexibility and reusability. Personally, I don’t believe in taking such an approach for each individual service (hello Microservices), because you probably still want to be able to push through non-breaking changes quickly. Find your own balance in this, but try to avoid extreme measures.

Challenge #4: Purpose

With a Microservice, you always know exactly what it does, because it does exactly one thing. In SOA, we should also embrace such an approach, at least partially. Always try to make a service about exactly one object and make separate services for "get" and transactional operations for the sake of scaling. Logically, aggregations can't always be avoided, but then the aggregation itself should be considered one object. Also try not to make too many variations of basically the same operation: if needed, use a specific, non-reusable connector service for applications that need specific sorting, filtering etc.



Many organizations are struggling with their SOA approach, while not having a clear use case for rebuilding everything in a Microservices architecture. I believe that reading into the subject helps to understand the challenges that come with traditional SOA and I hope that this blog can help you address those challenges, as well as trigger you to put your design patterns under scrutiny on occasion. By gathering inspiration from different architectural approaches, we can already make a difference without fully adopting those approaches.