Deployment automation in XL Deploy

Deployment automation in XL Deploy is great, but do not forget to automate the setup and configuration of your XL Deploy environment as well.

Try to avoid using the user interface and avoid adding entries to the dictionaries manually. Instead, use the API to create all of your infrastructure, environments, and dictionaries. Treat the setup of XL Deploy as code!

Here is a link to their API: XL Deploy REST API
And here are some examples of how you could use it in a bash script.

# Helper functions for accessing the XL Deploy REST API using curl.
# They expect $XLD_BASICAUTH (base64-encoded user:password) and $XLD_SERVER to be set.

# Delete a configuration item: del_ci <ci-id> <ci-type>
del_ci() {
  curl -H "Authorization: Basic $XLD_BASICAUTH" -k -X DELETE -H "Content-type:application/xml" "$XLD_SERVER/deployit/repository/ci/$1" --data "<$2 id=\"$1\"></$2>"
}

# Create a configuration item: add_ci <ci-id> <ci-type> [<xml-body>]
add_ci() {
  curl -H "Authorization: Basic $XLD_BASICAUTH" -k -X POST -H "Content-type:application/xml" "$XLD_SERVER/deployit/repository/ci/$1" --data "<$2 id=\"$1\">$3</$2>"
}

# Update (replace) a configuration item: update_ci <ci-id> <ci-type> <xml-body>
update_ci() {
  curl -H "Authorization: Basic $XLD_BASICAUTH" -k -X PUT -H "Content-type:application/xml" "$XLD_SERVER/deployit/repository/ci/$1" --data "<$2 id=\"$1\">$3</$2>"
}

# Create a configuration item from an XML file: add_ci_from_file <ci-id> <xml-file>
add_ci_from_file() {
  curl -H "Authorization: Basic $XLD_BASICAUTH" -k -X POST -H "Content-type:application/xml" "$XLD_SERVER/deployit/repository/ci/$1" -d@"$2"
}

add_ci Environments/test core.Directory
add_ci Environments/test/test_dict udm.Dictionary "<entries>
    <entry key=\"DATABASE_URL\">$dep_DATABASE_URL</entry>
  </entries>
  <encryptedEntries>
    <entry key=\"DB_PASSWD\">$db_password</entry>
  </encryptedEntries>
  <restrictToContainers/>
  <restrictToApplications/>"

Keeping the dictionary keys in sync with the keys used by specific versions of your deployable archives is very important in order to get reliable and consistent deployment results.
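
For example, when a new version of an archive introduces an extra key, you can script the dictionary change with the same helpers. This is only a minimal sketch: the APP_LOG_LEVEL key is a hypothetical example, and note that a PUT replaces the complete CI, so all existing entries have to be supplied again.

# Hypothetical example: replace the test dictionary with an updated set of entries.
# PUT replaces the whole CI, so the existing entries are repeated as well.
update_ci Environments/test/test_dict udm.Dictionary "<entries>
    <entry key=\"DATABASE_URL\">$dep_DATABASE_URL</entry>
    <entry key=\"APP_LOG_LEVEL\">INFO</entry>
  </entries>
  <encryptedEntries>
    <entry key=\"DB_PASSWD\">$db_password</entry>
  </encryptedEntries>"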

In the end these kinds of deployment robots, like XL Deploy or Nolio, will have to become more and more mature in supporting immutable server concepts, so that code and configuration are exactly the same in all environments.


Building a MongoDB topology using MMS automation

As mentioned in previous blogs, the reference application can use MongoDB.
For this, a MongoDB database needs to be set up. You can do this in a number of ways:

  1. Download the software on your server, start the mongod process and configure your application with the connection details (see the sketch below)
  2. Get MongoDB as a SaaS service from MongoLab or other providers, followed by the same steps
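
A minimal sketch of the first option, assuming the MongoDB binaries are already on the path; the data directory, log path and connection string are just examples.

# Run a standalone mongod and point the application at it (paths are examples).
mkdir -p /data/db
mongod --dbpath /data/db --port 27017 --fork --logpath /var/log/mongod.log
# The application is then configured with a connection string such as:
#   mongodb://dbhost:27017/refapp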

If you decide to build your own database, you can do so by configuring everything manually or by using scripts. However, you can also choose to use MMS as a kind of SaaS solution for managing your MongoDB environment. This can be done on private clouds/networks using your own dedicated MMS installation, or in the public cloud.

The picture below shows such an environment:
MongoDB topology

Using MMS to set up this complex topology is quite simple:

  1. Start by installing the automation agents (see the sketch below). These connect to the mms.mongodb.com environment using shared secrets that have been configured for your account, which makes the agents appear in your account.
  2. Then use the web interface of MMS to install monitoring agents and/or backup agents and everything else that you need: standalone servers, sharded clusters, etc.
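
As an illustration, installing an automation agent on a Debian/Ubuntu server could look roughly like the sketch below. The download URL, package name, configuration file location and property names depend on your MMS version, so treat them as assumptions and follow the exact instructions shown in your own MMS account.

# Rough sketch of installing the MMS automation agent (details vary per MMS version).
curl -OL https://mms.mongodb.com/download/agent/automation/mongodb-mms-automation-agent-manager_latest_amd64.deb
sudo dpkg -i mongodb-mms-automation-agent-manager_latest_amd64.deb

# Fill in the shared secrets (group id and API key) from your MMS account.
sudo sed -i "s/^mmsGroupId=.*/mmsGroupId=$MMS_GROUP_ID/" /etc/mongodb-mms/automation-agent.config
sudo sed -i "s/^mmsApiKey=.*/mmsApiKey=$MMS_API_KEY/" /etc/mongodb-mms/automation-agent.config

sudo service mongodb-mms-automation-agent start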

The automation agents will automate all the deployment and installation tasks needed, such as:

  • downloading software of the desired version
  • upgrading versions of the agents and databases
  • configuring security
  • creating & changing clusters

Without MMS, I would have needed much more time to set up such a cluster. The alternative is to go SaaS all the way, where you no longer care about how and where it is installed. In that case the MongoLab offerings, or the MongoLab service within Bluemix, are a good choice as well.

For now I think that using MMS in combination with your own infrastructure is a very good choice. And using a private MMS would be even better from a security and trust/privacy point of view.

Application landscape in the Bluemix cloud

The reference application that I am building is a Java EE application that runs on JBoss, WebSphere Liberty and the full WebSphere Application Server, and can be run on local Windows or Mac laptops, on a Raspberry Pi, in Docker or on the IBM Bluemix cloud.

The picture below shows the landscape of the reference application in the Bluemix cloud.

Bluemix topology

In Bluemix you can choose the region in which you want to host your application, e.g. United Kingdom or US South. Within each region you then have the opportunity to define spaces, such as dev, test and prod. Each space then consists of your application and its bound services.
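
With the Cloud Foundry CLI, targeting a region and creating spaces could look like this sketch. The US South API endpoint was api.ng.bluemix.net at the time of writing; the organisation and space names are assumptions.

# Point the cf CLI at the US South region and set up a dev space (names are examples).
cf api https://api.ng.bluemix.net
cf login -o my-org
cf create-space dev
cf target -o my-org -s dev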

Bluemix also provides the opportunity to deploy your application in multiple ways: as a Cloud Foundry app, as a Docker container or as a virtual machine.

The reference application is deployed as a Cloud Foundry app on a WebSphere Liberty instance. It is bound to several services: the Single Sign On service, a MongoDB service from MongoLab and a MySQL service from ClearDB.
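
Deploying and binding such an application can be scripted with the cf CLI. This is only an illustrative sketch; the application name, artifact path and service plan names are assumptions, not taken from the actual project.

# Push the application and bind the backing services (names and plans are examples).
cf push reference-app -p target/reference-app.war

cf create-service mongolab sandbox reference-mongodb
cf create-service cleardb spark reference-mysql
cf bind-service reference-app reference-mongodb
cf bind-service reference-app reference-mysql

# Restage so the newly bound service credentials become visible to the application.
cf restage reference-app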

Currently, not all services are available in each region; this depends on the overall state of such a service. The Single Sign On service, which offers OAuth or OpenID integration, was initially only available in the US South region. This service can be used to provide authentication functionality to your application. Your application then needs to provide authorization based on the user id from the authentication system.

The reference application is aware of the authentication system. That is, it knows whether standard Java EE authentication with LDAP user registries in the Java EE container is being used, or OAuth.

All service information is available in Cloud Foundry environment variables (VCAP_SERVICES) as well as being defined as Java EE resources (a MySQL datasource and a MongoDB Liberty database connection pool) in the Liberty server configuration.
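
You can inspect what the bound services expose with the cf CLI (the application name is again an assumption):

# Show the VCAP_SERVICES credentials that Bluemix injects for the bound services.
cf env reference-app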

The code of the application can be deployed locally from Eclipse or another development tool, or from a build pipeline configured in the IBM DevOps Services environment. This environment is fully integrated with Bluemix, Git and other tools, and can be configured in such a way that an application is automatically built and deployed to one or more environments in Bluemix.

Managed web service clients

One of the application-server-specific things is the use of managed web service clients, especially since you will always want to configure the location of your service endpoints per environment.

WebSphere Liberty and the full WebSphere Application Server each do this in their own way.

Let's start with the development of a managed web service client. Start with a WSDL and generate the client code. Then use @WebServiceRef to link your servlet or EJB code to the generated service client.

@Singleton
@Path("/payments")
@DeclareRoles({ "BANKADMIN", "BANKUSER" })
public class ExpenseService {

	// The container injects a managed web service client for the referenced service.
	@WebServiceRef(name = "ws_PaymentWebService", value = PaymentWebService.class)
	private PaymentInterface service;

When you do not provide any more information, the endpoint address is determined from the WSDL that is accessible to the client.

You can override this for WebSphere with the ibm-webservicesclient-bnd.xmi file, and for WebSphere Liberty with the ibm-ws-bnd.xml file.

ibm-webservicesclient-bnd.xmi for WebSphere Application Server

<?xml version="1.0" encoding="UTF-8"?>
<com.ibm.etools.webservice.wscbnd:ClientBinding xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:com.ibm.etools.webservice.wscbnd="http://www.ibm.com/websphere/appserver/schemas/5.0.2/wscbnd.xmi" xmi:id="ClientBinding_1427118946547">

  <serviceRefs xmi:id="ServiceRef_1427119116658" serviceRefLink="ws_PaymentWebService">
    <portQnameBindings xmi:id="PortQnameBinding_1427119116658" portQnameNamespaceLink="http://soap.zubcevic.com/" portQnameLocalNameLink="PaymentWebServicePort" overriddenEndpointURI="https://localhost:9443/accountservice/PaymentWebService"/>
  </serviceRefs>

</com.ibm.etools.webservice.wscbnd:ClientBinding>

Once you have added this binding file, you can add instructions during the deployment process to override the actual timeout and endpoint values for a particular environment. This is done using additional install parameters in Jython/wsadmin or, for example, in XL Deploy:

<was.War name="accountservice" groupId="com.zubcevic.accounting"
         artifactId="accountservice">
  <contextRoot>accountservice</contextRoot>
  <preCompileJsps>false</preCompileJsps>
  <startingWeight>1</startingWeight>
  <additionalInstallFlags>
      <value>-WebServicesClientBindPortInfo [['.*'  '.*' '.*'  '.*' 30 '' '' '' 'https://myserver1/accountservice/PaymentWebService']]</value>
  </additionalInstallFlags>
</was.War>

ibm-ws-bnd.xml for WebSphere Liberty

<?xml version="1.0" encoding="UTF-8"?>
<webservices-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
		xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-ws-bnd_1_0.xsd"
		version="1.0">
	<service-ref name="ws_PaymentWebService" wsdl-location="WEB-INF/wsdl/PaymentWebService.wsdl">
		<port name="PaymentWebServicePort" namespace="http://soap.zubcevic.com/"
				address="https://localhost:9443/accountservice/PaymentWebService" username="admin" password="password"/>
	</service-ref>

</webservices-bnd>

Some may say that you could make things easier by doing it yourself (an unmanaged service client) and reading the endpoint configuration from a property file. But this becomes more and more difficult when you want additional configuration like SSL transport security settings, basic authentication, WS-Addressing, WS-Security and others.
With managed clients you let the application server arrange this for you, instead of building your own application server capabilities into your application.
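
For comparison, the unmanaged approach could look roughly like the sketch below. The generated getPaymentWebServicePort() method and the payment.endpoint system property are assumptions used only for illustration.

	// Unmanaged alternative: create the client yourself and override the endpoint
	// per environment (property name and default URL are just examples).
	private PaymentInterface createUnmanagedClient() {
		PaymentWebService ws = new PaymentWebService();
		PaymentInterface port = ws.getPaymentWebServicePort();
		((javax.xml.ws.BindingProvider) port).getRequestContext().put(
				javax.xml.ws.BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
				System.getProperty("payment.endpoint",
						"https://localhost:9443/accountservice/PaymentWebService"));
		return port;
	}

This works, but every extra concern (SSL settings, WS-Security headers, timeouts) ends up as hand-written code, which is exactly what the managed client avoids.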

Platform specific resource binding

One of the most obvious problems of running your application anywhere is determining how your Java EE application can use application server resources.

Always using resource references is the best choice to start with. Not only do you prevent a hard link between application and runtime configuration, but direct JNDI namespace bindings can typically cause problems.
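
For example, the application itself only declares a logical resource reference in its web.xml; the reference name below is a hypothetical one.

<!-- Logical datasource reference; binding files map it to a concrete resource. -->
<resource-ref>
    <res-ref-name>jdbc/ExpenseDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>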

Binding deployment descriptors are a way to facilitate easy deployment in your IDE environment. JBoss as well as GlassFish and WebSphere make use of these binding files. This also implies that if you want to develop applications on both JBoss and WebSphere, you will need to supply both sets of binding files.

So that's what I did to get my JDBC datasources up and running for both JBoss and Liberty.

In order to get it working on the Bluemix cloud, I had to set up the IBM-specific binding files to use the resources as Bluemix provides them.

In IBM Bluemix, you can deploy applications in containers, in VMs, or as Liberty profiles on Cloud Foundry.

I use the latter approach. When you bind a ClearDB MySQL service, the name of the service will be bound to jdbc/servicename in the Liberty profile.

All information needed to connect is already available and configured in that Liberty instance based on the bound service. You only need to bind your application's resource reference to that specific resource.
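
In a Liberty ibm-web-bnd.xml this could look like the sketch below, assuming the hypothetical jdbc/ExpenseDB reference from earlier and a bound ClearDB service named reference-mysql.

<?xml version="1.0" encoding="UTF-8"?>
<web-bnd xmlns="http://websphere.ibm.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://websphere.ibm.com/xml/ns/javaee http://websphere.ibm.com/xml/ns/javaee/ibm-web-bnd_1_0.xsd"
         version="1.0">
    <!-- Map the application's logical reference to the datasource created for the bound service. -->
    <resource-ref name="jdbc/ExpenseDB" binding-name="jdbc/reference-mysql"/>
</web-bnd>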

That’s a Java EE app using a real datasource which is bound dynamically to your database as a service.