Friday, December 21, 2012

Elasticsearch plugin to add custom REST HTTP handlers


This post demonstrates how to extend Elasticsearch via its plugin model, using the example of a simple REST plugin that adds new functionality and can be invoked by POSTing data to a URL.

Let's start with the folder structure of an Elasticsearch plugin.
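A typical layout looks something like this (a sketch; the org/example/plugin package name is illustrative):

    custom-http-handler
        pom.xml
        src/main/assemblies/plugin.xml
        src/main/java/org/example/plugin/CustomRestHandlerPlugin.java
        src/main/java/org/example/plugin/RestRegisterAction.java
        src/main/resources/es-plugin.properties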

It's a simple Maven module with sources and resources.

es-plugin.properties - When an Elasticsearch instance starts, it loads all the built-in modules, the plugin module being the first. The plugin module's job is to load and initialize the plugins available in the plugins directory. This properties file specifies the custom plugin class to be loaded, which does the rest of the job: creating new modules, extending existing modules by adding new components, and so on.

In this example the es-plugin.properties content looks like this:
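
    # Fully-qualified name of the plugin class (the package is illustrative)
    plugin=org.example.plugin.CustomRestHandlerPlugin
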
CustomRestHandlerPlugin - This class extends Elasticsearch's AbstractPlugin and is the place where any new modules or components of the custom plugin have to be initialized. Here goes the code for this example.
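(A sketch: the package name, plugin name and description string are illustrative, and the API names follow the 0.20-era plugin API.)

    package org.example.plugin;

    import org.elasticsearch.plugins.AbstractPlugin;
    import org.elasticsearch.rest.RestModule;

    public class CustomRestHandlerPlugin extends AbstractPlugin {

        // Should be unique amongst all the plugins on the server
        @Override
        public String name() {
            return "custom-http-handler";
        }

        @Override
        public String description() {
            return "Adds a custom REST handler that serves POST /register";
        }

        // Called by the plugin module while loading plugins; the parameter
        // type decides which module instance gets passed in.
        public void onModule(RestModule restModule) {
            restModule.addRestAction(RestRegisterAction.class);
        }
    }
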
Override the name() and description() methods with details that suit the plugin's role. The name should be unique amongst the other plugins on the Elasticsearch server.

The important piece of this class is the onModule(...) method, which is invoked by the plugin module when loading custom plugins. This is the place where any existing module can be extended.

This example adds a new REST HTTP handler, which requires extending the REST module; that is why the onModule() method takes a RestModule parameter. The parameter can be of any type that extends Module, and when invoking this method the plugin module takes care of passing in the right module instance for you to extend.

In this case we are registering a new REST action via the RestModule. We will get into what that REST action looks like in a bit.

RestRegisterAction - Extends BaseRestHandler, whose job is to register new handlers to serve a given HTTP URL.
Notice the constructor annotated with @Inject: Elasticsearch takes care of injecting the required types into your class, as long as they are bound into the IoC container by either the Elasticsearch server or any other module during initialization. Most Elasticsearch components should be bound in the IoC container by the time plugins are loaded (as far as I can tell).

Once you have references to the required components via the constructor, you can perform whatever operations you need on them.

In our example, all we need is a RestController, to which we add a new handler for a new URL; here the URL is /register and the handler is the same class, which requires you to implement the handleRequest(RestRequest request, RestChannel channel) method.

When a client makes an HTTP POST request to the Elasticsearch server at the /register path, this method is invoked by the RestController. As you would have guessed, you get access to all the HTTP request attributes, and the RestChannel gives you a way to respond to the client by writing response content and headers. Through the injected Client instance you can interact with Elasticsearch for all sorts of operations such as indexing, searching, or even administration tasks.
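Here is a sketch of what RestRegisterAction might look like (the package, log message and response text are illustrative; API names are from the 0.20-era releases):

    package org.example.plugin;

    import org.elasticsearch.client.Client;
    import org.elasticsearch.common.inject.Inject;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.rest.BaseRestHandler;
    import org.elasticsearch.rest.RestChannel;
    import org.elasticsearch.rest.RestController;
    import org.elasticsearch.rest.RestRequest;
    import org.elasticsearch.rest.RestStatus;
    import org.elasticsearch.rest.StringRestResponse;

    public class RestRegisterAction extends BaseRestHandler {

        @Inject
        public RestRegisterAction(Settings settings, Client client, RestController controller) {
            super(settings, client);
            // Register this class as the handler for POST /register
            controller.registerHandler(RestRequest.Method.POST, "/register", this);
        }

        @Override
        public void handleRequest(RestRequest request, RestChannel channel) {
            // The raw body the client POSTed to /register
            String body = request.content().toUtf8();
            logger.info("register request received: {}", body);

            // The inherited 'client' field is available here for indexing,
            // searching or admin operations if needed.
            channel.sendResponse(new StringRestResponse(RestStatus.OK, "registered"));
        }
    }
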

plugin.xml - An assembly descriptor for the Maven assembly plugin, which packages this plugin for deployment onto the Elasticsearch server. The contents of this descriptor and the Maven configuration for the assembly plugin are shown below.
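(A sketch of a typical descriptor; the exclusion list may vary for your project.)

    <?xml version="1.0"?>
    <assembly>
        <id>plugin</id>
        <formats>
            <format>zip</format>
        </formats>
        <includeBaseDirectory>false</includeBaseDirectory>
        <dependencySets>
            <dependencySet>
                <outputDirectory>/</outputDirectory>
                <useProjectArtifact>true</useProjectArtifact>
                <!-- Don't bundle elasticsearch itself; the server provides it -->
                <excludes>
                    <exclude>org.elasticsearch:elasticsearch</exclude>
                </excludes>
            </dependencySet>
        </dependencySets>
    </assembly>
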
This descriptor instructs Maven, via the assembly plugin, how to package the artifact; the assembly plugin itself is configured in the pom.
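(A sketch of that configuration; the descriptor path and output directory are illustrative, chosen to match the artifact location mentioned below.)

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
            <appendAssemblyId>false</appendAssemblyId>
            <outputDirectory>${project.build.directory}/releases/</outputDirectory>
            <descriptors>
                <descriptor>src/main/assemblies/plugin.xml</descriptor>
            </descriptors>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>single</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
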
Once this is configured, packaging the plugin is as simple as running the Maven package command:
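
$ mvn package
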
The plugin is created as a zip artifact, custom-http-handler-0.1-SNAPSHOT.zip, in the {project.dir}/target/releases directory.

To install the plugin onto your Elasticsearch server, run the plugin command with the plugin artifact specified, as shown below. Replace {project.dir} with the root directory path of your plugin codebase.
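For example (flags as used by the 0.20-era plugin script):

$ bin/plugin -url file://{project.dir}/target/releases/custom-http-handler-0.1-SNAPSHOT.zip -install custom-http-handler
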
After installing the plugin, restart the Elasticsearch server to load it, then test the plugin using any HTTP command-line tool like curl by posting data to http://<host>:9200/register and verifying the response and the server's log.
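For example, assuming the server runs locally and posting an arbitrary JSON payload:

$ curl -XPOST 'http://localhost:9200/register' -d '{"name": "john"}'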

Hope this helps; I had to post this as there is hardly any documentation around creating custom plugins for Elasticsearch. Watch this space for posts on building a river plugin and more.

Thursday, August 23, 2012

Running applications that use third-party HTTP services without having to build stubs

While building applications that integrate with third-party systems via HTTP services, the usual questions that arise are: how do you run your application in your own environment without having to install the third-party systems, and how do you test your system against the different behaviors those third-party services expose? The first solution that comes to mind for this kind of problem is to build some stubs.

Is there a better way? Can we avoid having to write stubs for every such application that uses lots of HTTP services?

Yes, there is a simpler way: try fakerest. Fakerest is a Sinatra-based HTTP stub server built using Ruby. All you have to do is write a YAML configuration with the HTTP request URLs, response code, content type, the file that contains the response body, and so on. A sample configuration file looks like:
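(An illustrative sketch; apart from content_file, which is described below, the exact field names may differ, so check the fakerest GitHub page for the real schema.)

    - url: /users/1
      method: get
      response_code: 200
      content_type: application/json
      content_file: responses/user.json

    - url: /users
      method: post
      response_code: 201
      content_type: application/json
      content_file: responses/user_created.json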


To use this, install the fakerest gem and start it by running $ fakerest from the command prompt. As you can see from the configuration file above, you can have as many services as you like just by adding a new entry with new request and response details.

The value of the content_file field in the configuration is the file from which the HTTP response is loaded and served to the client. Fakerest supports all HTTP methods: GET, POST, PUT, HEAD and more. You can change the behavior of the HTTP services simply by restarting the fakerest server with a different configuration file (this is quite handy for testing).

All this is fine, but how do I verify what request is being posted to the HTTP service?

Fakerest has an answer for this as well: all requests made to fakerest are captured. Just point your browser at http://host:port/requests/5 to see the last 5 requests; change the number to see as many recent requests as you need.

It's not just simple POST and GET: with fakerest you can also upload files and verify what the file looks like on the server using the same requests URL mentioned above.

You can even change the HTTP response status code to verify how your application behaves with the different status codes a third-party system might return. The same is true for the content type.

For more details on how to configure and run it, go to the fakerest GitHub page.

Tuesday, August 7, 2012

Collaboration between agile and waterfall teams

This post shares my experience on a project where multiple vendors were involved and there was a lot of integration between the modules being built by the different vendors.

The interesting thing is the working methodology of each of these vendors: one vendor uses agile methodologies, while the other vendor's software-building process is a complete waterfall model.

This is how it all started: we had an inception with the clients to begin the project, with a week of intense discussions on what the project is all about, what features they need, the technical components involved, prioritization, some high-level flow diagrams and of course high-level estimates (the one thing I hate the most). We documented everything that came out of the inception, got it reviewed and signed off by all parties involved, and then got started with development.

As this project has a lot of integration with modules being built by the other vendor (at the same time), both vendors went with the obvious approach: building stubs that simulate the other vendor's modules, and finishing development against those.

Sounds good; what is the problem here? Everything seems to be going as planned.

Well, not really. To give a bit more context on what happened in the inception around the technical areas:

1. The teams identified all the components involved in building this.
2. Came up with flow/sequence diagrams representing the user functional flow.
3. Agreed that integration between the modules would be over HTTP.
4. Produced a draft version (remember the word draft here) of the API documentation for all the modules to be integrated, irrespective of which vendor was building them.
5. And so on..

Both vendors (agile and waterfall) took away these artifacts and started development. For the agile vendor, the artifacts from the inception were just supporting material from the initial discussions with the clients, good enough to get the project started. But for the waterfall vendor, these artifacts became the specification for the project.

The agile vendor, as you all know, prefers to design as you go: while developing stories, the developers revisited the API contracts and changed them based on what was the best design choice. When this was communicated to the other vendor, they went crazy over it; the waterfall vendor was just so reluctant to agree to any changes to the API, no matter whether they were for good design or not.

For the waterfall vendor, whatever was discussed and agreed in the inception (where just a few hours were dedicated to discussing the technical approach) became the specification for the project. I wonder how a few hours of discussion can be good enough for a vendor to freeze the design.

Let's say it was good enough. If so, is this the agile vendor's mistake? Why couldn't they just develop as per the technical artifacts that came out of the inception? Why did they have to change things while developing? Maybe that's because they are agile :)

To me it doesn't seem to be a problem with either of the vendors. The waterfall vendor prefers to design upfront, spec it out, and then develop; any change to that becomes a change request. For the agile vendor, the initial discussions are just a guideline and a basis for getting started, and they prefer to evolve the design as they go.

Now the biggest question is how to bridge these two vendors. Is it fair to expect the agile vendor to do upfront design just because they are collaborating with a waterfall vendor? Or should the waterfall vendor be more agile while partnering with an agile vendor?

In the end, it's the customer who is losing money and product quality in between. It's still puzzling me!


Friday, July 13, 2012

Jenkins Growl Notifier

If you are looking for a desktop Growl notifier for your build runs on Jenkins, you are in the right place.

This does exactly that: at regular intervals it checks whether any of the configured jobs have run, and if so it figures out the status and change set and notifies you via Growl.

Installation

Prerequisites

The Growl notifier command-line tool has to be installed and available on the PATH.

Installing from rubygems

$ gem install jenkinsgrowler

Building from source

$ git clone https://github.com/katta/jenkinsgrowler.git
$ cd jenkinsgrowler
$ rake
$ gem install pkg/jenkinsgrowler.gem

Running

Usage: jenkinsgrowler [options]
    -s, --server SERVER_URL          URL of the jenkins server
    -j, --jobs JOBS                  Comma separated jobs names
    -i, --interval INTERVAL          Polling interval in seconds. Default (60 seconds)
    -u, --user USERNAME              Username for basic authentication
    -p, --password PASSWORD          Password for basic authentication
    -t, --timezone TIMEZONE          Server's timezone. Default (+0530)
    -h, --help                       Displays help message

Examples

Provide server url and job names to be monitored

$ jenkinsgrowler -s "http://ci.myhost.com/" -j "GrowlerTest"

In the above example, http://ci.myhost.com/ is the Jenkins continuous integration server URL and GrowlerTest is the job name to be monitored.

You can also monitor more than one job by specifying comma-separated job names:

$ jenkinsgrowler -s "http://ci.myhost.com/" -j "GrowlerTest, Job3"

Basic authentication

If your CI server is protected with basic authentication, you can just pass the credentials to jenkinsgrowler as shown below:

$ jenkinsgrowler -s "http://ci.myhost.com/" -j "GrowlerTest" -u "username" -p "password"

You can fork this on GitHub.

Wednesday, March 7, 2012

Maven and Out of Memory PermGen space

We are working on a Java project, building a web app using standard libraries like Spring, Hibernate, Commons, ActiveMQ etc. As is common practice, we follow TDD and have written a lot of integration tests for the application. After writing more and more ActiveMQ integration tests, Maven started choking while running tests with an OutOfMemoryError: PermGen space. We first thought it was our application code causing the error, then isolated the problem to the integration tests (especially the ActiveMQ tests).

We tried all the suggestions available on the net, like everyone else who faced a similar problem: increasing the PermGen size, and tweaking JVM options like -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled. In spite of setting high memory options, the tests were still running out of PermGen space. We tried profiling the Maven process using YourKit and what not, but with no luck.

The JVM uses 64MB as the default PermGen space, and ActiveMQ (especially if you are using an in-memory broker) needs more than the default. We were aware of this, but were clueless as to why Maven was choking even after setting high-memory JVM options. Finally Google, the developer's best friend, came to the rescue: we figured out that Maven's Surefire plugin, the default plugin that runs tests, spawns a new process to run them. So what? The memory options should have been inherited from the JAVA_OPTS or MAVEN_OPTS environment variables, shouldn't they? That's what we assumed, and we were wrong.

By default, the Surefire plugin does not inherit the parent process's Java options, nor does it take options from environment variables like JAVA_OPTS or MAVEN_OPTS. To set any JVM option for the Surefire plugin, it has to be configured in the Maven pom file.

There is an argLine element that lets you set JVM options for the Surefire plugin. It has to be configured like <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine> under the configuration section of the Surefire plugin. This did the trick.
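
For reference, the relevant pom snippet looks something like this:

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
            <!-- JVM options for the forked test process -->
            <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine>
        </configuration>
    </plugin>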