
Figure 3.1 gives an overview of how Master endpoints enable you to create a failover cluster.

The sample failover cluster consists of two servers, whose endpoint URIs both begin with the prefix, master:FD:jetty:. When the servers start up, they compete to grab a lock on the FD entry in the fabric registry. Whichever server manages to get the lock (in this example, the server listening on port 9090) becomes the master: it registers its URI under the cluster ID, FD, and activates its route. The other servers remain slaves: they are not able to register their URIs under the FD cluster ID, their routes do not get activated, and they keep trying to acquire the lock on the FD registry entry, in case the master server should fail.

The client must be defined using a Fabric endpoint (see Fabric). In this example, when the client route starts, it looks up the ID, FD, to find the master's endpoint URI, and then connects to the master server.

At some point, the master server could fail. When this happens, the following sequence of events occurs:

  1. Now that the master has died, the lock is free again. One of the slaves will succeed in grabbing the lock and become the new master.

  2. The new master registers its URI under the FD cluster ID, replacing the URI of the old master.

  3. The auto-reconnect capability of the Fabric endpoint in the client is activated. The client detects that the master has died, goes back to the fabric registry to obtain the URI of the new master, and then connects to the new master.

To create a failover cluster, all that you have to do is to publish more than one endpoint URI under the same cluster ID, using Master endpoints. Now, when a client looks up that cluster ID, it gets the URI of the currently active server in the cluster (the master). If the original server should fail, the client will automatically go back to the fabric registry to get the URI of the new master and then connect to the new master.
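For instance, two server routes could publish their Jetty endpoints under the same FD cluster ID as follows. This is a minimal sketch: the host, ports, context path, and log endpoints are assumptions for illustration, not taken from the example code.

```xml
<!-- Server 1: publishes its Jetty endpoint under the FD cluster ID -->
<route>
  <from uri="master:FD:jetty:http://0.0.0.0:9090/fd"/>
  <to uri="log:ServerOne"/>
</route>

<!-- Server 2: same FD cluster ID, different port. It remains a slave
     unless it is the one that acquires the lock. -->
<route>
  <from uri="master:FD:jetty:http://0.0.0.0:9191/fd"/>
  <to uri="log:ServerTwo"/>
</route>
```

Only the route belonging to the current lock holder is activated; the other Jetty endpoint stays dormant until failover occurs.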

The servers in the failover cluster have almost the same configuration. Essentially, the only difference between them is that they publish an endpoint URI with a different hostname and/or IP port. Instead of creating a separate OSGi bundle for every single server in the failover cluster, however, it is better to define a template that enables you to specify the host or port using a configuration variable.

Example 3.1 illustrates the template approach to defining servers in a failover cluster, highlighting the relevant parts of the code.

A reference to the org.linkedin.zookeeper.client.IZKClient OSGi service is created using the reference element. This reference is needed, because the Master component implicitly looks for an IZKClient object in the bean registry and uses this object to connect to the underlying fabric.
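In blueprint XML, such a reference might be declared along the following lines (the id value is an assumption; the Master component discovers the service by its interface type):

```xml
<!-- Imports the IZKClient OSGi service into the blueprint bean registry,
     where the Master component looks it up by type -->
<reference id="zkClient"
           interface="org.linkedin.zookeeper.client.IZKClient"
           availability="mandatory"/>
```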

The route starts with a from command that specifies a Master endpoint URI. The Master endpoint registers the given Jetty URI under the FailoverDemo cluster ID, which effectively means that the server joins the FailoverDemo failover cluster.
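A from line of this kind could look as follows (the host, port, and context path here are assumptions for illustration):

```xml
<route id="fabric-server">
  <!-- Registers the Jetty URI under the FailoverDemo cluster ID -->
  <from uri="master:FailoverDemo:jetty:http://0.0.0.0:9090/fabric"/>
  <to uri="log:Request"/>
</route>
```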

This example also illustrates how to use the OSGi blueprint property placeholder. The property placeholder mechanism enables you to read property settings from the OSGi Config Admin service and substitute the properties in the blueprint configuration file. In this example, the property placeholder accesses properties from the masterCamel persistent ID. A persistent ID in the OSGi Config Admin service identifies a collection of related property settings. After initializing the property placeholder, you can access any property values from the masterCamel persistent ID using the syntax, {{PropName}}.
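A blueprint property placeholder tied to the masterCamel persistent ID might be declared like this (the default port value is an assumption; the namespace declarations are the standard Blueprint and Aries blueprint-cm namespaces):

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0">

  <!-- Reads properties from the masterCamel persistent ID in Config Admin -->
  <cm:property-placeholder persistent-id="masterCamel">
    <cm:default-properties>
      <!-- Fallback used when Config Admin supplies no portNumber setting -->
      <cm:property name="portNumber" value="9090"/>
    </cm:default-properties>
  </cm:property-placeholder>

</blueprint>
```

Any {{portNumber}} occurrence elsewhere in the blueprint file is then resolved against this persistent ID.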

The Master endpoint URI exploits the property placeholder mechanism to substitute the value of the Jetty port, {{portNumber}}, at run time. At deploy time, you can specify the value of the portNumber property. For example, if using a custom feature, you could specify the property in the feature definition (see Add OSGi configurations to the feature in Deploying into the Container). Alternatively, you can specify configuration properties when defining deployment profiles in the Fuse Management Console.
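Combining the placeholder with the Master endpoint, the from URI could read as follows (the host and context path are assumptions for illustration):

```xml
<!-- {{portNumber}} is resolved by the property placeholder at run time -->
<from uri="master:FailoverDemo:jetty:http://0.0.0.0:{{portNumber}}/fabric"/>
```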

To look up a URI in the fabric registry, simply specify the fabric endpoint URI with an ID, in the format, fabric:ClusterID. This syntax is used in a producer endpoint (for example, an endpoint that appears in a to DSL command).

Example 3.2 shows a route that implements a HTTP client, where the HTTP endpoint is discovered dynamically at run time, by looking up the specified ID, FailoverDemo, in the fabric registry.
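A minimal client route of this kind might look as follows. The timer endpoint, message body, and log endpoint are assumptions used here to drive and display the request; only the fabric:FailoverDemo producer endpoint is taken from the example.

```xml
<route id="fabric-client">
  <!-- Fire a request every five seconds -->
  <from uri="timer://driver?period=5000"/>
  <setBody>
    <constant>Hello from the client</constant>
  </setBody>
  <!-- Looks up the current master's URI under the FailoverDemo
       cluster ID in the fabric registry, then invokes it -->
  <to uri="fabric:FailoverDemo"/>
  <to uri="log:ClientResponse"/>
</route>
```

If the master fails, the Fabric endpoint reconnects by re-reading the FailoverDemo entry and routing subsequent messages to the new master.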

The client route also needs a reference to the org.linkedin.zookeeper.client.IZKClient OSGi service, which the Fabric component uses to connect to the underlying fabric.

Because the route is implemented in blueprint XML, you would normally add the file containing this code to the src/main/resources/OSGI-INF/blueprint directory of a Maven project.
