Production Checklist

Added by Jon Purdy, last edited by Mark Falco on Dec 08, 2006

Overview

Warning
Deploying Coherence in a production environment is very different from using Coherence in a development environment.

Development environments do not reflect the challenges of a production environment.

Coherence tends to be so simple to use in development that developers do not take the necessary planning steps and precautions when moving an application using Coherence into production. This article is intended to accomplish the following:

  • Create a healthy appreciation for the complexities of deploying production software, particularly large-scale infrastructure software and enterprise applications;
  • Enumerate areas that require planning when deploying Coherence;
  • Define why production awareness should exist for each of those areas;
  • Suggest or require specific approaches and solutions for each of those areas; and
  • Provide a check-list to minimize risk when deploying to production.

Deployment recommendations are available for the following areas:

Network

During development, a Coherence-enabled application on a developer's local machine can accidentally form a cluster with the application running on other developers' machines.

Developers often use and test Coherence locally on their workstations. There are several ways in which they may accomplish this, including: setting the multicast TTL to zero, disconnecting from the network (and using the "loopback" interface), or having each developer use a different multicast address and port from all other developers. If one of these approaches is not used, then multiple developers on the same network will find that Coherence has clustered across different developers' locally running instances of the application; in fact, this happens relatively often and causes confusion when it is not understood by the developers.

Setting the TTL to zero on the command line is very simple: Add the following to the JVM startup parameters:

-Dtangosol.coherence.ttl=0

Starting with Coherence version 3.2, setting the TTL to zero for all developers is also very simple. Edit the tangosol-coherence-override-dev.xml in the coherence.jar file, changing the TTL setting as follows:

<time-to-live system-property="tangosol.coherence.ttl">0</time-to-live>

On some UNIX OSs, including some versions of Linux and Mac OS X, setting the TTL to zero may not be enough to isolate a cluster to a single machine. To be safe, assign a different cluster name for each developer, for example using the developer's email address as the cluster name. If the cluster communication does go across the network to other developer machines, then the different cluster name will cause an error on the node that is attempting to start up.
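
For example, the cluster name can be supplied as a JVM startup parameter (the value shown is a hypothetical developer address):

-Dtangosol.coherence.cluster=jane.doe@example.com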

To ensure that the clusters are completely isolated, select a different multicast IP address and port for each developer. In some organizations, a simple approach is to use the developer's phone extension number as part of the multicast address and as the port number (or some part of it). For information on configuring the multicast IP address and port, see the section on multicast-listener configuration.
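
For example, a developer with phone extension 4417 might add the following JVM startup parameters (the address and port values are hypothetical; each developer's values must be unique):

-Dtangosol.coherence.clusteraddress=224.44.17.1 -Dtangosol.coherence.clusterport=4417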

During development, clustered functionality is often not being tested.

After the POC or prototype stage is complete, and until load testing begins, it is not out of the ordinary for the application to be developed and tested by engineers in a non-clustered form. This is dangerous, as testing primarily in the non-clustered configuration can hide problems with the application architecture and implementation that will show up later in staging – or even production.

Make sure that the application is being tested in a clustered configuration as development proceeds. There are several ways for clustered testing to be a natural part of the development process; for example:

  • Developers can test with a locally clustered configuration (at least two instances running on their own machine). This works well with the TTL=0 setting, since clustering on a single machine works even when the TTL is zero (see the example command line after this list).
  • Unit and regression tests can be introduced that run in a test environment that is clustered. This may help automate certain types of clustered testing that an individual developer would not always remember (or have the time) to do.
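
For example, a developer can start two storage-enabled instances on a single machine by running the following command in two separate terminals (a minimal sketch using the cache server class shipped in coherence.jar):

java -Dtangosol.coherence.ttl=0 -cp coherence.jar com.tangosol.net.DefaultCacheServer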

What is the type and speed of the production network?

Most of the time, production networks are some combination of 100Mb Ethernet, gigabit Ethernet ("GigE") and occasionally ten-gigabit Ethernet. It is important to understand the topology of the production network, and the full set of devices that will connect all of the servers that will be running Coherence. For example, if there are ten different switches being used to connect the servers, are they all the same type (make and model) of switch? Are they all the same speed? Do the servers support the network speeds that are available?

Avoid mixing and matching network speeds: Make sure that all servers can and do connect to the network at the same speed, and that all of the switches and routers between those servers are running at that same speed or faster.

Tangosol strongly suggests GigE or faster: Gigabit Ethernet is supported by most servers built since 2004, and Gigabit switches are economical, available and widely deployed.

Before deploying an application, you must run the Datagram Test to test the actual network speed and determine its capability for pushing large amounts of data. Furthermore, the Datagram test must be run with an increasing ratio of publishers to consumers, since a network that appears fine with a single publisher and a single consumer may completely fall apart as the number of publishers increases, such as occurs with the default configuration of Cisco 6500 series switches.
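
For example, a simple bi-directional run of the Datagram Test between two servers might look like the following (the hostnames and port are placeholders; see the Datagram Test documentation for the full set of options):

On server1:

java -server -cp coherence.jar com.tangosol.net.DatagramTest -local server1:9999 -packetSize 1468 server2:9999

On server2:

java -server -cp coherence.jar com.tangosol.net.DatagramTest -local server2:9999 -packetSize 1468 server1:9999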

Will the production deployment use multicast?

The term "multicast" refers to the ability to send a packet of information from one server and to have that packet delivered in parallel by the network to many servers. Coherence supports both multicast and multicast-free clustering. Tangosol suggests the use of multicast when possible because it is an efficient option for many servers to communicate. However, there are several common reasons why multicast cannot be used:

  • Some organizations disallow the use of multicast.
  • Multicast cannot operate over certain types of network equipment; for example, many WAN routers disallow or do not support multicast traffic.
  • Multicast is occasionally unavailable for technical reasons; for example, some switches do not support multicast traffic.

First determine if multicast will be used. In other words, determine if the desired deployment configuration is to use multicast.

Before deploying an application that will use multicast, you must run the Multicast Test to verify that multicast is working and to determine the correct (the minimum) TTL value for the production environment.
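
For example, the Multicast Test can be run on each machine as follows (the TTL value shown is illustrative; see the Multicast Test documentation for the full set of options):

java -cp coherence.jar com.tangosol.net.MulticastTest -ttl 4

At the chosen TTL, each machine should see both its own messages and those published by the other machines.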

Applications that cannot use multicast for deployment must use the WKA configuration. For more information, see the Network Protocols documentation.
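
As a sketch, a WKA configuration in the operational override file designates a list of well-known addresses in place of multicast discovery (the addresses shown are hypothetical; element names per the Coherence 3.2 operational descriptor):

  <coherence>
    <cluster-config>
      <unicast-listener>
        <well-known-addresses>
          <socket-address id="1">
            <address>192.168.1.101</address>
            <port>8088</port>
          </socket-address>
          <socket-address id="2">
            <address>192.168.1.102</address>
            <port>8088</port>
          </socket-address>
        </well-known-addresses>
      </unicast-listener>
    </cluster-config>
  </coherence>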

Are your network devices configured optimally?

If the above datagram and/or multicast tests have failed or returned poor results, it is possible that there are configuration problems with the network devices in use. Even if the tests passed without incident and the results were perfect, it is still possible that there are lurking issues with the configuration of the network devices.

Review the suggestions in the Network Infrastructure Settings section of the Performance Tuning documentation.


Hardware

During development, developers can form unrealistic performance expectations.

Most developers have relatively fast workstations. Combined with test cases that are typically non-clustered and tend to represent single-user access (i.e. only the developer), the application may seem extraordinarily responsive.

Include as a requirement that realistic load tests be built that can be run with simulated concurrent user load.

Test routinely in a clustered configuration with simulated concurrent user load.

During development, developer productivity can be adversely affected by inadequate hardware resources, and certain aspects of software quality can also be affected negatively.

Coherence is compatible with all common workstation hardware. Most developers use PC or Apple hardware, including notebooks, desktops and workstations.

Developer systems should have a significant amount of RAM to run a modern IDE, debugger, application server, database and at least two cluster instances. Memory utilization varies widely, but to ensure productivity, the suggested minimum memory configuration for developer systems is 2GB. Desktop systems and workstations can often be configured with 4GB for minimal additional cost.

Developer systems should have two CPU cores or more. Although this will have the likely side-effect of making developers happier, the actual purpose is to increase the quality of code related to multi-threading, since many bugs related to concurrent execution of multiple threads will only show up on multi-CPU systems (systems that contain multiple processor sockets and/or CPU cores).

What are the supported and suggested server hardware platforms for deploying Coherence on?

The short answer is that Tangosol works to support the hardware that the customer has standardized on or otherwise selected for production deployment.

  • Tangosol has customers running on virtually all major server hardware platforms. The majority of customers use "commodity x86" servers, with a significant number deploying Sun Sparc (including Niagara) and IBM Power servers.
  • Tangosol continually tests Coherence on "commodity x86" servers, both Intel and AMD.
  • Intel, Apple and IBM provide hardware, tuning assistance and testing support to Tangosol.
  • Tangosol conducts internal Coherence certification on all IBM server platforms at least once a year.
  • Tangosol and Azul test Coherence regularly on Azul appliances, including the newly announced 48-core "Vega 2" chip.

If the server hardware purchase is still in the future, the following are suggested for Coherence (as of December 2006):

The most cost-effective server hardware platform is "commodity x86", either Intel or AMD, with one to two processor sockets and two to four CPU cores per processor socket. If selecting an AMD Opteron system, it is strongly recommended that it be a two processor socket system, since memory capacity is usually halved in a single socket system. Intel "Woodcrest" and "Clovertown" Xeons are strongly recommended over the previous Intel Xeon CPUs due to significantly improved 64-bit support, much lower power consumption, much lower heat emission and far better performance. These new Xeons are currently the fastest commodity x86 CPUs, and can support a large memory capacity per server regardless of the processor socket count by using fully buffered memory called "FB-DIMMs".

It is strongly recommended that servers be configured with a minimum of 4GB of RAM. For applications that plan to store massive amounts of data in memory – tens or hundreds of gigabytes, or more – it is recommended to evaluate the cost-effectiveness of 16GB or even 32GB of RAM per server. As of December, 2006, commodity x86 server RAM is readily available in a density of 2GB per DIMM, with higher densities available from only a few vendors and carrying a large price premium; this means that a server with 8 memory slots will only support 16GB in a cost-effective manner. Also note that a server with a very large amount of RAM will likely need to run more Coherence nodes (JVMs) per server in order to utilize that much memory, so having a larger number of CPU cores will help. Applications that are "data heavy" will require a higher ratio of RAM to CPU, while applications that are "processing heavy" will require a lower ratio. For example, it may be sufficient to have two dual-core Xeon CPUs in a 32GB server running 15 Coherence "Cache Server" nodes performing mostly identity-based operations (cache accesses and updates), but if an application makes frequent use of Coherence features such as indexing, parallel queries, entry processors and parallel aggregation, then it will be more effective to have two quad-core Xeon CPUs in a 16GB server – a 4:1 increase in the CPU:RAM ratio.

A minimum of 1000Mbps for networking (e.g. Gigabit Ethernet or better) is strongly recommended. NICs should be on a high bandwidth bus such as PCI-X or PCIe, and not on standard PCI. In the case of PCI-X, having the NIC on an isolated or otherwise lightly loaded 133MHz bus may significantly improve performance.


Operating System

During development, developers typically use a different operating system than the one that the application will be deployed to.

The top three operating systems for application development using Coherence are, in this order: Windows 2000/XP (~85%), Mac OS X (~10%) and Linux (~5%). The top four operating systems for production deployment are, in this order: Linux, Solaris, AIX and Windows. Thus, it is relatively unlikely that the development and deployment operating system will be the same.

Make sure that regular testing is occurring on the target operating system.

What are the supported and suggested server operating systems for deploying Coherence on?

Tangosol tests on and supports various Linux distributions (including customers that have custom Linux builds), Sun Solaris, IBM AIX, Windows 2000/XP/2003, Apple Mac OS X, OS/400 and z/OS. Additionally, Tangosol supports customers running HP-UX and various BSD UNIX distributions.

If the server operating system decision is still in the future, the following are suggested for Coherence (as of December 2006):

For commodity x86 servers, Linux distributions based on the Linux 2.6 kernel are recommended. While it is expected that most 2.6-based Linux distributions will provide a good environment for running Coherence, the following are recommended by Tangosol: RedHat Enterprise Linux (version 4 or later) and Suse Linux Enterprise (version 10 or later). Tangosol also routinely tests using distributions such as RedHat Fedora Core 5 and even Knoppix "Live CD".

Review and follow the instructions in the "Deployment Considerations" document for the operating system that Coherence will be deployed on.

Avoid using virtual memory (paging to disk).

In a Coherence-based application, primary data management responsibilities (e.g. Dedicated Cache Servers) are hosted by Java-based processes. Modern Java distributions do not work well with virtual memory. In particular, garbage collection (GC) operations may slow down by several orders of magnitude if memory is paged to disk. With modern commodity hardware and a modern JVM, a Java process with a reasonable heap size (512MB-2GB) will typically perform a full garbage collection in a few seconds if all of the process memory is in RAM. However, this may grow to many minutes if the JVM is partially resident on disk. During garbage collection, the node will appear unresponsive for an extended period of time, and the choice for the rest of the cluster is to either wait for the node (blocking a portion of application activity for a corresponding amount of time), or to mark the unresponsive node as "failed" and perform failover processing. Neither of these is a good option, and so it is important to avoid excessive pauses due to garbage collection. JVMs should be pinned into physical RAM, or at least configured so that the JVM will not be paged to disk.

Note that periodic processes (such as daily backup programs) may cause memory usage spikes that could cause Coherence JVMs to be paged to disk.
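
On Linux, one quick way to verify that Coherence JVMs are not being paged is to watch the swap columns of vmstat while the application (and any periodic processes such as backups) is running (a sketch; column names vary slightly between platforms):

vmstat 5
# sustained non-zero values in the "si" (swap-in) and "so" (swap-out)
# columns indicate that processes are being paged to disk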


JVM

During development, developers typically use the latest Sun JVM or a direct derivative such as the Mac OS X JVM.

The main issues related to using a different JVM in production are:

  • Command line differences, which may expose problems in shell scripts and batch files;
  • Logging and monitoring differences, which may mean that tools used to analyze logs and monitor live JVMs during development testing may not be available in production;
  • Significant differences in optimal GC configuration and approaches to GC tuning;
  • Differing behaviors in thread scheduling, garbage collection behavior and performance, and the performance of running code.

Make sure that regular testing is occurring on the JVM that will be used in production.

Which JVM configuration options should be used?

JVM configuration options vary over versions and between vendors, but the following are generally suggested (a sample command line follows the list):

  • Using the -server option will result in substantially better performance.
  • Using identical heap size values for both -Xms and -Xmx will yield substantially better performance, as well as "fail fast" memory allocation.
  • For naive tuning, a heap size of 512MB is a good compromise that balances per-JVM overhead and garbage collection performance.
    • Larger heap sizes are allowed and commonly used, but may require tuning to keep garbage collection pauses manageable.
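
For example, a cache server JVM combining these suggestions might be started as follows (a sketch; the classpath and main class will vary by application):

java -server -Xms512m -Xmx512m -cp coherence.jar com.tangosol.net.DefaultCacheServer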

What are the supported and suggested JVMs for deploying Coherence on?

In terms of Tangosol Coherence versions:

  • Coherence version 3.x (currently at the 3.2.1 release level) is supported on the Sun JDK versions 1.4 and 1.5, and JVMs corresponding to those versions of the Sun JDK. Tangosol will provide support for the Sun JDK version 1.6 beginning shortly after it reaches "GA" status.
  • Coherence version 2.x (currently at the 2.5.1 release level) is supported on the Sun JDK versions 1.2, 1.3, 1.4 and 1.5, and JVMs corresponding to those versions of the Sun JDK.

Often the choice of JVM is dictated by other software. For example:

  • IBM only supports IBM WebSphere running on IBM JVMs. Most of the time, this is the IBM "Sovereign" or "J9" JVM, but when WebSphere runs on Sun Solaris/Sparc, IBM builds a JVM using the Sun JVM source code instead of its own.
  • BEA WebLogic typically includes a JVM which is intended to be used with it. On some platforms, this is the BEA WebLogic JRockit JVM.
  • Apple Mac OS X, HP-UX, IBM AIX and other operating systems only have one JVM vendor (Apple, HP and IBM respectively).
  • Certain software libraries and frameworks have minimum Java version requirements because they take advantage of relatively new Java features.

On commodity x86 servers running Linux or Windows, the Sun JVM is recommended. Generally speaking, the recent update versions are recommended. For example:

  • Sun JDK 1.5.0 update 10 is the latest "JDK 1.5" version and thus (absent any regressions) is a recommended JDK, but the bulk of the Coherence 3.2 release test was conducted on the then-latest update 7, so (absent any application-specific JVM-related issues) going to production using update 7 would be just as recommended.
  • Sun JDK 1.4.2 update 13 is the latest "JDK 1.4" version and thus is recommended if deploying on JDK 1.4, but if an application were developed and tested with JDK 1.4.2 update 12 instead, and there were no known JVM-related issues, then going to production using update 12 would be suggested to minimize the risks associated with upgrading one of the production components.

Basically, at some point before going to production, a JVM vendor and version should be selected and well tested, and absent any flaws appearing during testing and staging with that JVM, that should be the JVM that is used when going to production. For applications requiring continuous availability, a long-duration application load test (e.g. at least two weeks) should be run with that JVM before signing off on it.

Review and follow the instructions in the "Deployment Considerations" document for the JVM that Coherence will be deployed on.

Must all nodes run the same JVM vendor and version?

No. Coherence is pure Java software and can run in clusters composed of any combination of JVM vendors and versions, and Tangosol tests such configurations.

Note that it is possible for different JVMs to have slightly different serialization formats for Java objects, meaning that it is possible for an incompatibility to exist when objects are serialized by one JVM, passed over the wire, and a different JVM (vendor and/or version) attempts to deserialize them. Fortunately, the Java serialization format has been very stable for a number of years, so this type of issue is extremely unlikely. However, it is highly recommended to test mixed configurations for consistent serialization prior to deploying in a production environment.


Application Instrumentation

Be cautious when using instrumented management and monitoring solutions.

Some Java-based management and monitoring solutions use instrumentation (e.g. bytecode-manipulation and ClassLoader substitution). While there are no known open issues with the latest versions of the primary vendors, Tangosol has observed issues in the past.

Coherence Licenses

During development, use a development license.

The Coherence download usually includes an evaluation license. This license is only intended for evaluating the software, and is only valid for roughly 30 days. These licenses are full-featured, and allow unlimited use of all product features solely for the purposes of evaluation.

Tangosol provides development licenses for free. These licenses last for one year, and can be renewed for free. For more information, contact your account manager or send a request to the "sales" email address at Tangosol. The Coherence development license may be used for load testing and even production staging!

Once a development license is obtained, replace the license file that is included inside tangosol.jar with the development license file.
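
For example, assuming the license file is named tangosol-license.xml and resides at the root of the jar, it can be replaced from the directory containing the new development license file as follows:

jar -uf tangosol.jar tangosol-license.xml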

It is recommended to use the development license for all pre-production activities, such as development and testing. This is an important safety feature, because Coherence automatically prevents nodes that are using a development license from joining a production cluster.

Production licenses are required to use Coherence in production. These licenses include access to a specific set of product features depending on the license edition, and are generally not time limited.

What are the proper steps for deploying production licenses?

Be sure to obtain enough licenses for the servers in the production environment (including storage-disabled application servers).

If you are transferring a license file via FTP (or a similar method), always use the BINARY mode of transfer for the license file.

The tangosol-license.xml file contains signed data elements, and thus it is important that it be transferred in binary form to prevent any CR/LF-based conversions when moving between Windows and Unix operating systems. If the file is transferred in ASCII mode, some or all of the included licenses may fail the signature check and be deemed invalid.

Starting with Coherence version 3.2, on each production server that has a different hardware configuration (number or type of processor sockets, processor packages or CPU cores), verify that Coherence correctly identifies the hardware configuration.

java -cp tangosol.jar com.tangosol.license.ProcessorInfo

If the result of the ProcessorInfo program differs from the actual configuration, send the program's output and the actual configuration to the "support" email address at Tangosol.

How can production licenses be added to a running cluster?

Starting with Coherence version 3.2, production licenses can be added to a running cluster without interrupting the operation of the cluster.

There is a new configuration section in tangosol-coherence.xml (located in coherence.jar) for license-related information; it is the <license-config> element.

  <license-config>
    <license-path system-property="tangosol.coherence.licensepath"></license-path>
    <edition-name system-property="tangosol.coherence.edition"></edition-name>
    <license-mode system-property="tangosol.coherence.mode"></license-mode>
  </license-config>

These elements are defined by the corresponding coherence.dtd in coherence.jar. It is possible to specify each of these settings on the command line using the command line override feature.
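
For example (a sketch; the license path is hypothetical, and the edition and mode values are described below):

-Dtangosol.coherence.licensepath=/opt/coherence/tangosol-license.xml -Dtangosol.coherence.edition=DGE -Dtangosol.coherence.mode=prod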

As detailed in the documentation for the <license-path> element, Coherence can be configured to load its licenses from a license file located on the file system. This option should be used to easily support adding licenses to a running cluster.

  • Configure a value for <license-path> that indicates the location of the tangosol-license.xml file.
  • Copy the entire new <license> element(s) into your Copy/Paste buffer.
      <license>
        <software>Tangosol Coherence: DataGrid Edition</software>
        ...
        <signature>302D02141818B24EBCBE3B97..3738FB3CC27E9F3CF0</signature>
      </license>
    
  • Open the tangosol-license.xml that is specified by <license-path> and paste the new <license> element(s) inside the <license-document> element, just like the pre-existing <license> elements in the file.
  • Any production licenses added to that file will be incorporated into the running cluster when the next node joins the cluster.

How can the license mode be overridden if there are licenses of multiple license modes (such as development and production) in the license file?

As detailed in the documentation for the <license-mode> element, if a mode is specified, Coherence will use that license mode.

It is possible to specify this setting on the command line using the command line override feature:

-Dtangosol.coherence.mode=dev

How can the license edition be specified on a node by node basis?

The <edition-name> element specifies the product edition that the member will utilize. This allows multiple product editions to be used within the same cluster, with each member specifying the edition that it will be using.

It is fairly common to have to specify a license edition. For example, if a server is connecting as a real-time client of a Coherence Data Grid over TCP/IP, it should specify the "Real Time Client" edition.

It is possible to specify this setting on the command line using the command line override feature:

-Dtangosol.coherence.edition=RTC

Valid values are:

Value   License Edition
DGE     DataGrid Edition
AE      Application Edition
CE      Caching Edition
CC      Compute Client
RTC     Real-Time Client
DC      Data Client
  • Data Clients can connect to DataGrid Edition, Application Edition and Caching Edition clusters.
  • Real-Time Clients and Compute Clients can only attach to Data Grid Edition clusters.

Coherence Operational Configuration

Operational configuration relates to the configuration of Coherence at the cluster level, including such things as cluster and member identity, network settings (the multicast and unicast listeners), and the configuration of the clustered services.

The operational aspects are normally configured via the tangosol-coherence-override.xml file.

The contents of this file will likely differ between development and production. It is recommended that these variants be maintained independently due to the significant differences between these environments. The production operational configuration file should not be the responsibility of the application developers; instead, it should fall under the jurisdiction of the systems administrators, who are far more familiar with the workings of the production systems.

All cluster nodes should utilize the same operational configuration descriptor. A centralized configuration file may be maintained and accessed by specifying the file's location as a URL using the tangosol.coherence.override system property. Any node specific values may be specified via system properties.
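
For example (the URL is hypothetical):

-Dtangosol.coherence.override=http://intranet.example.com/coherence/tangosol-coherence-override.xml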

The override file should contain only the subset of configuration elements which you wish to customize. This will not only make your configuration more readable, but will allow you to take advantage of updated defaults in future Coherence releases. All override elements should be copied exactly from the original tangosol-coherence.xml, including the id attribute of the element.

Member descriptors may be used to provide detailed identity information that is useful for defining the location and role of the cluster member. Specifying these items will aid in the management of large clusters by making it easier to identify the role of a remote node if issues arise.
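
As a sketch, a member descriptor in the operational override might look like the following (the values are hypothetical; the system properties shown allow node-specific overrides):

  <coherence>
    <cluster-config>
      <member-identity>
        <site-name system-property="tangosol.coherence.site">datacenter-east</site-name>
        <rack-name system-property="tangosol.coherence.rack">rack-07</rack-name>
        <machine-name system-property="tangosol.coherence.machine">prod-cache-01</machine-name>
        <role-name system-property="tangosol.coherence.role">CacheServer</role-name>
      </member-identity>
    </cluster-config>
  </coherence>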

Coherence Cache Configuration

Cache configuration relates to the configuration of Coherence at a per-cache level, including such things as cache topology, backing map and eviction settings, and cache store (persistence) configuration.

The cache configuration aspects are normally configured via the coherence-cache-config.xml file.

The default coherence-cache-config.xml file included within coherence.jar is intended only as an example and is not suitable for production use. It is suggested that you produce your own cache configuration file with definitions tailored to your application needs.

All cluster nodes should utilize the same cache configuration descriptor. A centralized configuration file may be maintained and accessed by specifying the file's location as a URL using the tangosol.coherence.cacheconfig system property.
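
For example (the URL is hypothetical):

-Dtangosol.coherence.cacheconfig=http://intranet.example.com/coherence/coherence-cache-config.xml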

Choose the cache topology which is most appropriate for each cache's usage scenario.

It is important to size limit your caches based on the allocated JVM heap size. Even if you never expect to fully load the cache, having the limits in place will help protect your application from OutOfMemoryErrors if your expectations prove wrong.

For a 1GB heap, it is recommended that at most ¾ of the heap be allocated for cache storage. With the default one level of data redundancy, this implies a per-server cache limit of 375MB for primary data and 375MB for backup data. The amount of memory allocated to cache storage should fit within the tenured heap space for the JVM. See Sun's GC tuning guide for details.
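
As a sketch, the 375MB primary-storage limit from the example above might be expressed in a cache scheme as follows (assuming the BINARY unit calculator, which measures units in bytes; 375MB = 393,216,000 bytes, and the scheme name is hypothetical):

  <distributed-scheme>
    <scheme-name>example-limited</scheme-name>
    <service-name>DistributedCache</service-name>
    <backing-map-scheme>
      <local-scheme>
        <eviction-policy>LRU</eviction-policy>
        <high-units>393216000</high-units>
        <unit-calculator>BINARY</unit-calculator>
      </local-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>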

It is important to note that when multiple cache schemes are defined for the same cache service name, the first to be loaded will dictate the service-level parameters. Specifically, the partition-count, backup-count and thread-count are shared by all caches of the same service.

For multiple caches which use the same cache service, it is recommended that the service-related elements be defined only once and inherited by the various cache schemes which will use them (see the sketch below).
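
As a sketch, the shared service parameters can be defined in one base scheme and inherited via the scheme-ref element (the scheme names are hypothetical):

  <distributed-scheme>
    <scheme-name>base-distributed</scheme-name>
    <service-name>DistributedCache</service-name>
    <thread-count>8</thread-count>
    <backing-map-scheme>
      <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>

  <distributed-scheme>
    <scheme-name>orders-scheme</scheme-name>
    <scheme-ref>base-distributed</scheme-ref>
  </distributed-scheme>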

If you desire different values for these items on a cache by cache basis then multiple services may be configured.

For partitioned caches, Coherence will evenly distribute the storage responsibilities to all cache servers, regardless of their cache configuration or heap size. For this reason, it is recommended that all cache server processes be configured with the same heap size. For machines with additional resources, multiple cache servers may be used to make effective use of the machine's resources.

To ensure even storage responsibility across a partitioned cache, the partition-count should be set to a prime number which is at least the square of the number of cache servers which will be used. For example, with 20 cache servers the partition-count should be a prime of at least 400, such as 401.

For caches which are backed by a cache store, it is recommended that the parent service be configured with a thread pool, as requests to the cache store may block on I/O. The pool is enabled via the thread-count element. For non-CacheStore-based caches, additional threads are unlikely to improve performance and the pool should be left disabled.

Unless explicitly specified, all cluster nodes are storage enabled, i.e. they act as cache servers. It is important to control which nodes in your production environment are storage enabled and which are storage disabled. The tangosol.coherence.distributed.localstorage system property may be used to control this, setting it to either true or false. Generally, only dedicated cache servers should be storage enabled; all other cluster nodes should be configured as storage disabled. This is especially important for short-lived processes which may join the cluster, perform some work, and then exit the cluster; if such nodes are storage enabled, their joining and leaving will introduce unneeded repartitioning.
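
For example, to start a storage-disabled node:

-Dtangosol.coherence.distributed.localstorage=false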

Other Resources