Thursday, August 21, 2014

Application server, what is it?

What is an App Server?

It's worth reconsidering what the definition of an application server is, really, to make sure that the term means the same thing to everyone who uses it - and everyone does use it, often meaning completely different things. So let us ask the question: what is an application server? Which attributes from the following list are part of an application server?
  • Serves up web pages
  • Provides a container model for applications
  • Provides services for applications
  • Adheres to a specification controlled by industry
  • Distributes requests across multiple physical servers
  • Provides management and/or development tools
Chances are good that the average reader thinks a Java application server basically provides an implementation of the servlet specification, probably an implementation of JavaServer Pages, and perhaps some more services like database connection pooling.[1] An application server is both more and less than that at the same time: an application server provides an environment where applications can run, no matter what the applications are or what they do.
The real definition of an application server in the Java EE world is more than servlets, JSP, and database connection pooling. These are just a few pieces of the Java EE container model,[2] where an application is split into a server portion and a client portion.[3] The server itself is composed of different containers, each providing different services to an application.
There are lots of service containers! There's the servlet container that presents the front-end user interface,[4] the Enterprise JavaBean container that (presumably) manages business logic, a naming and directory interface, a message service, an adapter service that allows access to non-Java or other non-managed services, a security container... the list goes on.[5]
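To make the servlet-container piece concrete, here is a minimal sketch of the kind of front-end component the web container hosts. The class name and output are purely illustrative, and the servlet would still need a URL mapping (in web.xml or via a @WebServlet annotation) before the container would route requests to it.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: the container manages its lifecycle and hands it requests.
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/html");
            resp.getWriter().println("<p>Hello from the servlet container</p>");
        }
    }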
Since Java EE is designed to be able to handle large, complex applications, the container model tends to have a built-in level of complexity that can be daunting, to say the least, and historically was challenging even to advanced developers.
Most applications only use one or two parts of an application server, usually zeroing in on servlets and database connectivity; frameworks like Spring sprang up to make much of the container model unnecessary. There were costs to this, in that the applications ended up managing their own resources, but that was all right for most cases[6]... and the dependency injection frameworks restored so much simplicity to the development model that they made up for any weaknesses they may have had. (Most criticisms of the J2EE development model centered around the complexity of configuration and deployment.)
The result of applications using few services in an application server is that the definition of an application server becomes simpler. To a J2EE traditionalist, someone referring to Tomcat as their application server used to be a horror. Now, though, with simpler development models, Tomcat and Jetty are fully recognizable as "application servers" - even while not meeting the heavier requirements of the full specification.[7]
The problem with the Java EE specification and the notion of profiles is that it's still tied to the concept of request/response. A browser or client application sends a request, via HTTP, or CORBA, or whatever transport is chosen; the server then takes that request and farms it out to an installed module, which responds (presumably over the same transport).
Is that a big problem? Of course not. Chances are that 99% (or more!) of the applications in the world fit that model. That may be because one chooses how to build based on the tools at hand - you don't use a screwdriver when all you have is a nail - but the fact remains that in general, the request/response model, using HTTP as a transport, is good enough for most applications. (If it weren't, people wouldn't continue to use the web as an application delivery mechanism.)
There are, however, applications or tasks for which request/response isn't enough.[8] The Java EE specification has started to acknowledge this through the definition of EJB timers, which can themselves initiate processes after time lapses.
In fact, EJB timers are a huge shift in mindset for Java EE.
J2EE - the prior name for the specification - allowed for components (through the J2EE Connector Architecture specification[9]) which were able to perform all kinds of tricks, such as creating threads (which could, of course, spawn events), accessing filesystems,[10] throttling external requests, or polling mail servers.[11]
With EJB timers, however, the need to spawn threads to watch for time lapses goes away. This doesn't get rid of the need for JCA components, but it goes a long way to handling the most common need that a JCA component would fulfill.
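As a rough illustration of what that looks like in code, here is a minimal sketch of a calendar-based EJB timer using the @Schedule annotation from EJB 3.1; the bean name, schedule, and task are hypothetical. The container fires the method on schedule, with no application-managed threads involved.

    import javax.ejb.Schedule;
    import javax.ejb.Singleton;

    // Hypothetical singleton bean whose timer the container manages.
    @Singleton
    public class NightlyReportBean {

        // Runs every day at 02:00; the container handles the time lapse.
        @Schedule(hour = "2", minute = "0", persistent = false)
        public void generateReport() {
            System.out.println("Timer fired; running the nightly batch job.");
        }
    }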
However, while EJB timers and JCA components are great enablers, and it's possible to scale up enormously with Java EE through the use of a good JMS container, the fact remains that the Java EE specification is only going to serve a wide middle ground of applications. It's sufficient for most applications, to be sure!
However, for some applications, even servlet engines are heavy - and for applications that need to run in realtime or need to scale up to handle incredible loads, Java EE has not yet truly been able to satisfy requirements.
Java EE's status as a set of specifications leads to some interesting problems, based on reliance on specific implementations of the specifications. An example? See http://blog.griddynamics.com/2008/07/gridgain-on-ec2.html, "Scalability Benchmark of Monte Carlo Simulation on Amazon EC2 with GridGain Software." The benchmark ran a computationally intensive algorithm on 512 EC2 instances, and boasted that it only had a 20% degradation after 256X increase in load.
That sounds impressive, until you note that the JMS container was the bottleneck here. All it had to do was serve out new data, a task for which it should have been perfect... yet it introduced a 20% loss in performance. It's important to remember that only one JMS container - Sun's Open MQ - was tested, although the use of ActiveMQ "ran into scalability issues." Other JMS containers may do better, and one certainly hopes this is the case, but these results are not encouraging.
Let's be clear: JMS itself is not to blame. It's just a specification. In this case, it's the container that deserves what blame there is - but people rarely understand or measure the container's actual impact, assuming that they can "use container X" and it'll work for them. It's a failure of expectation, not of platform... but the platform deserves the blame for setting the expectation.
One takeaway from the Grid Dynamics benchmark was entirely valid, though: the use of the grid has enormous potential.
Grid Dynamics' test was not especially instructive as far as the power of the grid is concerned. (Nikita Ivanov said that the test was primarily to prove Amazon EC2's capabilities to run 512 nodes at the same time on the same application.[12]) However, if you are able to easily scale out and manage transactions and data on the grid, the ability to scale linearly (or nearly so) means you can have as much computing power as you can afford - literally, whether it's a $300 desktop machine lurking on a network, or an EC2 node you're paying for on an hourly basis.
The challenges for the grid fall in three primary areas: architecture, coding methodologies, and deployment.
Architectural changes for the grid can be a significant challenge simply because we, as developers, tend to accept architectural limits as a matter of course - and such limits aren't necessarily part of a grid.[13]
Coding methodologies also change. As Java EE is a largely request/response paradigm, developers learn to think in terms of single producers and consumers, and tend not to scale a given process out - anecdotally, simple load balancing is preferred instead. While it's impossible to give a generalized model for programming a grid-aware application, it's safe to say that truly leveraging the grid will impact your final model in fairly severe ways.[14]
Deployment issues around the grid mostly center around dynamic provisioning, where the systems on the grid participate only if they're needed.
Cloud computing isn't the only thing being added to the definition of an application server. Application servers for Java are traditionally packaged in very specific ways: web archives, EJB jars, enterprise archives, resource archives.[15] With the advent of OSGi, a specification for modules becomes a platform for application servers.
Right now, there's really only one major application server based on OSGi - SpringSource's Application Platform - although other application servers are starting to head that way, such as GlassFish. With OSGi, deploying an application becomes a matter of specifying a module, its dependencies, and what it handles - which may or may not be in the set of things one normally considers as "application server territory."
SpringSource's Application Platform is called a platform rather than an application server, but that's because "application server" means Java EE to many people! The meaning of "application server" should be divorced from "Java EE," rather than creating new memes for people to remember.
The definition of an application server used to be simple: it was something that helped applications. SAP defined itself as an application server, and was correct to do so, long before they offered NetWeaver. In a Java context, though, "application server" has been narrowed to mean a J2EE server. Now, however, with the advent of cloud computing, the definition simply has to widen again, to include not only Java EE, but any application platform that provides services that developers can leverage.
That's what it was, and what it should always have been, and what it should be now.

Footnotes

[1] http://searchsqlserver.techtarget.com/sDefinition/0,,sid87_gci211584,00.html, "What is application server?" Note the emphasis on HTTP as a transport.
[3] The client portion of a Java EE application can be seen as one (or more) of three environments: a client-side rich application, a web browser, or an applet running inside of a web browser.
[4] This is often HTTP, although other protocols like SMTP are fully possible if you use a different port and, well, use Servlet instead of HttpServlet...
[5] Interested in the whole list? See http://java.sun.com/javaee/5/docs/tutorial/doc/bnacj.html, "Java EE APIs."
[6] With the advent of Sarbanes-Oxley, this changed somewhat: an application should not manage its own database passwords and the like. The Java Naming and Directory Interface was designed to isolate confidential information from the developer: see http://www.ibm.com/developerworks/library/j-jndi/?ca=dnt-62, "The role of JNDI in J2EE".
[7] In fact, the Java EE 6 specification created the idea of "profiles," built around the idea that containers like Tomcat are, in fact, acceptable application servers, and should be recognized as such by the specification.
[8] For example: batch processing, map/reduce architectural problems, complex flows without a specific request/response phase, or event-driven architectures - although it should also be noted that when all you have is a hammer, everything looks like a nail. People can use and have used J2EE for all of these, no matter what a pain it was to do so.
[9] http://java.sun.com/j2ee/connector/, "J2EE Connector Architecture," a rather underappreciated specification thanks to its arcane structure.
[10] Thus enabling safe usage of Lucene: see https://lucenerar.dev.java.net/ and https://lucene-connector.dev.java.net/
[11] A mail-polling JCA component is the basis for the JCA tutorial.
[12] "I ... don't think that Grid Dynamics claims anything beyond just this test - you can simply perform computationally intensive tasks with almost linear scalability on 512-node strong Amazon EC2 cloud," from http://www.theserverside.com/news/thread.tss?thread_id=50262#266007
[13] In the interest of product neutrality, I don't think I can fairly go into more on architectural changes for a grid. I work for GigaSpaces Technologies; my bias is pretty easy to discover.
[14] Note that there are (at least) two vendors who might protest that statement: Azul and Terracotta. Both claim to take an application written traditionally and scale it out. I won't say otherwise; however, I'd say that even with Azul and Terracotta DSO you'll see a greater benefit from modifying your architecture to leverage the platforms' capabilities.
[15] JBoss AS used to inspire a lot of hilarity for me with some of its additions to the set of archive formats, with things like Hibernate archives and service archives. These were good ideas, really, and it's sad that I lacked the forethought to appreciate them. Mea culpa.

JDBC and JNDI - what is what?

What are JDBC, JDBC Providers, Data Sources, and JNDI?

1. What is JDBC?

JDBC (Java Database Connectivity) is an API (Application Programming Interface) that lets a Java program connect to a database, retrieve data from it, and use that data in the program. A typical JDBC interaction involves three steps (a minimal sketch follows the driver discussion below):
  • Making a connection to the database
  • Creating a SQL statement
  • Executing that statement against the database and processing the results

In WebSphere Application Server (WAS) there are two types of JDBC drivers: type 2 (thick) and type 4 (thin/native protocol).
  • A type 2 driver requires the database client software to be installed on the node that connects to the database server.
  • A type 4 driver connects directly to the database server, with no native client software required.
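Here is a minimal sketch of the three steps listed above, assuming a type 4 (thin) MySQL driver on the classpath; the JDBC URL, credentials, and table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical type 4 (thin) URL: the driver talks directly to the database server.
            String url = "jdbc:mysql://dbhost:3306/sampledb";

            // 1. Make a connection; 2. create a statement; 3. execute it and read the results.
            try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, name FROM customers")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }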

2. What is a Data Source?

A data source allows you to manage a pool of connections to a database. It acts as a handle to the database you want to connect to, sitting between the database and the client or end user.

3. What is a Connection Pool?

A connection pool is a place where a set of ready-made connections is kept so that different programs can use them without creating new connections to the database; after using a connection, the program sends it back to the connection pool (a sketch follows).
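As a hedged sketch of how an application typically borrows a pooled connection from a data source registered in JNDI: the JNDI name below is hypothetical, and closing the connection in the try-with-resources block returns it to the pool rather than closing the physical database connection.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PooledQuery {
        public void run() throws Exception {
            // Look up the data source the server administrator configured (hypothetical name).
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDataSource");

            try (Connection conn = ds.getConnection()) {
                // ... use the connection ...
            } // conn.close() here hands the connection back to the pool
        }
    }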

4. What is a JDBC Provider?

A JDBC provider identifies the vendor-specific driver classes (for MySQL, Oracle, and so on) that data sources use to connect to that vendor's database. These details are given by the client or the database vendor.

5. What is JNDI?

The Java Naming and Directory Interface (JNDI) service is used to register the resources hosted by servers and to look them up by name. JNDI gives every registered resource a unique name. In WebSphere, the name service is implemented on top of CORBA (Common Object Request Broker Architecture).
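For illustration, a minimal sketch of a plain JNDI lookup: client code asks for resources by name and never needs to know where or how they are hosted. The JMS resource names here are hypothetical.

    import javax.jms.Queue;
    import javax.jms.QueueConnectionFactory;
    import javax.naming.InitialContext;

    public class JndiLookup {
        public void lookupResources() throws Exception {
            InitialContext ctx = new InitialContext();

            // Hypothetical names registered by the server administrator.
            QueueConnectionFactory factory =
                    (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyQueue");

            // ... use the factory and queue ...
        }
    }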

http, https, ajp - which one to choose?

http, https and ajp - comparison and choice

In a web scenario, client-to-server traffic is usually carried using an http (HyperText Transfer Protocol) transport. That applies both from the browser to the public-facing server and in onward transfers from the public-facing server to other servers which provide content or run business logic in many applications.

But you'll note that I said "usually" - there are other transports that are available and used. The first group comprises those which transport the same data as http - specifically https and ajp. Let's start off by describing what's in http.

What is http?

An http request comprises a series of lines of data, each terminated by a newline. The first of these lines comprises the request method (such as GET or POST), followed by the name of the resource required (such as /index.html), followed by a protocol version (such as HTTP/1.1). Subsequent lines include such things as the name of the host being contacted, referrer headers, cookies, the type of the browser, preferred language, and a whole host more details. In HTTP/1.1 only the name of the host being contacted is required in these subsequent lines - the rest are conditional or optional. The headers are followed by a blank line, which indicates that the request header is complete; in the case of the POST method, that blank line is followed by the data associated with the request.

A server processes an http request and sends out a response. The response comprises a header block, a blank line, and (in most cases) a data block. The first line of the header includes a response code which indicates the success or otherwise of the request - a three-digit number in the following ranges:
  • 200 and up - success; expect good data to follow
  • 300 and up - good request, but only headers (no data), e.g. the page has moved
  • 400 and up - error in the request, e.g. the request was for a missing page (404)
  • 500 and up - error in handling the request, e.g. a program on the server has a syntax error

This line of the header block is followed by other headers telling the receiving system the content type (MIME type), which allows that receiving system to know whether to handle the data as HTML, as a JPEG image, and so on. Then there's a blank line and the actual data.
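To make that structure concrete, here is a rough sketch in Java that writes an HTTP/1.1 request by hand over a socket and prints the status line and headers of the response; the host name is illustrative.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class RawHttpRequest {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("www.example.com", 80);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream(), StandardCharsets.ISO_8859_1))) {

                // Request line, the required Host header, then the blank line that ends the request.
                out.print("GET /index.html HTTP/1.1\r\n");
                out.print("Host: www.example.com\r\n");
                out.print("Connection: close\r\n");
                out.print("\r\n");
                out.flush();

                // First line back is the status line (e.g. "HTTP/1.1 200 OK");
                // headers follow until a blank line, after which the body begins.
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) {
                    System.out.println(line);
                }
            }
        }
    }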

As there are often multiple requests made from the same client to the same server in quick succession (for example a web page will call up images), the connection often stays alive for a few seconds under HTTP/1.1.

See http protocol specification for further details

So what is https?

The https protocol carries the same information as http, but adds to it a secure socket layer (SSL). In other words, the data is encrypted at the client and decrypted at the server, and then the same happens in reverse. The purpose of this encryption is to ensure that stray data packets viewed along the way are of no use to the person who has them - they're uninterpretable binary data.

The https scheme is quite complicated - it starts off with the client having to establish that it's really talking to the correct server (and not some other machine pretending to be the correct server!) and then goes on to agree with that server just how things will be uniquely encoded. The same keys can't be used for multiple connections between different systems, or individual security would be compromised.
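From the application's point of view, very little changes. The sketch below (with an illustrative URL) makes an https request with HttpsURLConnection; the handshake, certificate check, and encryption all happen underneath.

    import java.io.InputStream;
    import java.net.URL;
    import javax.net.ssl.HttpsURLConnection;

    public class HttpsExample {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://www.example.com/");
            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();

            // The request looks just like http to the caller; only the transport differs.
            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("Cipher suite: " + conn.getCipherSuite());

            try (InputStream in = conn.getInputStream()) {
                // ... read the (already decrypted) response body ...
            }
        }
    }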

See https protocol - detailed description

How about AJP then? How does that compare to HTTP?

The http protocol is quite expensive in terms of bandwidth - it's an ASCII text protocol, with words like "POST" and phrases like "Content-type:" taking up more bandwidth than is really needed, and having to be interpreted at the destination too. So the ajp protocol (Apache JServ Protocol) was established to allow for much less expensive exchanges between upstream and downstream servers that are to be closely linked.

ajp carries the same information as http but in a binary format. The request method - GET or POST - is reduced to a single byte, and each of the common headers is reduced to 2 bytes - typically, that's about a fifth of the size of the http packet.

See ajp protocol specification for further internal details.

Should I use http, https or ajp?

For most browser to server traffic, use http. If there's a need for security in the data (or if you're in doubt / customers may question the security), use https.

Between servers, http actually works very well - if you have an Apache httpd fronting a number of other servers (be they Apache httpd or Apache Tomcat), then there's nothing wrong with using the protocol at that layer too. Httpd's mod_proxy and mod_rewrite both allow for forwarding, and server languages such as PHP and Perl can make outgoing requests from the top-tier server to other servers using http.

If you're looking to share the load between a number of second-level (application) servers from a top-level httpd server, mod_proxy_balancer, introduced in Apache httpd 2.2, provides you with the tools that you'll need, and mod_rewrite can also do a good load-distribution job (although its distribution algorithm is simple). For programs running on the server, outgoing requests can be distributed programmatically.

One of the big issues of forwarding to a series of machines to balance the load is making sure that a series of linked pages and data entries called up by the same user are properly co-ordinated ("session continuity", it is called), and both mod_proxy_balancer and mod_rewrite provide the facility to support this. In the case of mod_proxy_balancer, it's a core feature; with mod_rewrite, it takes a clever configuration.

If you have intensive / busy servers with bandwidth issues between them, use ajp as your linking protocol. The mod_jk module (built separately from the Tomcat connectors - formerly Jakarta - project for Apache httpd 2.0 and prior, with mod_proxy_ajp shipping as standard from httpd 2.2) makes excellent use of the protocol, and support in Tomcat is strong. Many commercial systems are using ajp as their transport, and some recent benchmarks I did showed it to be 25% faster than http. You should, though, remember that the transport is only a tiny part of most applications, and so the savings are likely to be minimal on a real live system.

See protocol documents if you want to read further into this.

This is quite a long story, isn't it? If you're setting up multiple servers and sharing resources, you may want to learn the deployment and configuration details. We run several courses that may help you, where you get a chance to set up and try out the various options - see Deploying Apache httpd and Tomcat if you're linking the two servers, or Linux / Unix Web Server if you're configuring / linking multiple copies of httpd. We can also arrange specific private courses for groups, and / or short consultancy sessions. Contact me - graham@wellho.net to talk about your particular needs.

Other Protocols

To help complete the picture - protocols such as ftp and rmi transport different types of content, and xml, soap and the like are different layers. Again - I can cover that for you if needed!

Tomcat Configuration - A Step By Step Guide


Once you get Tomcat up and running on your server, the next step is configuring its basic settings. Your initial configuration process will consist of two tasks, which are explained in detail in this article. The first is editing Tomcat's XML configuration files, and the second is defining appropriate environment variables.

XML Configuration Files

The two most important configuration files to get Tomcat up and running are called server.xml and web.xml. By default, these files are located at TOMCAT-HOME/conf/server.xml and TOMCAT-HOME/conf/web.xml, respectively.

server.xml

The server.xml file is Tomcat's main configuration file, and is responsible for specifying Tomcat's initial configuration on startup as well as defining the way and order in which Tomcat boots and builds. The elements of the server.xml file belong to five basic categories - Top Level Elements, Connectors, Containers, Nested Components, and Global Settings. All of the elements within these categories have many attributes that can be used to fine-tune their functionality. Most often, if you need to make any major changes to your Tomcat installation, such as specifying application port numbers, server.xml is the file to edit.
You can find comprehensive documentation for these options on Apache's Tomcat Documentation pages, but here's some information on some of the most important elements to get you started with your configuration!

Top Level Elements

Server

This element defines a single Tomcat server, and contains the Service elements (along with top-level Listener and GlobalNamingResources elements). Additionally, the Server element supports the "port", "shutdown", and "className" attributes.
The port attribute is used to specify which port Tomcat should listen to for shutdown commands. The shutdown attribute defines the command string to be listened for on the specified port to trigger a shutdown. The className attribute specifies which Java class implementation should be used.

Service

This element, which can be nested inside a Server element, is used to contain one or multiple Connector components that share the same Engine component. The main function of this component is to define these components as a single service. The name of the service that will appear in logs is specified using the Service element's "name" attribute.

Connectors

By nesting one Connector (or multiple Connectors) within a Service tag, you allow Catalina to forward requests from these ports to a single Engine component for processing. Tomcat allows you to define both HTTP and AJP connectors.

HTTP Connector

This element represents an HTTP/1.1 Connector, and provides Catalina with stand-alone web server functionality. This means that in addition to executing servlets and JSP pages, Catalina is able to listen to specific TCP ports for requests. Each Connector you define represents a single TCP port Catalina should listen to for HTTP requests. When configuring your HTTP connectors, pay close attention to the "minSpareThreads", "maxThreads", and "acceptCount" attributes. The "maxThreads" attribute is of particular importance: it controls the maximum number of threads that can be created to handle requests. Setting this value too low will cause requests that exceed the number of available threads to stack up inside the server socket, which will begin refusing connections once it is full. Comprehensive testing will help you avoid this problem.
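These attributes are normally set on the Connector element in server.xml. Purely as an illustration, here is a hedged sketch that sets the same attributes programmatically using the embedded Tomcat API (assuming Tomcat 8.5 or later on the classpath); the webapp path is hypothetical.

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedConnectorConfig {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);                          // same idea as the Connector "port" attribute
            Connector connector = tomcat.getConnector();   // default HTTP/1.1 connector
            connector.setProperty("maxThreads", "200");    // mirrors the maxThreads attribute
            connector.setProperty("acceptCount", "100");   // mirrors the acceptCount attribute

            // Hypothetical exploded webapp directory; adjust to a real path before running.
            tomcat.addWebapp("/myapp", "/path/to/myapp");

            tomcat.start();
            tomcat.getServer().await();
        }
    }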

AJP Connector

This element represents a connector that is able to communicate with the AJP protocol. The main role of this element is to help Tomcat integrate with an installation of Apache. The most common reason why you would want this functionality is if you plan to use Apache to serve static content in front of Tomcat. This technique is intended to free up more power for dynamic page generation and load balancing, so if fast performance is a concern for your application, this is something to consider. AJP Connectors can also be used to expose Apache's SSL processing functionality to Tomcat.

Containers

These elements are used by Catalina to direct requests to the correct processing apparatus.

Context

This element represents a single web application, and contains path information for directing requests to the appropriate application resources. When Catalina receives a request, it attempts to match the longest URI to the context path of a given Context until it finds the correct element to serve the request. The Context element can have a maximum of one nested instance per element of the utility elements Loader, Manager, Realm, Resources, and WatchedResource. Although Tomcat allows you to define Contexts within "TOMCAT-HOME/conf/server.xml", this should generally be avoided, as these central configuration settings cannot be reloaded without restarting Tomcat, which makes editing Context attributes more invasive than necessary.

Engine

This element is used in conjunction with one or more Connectors, nested within a Service element, and is responsible for processing all requests associated with its parent service. The Engine element can only be used if it is nested within a Service element, and only one Engine element is allowed within a given Service element. Pay close attention to the "defaultHost" attribute, which defines the Host element responsible for serving requests for host names on the server that are not configured in server.xml. This attribute must match the name of one of the Host elements nested inside the Engine element in question. Also, it's important to assign a unique, logical name to each of your Engine elements, using the "name" attribute. If a single Server element in your server.xml file includes multiple Service elements, you are required to assign a unique name to every Engine element.

Host

This element, which is nested inside of the Engine element, is used to associate server network names with Catalina servers. This element will only function properly if the virtual host in question is registered with the managing DNS of the domain in question.
One of the most useful features of the Host element is its ability to contain nested Alias elements, which are used to define multiple network names that should resolve to the same virtual host.

Cluster

The Cluster element is used by Tomcat to provide context attribute replication, WAR deployment, and session replication, and can be nested within either the Engine or the Host element. The Manager, Channel, Valve, Deployer, and ClusterListener elements are nested inside of it. More information on these elements and how they are used can be found on Apache's Tomcat Configuration page. Although this element is highly configurable, the default configuration is generally enough to meet most users' needs.

Nested Components

These elements are nested inside of container elements to define additional functionalities.

Listeners

These elements, which can be nested inside Server, Engine, Host, or Context elements, point to a component that will perform an action when a specific event occurs.
While most components possess a className attribute to select different implementations of the element, the Listener element is unique in that there are a number of implementations other than the default, and as of Tomcat 6.0, all of these implementations require that the Listener element be nested within a Server element. Thus, setting this attribute correctly is important. The implementations currently available are an APR Lifecycle Listener, a Jasper Listener, a Server Lifecycle Listener, a Global Resources Lifecycle Listener, a JMX Remote Lifecycle Listener, and a JRE Memory Leak Prevention Listener.

Global Naming Resources

This element is used to specify global Java Naming and Directory Interface (JNDI) resources for a specific Server, distinct from any per-web-application JNDI contexts. If you wish, you can declare JNDI resources within this element (as Resource entries) and expose them to individual web applications by linking to them with ResourceLink elements. The results of this method are equivalent to including "resource-ref" elements in an application's "/WEB-INF/web.xml" file. If using this technique, be sure to define any additional parameters necessary to specify and configure the object factory and its properties.

Realm

This element, which can be nested inside of any Container element, defines a database containing usernames, passwords, and roles for that Container. If nested inside a Host or Engine element, characteristics defined in the Realm element are inherited by all lower-level containers by default. It is important to set the "className" attribute of this element correctly, as a variety of implementations exist, to provide different types of Container Managed Security. These implementations are used to expose Catalina to other systems of user security management such as JDBC, JNDI, and DataSource.

Resources

This element has one simple job - directing Catalina to static resources used by your web applications. These resources include classes, HTML, and JSP files. Utilizing this element allows Catalina to access files contained in places other than the filesystem, such as resources contained in WAR archives or JDBC databases. It is vital to remember that this technique of allowing web applications access to resources contained off-filesystem can only be used if the application in question does not require direct access to resources stored on the filesystem.

Valve

Valve components are nested inside Engine, Host, and Context elements to insert specific functionalities into the request processing pipeline. This is a very versatile element. Types of Valve elements range from authenticators to filters to fixes for WebDAV errors. Many of these types of Valves can only be nested within specific elements. Needless to say, paying attention to this element's "className" attribute is essential. Extensive documentation on the types of Valve elements and their uses is available on Apache's Tomcat Configuration page.

web.xml

The web.xml file is derived from the Servlet specification, and contains information used to deploy and configure the components of your web applications. When configuring Tomcat for the first time, this is where you can define servlet mappings for central components such as JSP. Within Tomcat, this file functions in the same way as described in the Servlet specification. The only divergence in Tomcat's handling of this file is that a user has the option of utilizing TOMCAT-HOME/conf/web.xml to define default values for all contexts. If this method is utilized, Tomcat will use TOMCAT-HOME/conf/web.xml as a base configuration, which can be overwritten by application-specific WEB-INF/web.xml files.

Other important configuration files

A few other configuration files will be important as you get Tomcat up and running for the first time. Default lists of roles, users, and passwords that Tomcat's UserDatabaseRealm will use for authentication can be found in tomcat-users.xml. If you want to access any of the administrative tools that are packaged with Tomcat, you can edit this file to add admin and manager access. Default context settings applied to all deployed contexts of your Tomcat installation can be adjusted in the context.xml file. The catalina.policy file, which replaces the java.policy file packaged with your chosen JDK, contains permissions settings for Tomcat's elements. You can edit this file by hand or with policytool, an application packaged with any Java distribution 1.2 or later.

Environment variables

Finally, when configuring Tomcat for the first time, there are several environment variables that should be modified to suit your needs.

JAVA_OPTS

Using this variable, you can define the heap size of the JVM. Setting appropriate values for this variable is crucial when deploying a new application that may require more or less heap space to function properly. Finding the proper values for these settings can help eliminate or reduce OutOfMemoryError (OOME) messages.
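As a quick, hedged way to confirm that the heap settings you put in JAVA_OPTS (for example -Xms and -Xmx) actually took effect, a small Java program such as the following can report the limits the running JVM was given.

    public class HeapCheck {
        public static void main(String[] args) {
            // Reports the heap limits of the running JVM, e.g. as set via -Xms/-Xmx.
            long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            long currentMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
            System.out.println("Maximum heap: " + maxMb + " MB");
            System.out.println("Current heap: " + currentMb + " MB");
        }
    }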

CATALINA_HOME

This variable specifies the location of your Tomcat installation. Tomcat's startup scripts will attempt to guess the value of this variable, but it is a good idea to simply set it to the correct value yourself to avoid any problems.

CATALINA_OPTS

This variable is used to set various Tomcat-specific options. It can be used to set options that override your JAVA_OPTS settings for Tomcat only, which is useful if you are running multiple Java applications on a single machine and do not want them all to pick up Tomcat's settings.