Saturday, April 16, 2011

Clustering WebLogic 10g

Installing the WebLogic 10g server in a clustered environment is a pretty straightforward process. There are some tips and tricks, though, that can give you a big headache, the most notable one being the configuration of the NodeManager.
When creating a new domain, the NodeManager is configured to communicate with the AdminServer via SSL, and it will not work in the default configuration unless you generate the necessary certificates and provide the correct configuration parameters. However, there is a workaround that makes the NodeManager work without an SSL configuration, so you can get things up and running fast.

I will describe a basic 2-node clustered installation of a WebLogic domain. Of course, this can be extended to any number of nodes.

The basic rule of WebLogic domain clustering is to have all middleware software installed in the same locations on all nodes. For example, if you installed WebLogic, SOA and WebCenter into /u01/app/oracle/product/middleware/Oracle_SOA1 and /u01/app/oracle/product/middleware/Oracle_WC1, then you need to install these components on all nodes, in the same paths.
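A quick sanity check along these lines (node1 and node2 are hypothetical host names, and passwordless SSH is assumed) confirms the homes line up:

# Verify the Oracle homes exist at identical paths on both nodes
for host in node1 node2; do
  ssh oracle@$host "ls -d /u01/app/oracle/product/middleware/Oracle_SOA1 \
                          /u01/app/oracle/product/middleware/Oracle_WC1"
done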

Another concern when installing a clustered configuration is WebLogic's Security Store. By default, the Security Store is kept in an XML file in the domain configuration folder (config/fmwconfig under the domain home): system-jazn-data.xml. In a clustered configuration, each node would have its own copy of the file, and keeping the copies in sync becomes a problem. The recommended way is either to use a shared configuration folder for all nodes (e.g. on shared storage) or to configure WebLogic to use OID as the Security Store.
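If you go the shared storage route, a minimal sketch of what that might look like on each node (the NFS server name and export path are hypothetical):

# Mount the shared folder holding the domain configuration; storage:/export/... is a made-up export
mount -t nfs storage:/export/domains/clustered_domain /u01/app/oracle/admin/clustered_domain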

We will assume from here on that a default configuration is used and the FMW software has been properly installed on all nodes. Start by running the WebLogic Domain Configuration Wizard on the admin node and create a new domain, selecting the appropriate FMW components:


After this, give your domain a name (e.g. clustered_domain), enter the password for the weblogic user, configure the datasources, and choose to configure Managed Servers, Clusters and Machines:


Next, you must manually create the servers for each node. By default, the configuration wizard creates the default servers (e.g. soa_server1, UCM_server1, WC_Spaces), but you have to change the names as in the following picture:


You can see that the names of the servers are the same as in the default configuration; the only thing changed is the number added at the end of the name: all servers ending in 01 will go to the first node, and all servers ending in 02 will go to the second node. It's important to keep the base names of the managed servers unchanged, and the ports must be the same on all nodes for the same managed server.
Next, you have to create a cluster for each managed server pair:


In the following screen, you must assign the corresponding managed servers to each cluster:


Now the clusters are defined. The only thing left to do is to define the physical machines (for NodeManager configuration), and assign the managed servers to those machines:


Please note that I've created 3 machines: node1 and node2 will host the actual clusters, whereas node0 will only host the Administration Server. This is an optional but recommended configuration. If you don't want to dedicate a machine to the Administration Server alone, keep only node1 and node2, and assign the admin server to the node from which you started the configuration wizard.
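To summarize the layout described above (the exact managed server names depend on the products you selected):

node0 -> AdminServer
node1 -> all managed servers ending in 01 (e.g. soa_server01, UCM_server01, WC_Spaces01)
node2 -> all managed servers ending in 02 (e.g. soa_server02, UCM_server02, WC_Spaces02)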


Now you are ready to create the domain. Proceed to do so, and then start the Administration Server to perform the necessary domain configuration (e.g. data sources, users, groups, boot.properties for all managed servers, etc.).
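For reference, boot.properties lives under each managed server's security folder inside the domain home. A minimal sketch, with placeholder credentials (WebLogic encrypts the values on the server's first start):

# Repeat for each managed server, substituting the server name
mkdir -p $DOMAIN_HOME/servers/soa_server01/security
cat > $DOMAIN_HOME/servers/soa_server01/security/boot.properties <<EOF
username=weblogic
password=<your weblogic password>
EOF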
After the configuration is complete, edit the nodemanager.properties file and set the following properties:
SecureListener=false
StartScriptEnabled=true
StopScriptEnabled=true
This will instruct the NodeManager to disable SSL communication and to use the startup/shutdown scripts to manage the managed servers.
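In a default 10.3 layout the file sits under the WebLogic server home; note that it is generated the first time the Node Manager starts, so start the Node Manager once if the file is not there yet:

# Default location in a 10.3 install; adjust MW_HOME to your middleware home
vi $MW_HOME/wlserver_10.3/common/nodemanager/nodemanager.properties
# After editing, (re)start the Node Manager
$MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh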

Now, we must also tell the AdminServer how to connect to the NodeManager. To do this, log in to the WebLogic console, go to Servers -> AdminServer -> SSL tab -> Advanced and set Hostname Verification to None. This disables hostname checking for the NodeManager connection.
Next, go to Machines and, for each machine, open the Node Manager tab and set the Type to Plain. Also check that the listen address of each machine has the correct IP.
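For each machine, the Node Manager tab should end up looking something like this (5556 being the default Node Manager listen port):

Type: Plain
Listen Address: <IP of the node>
Listen Port: 5556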

Save the configuration, stop the AdminServer and the NodeManager, and we can move on to packing the domain. Go to the wlserver_10.3/common/bin folder and execute the following command:

./pack.sh -managed=true -domain=/<absolute path to domain> -template=mydomain.jar -template_name=my_domain_template
This will create the mydomain.jar file that contains the domain template. Copy this file to the same folder (wlserver_10.3/common/bin) on all nodes.
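For example, with the hypothetical host name node2 and the installation path used earlier:

scp mydomain.jar oracle@node2:/u01/app/oracle/product/middleware/wlserver_10.3/common/bin/

Then execute the following command on each of the nodes: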

./unpack.sh -domain=/<absolute path to domain> -template=mydomain.jar
This will create the domain on all nodes.

Now, edit the nodemanager.properties file on all nodes, set the same three properties as above, and restart all Node Managers.
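One way to do this, reusing the hypothetical hosts and passwordless SSH from before:

for host in node0 node1 node2; do
  ssh oracle@$host 'nohup /u01/app/oracle/product/middleware/wlserver_10.3/server/bin/startNodeManager.sh > /tmp/nm.out 2>&1 &'
done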

Now you can start the AdminServer and, once it's up, start all Managed Servers from the WebLogic console or Enterprise Manager.
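For example, on node0 (with DOMAIN_HOME standing in for your domain path):

nohup $DOMAIN_HOME/bin/startWebLogic.sh > /tmp/admin.out 2>&1 &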

You now have a clustered WebLogic installation.

Enjoy!


2 comments:

  1. Nice instructions that are still relevant today. Thanks!

  2. Do you create these Oracle homes in any particular order? Also, could we extend this domain to support OID? Please clarify.
