Friday, November 18, 2011

Integrating Siebel 8 with Active Directory or any other LDAP Directory

The documentation available in the Siebel Security Guide is quite detailed and, at the same time, very difficult to follow. I want to present a quick and easy way to integrate Siebel 8 with Microsoft Active Directory (or any other LDAP directory).

In this post I will assume that Siebel is deployed in a UNIX/Linux (non-Windows) environment and that the LDAP integration is NOT done over SSL. The steps on a Windows environment, however, should be very similar.

Step 1. Install the IBM LDAP Client SDK

The IBM LDAP Client SDK kit is located in the Siebel 8 install image, in the following location: <YOUR_OS>/Server_Ancillary/IBM_LDAP_6.0_Client/enu/itds60-client-sol-sparc-native.tar

Before installing the Client SDK, you need to make sure that there are no conflicting LDAP utilities installed on the server (like ldapsearch, ldapbind etc.). If there are, please move them to a temporary directory. Example:

# cd /usr/bin
# mkdir old_ldap
# mv ./ldap* ./old_ldap/ 
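The check for leftover utilities can be scripted; a minimal sketch that reports any conflicting tools still visible on the PATH (the tool names are just common examples):

```shell
# report conflicting LDAP utilities still visible on the PATH
# (tool names are examples; add any others present on your system)
for tool in ldapsearch ldapbind ldapadd; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "conflict: $tool found at $(command -v "$tool")"
  else
    echo "ok: $tool is not on the PATH"
  fi
done
```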

After you are sure that there are no more LDAP utilities on your server (try running ldapbind or ldapsearch from any location), you can proceed with the installation of the IBM LDAP Client SDK. I am giving an example for Solaris here:

# tar -xvf itds60-client-sol-sparc-native.tar
# cd itdsV60Client
# cd itds
# pkgadd -d idsldap.cltbase60.pkg 

The following packages are available:
 1  IDSlbc60     IBM Directory Server - Base Client
                 (sparc) 6.0.0.0
Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:
(press Enter to accept the default)

Processing package instance <IDSlbc60> from </KIT/IBM_LDAP_Client_6/itdsV60Client/itds/idsldap.cltbase60.pkg>

IBM Directory Server - Base Client(sparc) 6.0.0.0
          5724-C08
          Copyright (c) IBM Corporation 1994-2003
 Portions Copyright (c) 1991 - 2000  Compuware Corporation

All rights reserved.  This product and its associated documentation are
protected by copyright and are distributed under a license agreement
restricting their use, reproduction, distribution, and decompilation.  No
part of this product or its associated documentation may be reproduced in
any form by any means without the prior written consent of IBM Corporation.

## Processing package information.
## Processing system information.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <IDSlbc60> [y,n,?]
(type y and press Enter)

Installing IBM Directory Server - Base Client as <IDSlbc60>

## Executing preinstall script.
## Installing part 1 of 1.
/opt/IBM/ldap/V6.0/bin/ITDSS060000.sys
/opt/IBM/ldap/V6.0/bin/ibmdirctl
/opt/IBM/ldap/V6.0/bin/idsdirctl
/opt/IBM/ldap/V6.0/bin/idsldapadd
/opt/IBM/ldap/V6.0/bin/idsldapchangepwd
...
[ verifying class <idsldap> ]
## Executing postinstall script.
Installation of <IDSlbc60> was successful.


# pkgadd -d idsldap.clt32bit60.pkg 
 The following packages are available:
 1  IDSl32c60     IBM Directory Server - 32 bit Client
                  (sparc) 6.0.0.0
Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:
(press Enter to accept the default)

Processing package instance <IDSl32c60> from </KIT/IBM_LDAP_Client_6/itdsV60Client/itds/idsldap.clt32bit60.pkg>
IBM Directory Server - 32 bit Client(sparc) 6.0.0.0
          5724-C08
          Copyright (c) IBM Corporation 1994-2003
 Portions Copyright (c) 1991 - 2000  Compuware Corporation

All rights reserved.  This product and its associated documentation are
protected by copyright and are distributed under a license agreement
restricting their use, reproduction, distribution, and decompilation.  No
part of this product or its associated documentation may be reproduced in
any form by any means without the prior written consent of IBM Corporation.

## Processing package information.
## Processing system information.
  2 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user
permission during the process of installing this package.
Do you want to continue with the installation of <IDSl32c60> [y,n,?]
(type y and press Enter)

Installing IBM Directory Server - 32 bit Client as <IDSl32c60>
## Executing preinstall script.
## Installing part 1 of 1.
/opt/IBM/ldap/V6.0/bin/32/ibmdirctl
...
/opt/IBM/ldap/V6.0/lib/libidsstr.so
/opt/IBM/ldap/V6.0/lib/libldap.so <symbolic link>
[ verifying class <idsldap> ]
## Executing postinstall script.
Installation of <IDSl32c60> was successful.

After the installation is complete, you need to update the siebenv.sh files located in SIEBEL_HOME/siebsrvr and SIEBEL_HOME/gtwysrvr, and add the path to the newly installed IBM Client SDK lib folder:

siebenv.sh:
.......
if [ a${LD_LIBRARY_PATH} = ${LD_LIBRARY_PATH}a ]
then LD_LIBRARY_PATH=${SIEBEL_ROOT}/lib:${SIEBEL_ROOT}/lib/odbc/merant:/opt/IBM/ldap/V6.0/lib:${MWHOME}/lib:${SQLANY}/lib:/usr/lib:$ORACLE_HOME/lib
else LD_LIBRARY_PATH=${SIEBEL_ROOT}/lib:${SIEBEL_ROOT}/lib/odbc/merant:/opt/IBM/ldap/V6.0/lib:${MWHOME}/lib:${SQLANY}/lib:/usr/lib:$ORACLE_HOME/lib:${LD_LIBRARY_PATH}
fi
..... 
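To confirm the change took effect, you can check that the IBM lib folder actually ends up on the path after sourcing siebenv.sh; a small sketch with a hypothetical LD_LIBRARY_PATH value:

```shell
# hypothetical value -- in practice, source siebenv.sh first and
# inspect the real LD_LIBRARY_PATH
LD_LIBRARY_PATH="/siebel/siebsrvr/lib:/opt/IBM/ldap/V6.0/lib:/usr/lib"
case ":$LD_LIBRARY_PATH:" in
  *:/opt/IBM/ldap/V6.0/lib:*) echo "IBM LDAP client libs are on LD_LIBRARY_PATH" ;;
  *)                          echo "WARNING: IBM LDAP client libs are missing" ;;
esac
```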

Step 2. Create Active Directory accounts

You need at least one directory account for the Siebel Administrator: SADMIN. You can create this as a regular Active Directory user, with no special permissions.

Step 3. Create/Check LDAPUSER Siebel account

For the Active Directory integration, the Siebel LDAP Security Adapter needs to use a database account that can impersonate any user. This account is named LDAPUSER and is created by default in any Siebel installation, with a default password identical to the account name (ldapuser). This database account also has a special role granted: SSE_ROLE. Please check that LDAPUSER has this role granted.

Try to log in to the Siebel application with this account. If it does not work, create a Siebel user with the same name.

Step 4. Configure Siebel LDAP Security Adapter

Starting with Siebel 8, Oracle recommends using the LDAP Security Adapter instead of the ADSI Adapter for integrating with Microsoft Active Directory or any other LDAP directory. The LDAP Adapter has been greatly improved and includes all the features of the ADSI Adapter.


To configure the adapter, log in to the Siebel application as a Siebel Administrator and go to the following view: Site Map > Administration - Server Configuration > Enterprises > Profile Configuration, then select LDAP Security Adapter from the table. Choose the Parameters tab below and set the following parameters:


Parameter Name                   Value
Application User                 CN=Administrator,CN=Users,DC=mydomain,DC=com
Application Password             <Administrator_Password>
Base Dn                          DC=mydomain,DC=com
Credentials Attribute Type       url
Port                             389
Hash DB Cred                     False
Hash User Password               False
Password Attribute Type          userPassword
Server Name                      directoryserver.mydomain.com
Siebel Username Attribute Type   sAMAccountName
Shared DB Username               LDAPUSER
Shared DB Password               ldapuser
Username Attribute Type          sAMAccountName
Propagate Change                 False

A few comments on the parameters above:

Shared DB Username and Shared DB Password
You can use these two parameters instead of Shared Credentials DN and avoid creating a directory account to hold these credentials. Please don't set both the Shared Credentials DN and the Shared DB Username/Password parameters!

The Shared DB Username and Shared DB Password parameters must hold the database account values from Step 3.

Propagate Change
You can set this to True if you want Siebel account details to be propagated back to the directory server. If you specify this option, then you must also set the SecThickClientExtAuthent system preference to TRUE.
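As a quick sanity check for the table above, the Base Dn value is just the Active Directory DNS domain rewritten as comma-separated DC components; a small sketch (the domain name is hypothetical):

```shell
# hypothetical domain: derive the Base Dn from the AD DNS domain name
domain="mydomain.com"
base_dn="DC=$(echo "$domain" | sed 's/\./,DC=/g')"
echo "$base_dn"   # -> DC=mydomain,DC=com
```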


Step 5. Configure a Siebel component to use the LDAP Security Adapter

You can configure individual Siebel components to use the LDAP Security Adapter, while leaving others to use the DB Security Adapter.


To specify the adapter for a certain component, go to the Site Map-> Administration - Server Configuration > Enterprises > Component Definitions view, choose the component and set the following values in the Parameters tab:



Parameter Name          Value
Security Adapter Mode   LDAP
Security Adapter Name   LDAPSecAdpt


Step 6. Update $SIEBEL_HOME/sweapp/bin/eapps.cfg

You need to update $SIEBEL_HOME/sweapp/bin/eapps.cfg (or $SIEBEL_HOME/sweapp/bin/eapps_sia.cfg, depending on your installation) so that the LDAP-enabled Siebel components use the new LDAP credentials.


In this file, go to the section related to the LDAP-enabled Siebel component (e.g. [/edemocomponent_enu]) and set the following parameters:
  • EncryptedPassword = false
  • AnonUserName  = SADMIN
  • AnonPassword  = <Directory password for the SADMIN user, in clear text>
Note: If you don't want to use a clear-text password, set the EncryptedPassword parameter to true and encrypt your password using the $SIEBEL_HOME/sweapp/bin/encryptstring utility.

After updating the component parameters, the section should look something like:

[/edemocomponent_enu]
ConnectString = siebel.TCPIP.None.None://siebeldev:2321/SBLDEV/PSCcObjMgr_enu
WebPublicRootDir = /SIEBEL/sweapp/public/enu
SiebEntSecToken = 321pq2LcPpwBDAAfFP==
EncryptedPassword = false
AnonUserName  = SADMIN    
AnonPassword  = asdqwe123



After everything is complete, restart both the Siebel server and the corresponding web server.

FMW 10gR2 is End of Life

For all of you out there who are still using FMW 10g: the 10gR2 (10.1.2.x) releases reach End of Life on 31 Dec 2011. More specifically, the following products will no longer be supported:

  • BPEL
  • InterConnect
  • B2B
  • Business Activity Monitoring
  • Discoverer
  • Reports
  • Forms
  • Portal
  • Internet Directory
  • Single Sign-On
  • Certificate Authority
  • Containers for J2EE
  • HTTP Server
  • Web Cache
  • Wireless and Developer Suite (Designer, Forms Builder, Reports Builder & Discoverer Administrator)
 You must upgrade to 10gR3, which itself reaches End of Life in June 2014.

  Source:  http://www.oracle.com/support/library/brochure/lifetime-support-middleware.pdf

Sunday, May 1, 2011

Deploying ADF Faces Applications on Tomcat 6

After a bit of struggling with ADF Faces and Tomcat, I finally got them working together. The versions I am using are ADF Faces 11gR1 (11.1.1.4.0) and Tomcat 6.0.28. I don't know if it will work with 11.1.1.3.0 or other versions of Tomcat, but I will give it a shot in the near future.
What I've got working so far is only ADF Faces and the Data Visualization components, with binding support (page definitions, task flows, page templates etc.). I have not yet looked into ADF BC and Security (JPS), but that is the next step.
While doing the experiment, I noticed that Oracle is preparing the ground for adding JBoss support in a future release. They've already added support for the WebSphere application server, and it seems pretty logical that JBoss will be the next supported app server.

To deploy ADF Faces on Tomcat, you just need to copy the right JARs into Tomcat's lib folder and configure the JDeveloper web application (the ViewController project) to be deployed as a WAR for Tomcat. Go to the Project Properties of the ViewController project, choose Deployment, create a new WAR deployment profile and select Tomcat as the target application server. After this, deploy the project to the WAR file and copy it to the webapps folder inside Tomcat.

Before starting the server, add the following JAR files to the lib folder inside Tomcat:

adf-controller.jar
adf-controller-api.jar
adf-controller-rt-common.jar
adf-controller-security.jar
adf-dt-at-rt.jar
adf-dynamic-faces.jar
adf-faces-changemanager-rt.jar
adf-faces-databinding-dt-core.jar
adf-faces-databinding-rt.jar
adf-faces-templating-dt-core.jar
adf-faces-templating-dtrt.jar
adflibfilter.jar
adflogginghandler.jar
adfm.jar
adfmweb.jar
adf-pageflow-fwk.jar
adf-pageflow-impl.jar
adf-pageflow-rc.jar
adf-richclient-api-11.jar
adf-richclient-automation-11.jar
adf-richclient-impl-11.jar
adf-share-base.jar
adf-share-ca.jar
adfsharembean.jar
adf-share-security.jar
adf-share-support.jar
adf-share-web.jar
adftags.jar
adf-view-databinding-dt-core.jar
bc4jhtml.jar
bundleresolver.jar
cache.jar
commons-el.jar
datatags.jar
dms.jar
dvt-databinding-dt-core.jar
dvt-databindings.jar
dvt-faces.jar
dvt-facesbindings.jar
dvt-jclient.jar
dvt-trinidad.jar
dvt-utils.jar
facesconfigmodel.jar
glassfish.jsf_1.0.0.0_1-2-15.jar
glassfish.jstl_1.2.0.1.jar
inspect4.jar
javamodel-rt.jar
javatools-nodeps.jar
javax.jsf_1.1.0.0_1-2.jar
javax.management.j2ee_1.0.jar
jewt4.jar
jmxframework.jar
jmxspi.jar
jrf-api.jar
jsf-ri.jar
jsp-el-api.jar
mdsrt.jar
ojdbc6dms.jar
ojdl.jar
ojsp.jar
oracle.logging-utils_11.1.1.jar
oracle.web-common_11.1.1.jar
oracle-el.jar
oracle-page-templates.jar
org.apache.bcel_5.1.jar
resourcebundle.jar
share.jar
taglib.jar
trinidad-api.jar
trinidad-impl.jar
velocity-dep-1.4.jar
xmlef.jar
xmlparserv2.jar

You can find them in any middleware installation, usually in the MW_HOME/oracle_common/modules or MW_HOME/modules folders.
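Once the jars are copied, you can quickly spot anything that was missed; a sketch checking a few of the jars (the Tomcat path is hypothetical; extend the list to the full set above):

```shell
# hypothetical Tomcat location; checks only a few of the required jars --
# extend the list to cover the full set above
TOMCAT_LIB="${CATALINA_HOME:-/opt/tomcat}/lib"
for jar in adf-richclient-api-11.jar adf-richclient-impl-11.jar \
           trinidad-api.jar trinidad-impl.jar; do
  if [ -f "$TOMCAT_LIB/$jar" ]; then
    echo "present: $jar"
  else
    echo "missing: $jar"
  fi
done
```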

After copying the jars, start Tomcat and enjoy..

Friday, April 29, 2011

Oracle MDS Demo Application

I recently published a post in which I explain the basics of working with the Oracle MDS repository. In this post I will explain how to create an XML document inside the MDS repository and then read it back. You can download the Eclipse project here.

For this, we will use the Database repository type, as it is the most commonly used in a production system. To connect to a MDS database repository, first we need to create the database repository schema. To do this, we will use the Repository Creation Utility from Oracle (RCU), available for download here. Any version starting with 11.1.1.2.0 should work with this sample. After downloading the RCU, create a new MDS schema as shown below:


We only need to select the Metadata Services option and just go Next to set the password and then Finish to create the schema.

Now that we have a database schema, we need to prepare the MDS configuration file. This file is commonly named adf-config.xml; it is the default ADF configuration file, where MDS repository information is also stored. The MDS framework knows how to parse this file and extract the required configuration. Look at the <mds:metadata-store> element in the provided sample to change the database connection parameters (jdbc-userid, jdbc-password, jdbc-url and partition-name). The database connection will be made to the Metadata Services schema created with the RCU above.

The attached demo application connects to the database repository, creates a new XML document with two nodes (item1 and item2), adds some text data to the nodes, stores the XML document inside the MDS repository as /upload/sampledoc.xml, and then reads it back and outputs it to the System.out stream.

The project is an Eclipse project and you can open it straight away. I didn't create a JDeveloper project because I wanted to emphasize the fact that MDS can be used in any application, not just Oracle or ADF applications.

To compile the project, you need to add the required JAR files from an Oracle Middleware installation to the project's lib folder. If you don't want to install a separate middleware home, you can install JDeveloper, which also includes one. There is also a readme.txt file inside the project lib folder that describes where the JAR files are located. You need the following libraries to compile and run the project:

  • adflogginghandler.jar
  • adf-share-base.jar
  • adf-share-support.jar
  • cache.jar
  • dms.jar
  • mdsrt.jar
  • ojdbc6.jar
  • ojdl.jar
  • oracle.ucp_11.1.0.jar
  • share.jar
  • xmlef.jar
  • xmlparserv2.jar


Except for ojdbc6.jar, you can find them in your Oracle Middleware installation, usually in the <MIDDLEWARE_HOME>/oracle_common/modules folder. You can get ojdbc6.jar from the <MIDDLEWARE_HOME>/wlserver_10.3/server/lib folder.


After successfully compiling the project and changing the database connection parameters inside the adf-config.xml file, run the com.oraclemw.mds.test.MDSDemo class.

Saturday, April 16, 2011

Integrating Oracle UCM 11g RIDC with WebCenter/ADF 11g

I've seen some blog posts about using the Content Server's RIDC API in ADF, but none covers all aspects of using the full power of this API to work with UCM programmatically.

In order to get things started you need to install the WebCenter extensions for JDeveloper, create a WebCenter Portal project and configure a Content Server connection to point to a running instance of UCM 11g. More details on how to do this are given by Andrejus Baranovskis on his blog here.

Once you have these in place, you can start programming RIDC. I will describe below the most useful operations that can be performed using the API.

Create a RIDC session
String connectionName = DocLibADFConfigUtils.getPrimaryConnectionName();
Repository repository = ADFConnectionsManager.lookupRepository(connectionName);
Credentials creds = new SimpleCredentials("sysadmin", "".toCharArray());
SessionPool sessionPool = new SessionPool(repository, creds);
Session session = sessionPool.getSession();
IdcClient idcClient = (IdcClient) session.getAttribute(oracle.stellent.ridc.IdcClient.class.getName());
IdcContext idcCtx = (IdcContext) session.getAttribute(oracle.stellent.ridc.IdcContext.class.getName());

The IdcClient and IdcContext classes are the base classes for the RIDC API. All further operations will be performed using these two classes.


Retrieve a folder
You can retrieve a DataBinder object for a folder using the RIDC API, as shown below.
The input variable folderPath must contain the full path to the folder.
DataBinder binder = idcClient.createBinder();
binder.putLocal("IdcService", "COLLECTION_INFO");
binder.putLocal("hasCollectionPath", "true");
binder.putLocal("dCollectionPath", folderPath);

DataBinder folder = idcClient.sendRequest(idcCtx, binder).getResponseAsBinder();


Create a folder
You can create a folder using the RIDC API, as shown below.
The input variables are self-explanatory: folderName, securityGroup, parentFolderID, user and account.
DataBinder dbinder = idcClient.createBinder();
dbinder.putLocal("IdcService", "COLLECTION_ADD");
dbinder.putLocal("dCollectionName", folderName);
dbinder.putLocal("dCollectionOwner", idcCtx.getUser());
dbinder.putLocal("dSecurityGroup", securityGroup);
dbinder.putLocal("hasParentCollectionID", "true");
dbinder.putLocal("dParentCollectionID", parentFolderID);
dbinder.putLocal("ignoreMaxFolderLimit", "true");

dbinder.putLocal("dCollectionCreator", user);
dbinder.putLocal("dCollectionModifier", user);
dbinder.putLocal("dDocAccount", account);

idcClient.sendRequest(idcCtx, dbinder).getResponseAsBinder();

Delete a folder

You can delete a folder using the RIDC API, as shown below.
The input variable is the DataBinder object of the folder: folderToDelete
DataBinder binder = idcClient.createBinder();
binder.putLocal("IdcService", "COLLECTION_DELETE");
binder.putLocal("hasCollectionGUID", "true");
binder.putLocal("dCollectionGUID", folderToDelete.getLocal("dCollectionGUID"));
binder.putLocal("deleteImmediate", "true");
binder.putLocal("force", "true");
idcClient.sendRequest(idcCtx, binder).getResponseAsBinder();

Uploading a file
The RIDC API doesn't provide functions to upload files. To achieve this, you need to use the JCR API.
The input variables are the file name, the InputStream of the file you want to upload and the destination folder path: fileName, inputStream and folderPath
Node parentNode = session.getRootNode(folderPath);
boolean overWrite = true;
Node fileNode = null;
Node contentNode = null;

if (overWrite) {
   fileNode = parentNode.getNode(fileName);
   contentNode = fileNode.getNode(Names.JCR_CONTENT.toString(session));
}
else {
   fileNode = parentNode.addNode(fileName, Names.NT_FILE.toString(session));
   contentNode = fileNode.addNode(Names.JCR_CONTENT.toString(session));
}
contentNode.setProperty(Names.JCR_DATA.toString(session), inputStream);
parentNode.save();

Getting file contents
This operation must also be performed through the JCR API. Using the example above, get a Node for the file and query its Names.JCR_DATA property to get the file content.

Setting permissions for a user or group
You can assign accounts and roles to a user, with corresponding permissions for each. Input variables are: principalName (name of the user or group), appName (the application name set in the dApplication field), accountName (name of the account to add), accountPerms (permissions for the account, as an integer), roleName (name of the role to add), rolePerms (permissions for the role, as an integer).
List fields = new ArrayList(3);
fields.add(new DataResultSet.Field("dUserName"));
fields.add(new DataResultSet.Field("dApplication"));
fields.add(new DataResultSet.Field("AttributeInfo"));
DataResultSet resultSet = new DataResultSetImpl();
resultSet.setFields(fields);



List accountRow = new ArrayList(3);
accountRow.add(principalName);
accountRow.add(appName);
accountRow.add("account," + accountName + "," + accountPerms);
resultSet.addRow(accountRow);

List roleRow = new ArrayList(3);
roleRow.add(principalName);
roleRow.add(appName);
roleRow.add("role," + roleName + "," + rolePerms);
resultSet.addRow(roleRow);

 
DataBinder binder = idcClient.createBinder();
if (principalIsUser) // the principal is a user
  binder.putLocal("IdcService", "ADD_EXTENDED_USER_ATTRIBUTES");
else // the principal is a group
  binder.putLocal("IdcService", "SET_EXTENDED_ATTRIBUTE_MAPPINGS");


binder.putLocal("dName", principalName);
binder.addResultSet("ExtUserAttribInfo", resultSet);

idcClient.sendRequest(idcCtx, binder);

With the examples above, you can harness the full power of UCM services. Just find the service you want to use (check the Oracle Fusion Middleware Services Reference Guide for Oracle Universal Content Management) and then invoke it as shown above, with the appropriate parameters described in the guide.

Enjoy!

Clustering WebLogic 10g

Installing the WebLogic 10g server in a clustered environment is a pretty straightforward process. There are a few gotchas, though, that can give you a big headache, the most notable one being the configuration of the NodeManager.
When you create a new domain, the NodeManager is configured to communicate with the AdminServer via SSL, and it will not work in the default configuration unless you generate the necessary certificates and provide the correct configuration parameters. However, there is a workaround that makes the NodeManager work without an SSL configuration, so you can get things up and running fast.

I will describe a basic 2-node clustered installation of a WebLogic domain. Of course, this can be extended to any number of nodes.

The basic rule of WebLogic domain clustering is to have all middleware software installed in the same locations on all nodes. For example, if you installed WebLogic, SOA and WebCenter into /u01/app/oracle/product/middleware/Oracle_SOA1 and /u01/app/oracle/product/middleware/Oracle_WC1, then you need to install these components on all nodes, in the same paths.

Another concern in a clustered configuration is WebLogic's Security Store. By default, the Security Store configuration is kept in an XML file in the domain configuration folder: system-jazn-data.xml. In a clustered configuration, each node has its own copy of this file, which must be kept in sync. The recommended way is either to use a shared configuration folder for all nodes (e.g. shared storage) or to configure WebLogic to use OID as the Security Store.

We will assume from here on that a default configuration is used and that the FMW software has been properly installed on all nodes. Start by running the WebLogic Domain Configuration Wizard from the admin node and create a new domain, selecting the appropriate FMW components:


After this, give your domain a name (e.g. clustered_domain), enter the password for the weblogic user, configure the datasources, and choose to configure Managed Servers, Clusters and Machines:


Next, you must manually create the servers for each node. By default, the configuration wizard creates the default servers (e.g. soa_server1, UCM_server1, WC_Spaces), but you have to change the names as in the following picture:


You can see that the server names are the same as in the default configuration; the only change is the number added at the end of each name: all servers ending in 01 will go to the first node, and all servers ending in 02 will go to the second node. It is important to keep the base names of the managed servers unchanged, and the ports must be the same on all nodes for the same managed servers.
Next, you have to create a cluster for each managed server:


In the following screen, you must assign the corresponding managed servers to each cluster:


Now the clusters are defined. The only thing left to do is to define the physical machines (for NodeManager configuration), and assign the managed servers to those machines:


Please note that I've created 3 machines: node1 and node2 will host the actual clusters, whereas node0 will only host the Administration Server. This is an optional but recommended configuration. If you don't want to dedicate a machine to the Administration Server alone, keep only node1 and node2, and assign the admin server to the node from which you started the configuration wizard.


Now you are ready to create the domain. Proceed to do so, and then start the Administration Server to perform the necessary domain configuration (e.g. data sources, users, groups, boot.properties for all managed servers etc.).
After the configuration is complete, edit the nodemanager.properties file and set the following properties:
SecureListener=false
StartScriptEnabled=true
StopScriptEnabled=true
This will instruct the NodeManager to disable SSL communication and to use the startup/shutdown scripts to manage the managed servers.

Now we must also tell the AdminServer how to connect to the NodeManager. To do this, log in to the WebLogic console, go to Servers -> AdminServer -> SSL tab -> Advanced and set the Hostname Verification property to None. This disables hostname checking for the NodeManager.
Next, go to Machines and for each machine, go to the Node Manager tab and set the Type to Plain. Also check the listen address for each machine to have the correct IP.

Save the configuration, stop the AdminServer and the NodeManager, and we can move on to packing the domain. Go to the wlserver_10.3/common/bin folder and execute the following command:

./pack.sh -managed=true -domain=/<absolute path to domain> -template=mydomain.jar -template_name=my_domain_template
This will create the mydomain.jar file that contains the domain template. Copy this file to all nodes, into the same folder (wlserver_10.3/common/bin), and execute the following command on each of the nodes:

./unpack.sh -domain=/<absolute path to domain> -template=mydomain.jar
This will create the domain on all nodes.
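The copy-and-unpack step for each node can be scripted; a sketch that only prints the commands for review, rather than executing them (the hostnames and paths are hypothetical):

```shell
# sketch (hypothetical hostnames and paths): print the per-node
# distribution commands for review instead of executing them
TEMPLATE=mydomain.jar
WL_BIN=/u01/app/oracle/product/middleware/wlserver_10.3/common/bin
DOMAIN=/u01/app/oracle/domains/clustered_domain
for node in node1 node2; do
  echo "scp $TEMPLATE oracle@$node:$WL_BIN/"
  echo "ssh oracle@$node \"cd $WL_BIN && ./unpack.sh -domain=$DOMAIN -template=$WL_BIN/$TEMPLATE\""
done
```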

Now, edit the nodemanager.properties file on all nodes and set the three properties to the same values as above, and restart all Node Managers.

Now you can start the AdminServer and once it's started, start all Managed Servers from the WebLogic console or Enterprise Manager.

You now have a clustered WebLogic installation.

Enjoy !


Working with Oracle MDS Repository (MetaData Services)

With the release of FMW 11g, the Oracle MDS Repository is now the central place for storing configuration files, personalization elements, page customizations, pages etc. for WebCenter, IDM and SOA Suite. Understanding and working with this repository is essential for custom application developers and implementers.

Oracle provides some basic tools for managing the repository, such as import/export and a JDeveloper browser plugin. Unfortunately, there is currently no tool for managing the files within the repository, so there really is no way to change the files at run time without exporting and then re-importing the repository.

I will explain in this blog post how you can connect to a MDS repository and how you can work with the files stored there.

MDS is basically an XML store with transaction capabilities, versioning, merging and a persistence framework optimized to work with XML nodes and attributes. It is somewhat similar to an ORM, but for XML entities. The persistence framework has adapters that can persist the store to a database or to a folder on disk. The database store has more advanced features and is the recommended way of working with MDS. The file store is useful at development time, because one can change the files manually.

Just as an ORM needs a configuration file that describes the environment (e.g. hibernate.cfg.xml), MDS uses its own config file, named adf-config.xml. You can find this file, automatically generated by JDeveloper with a default configuration, in any ADF FMW project.

The file has several configuration sections, but one is of particular interest, and it's called
adf-mds-config:

<adf-mds-config>
  <mds-config>
    <type-config>
      <type-definitions>
        <classpath>/com/tutorial/model/schema/beanType.xsd</classpath>
        ......
      </type-definitions>
    </type-config>
  </mds-config>
  <persistence-config>
    <metadata-namespaces>
      <namespace metadata-store-usage="MetadataStore" path="/custom/" />
      ......
    </metadata-namespaces>
    <metadata-store-usages>
      <metadata-store-usage default-cust-store="true" deploy-target="true" id="MetadataStore">
         <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
           <property name="jdbc-userid" value="DEV_MDS"/>
           <property name="jdbc-password" value="dev"/>
           <property name="jdbc-url" value="jdbc:oracle:thin:@localhost:1521:orcl"/>
           <property name="partition-name" value="p1"/>
           <property name="repository-name" value="mds-SpacesDS"/>
         </metadata-store>
      </metadata-store-usage>
    </metadata-store-usages>
  </persistence-config>
</adf-mds-config>

In the above configuration, the most interesting elements are: type-definitions, namespace and metadata-store:
  •  type-definitions: here you can define custom XSD nodes that describe custom entities. The easiest way to generate XSD definitions is with the help of JAXB: you can define your data model as serializable POJOs (same as JPA entities) and then use Oracle's MDSBeanGenTool Java utility (located in the mdstools.jar file) to generate the schema definition.
  •  namespace: here you must define the virtual paths within the MDS repository, so the persistence framework will know where to store and find the files
  •  metadata-store: this node configures the persistence adapter. It can be either DBMetadataStore, ClassPathMetadataStore, FileMetadataStore or ServletContextMetadataStore.
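For local development, the file-based FileMetadataStore adapter can be swapped into the same <metadata-store> element shown above; a sketch, where the metadata-path value is a hypothetical local folder:

<metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
  <property name="metadata-path" value="/home/dev/mds-root"/>
  <property name="partition-name" value="p1"/>
</metadata-store>

With this variant, the documents stored in MDS land as plain XML files under the metadata-path folder, so you can inspect and edit them directly.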
Once you have these in place, you are ready to connect to the metadata store:
MDSConfig config = new MDSConfig(new File(path to adf-config.xml));
MDSInstance mdsInstance = MDSInstance.getOrCreateInstance("test-instance", config);
MDSSession mdsSession = mdsInstance.createSession(new SessionOptions(null, null, new CustConfig(new CustClassListMapping[0])), null);
The mdsSession variable holds the session instance for the MDS repository. A MDSSession is similar to the Hibernate Session object. It will handle all persistence operations and holds a transactional context.
You can now query the repository to find the desired items. For example, to find all files in repository you would write:
NameQueryImpl query = new NameQueryImpl(mdsSession, ConditionFactory.createNameCondition("/", "%", true));
Iterator<QueryResult> result = query.execute();
while(result.hasNext()) {

    QueryResult qr = result.next(); 
    // ...do your magic here...
}
Using the above query, you would only retrieve the names of the items inside the repository. To get the actual instance of an MDS object ( MetadataObject ), you have to write:
MOReference ref = mdsSession.getMOReference(path inside the repository);
MetadataObject mo = mdsSession.getMetadataObject(ref);
Once you have the MetadataObject instance, you can retrieve the underlying XML document with mo.getDocument(). With the DOM XML Document instance in hand, you can alter the XML and save it back to the repository by calling mdsSession.flushChanges().

Using the information briefly described here, and with a little bit of Swing knowledge, you can easily write an MDS browser and XML editor to change whatever you want inside the repository.