Friday, April 29, 2011

Oracle MDS Demo Application

I recently published a post in which I explain the basics of working with the Oracle MDS repository. In this post I will explain how to create an XML document inside the MDS repository and then read it back. You can download the Eclipse project here.

For this, we will use the database repository type, as it is the most commonly used in production systems. To connect to an MDS database repository, we first need to create the database repository schema. To do this, we will use the Oracle Repository Creation Utility (RCU), available for download here. Any version starting with 11.1.1.2.0 should work with this sample. After downloading the RCU, create a new MDS schema as shown below:


We only need to select the Metadata Services option, then click Next to set the password and Finish to create the schema.

Now that we have a database schema, we need to prepare the MDS configuration file. This file is commonly named adf-config.xml and is usually the default ADF configuration file in which the MDS repository information is stored. The MDS framework knows how to parse this file and extract the required configuration. In the provided sample, look at the <mds:metadata-store> element to change the database connection parameters (jdbc-userid, jdbc-password, jdbc-url and partition-name). The database connection will be made to the Metadata Services schema created with the RCU above.
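For reference, the store definition inside adf-config.xml looks roughly like the snippet below (it mirrors the configuration discussed in my earlier MDS post; all values are placeholders and must match the schema you created with the RCU):

<metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
  <property name="jdbc-userid" value="DEV_MDS"/>
  <property name="jdbc-password" value="dev"/>
  <property name="jdbc-url" value="jdbc:oracle:thin:@localhost:1521:orcl"/>
  <property name="partition-name" value="p1"/>
</metadata-store>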

The attached demo application connects to the database repository, creates a new XML document with two nodes (item1 and item2), adds some text data to the nodes, stores the XML document inside the MDS repository as /upload/sampledoc.xml and then reads it back and outputs it to the System.out stream.
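To give a feel for the read-back step, here is a minimal sketch (not the exact demo code) of opening an MDS session and printing the stored document to System.out. It assumes adf-config.xml is available on the local path and uses the MDS API calls described in my earlier post, plus a standard JAXP transformer for the output:

MDSConfig config = new MDSConfig(new File("adf-config.xml"));
MDSInstance instance = MDSInstance.getOrCreateInstance("demo-instance", config);
MDSSession mdsSession = instance.createSession(
        new SessionOptions(null, null, new CustConfig(new CustClassListMapping[0])), null);

// Resolve the document stored by the demo and fetch its DOM representation
MetadataObject mo = mdsSession.getMetadataObject(mdsSession.getMOReference("/upload/sampledoc.xml"));
Document doc = mo.getDocument();

// Print the document to System.out with a standard JAXP transformer
TransformerFactory.newInstance().newTransformer()
        .transform(new DOMSource(doc), new StreamResult(System.out));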

The project is an Eclipse project and you can open it straight away. I didn't create a JDeveloper project because I wanted to emphasize the fact that MDS can be used in any application, not just Oracle or ADF applications.

In order to compile the project, you need to add the required jar files from an Oracle Middleware installation to the project lib folder. If you don't want to install a separate middleware, you can install JDeveloper, which also includes a middleware home. There is also a readme.txt file inside the project lib folder that describes where the jar files are located. You need the following libraries to compile and run the project:

  • adflogginghandler.jar
  • adf-share-base.jar
  • adf-share-support.jar
  • cache.jar
  • dms.jar
  • mdsrt.jar
  • ojdbc6.jar
  • ojdl.jar
  • oracle.ucp_11.1.0.jar
  • share.jar
  • xmlef.jar
  • xmlparserv2.jar


Except for ojdbc6.jar, you can find them all in your Oracle Middleware installation, usually in the <MIDDLEWARE_HOME>/oracle_common/modules folder. You can get the ojdbc6.jar file from the <MIDDLEWARE_HOME>/wlserver_10.3/server/lib folder.


After successfully compiling the project and changing the database connection parameters inside the adf-config.xml file, run the com.oraclemw.mds.test.MDSDemo class.
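If you prefer running it outside Eclipse, a command line along these lines should work (a sketch; it assumes the compiled classes are in bin and the jars listed above are in lib):

java -cp bin:lib/* com.oraclemw.mds.test.MDSDemo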

Saturday, April 16, 2011

Integrating Oracle UCM 11g RIDC with WebCenter/ADF 11g

I've seen some blog posts about using the Content Server's RIDC API in ADF, but none of them cover the full power of this API for working with UCM programmatically.

In order to get things started you need to install the WebCenter extensions for JDeveloper, create a WebCenter Portal project and configure a Content Server connection to point to a running instance of UCM 11g. More details on how to do this are given by Andrejus Baranovskis on his blog here.

Once you have these in place, you can start programming RIDC. I will describe below the most useful operations that can be performed using the API.

Create a RIDC session
// Look up the primary Content Server connection configured for the application
String connectionName = DocLibADFConfigUtils.getPrimaryConnectionName();
Repository repository = ADFConnectionsManager.lookupRepository(connectionName);

// Open a pooled session on behalf of the sysadmin user
Credentials creds = new SimpleCredentials("sysadmin", "".toCharArray());
SessionPool sessionPool = new SessionPool(repository, creds);
Session session = sessionPool.getSession();

// The RIDC client and context are exposed as session attributes
IdcClient idcClient = (IdcClient) session.getAttribute(oracle.stellent.ridc.IdcClient.class.getName());
IdcContext idcCtx = (IdcContext) session.getAttribute(oracle.stellent.ridc.IdcContext.class.getName());

The IdcClient and IdcContext classes are the base classes for the RIDC API. All further operations will be performed using these two classes.


Retrieve a folder
You can retrieve a DataBinder object for a folder using the RIDC API, as shown below.
The input variable folderPath must contain the full path to the folder.
DataBinder binder = idcClient.createBinder();
binder.putLocal("IdcService", "COLLECTION_INFO");
binder.putLocal("hasCollectionPath", "true");
binder.putLocal("dCollectionPath", folderPath);

DataBinder folder = idcClient.sendRequest(idcCtx, binder).getResponseAsBinder();
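The returned binder carries the folder metadata. A small sketch (dCollectionID is the standard Folders collection ID field, read with getLocal just like in the delete example below) of pulling out the ID that the create example expects as parentFolderID:

// Read the collection ID of the retrieved folder from the response binder
String parentFolderID = folder.getLocal("dCollectionID");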


Create a folder
You can create a folder by using the RIDC API, as shown below.
The input variables are self-explanatory: folderName, securityGroup, parentFolderID (the dCollectionID of the parent folder), user and account.
DataBinder dbinder = idcClient.createBinder();
dbinder.putLocal("IdcService", "COLLECTION_ADD");
dbinder.putLocal("dCollectionName", folderName);
dbinder.putLocal("dCollectionOwner", idcCtx.getUser());
dbinder.putLocal("dSecurityGroup", securityGroup);
dbinder.putLocal("hasParentCollectionID", "true");
dbinder.putLocal("dParentCollectionID", parentFolderID);
dbinder.putLocal("ignoreMaxFolderLimit", "true");

dbinder.putLocal("dCollectionCreator", user);
dbinder.putLocal("dCollectionModifier", user);
dbinder.putLocal("dDocAccount", account);

// Execute the service to actually create the folder
idcClient.sendRequest(idcCtx, dbinder).getResponseAsBinder();

Delete a folder

You can delete a folder by using the RIDC API, as shown below.
The input variable is the DataBinder object of the folder to remove: folderToDelete.
DataBinder binder = idcClient.createBinder();
binder.putLocal("IdcService", "COLLECTION_DELETE");
binder.putLocal("hasCollectionGUID", "true");
binder.putLocal("dCollectionGUID", folderToDelete.getLocal("dCollectionGUID"));
binder.putLocal("deleteImmediate", "true");
binder.putLocal("force", "true");
idcClient.sendRequest(idcCtx, binder).getResponseAsBinder();

Uploading a file
The RIDC API doesn't provide functions to upload files. In order to achieve this, you need to use the JCR API.
The input variables are the file name, the InputStream of the file you want to upload, and the destination folder path: fileName, inputStream and folderPath.
// Resolve the destination folder node from the JCR session obtained earlier
Node parentNode = session.getRootNode(folderPath);
boolean overWrite = true;
Node fileNode = null;
Node contentNode = null;

if (overWrite) {
   // Reuse the existing file node and its jcr:content child
   fileNode = parentNode.getNode(fileName);
   contentNode = fileNode.getNode(Names.JCR_CONTENT.toString(session));
}
else {
   // Create a new nt:file node with a jcr:content child
   fileNode = parentNode.addNode(fileName, Names.NT_FILE.toString(session));
   contentNode = fileNode.addNode(Names.JCR_CONTENT.toString(session));
}

// Stream the file data into the jcr:data property and persist the changes
contentNode.setProperty(Names.JCR_DATA.toString(session), inputStream);
parentNode.save();

Getting file contents
This operation must also be performed through the JCR API. Using the above example, get a Node for the file and read its Names.JCR_DATA property to obtain the file content, as sketched below.
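A minimal sketch, assuming the same JCR session, parent folder node and Names helper as in the upload example:

// Navigate to the file node and its jcr:content child
Node fileNode = parentNode.getNode(fileName);
Node contentNode = fileNode.getNode(Names.JCR_CONTENT.toString(session));

// The jcr:data property exposes the binary content as an InputStream
InputStream data = contentNode.getProperty(Names.JCR_DATA.toString(session)).getStream();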

Setting permissions for a user or group
You can assign accounts and roles to a principal, together with the corresponding permissions for each. Input variables are: principalName (name of the user or group), appName (the application the attribute rows apply to), accountName (name of the account to add), accountPerms (permissions for the account, as an integer), roleName (name of the role to add), rolePerms (permissions for the role, as an integer) and principalIsUser (a boolean telling whether the principal is a user or a group).
// Build a result set with the three columns expected by the extended attribute services
List<DataResultSet.Field> fields = new ArrayList<DataResultSet.Field>(3);
fields.add(new DataResultSet.Field("dUserName"));
fields.add(new DataResultSet.Field("dApplication"));
fields.add(new DataResultSet.Field("AttributeInfo"));
DataResultSet resultSet = new DataResultSetImpl();
resultSet.setFields(fields);

// One row for the account assignment: "account,<account name>,<permission bits>"
List<String> accountRow = new ArrayList<String>(3);
accountRow.add(principalName);
accountRow.add(appName);
accountRow.add("account," + accountName + "," + accountPerms);
resultSet.addRow(accountRow);

// One row for the role assignment: "role,<role name>,<permission bits>"
List<String> roleRow = new ArrayList<String>(3);
roleRow.add(principalName);
roleRow.add(appName);
roleRow.add("role," + roleName + "," + rolePerms);
resultSet.addRow(roleRow);

DataBinder binder = idcClient.createBinder();
if (principalIsUser) {
  // users get extended attributes
  binder.putLocal("IdcService", "ADD_EXTENDED_USER_ATTRIBUTES");
} else {
  // groups get extended attribute mappings
  binder.putLocal("IdcService", "SET_EXTENDED_ATTRIBUTE_MAPPINGS");
}

binder.putLocal("dName", principalName);
binder.addResultSet("ExtUserAttribInfo", resultSet);

idcClient.sendRequest(idcCtx, binder);
With the examples above, you can harness the full power of UCM services. Just find the service you want to use (check the Oracle Fusion Middleware Services Reference Guide for Oracle Universal Content Management) and invoke it as shown in the examples above, with the parameters described in the guide.
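For example, retrieving the metadata of a content item follows exactly the same pattern (a sketch; DOC_INFO_BY_NAME is a standard Content Server service and MY_CONTENT_ID is a placeholder dDocName):

// Look up a content item's metadata by its content ID (dDocName)
DataBinder infoBinder = idcClient.createBinder();
infoBinder.putLocal("IdcService", "DOC_INFO_BY_NAME");
infoBinder.putLocal("dDocName", "MY_CONTENT_ID");
DataBinder docInfo = idcClient.sendRequest(idcCtx, infoBinder).getResponseAsBinder();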

Enjoy!

Clustering WebLogic 10g

Installing the WebLogic 10g server in a clustered environment is a pretty straightforward process. There are some tips & tricks, though, that can give you a big headache, the most notable one being the configuration of the NodeManager.
When creating a new domain, the NodeManager is configured to communicate with the AdminServer via SSL, and it will not work in the default configuration unless you generate the necessary certificates and provide the correct configuration parameters. However, there is a workaround that makes the NodeManager work without an SSL configuration and gets things up and running fast.

I will try to describe the basic 2-node clustered installation of a WebLogic domain. Of course this can be extended to any number of nodes.

The basic rule of WebLogic domain clustering is to have all middleware software installed in the same locations on all nodes. For example, if you installed WebLogic, SOA and WebCenter into /u01/app/oracle/product/middleware/Oracle_SOA1 and /u01/app/oracle/product/middleware/Oracle_WC1, then you need to install these components on all nodes, in the same paths.

Another concern when installing a clustered configuration is WebLogic's Security Store. By default, the Security Store configuration is stored in an XML file in the domain configuration folder: system-jazn-data.xml. In a clustered configuration, each node will have its own copy of the file, and keeping these copies synchronized becomes a concern. The recommended way is either to use a shared configuration folder for all nodes (e.g. on shared storage) or to configure WebLogic to use OID as the Security Store.

We will assume from here on that the default configuration is used and that the FMW software has been properly installed on all nodes. Start by running the WebLogic Domain Configuration Wizard on the admin node and create a new domain, selecting the appropriate FMW components:


After this, give your domain a name (e.g. clustered_domain), enter the password for the weblogic user, configure the datasources, and choose to configure Managed Servers, Clusters and Machines:


Next, you must manually create the servers for each node. By default, the configuration wizard creates the default servers (e.g. soa_server1, UCM_server1, WC_Spaces), but you have to change the names as in the following picture:


You can see that the names of the servers are the same as in the default configuration; the only thing changed is the number added at the end of the name: all servers ending in 01 will go to the first node, and all servers ending in 02 will go to the second node. It's important to keep the base names of the managed servers unchanged, and the ports must be the same on all nodes for the same managed servers.
Next, you have to create a cluster for each managed server:


In the following screen, you must assign the corresponding managed servers to each cluster:


Now the clusters are defined. The only thing left to do is to define the physical machines (for NodeManager configuration), and assign the managed servers to those machines:


Please note that I've created 3 machines: node1 and node2 will host the actual clusters, whereas node0 will only host the Administration Server. This is an optional but recommended configuration. If you don't want to dedicate a machine only to the Administration Server, leave only node1 and node2, and assign the admin server to the node from which you started the configuration wizard.


Now you are ready to create the domain. Proceed to do so, and then start the Administration Server to perform the necessary domain configuration (e.g. data sources, users, groups, boot.properties for all managed servers, etc.).
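For reference, boot.properties is just a two-line properties file placed under <DOMAIN_HOME>/servers/<server_name>/security on each node (the values below are placeholders for the weblogic credentials; WebLogic encrypts them on first start):

username=weblogic
password=<your weblogic password>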
After the configuration is complete, edit the nodemanager.properties file (by default located under wlserver_10.3/common/nodemanager) and set the following properties:
SecureListener=false
StartScriptEnabled=true
StopScriptEnabled=true
This will instruct the NodeManager to disable SSL communication and to use the startup/shutdown scripts to manage the managed servers.

Now, we must also tell the AdminServer how to connect to the NodeManager. In order to do this, log in to the WebLogic console, go to Servers->AdminServer->SSL Tab->Advanced and set the property Hostname Verification to None. This will disable hostname checking for the NodeManager.
Next, go to Machines and, for each machine, go to the Node Manager tab and set the Type to Plain. Also check that the listen address for each machine has the correct IP.

Save the configuration, stop the AdminServer and the NodeManager, and we can move forward to pack the domain. Go to the wlserver_10.3/common/bin folder, then execute the following command:

./pack.sh -managed=true -domain=/<absolute path to domain> -template=mydomain.jar -template_name=my_domain_template
This will create the mydomain.jar file that contains the domain template. Copy this file to all nodes, into the same folder (wlserver_10.3/common/bin), and execute the following command on each of the nodes:

./unpack.sh -domain=/<absolute path to domain> -template=mydomain.jar
This will create the domain on all nodes.

Now, edit the nodemanager.properties file on all nodes and set the three properties to the same values as above, and restart all Node Managers.
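If you start the Node Managers manually rather than as a service, the startup script lives under the WebLogic server home (the path below assumes the default 10.3 layout):

cd <MIDDLEWARE_HOME>/wlserver_10.3/server/bin
./startNodeManager.sh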

Now you can start the AdminServer and once it's started, start all Managed Servers from the WebLogic console or Enterprise Manager.

You now have a clustered WebLogic installation.

Enjoy!


Working with Oracle MDS Repository (MetaData Services)

With the release of FMW 11g, the Oracle MDS Repository is now the central place for storing configuration files, personalization elements, page customizations, pages etc. for WebCenter, IDM and SOA Suite. Understanding and working with this repository is essential for custom application developers and implementers.

Oracle provides some basic tools for managing the repository, such as import/export and a JDeveloper browser plugin. Unfortunately, there is currently no tool for managing the files within the repository, so there is no way to change the files at run time without exporting and then re-importing the repository.

I will explain in this blog post how you can connect to an MDS repository and how you can work with the files stored there.

The MDS is basically an XML store with transaction capabilities, versioning, merging and a persistence framework optimized to work with XML nodes and attributes. It is somewhat similar to an ORM, but for XML entities. The persistence framework has two adapters that can persist the store to a database or to a folder on disk. The database store has more advanced features and is the recommended way of working with MDS. The file store is useful at development time, because one can change the files manually.

Just as an ORM needs a configuration file to describe its environment (e.g. hibernate.cfg.xml), MDS uses its own config file, named adf-config.xml. JDeveloper automatically generates this file with a default configuration in any ADF FMW project.

The file has several configuration sections, but one is of particular interest, and it's called adf-mds-config:

<adf-mds-config>
  <mds-config>
    <type-config>
      <type-definitions>
        <classpath>/com/tutorial/model/schema/beanType.xsd</classpath>
        ......
      </type-definitions>
    </type-config>
  </mds-config>
  <persistence-config>
    <metadata-namespaces>
      <namespace metadata-store-usage="MetadataStore" path="/custom/" />
      ......
    </metadata-namespaces>
    <metadata-store-usages>
      <metadata-store-usage default-cust-store="true" deploy-target="true" id="MetadataStore">
         <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
           <property name="jdbc-userid" value="DEV_MDS"/>
           <property name="jdbc-password" value="dev"/>
           <property name="jdbc-url" value="jdbc:oracle:thin:@localhost:1521:orcl"/>
           <property name="partition-name" value="p1"/>
           <property name="repository-name" value="mds-SpacesDS"/>
         </metadata-store>
      </metadata-store-usage>
    </metadata-store-usages>
  </persistence-config>
</adf-mds-config>

In the above configuration, the most interesting elements are: type-definitions, namespace and metadata-store:
  •  type-definitions: here you can define custom XSD nodes that describe custom entities. The easiest way to generate XSD definitions is with the help of JAXB: you can define your data model as serializable POJOs (same as JPA entities) and then use Oracle's MDSBeanGenTool (located in the mdstools.jar file) Java utility to generate the schema definition.
  •  namespace: here you must define the virtual paths within the MDS repository, so the persistence framework will know where to store and find the files.
  •  metadata-store: this node configures the persistence adapter. It can be either DBMetadataStore, ClassPathMetadataStore, FileMetadataStore or ServletContextMetadataStore (a file-based example is sketched right after this list).
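For instance, a file-based store for development might look roughly like this (a sketch; the metadata-path value is a placeholder, and metadata-path/partition-name are, to my knowledge, the properties FileMetadataStore expects):

<metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
  <property name="metadata-path" value="/home/oracle/mds-store"/>
  <property name="partition-name" value="p1"/>
</metadata-store>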
Once you have these in place, you are ready to connect to the metadata store:
// Bootstrap MDS from adf-config.xml and open a session
MDSConfig config = new MDSConfig(new File("<path to adf-config.xml>"));
MDSInstance mdsInstance = MDSInstance.getOrCreateInstance("test-instance", config);
MDSSession mdsSession = mdsInstance.createSession(
        new SessionOptions(null, null, new CustConfig(new CustClassListMapping[0])), null);
The mdsSession variable holds the session instance for the MDS repository. An MDSSession is similar to the Hibernate Session object: it handles all persistence operations and holds a transactional context.
You can now query the repository to find the desired items. For example, to find all files in the repository you would write:
// Query for the names of all items under the root namespace ("%" acts as a wildcard)
NameQueryImpl query = new NameQueryImpl(mdsSession,
        ConditionFactory.createNameCondition("/", "%", true));
Iterator<QueryResult> result = query.execute();
while (result.hasNext()) {
    QueryResult qr = result.next();
    // ...do your magic here...
}
Using the above query, you would only retrieve the names of the items inside the repository. To get the actual instance of an MDS object (MetadataObject), you have to write:
MOReference ref = mdsSession.getMOReference("<path inside the repository>");
MetadataObject mo = mdsSession.getMetadataObject(ref);
Once you have the MetadataObject instance, you can retrieve the underlying XML document with mo.getDocument(). With the DOM Document instance in hand, you can alter the XML and save it back to the repository by calling mdsSession.flushChanges(), as sketched below.
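A minimal sketch, assuming the session and MetadataObject obtained above and that the document's root element accepts a hypothetical lastUpdated attribute:

// Modify the DOM document and persist the change back to the repository
Document doc = mo.getDocument();
doc.getDocumentElement().setAttribute("lastUpdated", new java.util.Date().toString());
mdsSession.flushChanges();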

Using the information briefly described here, and with a little bit of Swing knowledge, you can easily write an MDS browser and XML editor to change what you want inside the repository.