Monitoring OBIEE Performance for the End User with JMeter from EM12c

November 8th, 2013 by

This is the third article in my two-article set of posts (h/t) on extending the monitoring of OBIEE within EM12c. It comes after a brief interlude discussing Metric Extensions as an alternative to using Service Tests to look at Usage Tracking data.

Moving on from the rich source of monitoring data that is Usage Tracking, we will now cast our attention to a favourite tool of mine: JMeter. I’ve written in detail before about this tool when I showed how to use it to build performance tests for OBIEE. Now I’m going to illustrate how easy it can be to take existing OBIEE JMeter scripts and incorporate them into EM12c.

Whilst JMeter can be used to build big load tests, it can also be used as a single user. Whichever way you use it, the basis remains the same. It fires a bunch of web requests (HTTP POSTs and GETs) at the target server and looks at the responses. It can measure the response time alone, or it can check that the data returned matches what’s expected (and doesn’t match what it shouldn’t, such as error messages).

In the context of monitoring OBIEE we can create JMeter scripts which perform simple actions such as:

  • Login to OBIEE, check for errors
  • Run a dashboard, check for errors
  • Logout

If we choose an execution frequency (“Collection Schedule” in EM12c parlance) that is not too intensive (otherwise we risk impacting the performance/availability of OBIEE!) we can easily use the execution profile of this script as indicative of both the kind of performance that the end user is going to see, as well as a pass/fail of whether user logins and dashboard refreshes in OBIEE are working.

EM12c offers the ability to run “Custom Scripts” as data collection methods in Service Tests (which I explained in my previous post), and JMeter can be invoked “headless” (that is, without a GUI), so it lends itself well to this. In addition, we are going to look at EM12c’s Beacon functionality, which enables us to test our JMeter users from multiple locations. In an OBIEE deployment in which users may be geographically separated from the servers themselves, this is particularly useful to check that the response times seen from one site are consistent with those from another.

Note that what we’re building here is an alternative to the Web Transaction Test Type that Adam Seed wrote about here, but with pretty much the same net effect – a Service Test that enables you to say whether OBIEE is up or down from an end user point of view, and what the response time is. The difference between what Adam wrote about and what I describe here is the way in which the user is simulated:

  • Web Transaction (or the similar ATS Transaction) Test Types are built in to EM12c and as such can be seen as the native, supported option. However, you need to record and refine the transaction that is used, which has its own overhead.
  • If you already have JMeter skills at your site, and quite possibly existing JMeter OBIEE scripts, it is very easy to make use of them within EM12c to achieve the same as the aforementioned Web Transaction but utilising a single user replay technology (i.e. JMeter rather than EM12c’s Web Transaction).

So, if you are looking for a vanilla EM12c implementation, Web/ATS transactions are probably more suitable. However, if you already use JMeter then it’s certainly worth considering making use of it within EM12c too.

The JMeter test script

You can find details of building OBIEE JMeter scripts here and even a sample one to download here. In the example I am building here the script consists of three simple steps:

  • Login
  • Go to dashboard
  • Logout

A simple OBIEE JMeter script. Note that the result Samplers are disabled

The important bit to check is the Thread Group – it needs to run a single user, just once. If you leave in settings from an actual load test and start running hundreds of users in this script, called from EM12c on a regular basis, then the effect on your OBIEE performance will be interesting, to say the least.

Test the script and make sure you see a single user running and successfully returning a dashboard.

Running JMeter from the command line

Before we get anywhere near EM12c, let us check that the JMeter script runs successfully from the command line. This also gives us the opportunity to refine the command-line syntax without confounding any issues with its use in EM12c.

The basic syntax for calling JMeter is:

./jmeter --nongui -t /home/oracle/obi_jmeter.jmx

The --nongui flag tells JMeter not to run the GUI (i.e. to run headless), and -t passes the absolute path to the JMX JMeter script. JMeter runs under Java, so you may also need to set the PATH environment variable so that the correct JVM is used.

To run this from EM12c we need a short wrapper script that calls JMeter, and also sets a return code depending on whether an error was encountered when the user script was run (for example, an assertion failed because the login page or dashboard did not load correctly). A simple way to do this is to configure the View Results in Table listener to write to file only if an error is encountered, and then parse this file post-execution to check for any error entries.

We can then do a simple grep against the file and check for errors. In this script I’m setting the PATH, and using a temporary file /tmp/jmeter.err (the file to which the listener writes) to capture and check for any errors. I also send any JMeter console output to /dev/null.

# Ensure the correct JVM is on the PATH for JMeter
export PATH=/u01/OracleHomes/Middleware/jdk16/jdk/bin:$PATH
# Clear down any error file left over from a previous run
rm -f /tmp/jmeter.err
# Run the JMeter script headless, discarding console output;
# the results listener in the JMX writes any failures to /tmp/jmeter.err
/home/oracle/apache-jmeter-2.10/bin/jmeter --nongui -t /home/oracle/obi_jmeter.jmx 1>/dev/null 2>&1
# If an assertion failed, the error file will contain <failure>true; set the exit code accordingly
grep --silent "<failure>true" /tmp/jmeter.err
if [ $? -eq 0 ]; then
        exit 1
else
        exit 0
fi

Note that I am using absolute paths throughout, so that there is no ambiguity or dependency on the folder from which this is executed.

Test the above script that you’ll be running from EM12c, and check the return code that is set:

$ ./run_jmeter.sh ; echo $?

The return code should be 0 if everything’s worked (check in Usage Tracking for a corresponding entry) and 1 if there was a failure (check nqquery.log to confirm the failure).

Making the script available to run on EM12c server

To start with we’ll be looking at getting EM12c to run this script locally. Afterwards we’ll see how it can be run on multiple servers, possibly geographically separated.

So that the script can be run on EM12c, copy across your run_jmeter.sh script, JMeter user test script, and the JMeter binary folder. Check that the script still runs after copying it across.

Building the JMeter EM12c Service Test

So now we’ve got a JMeter test script, and a little shell script harness with which to call it. We hook it into EM12c using a Service Test.

From Targets -> Services, create a new Generic Service (or, if you already have a Service in which it makes sense to include this, use that).

Give the service a name and associate it with the appropriate System

Set the Service’s availability as being based on a Service Test. On the Service Test screen set the Test Type to Custom Script. Give the Service Test a sensible name, and enter the full path to the script that you built above. At the moment, we’re assuming it’s all local to the EM12c server. Put in the OS credentials too, and click Next.

On the Beacons page, click Add and select the default EM Management Beacon. Click Next and you should be on the Performance Metrics screen. The default metric of Total Time is what we want here. The other metric we are interested in is availability, and this is defined by the Status metric which gets its value from the return code that is set by our script (anything other than zero is a failure).

Click Next through the Usage Metrics screen and then Finish on the Review screen

From the Services home page, you should see your service listed. Click on its name and then Monitoring Configuration -> Service Tests and Beacons. Locate your Service Test and click on Verify Service Test.

Click on Perform Test and if all has gone well you should see the Status as a green arrow and a Total Time recorded. As data is recorded it will be shown on the front page of the service you have defined:

One thing to bear in mind with this test that we’ve built is that we’re measuring the total time that it takes to invoke JMeter, run the user login, run the dashboard and logout – so this is not going to be directly comparable with what a user may see in timing the execution of a dashboard alone. However, as a relative measure for performance against itself, it is still useful.

Measuring response times from additional locations

One of the very cool things that EM12c can do is run tests such as the one we’ve defined, but from multiple locations. It’s one thing to check the response time of OBIEE from a location local to the EM12c server in London, but how realistically will this reflect what users based in the New York office see? We do this through the concept of Beacons, which are bound to existing EM12c Agents and can be set as execution points for Service Tests.

To create a Beacon, go to the Services page and click on Services Features and then Beacons:

You will see the default EM Management Beacon listed. Click on Create…, and give the Beacon a name (e.g. New York) and select the Agent with which it is associated. Hopefully it is self-evident that a Beacon called New York needs to be associated with an Agent that is physically located in New York and not Norwich…

After clicking Create you should get a confirmation message and then see your new Beacon listed:

Before we can configure the Service Test to use the Beacon we need to make sure that the JMeter test rig that we put in place on the EM12c server above is available on the server on which the new Beacon’s agent runs, with the same paths. As before, run it locally on the server of the new Beacon first to make sure the script is doing what it should.

To get the Service Test to run on the new Beacon, go back to the Services page and as before go to Monitoring Configuration -> Service Tests and Beacons. Under the Beacons heading, click on Add.

Select both Beacons in the list and click Select

When returned to the Service Tests and Beacons page you should now see both beacons listed. To check that the new one is working, use the Verify Service Test button and set the Beacon to New York and click on Perform Test.

To see the performance of the multiple beacons, use the Test Performance page:

Conclusion

As stated at the beginning of this article, the use of JMeter in this way and within EM12c is not necessarily the “purest” design choice. However, if you have already invested time in JMeter then this is a quick way to make use of those scripts and get up and running with some kind of visibility within EM12c of the response times that your users are seeing.

Collecting Usage Tracking Data with Metric Extensions in EM12c

November 6th, 2013 by

In my previous post I demonstrated how OBIEE’s Usage Tracking data could be monitored by EM12c through a Service Test. It was pointed out to me that an alternative for collecting the same data would be the use of EM12c’s Metric Extensions.

A Metric Extension is a metric definition associated with a type of target, that can optionally be deployed to any agent that collects data from that type of target. The point is that unlike the Service Test we defined, a Metric Extension is define-once-use-many, and is more “lightweight” as it doesn’t require the definition of a Service. The value of the metric can be obtained from sources including shell script, JMX, and SQL queries.

The first step in using a Metric Extension is to create it. Once it has been created, it can be deployed and utilised.

Creating a Metric Extension

Let us see now how to create a Metric Extension. First, access the screen under Enterprise -> Monitoring -> Metric Extensions.

To create a new Metric Extension click on Create…. From the Target Type list choose Database Instance. We need to use this target type because it enables us to use the SQL Adapter to retrieve the metric data. Give the metric a name, and choose the SQL Adapter.

Leave the other options as default, and click on Next.


 

In a Metric Extension, the values of the columns (one or more) of data returned are mapped to individual metrics. In this simple example I am going to return a count of the number of failed analyses in the last 15 minutes (which matches the collection interval).
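A query along the following lines does the job – a sketch only, assuming the standard S_NQ_ACCT Usage Tracking table, in which a non-zero SUCCESS_FLG denotes a failed analysis:

SELECT COUNT(*) AS FAILED_ANALYSES
FROM   S_NQ_ACCT
WHERE  SUCCESS_FLG <> 0
AND    START_TS    >= SYSDATE - (15/(24*60))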


 

On the next page you define the metric columns, matching those specified in the adaptor. Here, we just have a single column defined:



 

Click Next and you will be prompted to define the Database Credentials, which for now you can leave set to the default.


 

Now, importantly, you can test the metric adaptor to make sure that it is going to work. Click on Add to create a Test Target. Select the Database Instance target on which your RCU resides. Click Run Test


 

What you’ll almost certainly see now is an error:

Failed to get test Metric Extension metric result.: ORA-00942: table or view does not exist


The reason? The SQL is being executed by the “Default Monitoring Credential” on the Database Instance, which is usually DBSNMP. In our SQL we didn’t specify the owner of the Usage Tracking table S_NQ_ACCT, and nor is DBSNMP going to have permission on the table. We could create a new set of monitoring credentials that connect as the RCU table owner, or we could enable DBSNMP to access the table. Depending on your organisation’s policies and the scale of your EM12c deployment, you may choose one over the other (manageability vs simplicity). For the sake of ease I am going to take the shortest (not best) option, running as SYS the following on my RCU database to create a synonym in the DBSNMP schema and give DBSNMP access to the table.
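The statements were along these lines (a sketch – the DEV_BIPLATFORM schema owner shown here is illustrative; substitute your own RCU prefix):

GRANT SELECT ON DEV_BIPLATFORM.S_NQ_ACCT TO DBSNMP;
CREATE SYNONYM DBSNMP.S_NQ_ACCT FOR DEV_BIPLATFORM.S_NQ_ACCT;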

Now retest the Metric Extension and all should be good:


 

Click Next and review the new Metric Extension


 

When you click on Finish you return to the main Metric Extension page, where your new Metric Extension will be listed.

A note about performance

When building Metric Extensions bear in mind the impact that your data extraction is going to have on the target. If you are running a beast of a SQL query that is horrendously inefficient on a collection schedule of every minute, you can expect to cause problems. The metrics that are shipped with EM12c by default have been designed by Oracle to be as lightweight in collection as possible, so in adding your own Metric Extensions you are responsible for testing and ensuring yours are too.

Deploying a Metric Extension for testing

Once you have built a Metric Extension as shown above, it will be listed in the Metric Extension page of EM12c. Select the Metric Extension and from the Actions menu select Save As Deployable Draft.


You will notice that the Status is now Deployable and on the Actions menu the Edit option has been greyed out. Now, click on the Actions menu again and choose Deploy To Targets…, and specify your RCU Database Instance as the target

Return to the main Metric Extension page and click refresh, and you should see that the Deployed Targets number is now showing 1. You can click on this to confirm to which target(s) the Metric Extension is deployed.

Viewing Metric Extension data

Metric Extensions are defined against target types, and we have created the example against the Database Instance target type in order to get the SQL Adaptor available to us. Having deployed it to the target, we can now go and look at the new data being collected. From the target itself, click on All Metrics and scroll down to the Metric Extension itself, which will be in amongst the predefined metrics for the target:


After deployment, thresholds for Metric Extension data can be set in the same way they are for existing metrics:



Thresholds can also be predefined as part of a Metric Extension so that they are already defined when it is deployed to a target.

Amending a Metric Extension

Once a Metric Extension has been deployed, it cannot be edited in its current state. You first create a new version using the Create Next Version… option, which creates a new version of the Metric Extension based on the previous one, and with a Status of Editable. Make the changes required, and then go through the same Save As Deployable Draft and Deploy to Target route as before, except you will want to Undeploy the original version.

Publishing a Metric Extension

The final stage of producing a Metric Extension is publishing it, which moves it on beyond the test/draft “Deployable” phase and marks it as ready for use in anger. Select Publish Metric Extension from the Actions menu to do this.

A published Metric Extension can be included in a Monitoring Template, and also supports the nice functionality of managed upgrades of deployed Metric Extension versions. In this example I have three versions of the Metric Extension: version 2 is Published and deployed to a target, and version 3 is new and has just been published:


Clicking on Deployed Targets brings up the Manage Target Deployments page, and from here I can select my target on which v2 is deployed, and click on Upgrade


After the confirmation message “Metric Extension ME$USAGE_TRACKING upgrade operation successfully submitted”, return to the Metric Extension page and you should see that v3 is now deployed to a target and v2 is not.

Finally, you can export Metric Extensions from one EM12c deployment for import and use on another EM12c deployment:

Conclusion

So that wraps up this brief interlude in my planned two-part set of blogs about EM12c. Next I plan to return to the promised JMeter/EM12c integration … unless something else shiny catches my eye in between …

Job Vacancy for Training Material Developer

November 5th, 2013 by

We’re looking for a talented technical writer to come and join us at Rittman Mead. You’ll be helping produce our industry-leading training material. We run both public and private courses, off the peg and bespoke, to consistently high levels of approval from those who attend.

You need to have excellent written English, with a particular skill for clear communication and a keen eye for detail in presentation. We run a lean team here and all material you produce must be “print-ready” without the need for further review and editing.

Technically, you must have solid knowledge of OBIEE and preferably ODI and BI Apps too, with additional products such as Endeca and GoldenGate a bonus. Experience of cloud technology and administration will help but is not mandatory. We use a variety of tools in preparing our training material, so you need to be able to adapt quickly to new software (hint: Powerpoint is not the best presentation tool ;-)).

The role is full time, and can be based remotely. If you are interested, please get in touch through this link. We’d like to see an example of your recent writing, such as a blog entry. If you’ve any questions, you can email me directly : robin dot moffatt at rittmanmead dot com (but to register your interest in the role, please do so via the link).

Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration – Part 4: Initial Load and Replication

November 5th, 2013 by

This is the final post in my series on Oracle BI Apps 11.1.1.7.1 and GoldenGate Integration. If you have been following along up to this point, we have the Source Dependent Data Store schema setup and ready to accept data from the OLTP source via replication, the GoldenGate installations are complete on both the source and target servers, and the GoldenGate parameter files are setup and ready to roll. Before the replication is started, an initial load of data from source to target must be performed.

Initial Load

As I mentioned in my previous posts, I plan on performing the initial load of the SDS in a slightly different way from that described in the Oracle BI Applications documentation. Using the process straight out of the box, we must schedule downtime for the source application, as we do not want to skip any transactions that occur during the processing of data from source to SDS target. With a slight customization to the OBIA-delivered scripts, we can ensure the initial load and replication startup will provide a contiguous flow of transactions to the SDS schema.

Oracle BI Applications Scripts

As with the other set-up processes for the SDS schema and GoldenGate parameter files, there is an ODI Scenario available to execute that will generate the initial load scripts. In ODI Studio, browse to BI Apps Project > Components > SDS > Oracle > Copy OLTP to SDS. Expand Packages > Copy OLTP to SDS > Scenarios and you will find the Scenario “COPY_OLTP_TO_SDS Version 001”.

OBIA: Copy OLTP to SDS

The Scenario calls an ODI Procedure named “Copy SDS Data”. When executed, it will generate a script with an insert statement for each target SDS table using a select over a database link to the OLTP source. The link must be manually created and specifically named DW_TO_OLTP, as the ODI Procedure has the dblink name hard-coded. This means that the link will need to be modified for each additional source, should there be multiple GoldenGate OLTP to SDS replication processes setup.
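If the link does not already exist, creating it in the data warehouse database might look something like this (the connection details are illustrative; the name DW_TO_OLTP is mandatory):

CREATE DATABASE LINK DW_TO_OLTP
  CONNECT TO oltp_user IDENTIFIED BY oltp_password
  USING 'OLTP_TNS_ALIAS';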

The standard process would then be to execute the Scenario to generate the insert statements, schedule a downtime for the OLTP application, and run the initial load scripts. Rather than go through those steps, let’s take a look at how to eliminate the source system unavailability with a slight change to the code.

Zero Downtime

The initial load process will be customized to use the Oracle database flashback query capability, selecting data as of a specific point in time based on the source SCN (system change number). Before the initial load is run, the GoldenGate extract process will be started to capture any transactions that occur during the data load. Finally, when the initial load completes, the GoldenGate replicat process will be started so that it applies only transactions after the initial load SCN, eliminating the chance of skipping or duplicating transactions from the source.

To perform the customizations, I recommend copying the entire “Copy OLTP to SDS” folder and pasting it in a new location. I simply pasted it in the same folder as the original and renamed it “RM Copy OLTP to SDS”. One thing to note is that the Scenario will not be copied, since it must have a unique name throughout the work repository. We will generate the Scenario with a new name after we make our changes.

OBIA: Copy ODI Project Folder

Open up the ODI Procedure “Copy SDS Data” from the copied directory. Click on the “Details” tab to review the steps. We will need to modify the step “Copy Data”, which generates the DML script to move data from source to target. A review of the code will show that it uses the dictionary views on the source server, across the database link, to get all table and column names that are to be included in the script. The construction of the insert statement is the bit of code we will need to modify, adding the Oracle database flashback query syntax.

As you can see, #INITIAL_LOAD_SCN is a placeholder for an ODI Variable. I chose to use a variable to perform the refresh of the SCN from the source rather than hard-code the SCN value. I created the variable called INITIAL_LOAD_SCN and set the query on the Refreshing tab to execute from the data warehouse over the database link, capturing the current SCN from the source database.
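To give an idea of the change, the generated statements end up looking roughly like this (the table and columns are purely illustrative):

INSERT INTO SDS_EBS.PER_ALL_PEOPLE_F (PERSON_ID, LAST_NAME)
SELECT PERSON_ID, LAST_NAME
FROM   PER_ALL_PEOPLE_F@DW_TO_OLTP AS OF SCN #INITIAL_LOAD_SCN;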

INITIAL_LOAD_SCN refresh code

The user set up to connect to the OLTP source will need to be granted the “select any dictionary” privilege, temporarily, in order to allow the select from V$DATABASE.
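The refreshing query itself is very simple – a sketch of it, executed from the data warehouse over the same DW_TO_OLTP database link:

SELECT CURRENT_SCN
FROM   V$DATABASE@DW_TO_OLTP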

Now that the Variable is set and the Procedure code has been modified, we just need to put it all together in a Package and generate a Scenario. The Package “Copy OLTP to SDS” is already setup to call the Procedure “Copy SDS Data”, so we can simply add the ODI Variable as a refresh step at the beginning of the Package.

OBIA: Set INITIAL_LOAD_SCN variable as first step

After saving the Package, we need to generate a Scenario to execute. When generating, be sure to set all Variables except for INITIAL_LOAD_SCN as Startup Variables, as their values will be set manually during the execution of the Scenario. Also, remember to provide a different name than the original Scenario.

Generate Scenario Options

GoldenGate Startup and Script Execution

All of the pieces are in place to kick off the initial load of the Source Dependent Data Store and fire up the GoldenGate replication. Even though the goal is to have zero downtime for the OLTP application, it would be best if the process were completed during a “slow” period – when a minimal number of transactions are being processed – if possible.

First, let’s get the GoldenGate extract and data pump processes running and capturing source transactions. On the source server, browse to the GoldenGate directory and run the GGSCI application. Ensure the Manager is running, and execute the “start extract” command for each of the processes that need to be kicked off.

OBIA: Start GoldenGate Extract
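For reference, the GGSCI session looks roughly like this – the extract and data pump process names here are illustrative, so use the names generated for your own deployment:

GGSCI> START MANAGER
GGSCI> START EXTRACT EXT_EBS
GGSCI> START EXTRACT DP_EBS
GGSCI> INFO ALL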

Now that the transactions are flowing into the source trail and across the network to the target trail, we can execute the Scenario to generate the initial load script files. When executed, a prompt will appear, allowing you to enter the appropriate value for each variable. The script can be filtered down by a specific list of tables, if necessary, by adding a comma-delimited list to the TABLE_LIST variable. We’ll just use a wildcard value to generate the script for all tables. Other options are to generate a script file (Y or N) and to execute the DML during the execution of the Scenario (even though the Variable is named RUN_DDL). I have chosen to create a script file and run it manually.

OBIA: Generate SDS Initial Load Scripts

The script, named “BIA_SDS_Copy_Data_<session_number>.sql”, will disable constraints, drop indexes, and truncate each table in the SDS prior to loading the data from the source system. After executing the copy data script, we will want to run the “BIA_SDS_Schema_Index_DDL_<session_number>.sql” script to recreate the indexes.

One thing to note – in the SDS Copy Data script the primary key constraints are disabled for a more performant insert of the data. However, the SDS Schema Index DDL code is set to create the constraints via an alter table script, rather than enabling the existing constraints. To work around this bug, I opened the Copy Data script in SQL Developer, copied all of the lines that are set to disable the constraints, pasted them into a new window, switched the “disable” keyword to “enable” with a simple find and replace, and then executed the script against the SDS tables.
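After the find and replace, the statements executed against the SDS look something like this (table and constraint names are illustrative):

ALTER TABLE SDS_EBS.PER_ALL_PEOPLE_F ENABLE CONSTRAINT PER_ALL_PEOPLE_F_PK;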

After copying the data and recreating the indexes in the SDS (and enabling the PK constraints), we can finally start up the GoldenGate replicat process on the target server. Again, log in to GGSCI and ensure the Manager process is running. This time, when we start the process we will use the AFTERCSN option, ensuring the replicat only picks up transactions from the trail file after the initial load SCN.

OBIA: GoldenGate start replicat
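Again as a sketch, with an illustrative replicat name and an illustrative SCN – in practice this is the value captured by the INITIAL_LOAD_SCN variable earlier:

GGSCI> START MANAGER
GGSCI> START REPLICAT REP_EBS, AFTERCSN 5972394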

We now have our initial load of data to the SDS schema completed and GoldenGate replication started, all without any impact to the source OLTP application. The next time the Source Dependent Extract (SDE) Load Plan is executed, it will be just as if it were running directly against the source database – only faster – since it’s pulling data from the SDS schema on the same server.

Be on the lookout for more blog posts on OBIA 11.1.1.7.1 in the future. And if you need a fast-track to Oracle BI Applications implementation, feel free to drop us a line here at Rittman Mead at info@rittmanmead.com.

Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration

1. Introduction
2. Setup and Configuration
3. SDS and Parameter File Setup
4. Initial Load and Replication

Oracle Data Integrator 12c release: Part 2

November 4th, 2013 by

In the first part of this article, we discovered the new mappings and reusable mappings, the debugger and the support of OWB jobs in the freshly released ODI 12c. Let’s continue to cover the new features in ODI 12c with CAM, the new Component KMs and some features that improve performance or ease of use.

ODI12c Logo

WebLogic Management Framework

Another big change with this new version is the new WebLogic Management Framework used to manage the standalone agent, replacing OPMN (Oracle Process Manager and Notification). Based on WebLogic Server, it brings unified management of Fusion Middleware components and provides a wizard for the configuration – the Fusion Middleware Configuration Wizard.

ODI12c CAM

In practice, the standalone agent is now installed outside of the ODI installation, in a domain similar to WLS domains. The benefits, apart from the centralised management, are that it supports the WebLogic Scripting Tool (WLST) and can be monitored through Enterprise Manager Fusion Middleware Control or Enterprise Manager Cloud Control. The standalone agent can be launched manually or through the Node Manager.


Component Knowledge Modules

Alongside the standard KMs – now called Template KMs – a new type of knowledge module has been introduced in this release. Going one step further in reusability, Component KMs contain steps that are defined once and shared among several of them. Instead of being templates of code with calls to the Substitution API at runtime, these KMs are actually compiled libraries that generate the code based on the components present in a mapping.

The Component KMs are provided out-of-the-box and are not editable. You can check the type of a KM in the physical tab of a mapping.

Component Knowledge Module

Knowledge Module Editor

Don’t worry, you can still import, edit and create Template KMs. You can now find them in the folder <ODI_HOME>/odi/sdk/xml-reference.

Good news: the KM Editor has been reworked so that we no longer have to constantly switch from one tab to another. Now, when the focus is set on a KM task, it is displayed directly in the Property Editor. In addition, the command field supports syntax highlighting and auto-completion (!!). Options are also now managed directly from the KM Editor and not from the Project tree.

ODI12c KM Editor

In-Session Parallelism

With ODI 12c, it is now possible to have the extract tasks (LKMs) running in parallel. It’s actually done by default. If two sources are located in the same execution unit on the physical tab, they will run in parallel. If you want a sequential execution, you can drag and drop one of your units onto a blank area. A new execution unit will be created and ODI will choose in which order it will be loaded.

ODI 12c In-session Parallelism

Thanks to this, the execution time can be reduced, especially if your sources come from different data servers.

In the topology, you can now define the number of threads per session for your physical agents, to prevent one session from grabbing all the resources.

maxim_thread_session_ds

Parallel Target Table Load

With ODI 11g, if two interfaces loading data into the same datastore are executed at the same time, or if the same interface is executed twice, you can face problems. For instance, a session might delete a work table into which the other session wants to insert data.

This now belongs to the past. With 12c, a new “Use Unique Temporary Object Names” checkbox appears in the Physical tab of your mapping. If you select it, the work tables for every session will have unique names, so you can be sure that another session won’t delete them or insert other data into them.

I can hear a little voice: “OK, but what if the execution of the mapping fails? With these unique names, these tables won’t be deleted the next time we run the mapping.” Don’t worry, the ODI team thought about everything. A task in a KM can now be created under the “Mapping Cleanup” section, and it will be executed at the end of the mapping even if the mapping fails. That’s a nice feature that I will also use for my logging tasks!

ODI12c Cleanup Tasks

“But eh, what if the agent crashes? These cleanup tasks won’t be executed anyway.” No worries… A new ODI Tool is now available: OdiRemoveTemporaryObjects. It runs every cleanup task that has not run normally, and by default it is automatically called when an agent restarts. Little voice 0 – 2 ODI Team.

Session Blueprint

When doing real-time integration with ODI, you can have a lot of jobs running every few seconds or minutes. For each call of a scenario, the ODI 11g agent retrieves from the repository all the steps and tasks it should execute, and writes them synchronously to the logs before starting to execute anything. After the execution, it deletes the logs that shouldn’t be kept when the log level is low. At the end of the day, even if you set the logging to a low level, you end up with a lot of redo logs and archive logs in your database.

Instead of retrieving the scenario for every execution, the ODI 12c agent now retrieves a Session Blueprint only once and keeps it in its cache. As it is cached in the agent, there is no need to get anything from the repository if the job runs again a few minutes later. The agent also writes – asynchronously – only the logs required by the log level, as it relies only on the blueprint for its execution and not on the logs. The parameters related to blueprints are available in the definition of the physical agents in the topology.

ODI12c Blueprint

In summary, thanks to Session Blueprints the overhead of executing a session is greatly reduced. The agent needs to retrieve less data from the repository, no longer inserts and deletes logs, and writes them asynchronously. Isn’t it great?

Datastore Change Notification

This happened to me once again last week: we needed to rename a datastore column in ODI but we couldn’t, because the datastore was used as a target in an interface. The solution is to remove the mapping expression for this column in every interface where the datastore is used as a target.

This will not happen anymore with 12c. Now you can change it in the model, and a change notification will be displayed in every mapping using this datastore. In the following example, I removed the LOAD_DATE column.

ODI 12c Change Notification

Wallet

Ever faced this dilemma in 11g? Giving the Master Repository password to everyone is not safe, but saving it in the Login Credentials is no better if a developer forgets to lock their screen when away from the keyboard.

Once again, 12c is there to help you. You can now store your Login Credentials in a Wallet, itself protected by a password. One password to rule them all… and keep them safe!

ODI 12c Wallet

Try it !

If you want to give ODI 12c a try and follow the Getting Started Guide, you can download a pre-built Virtual Machine. It runs Oracle Enterprise Linux with an Oracle XE 11.2.0 database. Of course, you will need VirtualBox to run it.

See you soon

Needless to say, I’m very excited about this release. Combining the strengths of the ODI architecture with the new flow-based paradigm and all the new features, ODI is now a very mature tool and best-of-breed for data integration, regardless of the source or target technology.

Keep watching this blog – there are more posts to come about data integration! Stewart Bryson has already announced in his last post that he would talk about the ODI 12c JEE agent, and Michael Rainey has started with GoldenGate 12c New Features. Who will be next?

[Update] Peter Scott will also present ODI 12c in his session at the Bulgarian Oracle Users Group Autumn Conference on 22nd November 2013. Stewart and I will also talk about it at RMOUG Training Days in February.
