Oracle Endeca Information Discovery 3.1 and Self-Service Data Mashup capability

November 9th, 2013 by

Oracle Endeca Information Discovery (OEID) 3.1 was released yesterday, the 7th of November. The new version is a good step forward in satisfying business users, providing much better self-service discovery capabilities. OEID 3.1 now enables non-technical users to securely run agile business intelligence analysis on a variety of data sources, far more easily and without needing IT. At the same time, the integration with Oracle Business Intelligence is now even tighter, to the extent that Oracle has announced OEID 3.1 as “the only complete data discovery platform for the enterprise”. The product data sheet is accessible here, and as with older versions of OEID, the software is downloadable via the Oracle Software Delivery Cloud (e-Delivery), where you can find downloads for all licensed Oracle products.

In the first release of the Provisioning Service, on OEID 3.0 in March 2013, users could upload only one file and it had to be in Excel format. The issue was proving that having the data on Endeca Server was a better solution than Excel itself: not being able to join data sources together, and having no text-enrichment analysis on the data, was enough to keep the Provisioning Service very limited. The good news is that the new version has answers to all the requirements of a business intelligence provisioning service tool.

To give you a quick start, here is a list of new features I came across at first glance:

  • Users can now join information from multiple sources such as files, databases or other pre-built Endeca Servers. Files can be in Excel format for structured data, or JSON for semi-structured data mostly coming from social media interactions, which makes it much easier to combine social media with the other data available in the enterprise.

  • If you have geocode data in your source, it is a matter of a click to add a Map component to the dashboard, where it will automatically find the information it requires from the available data sources. The new Map component is not only more powerful in what it does, such as being able to display a Heat Layer, but also looks much better!

  • Business users can perform enrichment on their own unstructured text data to identify hidden entities, sentiment and so on, without support from IT. The new application settings include the capability to add text-enrichment and text taggers to data sources.

  • Managing data sets, such as reloading resources, adding records, managing attributes or deleting data sets, is quite straightforward within the application settings.

Look out for my next post, where I'll go into more detail on the exciting new features of OEID v3.1.

OEM12cR3: Holistic BI Platform Monitoring using Systems and Services

November 8th, 2013 by

Over the past few weeks some of my colleagues and I have been posting articles on the blog about monitoring OBIEE using Enterprise Manager 12cR3's BI Management Pack. In these articles we've looked at managing individual OBIEE installations and monitoring various aspects of the product's performance, using features like metrics, events and thresholds, integration with usage tracking, service beacons, and metric extensions.

But in each case we’ve looked at an OBIEE installation in isolation, and always from the perspective of the “system” – which makes sense if where you’re coming from is OBIEE’s Fusion Middleware Control, and you’re looking for a better way of working with OBIEE’s built-in instrumentation. But a typical BI system consists of more than just OBIEE – the database providing data for OBIEE is going to play a major part in the performance of your system, and in most cases there’ll be an ETL server loading data into it, such as Oracle Data Integrator or Informatica PowerCenter. In some cases Essbase may be providing subsets of the data or acting as an aggregation layer, and of course all of these infrastructure components run on host servers, either physical or virtualised. Wouldn’t it make more sense to look at this platform as a whole, measuring performance across it and considering all aspects of it when determining if it’s “available”?

Moreover, whilst it makes sense for you to consider just the indicators and metrics coming out of OBIEE when judging the performance of your system, your end-users don't think in terms of disk throughput or cache hits – what they talk about when they call you with a problem is the time it takes to log in; or the time it takes to bring up their dashboard page; or, indeed, whether they can log into the system at all. In fact, it's not unknown for users to ring up and say the system is performing terribly when all the indicators on your DBA dashboard are showing green and, as far as you're concerned, all is fine. So how can you align your view of the status of your system with what your users are experiencing, and consider the whole BI platform when making this call? There are two features in Enterprise Manager and the BI Management Pack that make this possible – “systems” and “services” – and whilst they're not all that well-known, they can make a massive difference to how holistically you view your system once you put them in place. Let's take a look at what's involved, based on something similar I put in place for a customer this week.

As I mentioned before, most people’s use of Enterprise Manager involves looking at an individual infrastructure component – for example, OBIEE – and setting up one or more metric thresholds and alerts to help monitor its performance.

But in reality OBIEE is just part of the overall BI platform that you need to monitor, if you’re going to understand end-to-end performance of your system. In Enterprise Manager terms, this is your “system”, and you can define a specific object called a “system” within your EM metadata, which aggregates all of these components together, giving you your “IT” view of your BI platform.

In the screenshot below, I've got an EM12cR3 instance set up, and in the bottom right-hand corner you can see a list of systems managed by EM, including an Exalytics system, a BI Apps system, one running EPM Suite and another running an Oracle database. In fact, the OBIEE system relies on the database system for its source data, but you wouldn't be able to tell that from the default way they're listed, as they're all shown as independent, separate from each other.

What I can do though is aggregate these two installs together as a “system”, along with any other components – the Essbase server in the EPM stack, for example – that play a part in the overall platform. To create this system, I select Targets > Systems from the top-most menu, and then press Add > Generic System when the Systems overview page is displayed.

Note the other types of systems available – all of them except for Generic System add particular capabilities for that particular type of setup, but Generic System is just a container into which we can add any random infrastructure components, so we’ll use this to create the system to bring together our BI components.

Once the page comes up to create the new system, when you add components to it, notice how each part of each constituent “product” is available to include at different levels of granularity. For example, you can add OBIEE to the system “container” either at the whole BI Instance level – all the BI servers, BI Presentation servers and so on for a full deployment – or you can add individual system components, Essbase servers, DAC servers if that’s more relevant. In my case, I’ve got a couple of options – as it’s actually an Exalytics server, I could add that as a top-level component (complete with TimesTen, Essbase and so on), or I could just add the BI Instance, which is what I’ll do in this case by selecting that target type and then choosing the BI Instance from the list that’ll then be displayed.

In total I add in four targets – the OBIEE and database instances, and the hosts they run on. Later on, once I register my ODI servers using the new DI Management Pack, I can bring those in as well.

The next step is to define the associations, or dependencies, in the system. The wizard automatically adds the association between the BI instance and its host, and the database instance and its host, but I can then manually add in the dependency that the BI instance has on the database, so that later on, I can say that BI being down is directly related to its database being down (something called “root cause analysis”, in EM terminology).

On the next page of the wizard, I can specify which parts of the system have to be up in order for the whole system to be considered “available”. In this case, all parts, or “targets”, need to be running for the system to be OK; but if I had an ETL element, for example, that could be down and the overall system could still be “available” for use, albeit in degraded form.

Next I can select a set of charts that will be displayed along with the system overview details, from the charts and metrics available for each constituent product. By default a set of database and host charts is pre-selected, but I can add in ones specific to OBIEE – for example, the total number of active sessions – from the BI Instance list.

Once that final step is completed, the system is then created and I can see the overall status of it, along with any incidents, warnings, alerts and so on, across the platform.

If I had multiple systems to manage here, I’d see them all listed in the same place, with their overall status, and a high-level view of their alert status. Drilling into this particular system, I can then see a “single pane of glass” overview of the whole system, including the status of the constituent components.

So far, so good. But this is only part of the story. Whilst this is great for the IT department, the terms it uses – “systems”, “metrics”, “system tests” and so forth – aren't the terms that the end-users use. They think of OBIEE as a “service” – a service providing dashboards, reports, a dashboard login and so on – and so EM has another concept, called a “service”, that builds on the system we've just put together but adds a layer of business focus to the setup.

Adam Seed touched on the concept of a “service” in his post on service beacons the other week, but they're much more than an enabler of browser-based tests. Creating a service along with our system gives us the ability to add an extra layer of end-user focus to our EM setup, so that when our users call up and say “I can't log in”, or “it takes ages to bring up my dashboard page”, we've got a set of metrics and tests aligned with their experience, and we're immediately aware of the issues they're hitting.

To create a service, we first need a system on which it will be delivered. As we've now got this, let's go back to the EM menu and select Targets > Services, and then select Create > Generic Service. On the next page, I name the service – for example, “Production Dashboards” – and then select the system I just created as the one that provides it.

Now the key thing about a service is how we test for its “availability”. With EM's services, availability can either be determined by the status of the underlying system or, more usefully, by one or more “service tests” that check things from a more end-user perspective. We'll select “Service Test” in this instance, and then move on to the next page of the wizard.

Now there are lots of service test types you can use, and Adam Seed's post went through the most useful of them, one that records a set of browser actions and replays them to a schedule, simulating a user logging in, navigating around the OBIEE website and then logging out. Unfortunately, this requires Internet Explorer to record the browser session, so I'll cheat and just set up a host ping, which isn't really something you'd want in real life but it gets me onto the next stage (Robin Moffatt also covered using JMeter to do a similar thing, in his post the other day on the blog).

Next, I say where this test will run from. Again, for simplicity’s sake I just select the main EM server, but in reality you’d want to run this test from where the users are located, by setting up what’s called a “service beacon”, a feature within the EM management agent that can run tests like these geographically close to where the end-users are. That way, you can measure the service they’re actually receiving from their office (potentially, in a different country to where OBIEE is installed), giving you a more realistic measurement of response time.

I then go on to say what response times are considered OK, warning and critical, and I can also associate system-level metrics with this service as well. In this case I add in the average query response time, so that service availability will ultimately be determined by the contactability of the OBIEE server (a substitute in this case for a full browser login simulation), and by the response time being within a certain threshold.

I then save the service definition, and then go and view it within EM. In the screenshot below, I’ve left EM overnight so that the various performance metrics and the service test can run for a while, and you can see that as of now, everything seems to be running OK.

Clicking on the Test Performance tab shows me the output of each of my service tests (in this case, just the host ping), whilst the Charts page shows me the output of the system performance metrics that I selected when creating the service. Clicking on Topology, moreover, shows me a graphical view of the service and its underlying system, so I can understand and visualise the relationships between the various components within it.

Another important part of services’ end-user-level focus is the ability to create service-level agreements. These are more formal versions of metric thresholds, this time based on service tests rather than system tests, and allow you to define service level indicators based on the tests you’ve created before, and then measure performance against agreed tolerances over a period of time. If you’ve got an SLA agreed with your customer that, for example, 95% of reports render within five seconds, or that the main dashboard is available 97% of the time during working hours, you can capture that SLA here and then automatically report against it over time. More importantly, if you’re starting to fall outside of your SLA, you can use EM to raise events and incidents in the meantime so you’re aware of the issue, and you can work to rectify it before it becomes an issue in your monthly customer meeting.

Finally – and this is something I find really neat – the system overview page for the system I created earlier now references the service that it supports, so I can see, at a glance, not only the status of the infrastructure components that I’m managing, but also the status of the end-user service that it’s supporting. Not bad, and a lot better than trying to manage all of these infrastructure components in isolation, and trying to work out myself what their performance means in terms of the end-users.

So there you have it – systems and services in EM and the BI Management Pack – a good example of what you get when you move from Fusion Middleware Control to the full version of Oracle’s Enterprise Systems Management platform.

Monitoring OBIEE Performance for the End User with JMeter from EM12c

November 8th, 2013 by

This is the third article in my two-article set of posts (h/t) on extending the monitoring of OBIEE within EM12c. It comes after a brief interlude discussing Metric Extensions as an alternative to using Service Tests to look at Usage Tracking data.

Moving on from the rich source of monitoring data that is Usage Tracking, we will now cast our attention to a favourite tool of mine: JMeter. I’ve written in detail before about this tool when I showed how to use it to build performance tests for OBIEE. Now I’m going to illustrate how easy it can be to take existing OBIEE JMeter scripts and incorporate them into EM12c.

Whilst JMeter can be used to build big load tests, it can also be run as a single user. Whichever way you use it, the basis remains the same. It fires a bunch of web requests (HTTP POSTs and GETs) at the target server and looks at the responses. It can measure the response time alone, or it can check that the data returned matches what's expected (and doesn't match what it shouldn't, such as error messages).

In the context of monitoring OBIEE we can create simple JMeter scripts which do simple actions such as

  • Login to OBIEE, check for errors
  • Run a dashboard, check for errors
  • Logout

If we choose an execution frequency (“Collection Schedule” in EM12c parlance) that is not too intensive (otherwise we risk impacting the performance/availability of OBIEE!), we can easily use the execution profile of this script both as an indication of the kind of performance that the end user is going to see, and as a pass/fail check of whether user logins and dashboard refreshes in OBIEE are working.

EM12c offers the ability to run “Custom Scripts” as data collection methods in Service Tests (which I explained in my previous post), and JMeter can be invoked “headless” (that is, without a GUI), so it lends itself well to this. In addition, we are going to look at EM12c's Beacon functionality, which enables us to run our JMeter tests from multiple locations. In an OBIEE deployment in which users may be geographically separated from the servers themselves, this is particularly useful for checking that the response times seen from one site are consistent with those from another.

Note that what we're building here is an alternative version of the Web Transaction Test Type that Adam Seed wrote about here, but with pretty much the same net effect – a Service Test that enables you to say whether OBIEE is up or down from an end user point of view, and what the response time is. The difference between what Adam wrote about and what I describe here is the way in which the user is simulated:

  • Web Transaction (or the similar ATS Transaction) Test Types are built in to EM12c and as such can be seen as the native, supported option. However, you need to record and refine the transaction that is used, which has its own overhead.
  • If you already have JMeter skills at your site, and quite possibly existing JMeter OBIEE scripts, it is very easy to make use of them within EM12c to achieve the same as the aforementioned Web Transaction but utilising a single user replay technology (i.e. JMeter rather than EM12c’s Web Transaction).

So, if you are looking for a vanilla EM12c implementation, Web/ATS transactions are probably more suitable. However, if you already use JMeter then it's certainly worth considering making use of it within EM12c too.

The JMeter test script

You can find details of building OBIEE JMeter scripts here and even a sample one to download here. In the example I am building here the script consists of three simple steps:

  • Login
  • Go to dashboard
  • Logout

A simple OBIEE JMeter script. Note that the result Samplers are disabled

The important bit to check is the Thread Group – it needs to run a single user, just once. If you leave in settings from an actual load test and start running hundreds of users in this script, called from EM12c on a regular basis, then the effect on your OBIEE performance will be interesting, to say the least.

Test the script and make sure you see a single user running and successfully returning a dashboard

Running JMeter from the command line

Before we get anywhere near EM12c, let us check that the JMeter script runs successfully from the command line. This also gives us the opportunity to refine the command-line syntax without confounding any issues with its use in EM12c.

The basic syntax for calling JMeter is:

./jmeter --nongui -t /home/oracle/obi_jmeter.jmx

Here --nongui is the flag that tells JMeter not to run the GUI (i.e. to run headless), and -t passes the absolute path to the JMX JMeter script. JMeter runs under Java, so you may also need to set the PATH environment variable so that the correct JVM is used.
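
As a quick sanity check before going headless (a sketch; the JDK path is an assumption, borrowed from the harness script further down), you can confirm which JVM and which JMeter build will actually be picked up:

export PATH=/u01/OracleHomes/Middleware/jdk16/jdk/bin:$PATH

# Confirm the JVM that JMeter will run under
java -version

# Confirm the JMeter build (and that the launcher script itself runs)
./jmeter --version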

To run this from EM12c we need a short script that calls JMeter, and also sets a return code depending on whether an error was encountered when the user script was run (for example, an assertion failed because the login page or dashboard did not load correctly). A simple way to do this is to set the View Results in Table listener to write to file only if an error is encountered, and then parse this file post-execution to check for any error entries.

We can then do a simple grep against the file and check for errors. In this script I’m setting the PATH, and using a temporary file /tmp/jmeter.err to capture and check for any errors. I also send any JMeter console output to /dev/null.

#!/bin/bash
# Make sure JMeter picks up the correct JVM
export PATH=/u01/OracleHomes/Middleware/jdk16/jdk/bin:$PATH
# Clear out any error file left over from a previous run
rm -f /tmp/jmeter.err
# Run the OBIEE test plan headless, discarding console output; the test plan's
# error-only listener writes any failures to /tmp/jmeter.err
/home/oracle/apache-jmeter-2.10/bin/jmeter --nongui -t /home/oracle/obi_jmeter.jmx 1>/dev/null 2>&1
# If the error file contains a failed assertion, exit 1; otherwise exit 0.
# grep -qs is quiet, and suppresses the "No such file" message when there were no errors at all.
if grep -qs "<failure>true" /tmp/jmeter.err; then
        exit 1
else
        exit 0
fi

Note that I am using absolute paths throughout, so that there is no ambiguity or dependency on the folder from which this is executed.

Test the above script that you’ll be running from EM12c, and check the return code that is set:

$ ./run_jmeter.sh ; echo $?

The return code should be 0 if everything worked (check in Usage Tracking for a corresponding entry), and 1 if there was a failure (check nqquery.log to confirm the failure).
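
As an optional extra check, a short loop like the one below (a sketch; the 60-second pause is an arbitrary figure chosen so as not to load the OBIEE server) confirms that the exit code is consistent before you wire the harness into EM12c:

# Run the harness a few times and confirm it consistently returns 0
for i in 1 2 3; do
    ./run_jmeter.sh
    echo "run $i exit code: $?"
    sleep 60   # pause between runs so we don't load the OBIEE server
done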

Making the script available to run on the EM12c server

To start with we'll be looking at getting EM12c to run this script locally. Afterwards we'll see how it can be run on multiple servers, possibly geographically separated.

So that the script can be run on EM12c, copy across your run_jmeter.sh script, JMeter user test script, and the JMeter binary folder. Check that the script still runs after copying it across.
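
Something along these lines does the job (a sketch; the em12c-host hostname and oracle account are assumptions, and the paths deliberately match those hard-coded in the harness script above):

# Copy the JMeter install, the test plan and the harness to the EM12c host,
# keeping the same absolute paths that the harness script expects
rsync -a /home/oracle/apache-jmeter-2.10/ oracle@em12c-host:/home/oracle/apache-jmeter-2.10/
scp /home/oracle/obi_jmeter.jmx /home/oracle/run_jmeter.sh oracle@em12c-host:/home/oracle/

# Re-run the harness on the EM12c host and check the exit code
ssh oracle@em12c-host '/home/oracle/run_jmeter.sh; echo "exit code: $?"'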

Building the JMeter EM12c Service Test

So now we’ve got a JMeter test script, and a little shell script harness with which to call it. We hook it into EM12c using a Service Test.

From Targets -> Services, create a new Generic Service (or, if you already have one in which it makes sense to include this, use that).

Give the service a name and associate it with the appropriate System

Set the Service’s availability as being based on a Service Test. On the Service Test screen set the Test Type to Custom Script. Give the Service Test a sensible Name and then the full path to the script that you built above. At the moment, we’re assuming it’s all local to the EM12c server. Put in the OS credentials too, and click Next

On the Beacons page, click Add and select the default EM Management Beacon. Click Next and you should be on the Performance Metrics screen. The default metric of Total Time is what we want here. The other metric we are interested in is availability, and this is defined by the Status metric which gets its value from the return code that is set by our script (anything other than zero is a failure).

Click Next through the Usage Metrics screen and then Finish on the Review screen

From the Services home page, you should see your service listed. Click on its name and then Monitoring Configuration -> Service Tests and Beacons. Locate your Service Test and click on Verify Service Test

Click on Perform Test and if all has gone well you should see the Status as a green arrow and a Total Time recorded. As data is recorded it will be shown on the front page of the service you have defined:

One thing to bear in mind with this test that we’ve built is that we’re measuring the total time that it takes to invoke JMeter, run the user login, run the dashboard and logout – so this is not going to be directly comparable with what a user may see in timing the execution of a dashboard alone. However, as a relative measure for performance against itself, it is still useful.

Measuring response times from additional locations

One of the very cool things that EM12c can do is run tests such as the one we've defined, but from multiple locations. It's one thing to check the response time of OBIEE from a machine local to the EM12c server in London, but how realistically will this reflect what users based in the New York office see? We do this through the concept of Beacons, which are bound to existing EM12c Agents and can be set as execution points for Service Tests.

To create a Beacon, go to the Services page and click on Services Features and then Beacons:

You will see the default EM Management Beacon listed. Click on Create…, and give the Beacon a name (e.g. New York) and select the Agent with which it is associated. Hopefully it is self-evident that a Beacon called New York needs to be associated with an Agent that is physically located in New York and not Norwich…

After clicking Create you should get a confirmation message and then see your new Beacon listed:

Before we can configure the Service Test to use the Beacon we need to make sure that the JMeter test rig that we put in place on the EM12c server above is available on the server on which the new Beacon’s agent runs, with the same paths. As before, run it locally on the server of the new Beacon first to make sure the script is doing what it should.

To get the Service Test to run on the new Beacon, go back to the Services page and, as before, go to Monitoring Configuration -> Service Tests and Beacons. Under the Beacons heading, click on Add.

Select both Beacons in the list and click Select

When returned to the Service Tests and Beacons page you should now see both beacons listed. To check that the new one is working, use the Verify Service Test button and set the Beacon to New York and click on Perform Test.

To see the performance of the multiple beacons, use the Test Performance page:

Conclusion

As stated at the beginning of this article, the use of JMeter in this way and within EM12c is not necessarily the “purest” design choice. However, if you have already invested time in JMeter then this is a quick way to make use of those scripts and get up and running with some kind of visibility within EM12c of the response times that your users are seeing.

Collecting Usage Tracking Data with Metric Extensions in EM12c

November 6th, 2013 by

In my previous post I demonstrated how OBIEE’s Usage Tracking data could be monitored by EM12c through a Service Test. It was pointed out to me that an alternative for collecting the same data would be the use of EM12c’s Metric Extensions.

A Metric Extension is a metric definition associated with a type of target, and it can optionally be deployed to any agent that collects data from that type of target. The point is that unlike the Service Test we defined, a Metric Extension is define-once-use-many, and is more “lightweight” as it doesn't require the definition of a Service. The value of the metric can be obtained from sources including shell scripts, JMX, and SQL queries.

The first step in using a Metric Extension is to create it. Once it has been created, it can be deployed and utilised.

Creating a Metric Extension

Let us see now how to create a Metric Extension. First, access the screen under Enterprise -> Monitoring -> Metric Extensions.

To create a new Metric Extension click on Create…. From the Target Type list choose Database Instance. We need to use this target type because it enables us to use the SQL Adaptor to retrieve the metric data. Give the metric a name, and choose the SQL Adaptor.

Leave the other options as default, and click on Next.

 

In a Metric Extension, the values of the columns (one or more) of data returned are mapped to individual metrics. In this simple example I am going to return a count of the number of failed analyses in the last 15 minutes (which matches the collection interval).
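
The query behind this is nothing special; something along these lines would do (a sketch, assuming the standard S_NQ_ACCT columns START_TS and SUCCESS_FLG, where a non-zero SUCCESS_FLG indicates a failed request; the dev_biplatform owner and credentials are placeholders). It's worth running it by hand, for example from SQL*Plus, before pasting it into the adaptor:

# Hand-test the candidate SQL before pasting it into the SQL Adaptor.
# dev_biplatform/password is a placeholder for your RCU schema owner.
sqlplus -s dev_biplatform/password <<'EOF'
SELECT COUNT(*) AS failed_analyses
FROM   s_nq_acct
WHERE  success_flg <> 0
AND    start_ts    >  SYSDATE - (15/1440);   -- last 15 minutes
EOF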

 

On the next page you define the metric columns, matching those specified in the adaptor. Here, we just have a single column defined:

 

Click Next and you will be prompted to define the Database Credentials, which for now leave set to the default.

 

Now, importantly, you can test the metric adaptor to make sure that it is going to work. Click on Add to create a Test Target. Select the Database Instance target on which your RCU resides. Click Run Test

 

What you’ll almost certainly see now is an error:

Failed to get test Metric Extension metric result.: ORA-00942: table or view does not exist

The reason? The SQL is being executed by the “Default Monitoring Credential” on the Database Instance, which is usually DBSNMP. In our SQL we didn’t specify the owner of the Usage Tracking table S_NQ_ACCT, and nor is DBSNMP going to have permission on the table. We could create a new set of monitoring credentials that connect as the RCU table owner, or we could enable DBSNMP to access the table. Depending on your organisation’s policies and the scale of your EM12c deployment, you may choose one over the other (manageability vs simplicity). For the sake of ease I am going to take the shortest (not best) option, running as SYS the following on my RCU database to create a synonym in the DBSNMP schema and give DBSNMP access to the table.
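
In outline, it amounts to something like this (a sketch; dev_biplatform is an assumed RCU schema owner, so substitute your own prefix, and handle the SYS connection according to your own standards):

# Run as SYS on the RCU database: give DBSNMP read access to the Usage Tracking
# table, plus a synonym so that the unqualified table name resolves
sqlplus -s / as sysdba <<'EOF'
GRANT SELECT ON dev_biplatform.s_nq_acct TO dbsnmp;
CREATE SYNONYM dbsnmp.s_nq_acct FOR dev_biplatform.s_nq_acct;
EOF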

Now retest the Metric Extension and all should be good:

 

Click Next and review the new Metric Extension

 

When you click on Finish you return to the main Metric Extension page, where your new Metric Extension will be listed.

A note about performance

When building Metric Extensions bear in mind the impact that your data extraction is going to have on the target. If you are running a beast of a SQL query that is horrendously inefficient on a collection schedule of every minute, you can expect to cause problems. The metrics that are shipped with EM12c by default have been designed by Oracle to be as lightweight in collection as possible, so in adding your own Metric Extensions you are responsible for testing and ensuring yours are too.

Deploying a Metric Extension for testing

Once you have built a Metric Extension as shown above, it will be listed in the Metric Extension page of EM12c. Select the Metric Extension and from the Actions menu select Save As Deployable Draft.

You will notice that the Status is now Deployable and on the Actions menu the Edit option has been greyed out. Now, click on the Actions menu again and choose Deploy To Targets…, and specify your RCU Database Instance as the target

Return to the main Metric Extension page and click refresh, and you should see that the Deployed Targets number is now showing 1. You can click on this to confirm to which target(s) the Metric Extension is deployed.

Viewing Metric Extension data

Metric Extensions are defined against target types, and we have created the example against the Database Instance target type in order to get the SQL Adaptor available to us. Having deployed it to the target, we can now go and look at the new data being collected. From the target itself, click on All Metrics and scroll down to the Metric Extension itself, which will be in amongst the predefined metrics for the target:

After deployment, thresholds for Metric Extension data can be set in the same way they are for existing metrics:

Thresholds can also be predefined as part of a Metric Extension so that they are already defined when it is deployed to a target.

Amending a Metric Extension

Once a Metric Extension has been deployed, it cannot be edited in its current state. You first create a new version using the Create Next Version… option, which creates a new version of the Metric Extension based on the previous one, and with a Status of Editable. Make the changes required, and then go through the same Save As Deployable Draft and Deploy to Target route as before, except you will want to Undeploy the original version.

Publishing a Metric Extension

The final stage of producing a Metric Extension is publishing it, which moves it on beyond the test/draft “Deployable” phase and marks it as ready for use in anger. Select Publish Metric Extension from the Actions menu to do this.

A published Metric Extension can be included in a Monitoring Template, and also supports the nice functionality of managed upgrades of deployed Metric Extension versions. In this example I have three versions of the Metric Extension: version 2 is Published and deployed to a target, whilst version 3 is new and has just been published:

Clicking on Deployed Targets brings up the Manage Target Deployments page, and from here I can select my target on which v2 is deployed, and click on Upgrade

After the confirmation message “Metric Extension ME$USAGE_TRACKING upgrade operation successfully submitted.”, return to the Metric Extension page and you should see that v3 is now deployed to a target and v2 is not.

Finally, you can export Metric Extensions from one EM12c deployment for import and use on another EM12c deployment:

Conclusion

So that wraps up this brief interlude in my planned two-part set of blogs about EM12c. Next I plan to return to the promised JMeter/EM12c integration … unless something else shiny catches my eye in between …

Job Vacancy for Training Material Developer

November 5th, 2013 by

We’re looking for a talented technical writer to come and join us at Rittman Mead. You’ll be helping produce our industry-leading training material. We run both public and private courses, off the peg and bespoke, to consistently high levels of approval from those who attend.

You need to have excellent written English, with a particular skill for clear communication and a keen eye for detail in presentation. We run a lean team here and all material you produce must be “print-ready” without the need for further review and editing.

Technically, you must have solid knowledge of OBIEE and preferably ODI and BI Apps too, with additional products such as Endeca and GoldenGate a bonus. Experience of cloud technology and administration will help but is not mandatory. We use a variety of tools in preparing our training material, so you need to be able to adapt quickly to new software (hint: Powerpoint is not the best presentation tool ;-)).

The role is full time, and can be based remotely. If you are interested, please get in touch through this link. We'd like to see an example of your recent writing, such as a blog entry. If you've any questions, you can email me directly: robin dot moffatt at rittmanmead dot com (but to register your interest in the role, please do so via the link).
