Using the OBIEE Aggregate Persistence Wizard

October 26th, 2007

One of the interesting new features that came with OBIEE was the Aggregate Persistence Wizard. This utility automates the creation and initial population of aggregates, persists them in a back-end database, and configures the BI Server metadata layer so that they're used when appropriate. What's particularly interesting about this feature is that you can create aggregates for, say, an Oracle database and store them in a SQL Server, DB2 or Teradata database. It's one of those OBIEE "rapid application development" features that lets you tackle underlying database performance issues in a pragmatic way: if you can't create summaries in the back-end database, find any other database and create them there, and the BI Server will remember where it's stored them. It's also useful when your metadata layer maps to multiple back-end databases and there isn't an obvious single place to create your aggregates, as your facts span multiple databases.

So, how do you use this feature? Well, it's accessed from the BI Administrator tool, and relies on you first having set up dimensions in your logical model to describe the hierarchies and levels within your data. In this example, I'm looking at a simple dataset with three hierarchies: a customer hierarchy that goes from All Customers to Country, then State, then City and then Customer; a product hierarchy that goes from All Products through Category to Product; and a time hierarchy that goes from All Times through Year, Quarter, Month and Day.

In this scenario, I'm interested in aggregating the Quantity measure within the Items fact table by Product Category, Year and State. To kick things off, I select Tools > Utilities from the BI Administrator application menu, and then select Aggregate Persistence Wizard from the list of available utilities.

The way the wizard works is to let you select one or more fact table measures, then select the dimension hierarchy levels on which you'll aggregate, whereupon it generates a script that tells the BI Server to create the aggregates. You can run this script either through the scheduler (the method the docs recommend) or through the nqcmd.exe command-line utility found in the /OracleBI/server/bin directory (my recommended method, as this seems to be the only way I can get it to work).

To start off then, the first screen displayed by the wizard prompts you to enter a filename for the script it's going to generate. Note the option underneath to generate a DDL file as well; this is a bit of a red herring, as the main script it generates already tells the BI Server to create the back-end database tables anyway. I'm not quite sure what the DDL file is for, but it's definitely not needed for what we're doing.

You then select the business model, then the fact table and finally the measure(s) that you wish to aggregate. To keep things simple, I’m just aggregating the quantity measure in my Items fact table.

Then you select the dimension hierarchy levels on which you’ll aggregate the data.

Now you choose the database in which you'll store the aggregates. In this initial example, I'll store the aggregate tables in the same Oracle schema as the source data, but I could just as well store them in an Access database, or a SQL Server database, if I had one handy.

The wizard then shows me a preview of the script it's going to generate, and asks me if I want to create any more summaries. Here's how the preview script looks. Note that it's not SQL; it's a script language that the BI Server understands and uses as the specification for an aggregation job, which it then turns into separate CREATE TABLE, INSERT ... SELECT and other SQL statements, "optimized" for the particular back-end database platform.
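For reference, a script for the scenario above would look something like the following. The business model, connection pool and schema names here are illustrative ones I've made up for this example; yours will come from your own repository:

    create aggregates
    "ag_items"
    for "Sales"."Items"("Quantity")
    at levels ("Sales"."ProductDim"."Category",
               "Sales"."TimeDim"."Year",
               "Sales"."CustomerDim"."State")
    using connection pool "orcl"."Connection Pool"
    in "orcl"."AGG_SCHEMA";

The "at levels" clause is where the dimension levels you picked in the wizard end up, and the "in" clause names the physical database and schema that will hold the generated table.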

I then close the wizard down and take a look at the script that it’s generated.

Before I run the script, I insert another command into the script before the “create aggregates” command, to clear down any aggregates that might already exist:
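The command in question is the BI Server's "delete aggregates" statement, which drops any wizard-created aggregate tables and removes their metadata mappings. Prepended to the generated script, the result looks something like this (the names below are illustrative, matching my example model rather than anything the wizard mandates):

    delete aggregates;

    create aggregates
    "ag_items"
    for "Sales"."Items"("Quantity")
    at levels ("Sales"."ProductDim"."Category",
               "Sales"."TimeDim"."Year",
               "Sales"."CustomerDim"."State")
    using connection pool "orcl"."Connection Pool"
    in "orcl"."AGG_SCHEMA";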

Now the script is ready, I run it using the “nqcmd.exe” utility in the /OracleBI/server/bin directory.
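The invocation looks something like the line below. The DSN passed to -d is the BI Server ODBC data source (AnalyticsWeb is the default on a Windows install), -u and -p are repository credentials, and -s points at the script file; adjust all of these for your own environment:

```shell
nqcmd -d AnalyticsWeb -u Administrator -p password -s create_aggs.sql
```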

If I take a look in the schema in which it’s created the aggregates, I see that some system-generated tables are now there:

Looking at the Physical Layer in the BI Administration tool, I can see the new aggregate tables created and registered, and shown in red to show they’re aggregates.

Similarly, new Logical Table Sources have been created in the business model to link these aggregate tables to the existing logical model.

Now, if I run a query through Answers that selects the Quantity measure at this level of aggregation, and then check the NQQuery.log file to confirm that the new aggregate tables are being used, I can see it's all working OK.

Not bad. Of course, if you're running solely against an Oracle database, creating and maintaining your aggregates using materialized views probably makes more sense. But if you're running in a mixed environment, or for whatever reason you can't create materialized views, or you only have a DB2, SQL Server or other database lying around to create aggregates in, it's a useful feature to be aware of.
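For comparison, the pure-Oracle equivalent of the aggregate above would be a materialized view with query rewrite enabled, refreshed as part of the warehouse load. A sketch, with illustrative table and column names:

```sql
create materialized view items_agg_mv
build immediate
refresh complete on demand
enable query rewrite
as
select p.category, t.year, c.state,
       sum(i.quantity) as quantity
from   items i
join   products  p on i.product_id  = p.product_id
join   times     t on i.time_id     = t.time_id
join   customers c on i.customer_id = c.customer_id
group by p.category, t.year, c.state;
```

The trade-off is that query rewrite only helps queries hitting that one Oracle database, whereas the BI Server's aggregate navigation works across whatever mix of back-end databases your metadata layer maps to.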

Thanks to Christopher Bridge for reminding me of this feature, and for a few tips on getting it working. Cheers Chris.


  1. Raymond de Vries Says:

    Fantastic feature, especially if you are not able to use Oracle's MV query rewrite option. Up until now, I added self-created aggregate tables manually to the physical layer and set the levels in the business model layer. One question I do have: how are these tables refreshed? The article states that the tables are loaded when first created, but how are they kept up to date?

  2. Mark Rittman Says:

    Hi Raymond,

    I think the aggregates are meant to be rebuilt each time, i.e. there's no in-built refresh feature. I guess the use case here is that you load the underlying data warehouse, then use the wizard's script to recreate and populate the aggregates each time – unless anyone else knows of a way to refresh them, apart from re-running the script each time?



  3. Michael Wilcke Says:


    In the beginning you write "You can run this script either through the scheduler (the method the docs recommend) or ...".

    You seem to have had problems getting it working with the scheduler; however, this could be the solution for regular (i.e. scheduled) updates – right?


  4. Mark Rittman Says:

    Michael – yes, I suppose you’re right, although this won’t do the sort of incremental updates that we’re more used to with Oracle materialized views. But yes, it could be used as a way of rebuilding the aggregates to a set schedule. Well spotted.



  5. Christopher Bridge Says:

    Here's a piece of Oracle SQL I find useful to check/review the generated database objects:

    select * from all_objects
    where 1=1
    -- 10.1.3.x OBIEE generated object naming
    and (
        object_name like 'AG_%'
        or object_name like 'SA_%'
    )
    and substr(object_name, 3, 1) = '_'
    and owner = 'APPS'
    order by timestamp desc;

    Note the SQL above assumes "and owner = 'APPS'", as that is where I imported the connection pool. I mention this because I found lots of AG_ indexes in my eBS schemas (e.g. ENG, EGO).

  6. Marija Says:

    I tried to make use of the Aggregate Persistence Wizard but I did not manage to execute the script, either through nqcmd or through the Job Manager. I got the following error from the Job Manager:

    [Oracle][ODBC][Ora]ORA-01017: invalid username/password; logon denied

    Connect open failed

    From the command line I got a different error:

    create aggregates
    at levels ("biwh"."Inventory Status"."InventoryStatus")
    using connection pool "biwh"."Connection Pool"
    in "biwh"."STAGING_AREA"
    [901][State: S1000] [Oracle][ODBC][Ora]ORA-00901: invalid CREATE command

    Statement execute failed

    Any idea please?


  7. LearnerogOBIEE Says:

    I am getting this error when I try to run the script from the command line.

    create aggregates
    create aggregates
    [901][State: S1000] [Oracle][ODBC][Ora]ORA-00901: invalid CREATE command

    Statement execute failed

    for "SH"."Sales Fact"("Gross Profit Physical")
    at levels ("SH"."TimesDim"."Fiscal Month")
    using connection pool "sh"."Connection Pool"
    in "sh".."SH"
    for "SH"."Sales Fact"("Gross Profit Physical")
    at levels ("SH"."TimesDim"."Fiscal Month")
    using connection pool "sh"."Connection Pool"
    in "sh".."SH"
    [900][State: 37000] [Oracle][ODBC][Ora]ORA-00900: invalid SQL statement

    Statement execute failed

    Processed: 2 queries
    Encountered 2 errors
