Inside the Oracle BI Server Part 3 : BI Server In-Memory Joins

March 3rd, 2010

In the previous two postings in this series, I looked at the architecture of the Oracle BI Server, and how it processes incoming queries from Oracle BI Answers. In the latter article I touched on the concept of BI Server in-memory joins, and in this article I want to expand on this topic and look at just what goes on when the BI Server is called upon to combine data from multiple sources.

When the BI Server executes a query plan, it handles the data in four separate stages:


  • Firstly, filters and functions are applied to the data from each data source
  • Then, the data from these data sources are aggregated as required
  • Then they are joined together (or “stitched” together), and
  • Then, any calculations and/or aggregations that are applied across data sources are applied
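As a rough illustration, the four stages above could be sketched in Python; all function, column and data names here are hypothetical, not BI Server internals:

```python
# Illustrative sketch of the four stages applied to a federated query.
# All names and data are invented for illustration.

def run_query_plan(source_a, source_b):
    # 1. Apply filters and functions to the data from each source
    a = [r for r in source_a if r["year"] == 2009]
    b = [r for r in source_b if r["year"] == 2009]

    # 2. Aggregate each source's data as required (sum a measure per product)
    def aggregate(rows, measure):
        totals = {}
        for r in rows:
            totals[r["product"]] = totals.get(r["product"], 0) + r[measure]
        return totals

    sales = aggregate(a, "revenue")
    quotas = aggregate(b, "quota")

    # 3. Join ("stitch") the two aggregated result sets together on product
    joined = {p: (sales.get(p, 0), quotas.get(p, 0))
              for p in set(sales) | set(quotas)}

    # 4. Apply calculations that span data sources (e.g. % of quota achieved)
    return {p: rev / q if q else None for p, (rev, q) in joined.items()}
```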

In simple OBIEE environments, the data used by a request comes from a single database, and any joins that need to be performed will automatically be “pushed down” to that underlying database. Where more than one physical database is being used to provide data for a query, though, the join instead has to be performed by the BI Server “in memory”. This ability to “federate” data sources, producing reports and analysis that span multiple data sources while presenting the data to users as if it came from a single database, is one of the key unique features of OBIEE, and sets it apart from tools like Discoverer, which are really restricted to reporting against a single data source.

So given this capability, how does it work under the covers? When does the BI Server perform a join in-memory, and when does it get done at the underlying database level? Where can we see what is happening, and can we predict what method the BI Server will use when performing a join? Finally, what algorithm does the BI Server use when performing these joins, and how does it use memory and disk during the process?

To illustrate how the process works, there are a number of join scenarios that we need to consider. Some relate to joining fact and dimension tables together, and others relate to joining fact tables that share conforming dimensions, or hold conforming data sets of differing granularity.

Joining Fact and Dimension Tables Together

The BI Server semantic layer requires you to organize your business model and mapping layer into a star schema. This star schema may have one or more logical dimension tables that join to one or more logical fact tables. The logical fact tables typically share conforming dimensions, so that you can create requests that span multiple fact tables and multiple dimension tables.

Taking joins between fact and dimension tables first: depending on how the underlying physical or logical table source joins are set up in the semantic model, these may be either inner joins, left outer joins, right outer joins or full outer joins. The simplest example to consider is a business model that is mapped to a single physical database, so that all logical table sources point to the same underlying data source, as shown in the screenshot below:


In this case, if we issued a request against this business model that required data from a dimension table and a fact table, the BI Server would push the join between logical table sources down to the underlying database, a single SQL query would be generated and the execution plan from a level 5 query log entry would look like this:

The same would apply to a left outer join between table sources in the same database, a right outer join or a full outer join. The BI Server doesn’t do any work here except to issue a single SQL query, and you can see just the one “RqList” (request list) in the execution plan, indicating again that the BI Server thinks it only needs to put together one query to satisfy the request.
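As a loose analogy, here is a sketch using SQLite as a stand-in for the single physical database; table and column names are invented for illustration. With both tables in one database, a single SQL statement does all of the work:

```python
import sqlite3

# Stand-in for one physical database holding both the fact and the
# dimension table; all names and data are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE times_dim (time_id INTEGER PRIMARY KEY, year INTEGER);
    CREATE TABLE sales_fact (time_id INTEGER, amount REAL);
    INSERT INTO times_dim VALUES (1, 2009), (2, 2010);
    INSERT INTO sales_fact VALUES (1, 100.0), (1, 50.0), (2, 25.0);
""")

# The join and the aggregation are both "pushed down": one SQL statement
# is issued and the database returns the finished result set.
rows = conn.execute("""
    SELECT d.year, SUM(f.amount)
    FROM sales_fact f JOIN times_dim d ON f.time_id = d.time_id
    GROUP BY d.year ORDER BY d.year
""").fetchall()
print(rows)  # [(2009, 150.0), (2010, 25.0)]
```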

If, however, one of the logical dimension tables had its logical table source re-pointed to a separate physical database, as shown in the screenshot below, the BI Server would now have to do the join itself, as it can’t be pushed down to the underlying database (as there are now two of them).


In this case, two SQL queries would be issued, one against each of the two physical databases, and the BI Server would do the join in-memory (or to disk, I’ll elaborate on this later on). The corresponding logical execution plan from a level 5 log file would now look like this:

Notice the “InnerJoin <<2492>> On D901.c2 = D902.c2; actual join vectors: [ 0 ] = [ 1 ]” that is in the middle of the execution plan, between the two main RqLists – this tells you that the BI Server is doing the join, as it would only appear here if the join couldn’t be pushed down to the underlying database. You might also find references to LeftOuterJoin, RightOuterJoin and FullOuterJoin here, depending on how the join between the tables is defined in the physical or logical table source joins in your semantic layer.
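The same scenario can be sketched with two separate SQLite connections standing in for the two physical databases (all names and data are invented): one SQL query is issued per database, and the inner join then happens in the middle tier, in memory:

```python
import sqlite3

# Two separate "physical databases" (two SQLite connections); the join
# can no longer be pushed down. Names and data are illustrative only.
db1 = sqlite3.connect(":memory:")
db1.execute("CREATE TABLE sales_fact (time_id INTEGER, amount REAL)")
db1.executemany("INSERT INTO sales_fact VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 25.0)])

db2 = sqlite3.connect(":memory:")
db2.execute("CREATE TABLE times_dim (time_id INTEGER, year INTEGER)")
db2.executemany("INSERT INTO times_dim VALUES (?, ?)", [(1, 2009), (2, 2010)])

# One SQL statement per database...
facts = db1.execute(
    "SELECT time_id, SUM(amount) FROM sales_fact GROUP BY time_id").fetchall()
dims = dict(db2.execute("SELECT time_id, year FROM times_dim").fetchall())

# ...then an inner join performed in memory by the middle tier, loosely
# analogous to the InnerJoin node in the logical execution plan.
joined = sorted((dims[t], total) for t, total in facts if t in dims)
print(joined)  # [(2009, 150.0), (2010, 25.0)]
```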

Joining Facts with Conforming Dimensions Together

Another situation occurs when you are joining fact tables together that share conforming dimensions. A simple example of this is where you create a request that requires data from two or more fact tables that share conforming dimensions, such as those shown in the screenshot below:


As requests such as these can potentially lead to “fan trap” issues (explained in this blog post), the BI Server knows that it has to generate two logical queries and join, or “stitch” them together to avoid the fan trap. If both fact tables are sourced from the same physical database, and this database supports subquery factoring (the “WITH” clause that you see in Oracle 10gR2/11g SQL statements) then it will generate the following execution plan, which has a FullOuterStitchJoin between the two inner RqList blocks:

The BI Server Navigator then generates a single SQL statement from this execution plan, which queries both fact tables using subquery factoring and then brings the results together in the main body of the statement:

If the physical database doesn’t support subquery factoring, such as Oracle Database 10gR1 or earlier, then the BI Server generates a slightly different execution plan, again with a FullOuterStitchJoin, like this:

This is then resolved for this database into two separate SQL statements, which are then joined together “in-memory” by the BI Server.
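Conceptually, a full outer stitch join can be sketched as follows. The two dictionaries stand in for the two separately-executed fact queries, keyed on their conforming dimension values (all data is made up):

```python
# Hypothetical sketch of a full outer "stitch" join: two fact queries run
# separately, then the results are merged on the shared (conforming)
# dimension keys, keeping rows that appear in only one result set.
sales  = {("2009", "East"): 150.0, ("2009", "West"): 80.0}
orders = {("2009", "East"): 12,    ("2009", "North"): 5}

stitched = {
    key: (sales.get(key), orders.get(key))
    for key in set(sales) | set(orders)
}
# ("2009","West") has sales but no orders, and ("2009","North") the
# reverse: a full outer join keeps both, padding the missing side with None.
```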

Joining Table Sources within a Logical Fact

Another situation arises when a fact table has more than one logical table source, either because individual measures are sourced from different data sources, or because measures are mapped in at differing levels of granularity (this blog post describes such a scenario). In this case, the BI Server will again initially try to push the join down to the underlying database, something that may be possible if a single physical database is used and a technique like subquery factoring is available; more likely, though, it will require the BI Server to issue two or more physical SQL statements and then bring the results back together using a FullOuterStitchJoin.

Again, notice the FullOuterStitchJoin in the execution plan – this indicates that facts (as opposed to facts and dimensions) are being joined together.

This in turn leads to two separate SQL statements. The one against the “orcl” database is more complex because the results then need to be mapped to the aggregation level that the second source, “quotas”, comes in at:
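The grain-mapping step can be sketched like this, with invented data: the finer-grained (daily) source is first rolled up to the level the second (monthly) source comes in at, and only then are the two stitched together:

```python
# Hypothetical sketch: daily sales must be rolled up to the monthly grain
# that quotas come in at before the two sources can be stitched together.
daily_sales = [("2009-01-05", 40.0), ("2009-01-20", 60.0), ("2009-02-02", 30.0)]
monthly_quotas = {"2009-01": 90.0, "2009-02": 50.0}

# Map the finer-grained source up to the common (monthly) level...
monthly_sales = {}
for day, amount in daily_sales:
    month = day[:7]                      # "YYYY-MM-DD" -> "YYYY-MM"
    monthly_sales[month] = monthly_sales.get(month, 0.0) + amount

# ...then stitch the two result sets at the shared grain.
stitched = {m: (monthly_sales.get(m), monthly_quotas.get(m))
            for m in set(monthly_sales) | set(monthly_quotas)}
```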

So, to summarize things so far:

  • Where possible, the BI Server will try and generate a single SQL statement to resolve a request
  • And if possible, any joins that are required between tables will be pushed down to the database
  • If table data sources are located on separate physical databases, the BI Server will query each data source individually, and then join the results together in-memory using an inner, left outer, right outer or full outer join as appropriate
  • If facts (or measures within a fact) are being joined together, the BI Server will need to generate one logical query per logical table source, and bring the data together with a full outer stitch join
  • As mentioned above, if it’s possible to do this stitch join at the database level (using, for example, a WITH clause), it’ll do so
  • Otherwise the BI Server will generate separate SQL statements and join the data together in-memory

When an in-memory BI Server join happens between two tables, it will bring back both sets of data from the two (or more) table sources and then perform a sort-merge join to bring the data together. If possible, it will push the sort back to the underlying database and just do the “merge” part of the join, and it’ll in all likelihood page some of the temporary data to TMP files in $ORACLEBIDATA/tmp depending on the load on the server, available memory and the number of concurrent queries that it is running. The NQSConfig.INI BI Server parameter VIRTUAL_TABLE_PAGE_SIZE determines the point at which temporary data is paged to disk, and on a Unix server you can experiment with increasing it from its default setting if you have lots of unused memory available (the docs suggest that this will probably not have much of a positive effect, though).
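A minimal version of the sort-merge algorithm itself might look like the sketch below. This is a simplification: the real implementation can page intermediate data to disk (governed by VIRTUAL_TABLE_PAGE_SIZE), which is omitted here.

```python
# A minimal sort-merge join sketch: sort both inputs on the join key
# (or let the database return them pre-sorted), then merge the two
# streams with a pair of cursors. Rows are tuples with the key first.
def sort_merge_join(left, right, key=lambda row: row[0]):
    left, right = sorted(left, key=key), sorted(right, key=key)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = key(left[i]), key(right[j])
        if lk < rk:
            i += 1          # left cursor is behind: advance it
        elif lk > rk:
            j += 1          # right cursor is behind: advance it
        else:
            # Keys match: emit the cross product of the run of equal
            # right-hand keys against the current left row.
            j2 = j
            while j2 < len(right) and key(right[j2]) == lk:
                out.append(left[i] + right[j2][1:])
                j2 += 1
            i += 1
    return out
```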

Fragmented Data Sources

Another variation on a join that the BI Server can do is a “union” between two queries. This is most common when you have fragmented data sources, such as the example below where part of the data in the sales table comes from one table, and part from another.


In this case, the logical execution plan will contain RqUnionAll between the inner RqList request lists, to show that the BI Server knows it needs to apply a UNION ALL across the two queries.

Then, depending on whether the BI Server can resolve this using a single query or multiple queries against separate data sources, either a single SQL statement like the one below will be issued, or separate statements will be issued and the BI Server will perform the union all in memory.
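The fragmentation case can be sketched with invented data: a request spanning the fragment boundary becomes one query per fragment, and the result sets are simply concatenated (UNION ALL-ed) together:

```python
# Hypothetical fragmentation sketch: sales up to 2008 live in one source,
# sales from 2009 onwards in another. All data here is made up.
historic = [(2007, 10.0), (2008, 20.0)]   # fragment 1: year <= 2008
current  = [(2009, 30.0), (2010, 40.0)]   # fragment 2: year >= 2009

def query_fragment(rows, year_from, year_to):
    # Stands in for the physical SQL issued against one fragment
    return [r for r in rows if year_from <= r[0] <= year_to]

# A request for 2008-2009 touches both fragments: each gets its own
# query, and the middle tier simply concatenates the result sets.
result = query_fragment(historic, 2008, 2009) + query_fragment(current, 2008, 2009)
```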

Driving Tables (Parameterized Nested Loop Joins)

I mentioned earlier that BI Server joins are typically performed using the sort-merge algorithm. One variation on this, though, is when you set one of the two tables in a business model and mapping logical join to be a driving table, typically because you are federating fact and dimension tables and one table is much smaller than the other, as shown in the screenshot below.


The first thing to understand with driving tables is that they are regarded as a “hint” by the BI Server, and the BI Server may well choose to ignore the setting if it makes more sense to perform the join as normal (presumably, when both tables are relatively small). If the driving table instruction is followed, though, the BI Server will always do the join in-memory, even if both tables come from logical table sources pointing to the same physical database. In the execution plan shown below, you can see the InnerJoin (left drive) that indicates a parameterized nested loop join (PNLJ) will be required, and as the name suggests the BI Server will perform a nested loop join rather than the sort-merge join that it usually uses to join tables together.

This then leads to the following parameterized SQL statements being issued, with the first statement representing the “driving” query, and the second the “probing” one against the larger table.
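The drive/probe pattern can be sketched like this (all names and data are made up; the dictionary stands in for the large table, and the probe function for a parameterized SQL statement):

```python
# Sketch of a parameterized nested loop join: the small "driving" table
# is queried first, then its key values are fed as parameters into
# repeated probes against the larger table.
driving = [(1, "Bikes"), (2, "Parts")]                 # small dimension result
big_fact = {1: [100.0, 50.0], 2: [25.0], 3: [999.0]}   # stands in for the large table

def probe(table, keys):
    # Stands in for a parameterized probe: WHERE key IN (?, ?, ...)
    return [(k, v) for k in keys for v in table.get(k, [])]

# Probe with a batch of driving-table keys rather than row by row;
# only keys returned by the driving query ever reach the large table.
batch = [key for key, _ in driving]
joined = probe(big_fact, batch)
```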

In reality you rarely see driving table joins being used, as there are much better solutions for bringing small and large tables together – the main one being to co-locate the tables and then push the join down to the database, rather than bringing both datasets back and having the BI Server join them in memory (a point that also applies, to a lesser degree, to all BI Server joins). But this could be a useful “quick fix” until such time as you can co-locate the data, and it’s useful to remember that these types of joins are always done by the BI Server, due to the need to iterate through drive/probe operations.

Persist Connection Pools

One final variation on BI Server execution plans and join types is when you set up a “persist connection pool”. Persist connection pools are typically used in two scenarios: firstly, where Oracle/Siebel Marketing is being used, and secondly, where the underlying physical database doesn’t handle large numbers of values in an IN-list. In this case, you can set up a second connection pool within a physical database and specify it as the persist connection pool, as shown in the screenshot below:


I’ve never encountered a persist connection pool “in the wild”, so to speak, but an example query log output from when one was used is shown below. In this instance, the first query was sent to a MS Analysis Services database, and a persist connection pool was used to materialize the in-list results into a database table which is then joined back to the ORDERS table in the final query, rather than have the BI Server do the join in-memory.
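The idea can be sketched with SQLite (all table names here are invented): rather than sending a long IN-list, the value set is materialized into a temporary table and joined to inside the database:

```python
import sqlite3

# Sketch of the persist connection pool idea: instead of sending a huge
# IN-list to the database, materialize the value set into a temporary
# table and join to it, keeping the work inside the database.
# Table names and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (2, 20.0), (3, 30.0)])

# Values that would otherwise form a large IN-list (e.g. returned by a
# query against another data source)
in_list_values = [1, 3]
conn.execute("CREATE TEMP TABLE filter_values (customer_id INTEGER)")
conn.executemany("INSERT INTO filter_values VALUES (?)",
                 [(v,) for v in in_list_values])

# The final query joins back to the materialized value set instead of
# having the middle tier perform the join in memory.
rows = conn.execute("""
    SELECT o.customer_id, o.amount
    FROM orders o JOIN filter_values f ON o.customer_id = f.customer_id
    ORDER BY o.customer_id
""").fetchall()
```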


So, there you have it. The join strategy of the BI Server, as is the case with functions and calculations, is to push joins down to the underlying database wherever possible. If this can’t be done, either because the database version doesn’t support features like subquery factoring, or because the data for the request is being sourced from more than one physical database, the BI Server will do the join itself, initially in-memory but usually with temporary data being paged to disk.

There are two main types of BI Server join: regular (inner, left outer, right outer and full outer) joins, for bringing together fact and dimension tables; and full outer stitch joins, for bringing together facts and measures. There are also variations for handling joins from very small tables to very large tables (driving tables, or parameterized nested loop joins), or for when the physical database doesn’t support large in-lists; however, these issues are usually better handled by co-locating data or upgrading the database.

Finally, even though the BI Server is pretty clever at doing these types of joins, you’re usually better off investing your time in physically bringing your data together into a data mart or data warehouse than spending too much time fine-tuning these joins, though a knowledge of how they work (and how to read a level 5 execution plan) can be useful if you have to understand, or tune, an existing system in place. Of course, the level 5 execution plan doesn’t really tell you anything you couldn’t determine by looking at the design of the RPD – there’s nothing beyond this that might change the execution plan for a certain set of data, unlike the Oracle database, which varies its plans depending on the distribution and nature of the data – but it’s interesting to get a peek into the workings of the BI Server Navigator module.


  1. Palani Says:

    Nice article. I face the same behaviour when your tables are from different schema of same DB. You might even hit with this error [nQSError: 46008] Internal error: File .\DataType\SUKeyCompare.cpp

    Finally fixed it with creating public synonyms and having only one database in phy layer. Thus pushing BI to come with one single SQL.

    The same can be done for Cross DB Join(with DB Link) to gain good performance benefit, as long as they are Oracle DB.

  2. ravih Says:

    Brilliant .. you touched on the very topics i was wondering about on the previous post.


