Rittman Mead BI Forum 2014 Now Open for Registration!

March 2nd, 2014 by

I’m very pleased to announce that the Rittman Mead BI Forum 2014 running in Brighton and Atlanta, May 2014, is now open for registration. Keeping the format as before – a single stream at each event, world-class speakers and expert-level presentations, and a strictly-limited number of attendees – this is the premier Oracle BI tech conference for developers looking for something beyond marketing and beginner-level content.

This year we have a fantastic line-up of speakers and sessions, including:

  • Oracle ACE and past BI Forum best speaker winner Kevin McGinley, on adding third-party visualisations to OBIEE
  • Tony Heljula, winner of multiple best speaker awards and this year presenting on Exalytics and TimesTen Columnar Storage
  • Sessions from TimesTen PMs Chris Jenkins and Susan Cheung on what’s coming with TimesTen
  • Edward Roske, author of multiple books on Essbase, on Essbase optimisation
  • Oracle’s Andrew Bond, and our own Stewart Bryson (Oracle ACE) with an update to Oracle’s reference BI, DW and Big Data Architecture
  • Sessions from Oracle’s Jack Berkowitz, Adam Bloom and Matt Bedin on what’s coming with OBIEE and Oracle BI Applications
  • Endeca sessions from Chris Lynskey (PM) and Omri Traub (Development Manager), along with ones from Branchbird’s Patrick Rafferty and Truls Bergersen
  • And sessions from Rittman Mead’s Robin Moffatt (OBIEE performance), Gianni Ceresa (Essbase) and Michael Rainey (ODI, with Nick Hurt from IFPI)

We’ve also got some excellent keynote sessions including one in the US from Maria Colgan on the new in-memory database option, and another in Brighton from Matt Bedin and Adam Bloom on BI in the Cloud – along with the opening-night Oracle product development keynote in both Brighton and Atlanta.

We’re also very excited to welcome Lars George from Cloudera to deliver this year’s optional one-day masterclass, which covers Hadoop, big data, and how Oracle BI&DW developers can get started with this technology. Lars is Cloudera’s Chief Architect in EMEA and an HBase committer, and he’ll be covering topics such as:

  • What is Hadoop, what’s in the Hadoop ecosystem and how do you design a Hadoop cluster
  • Using tools such as Flume and Sqoop to import data into Hadoop, and then analyse it using Hive, Pig, Impala and Cloudera Search
  • Introduction to NoSQL and HBase
  • Connecting Hadoop to tools such as OBIEE and ODI using JDBC, ODBC, Impala and Hive

If you’ve been meaning to take a look at Hadoop, or if you’ve made a start but would like a chance to discuss techniques with someone who’s out in the field every week designing and building Hadoop systems, this session is aimed at you – it’s on the Wednesday before each event and you can book at the same time as registering for the main BI Forum days.


Attendance is limited to around seventy at each event, and we’re running the Brighton BI Forum back at the Hotel Seattle, whilst the US one is running at the Renaissance Midtown Hotel, Atlanta. We encourage attendees to stay at the hotel as well so as to maximise networking opportunities, and this year you can book US accommodation directly with the hotel so you can collect any Marriott points, corporate discounts etc. As usual, we’ll take good care of you over the two or three days, with meals each night, drinks receptions and lots of opportunities to meet colleagues and friends in the industry.

Full details are on the BI Forum 2014 web page, including links to the registration sites. Book now so you don’t miss out – each year we sell out in advance, so don’t leave it to the last minute if you’re thinking of coming. Hopefully see you all in Brighton and Atlanta in May 2014!

Thoughts on Using Amazon Redshift as a Replacement for an Oracle Data Warehouse

February 20th, 2014 by

Recently, my colleague, Pete Carpenter, described a proof of concept we carried out using Amazon Redshift as the data warehouse storage layer in a system capturing data from Oracle E-Business Suite (EBS) using Attunity CloudBeam in conjunction with Oracle Data Integrator (ODI) for specialised ETL processing and Oracle Business Intelligence (OBI) as the reporting tool.

In this post I will look at Amazon Redshift and how it compares with a more traditional data warehouse approach, using Oracle as my example. I am not going to talk about performance in absolute terms, as your mileage is going to vary.

What is Redshift?

Redshift is Amazon’s cloud data warehousing service; it can interact with Amazon EC2 and S3 components but is managed separately using the Redshift tab of the AWS console. As a cloud-based system it is rented by the hour from Amazon, and broadly the more storage you hire the more you pay. Currently there are two families of Redshift servers: the traditional hard-disk based family, and the recently introduced SSD family, which has less storage but far more processing power and faster CPUs. For our trials we looked at the traditional disk-based storage on a two-node cluster, giving us 4TB of disk spread across 4 CPU cores. Apart from single-node configurations, Redshift systems consist of a leader node and two or more database nodes; the leader node is supplied free of charge (you only pay for the storage nodes) and is responsible for acting as the query parser, coordinating the results from the database nodes, and providing a central network address for user access.

The Redshift product has its origins in ParAccel, which in turn has its roots in Postgres, and it thus supports ANSI SQL and the Postgres ODBC and JDBC drivers. In basic terms it is a shared-nothing, parallel-processing columnar store database that supports columnar compression.

At the cluster level all sorts of robustness features come into play to handle routine hardware failures such as the loss of a node or disk; regular automatic backups occur, and on-demand backups can be made to S3 storage for DR or replication to other AWS networks. It is possible to dynamically change the number and/or type of Redshift nodes in use: in effect a new cluster is spun up and the data copied from the existing system to the new one before the old system is dropped. The original database remains open for query (but not update) during the scale-out (or scale-down) process. As Pete Carpenter described, creating a new Redshift instance is a simple matter of completing a few web forms and waiting for the cluster to come up. Once it is up you can connect to the database using the master credentials you specified at cluster creation and then create databases, users, and schemas as required.

Databases, users, schemas and security

Although it is possible to run a Redshift database using the master user and the default database, good practice suggests that we do a bit more than this. In some ways Redshift is a little like the Oracle 12c database in that we can create additional databases within the master database, much in the style of pluggable databases; a major difference comes with the concept of a USER. In Oracle 12c a user belongs to a pluggable database, whereas in Redshift all users belong to the master (container) database and can see any of the contained databases (subject to grants). Schemas are logical groupings for objects and need not be aligned to database user names. Standard object and role grants allow users to access specific databases, schemas, and tables, or to have role-rights such as administrator. The final aspect of security sits outside the database and is in effect a firewall rule that permits nominated AWS users or specified IP addresses to speak to the database listener; by default the rule is no inbound access. The diagram below is a block representation of how databases, users, schemas and the firewall interrelate. Note that the user names shown are descriptive rather than valid names!

[Diagram: how databases, users, schemas and the firewall interrelate]
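
As a minimal sketch of the kind of setup described above (the database, schema, user and table names here are purely illustrative, not those from our PoC), connected as the master user we might run:

-- create an additional database within the cluster
CREATE DATABASE dw_poc;

-- users are cluster-wide, not tied to a single database
CREATE USER report_user WITH PASSWORD '<password>';

-- schemas group objects and need not match user names
CREATE SCHEMA sales;

-- standard grants control access to schemas and tables
GRANT USAGE ON SCHEMA sales TO report_user;
GRANT SELECT ON sales.orders TO report_user;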

Database Design

A key point of difference between Amazon Redshift and Oracle is in how the data is stored or structured in the database. An understanding of this is vital to designing a performant data warehouse. With Oracle we have shared storage (SAN or local disk) attached to a pool of processors (a single machine or a cluster); Redshift, however, uses a shared-nothing architecture, that is, the storage is tied to the individual processor cores of the nodes. As with Oracle, data is stored in blocks, but the Redshift block size is much larger (1MB) than the usual Oracle block sizes. The real difference is how tables are stored in the database: Redshift stores each column separately and optionally allows one of many forms of data compression. Tables are also distributed across the node slices so that each CPU core has its own section of the table to process. In addition, data in the table can be sorted on a sort column, which can lead to further performance benefits; I will discuss this in the section on tables.

Not all of the database features we come to expect in an Oracle data warehouse are available to us in Redshift. The Redshift Developer Guide has the full rundown on what is available, but for now here is a short list of common DW features that are not going to be available to us.

  • Tablespaces
  • Indexes
  • Partitions
  • Constraints
    • Check
    • Primary, Unique, Foreign Key (all usable by optimizer but not enforced)
  • Spatial (Locator) functionality
  • Sequences (although there is an AUTO NUMBER column type)
  • MERGE – we have to code as UPDATE and INSERT in two steps
  • In-database PL/SQL-like language
    • Triggers
    • User defined functions
    • Procedures
  • Timestamps (with timezone)
  • XML types
  • Pseudo columns
  • Various SQL functions (not a full list, but functions I often use in ETL processes)
    • Regular expressions
    • Regression functions
    • SUBSTR
    • TRANSLATE
    • ROW_NUMBER
    • TO_TIMESTAMP

In addition, data types may not be exactly the same as those used in Oracle; for example, DATE in Oracle has a resolution of one second, whereas DATE in Redshift has a resolution of one day.

Tables

The basic Oracle syntax to create a table works (as does CTAS, Create Table As Select), however there are additional items we can, and should, specify at table creation.

By default the data distribution style is EVEN, that is, data is distributed between node slices in a round-robin fashion. For performance we may wish to specify a distribution key to allow a particular column to control how data is distributed; this is a similar concept to Oracle hash partitioning, with the same sort of performance characteristics. We aim to create an even distribution of rows per slice (otherwise one slice will take longer than the others to process its data), and by applying the same distribution to other tables that are commonly joined we can benefit from improved join performance, as the matching rows are stored on the same node slice. Sometimes it is more appropriate to replicate the whole table to every node so that the data is always available to join without needing to be moved before the join; in such cases we set the distribution style to ALL, as sketched below.
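
For example, a small dimension table that we want replicated in full might be declared with the ALL distribution style (an illustrative table, not one from the PoC):

CREATE TABLE SMALL_DIM
(
DIM_KEY INTEGER NOT NULL,
DESCRIPTION VARCHAR(50)
)
DISTSTYLE ALL;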

The second thing we can set on a table is the SORTKEY; this specifies one or more columns by which the data is ordered on data load (it can be the same column as the distribution key). Redshift maintains information on the minimum and maximum values of the sort key in each database block and at query time uses this information to skip blocks that do not contain data of interest.

Finally, we can elect to compress columns in the database. If we do not specify compression, the default of RAW (i.e. uncompressed) is used. For compressed data we can specify the compression algorithm used; different algorithms are better suited to certain data types and values. Compression may be data-block based (DELTA, BYTE-DICTIONARY, RUNLENGTH, TEXT255 and TEXT32K) or value based (LZO and the MOSTLY compressions). This sounds daunting, but there are two ways we can get compression suggestions from the database: the ANALYZE COMPRESSION command on a loaded table, and the AUTO COMPRESS feature of the COPY command, which requires an empty, non-compressed target table. COPY is the Redshift equivalent of SQL*Loader and takes a flat file and inserts it into the database.
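
As a rough sketch of the two routes (the table, S3 path and credentials below are placeholders):

-- ask Redshift to suggest column encodings for an already-loaded table
ANALYZE COMPRESSION T1;

-- or let COPY choose encodings automatically while loading an empty, uncompressed table
COPY T1
FROM 's3://my-bucket/ORA_STG/t1_data.csv'
CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
DELIMITER ','
COMPUPDATE ON;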

Let’s consider a simple table T1 with three columns, C1, C2 and C3. We can create this using a simple piece of DDL:

CREATE TABLE T1
(
C1 INTEGER NOT NULL,
C2 VARCHAR(20) NOT NULL,
C3 DATE
);

I have not used any of the Redshift nice-to-have features for sorting, distribution, and compression of data. Note too that I am using NOT NULL constraints; this is the only constraint type enforced in the database. This simple create statement creates database objects on each slice of the cluster, with one block per column per slice (1 slice = 1 CPU core) – see the following diagram. Note that there is no single table object stored in the database; a table is a collection of columns.

[Diagram: table blocks per column per slice]

Without specifying a distribution key, data is evenly spread across all slices. When a 1MB block for a column is full, a new block is created for subsequent inserts on that slice. An empty table will therefore occupy block size * number of columns * number of cores; with a 1MB block size this works out as columns * cores megabytes – for example, our three-column table T1 on a four-core cluster occupies a minimum of 12MB when empty.

Using a distribution key effectively hashes the data on the key column across the available cores. Adding a sort key declares that the rows in the table are ordered, and potentially allows block elimination to kick in. If our sort key is, say, transaction date, it is likely that our data loads occur in transaction date order; however, if we sorted on product code we might find that each data load has data that needs to be inserted between existing rows. This does not happen: the data is still appended to the table, and the table then needs to be reorganised to put the rows back in order. There are two ways to achieve this: the VACUUM command, which does an online reorganisation of the table, and the potentially faster route of creating a copy table, populating it, and then dropping the original and renaming the copy – although this second route gives a little downtime while the original table is unavailable for access.
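
A sketch of the two approaches, assuming a table T1 that we want re-ordered on its sort key column C3:

-- option 1: reorganise in place
VACUUM T1;

-- option 2: deep-copy into a new table in sort order, then swap the tables over
CREATE TABLE T1_NEW (LIKE T1);
INSERT INTO T1_NEW SELECT * FROM T1 ORDER BY C3;
DROP TABLE T1;
ALTER TABLE T1_NEW RENAME TO T1;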

Applying compression, sort and distribution we get a DDL statement like:

CREATE TABLE T2 
(
C1 INTEGER NOT NULL,
C2 VARCHAR(20) NOT NULL SORTKEY DISTKEY,
C3 DATE ENCODE DELTA
);

This table uses column C2 as both the sort key and the distribution key; column C3 is compressed using delta compression – an efficient algorithm when most dates are within ±127 days of the date in the previous row. If we wanted to use a multi-column sort key, the DDL syntax would be:

CREATE TABLE T1 
(
C1 INTEGER NOT NULL,
C2 VARCHAR(20) NOT NULL DISTKEY,
C3 DATE 
)
SORTKEY (C3,C2);

Multi-column distribution keys are not supported.

Designing for Performance

Redshift is designed for query and bulk insert operations; we can optimise query performance by structuring data so that less data is transferred between nodes in join operations, or less data is read from disk in a table scan. Choosing the right sort keys and distribution keys is vital in this process. Ideally these key columns should not be compressed. Adding primary and foreign keys to the tables tells the optimizer about the data relationships and thus improves the quality of the query plans being generated. Of course, up-to-date table stats are a given too: tables must be ANALYZEd whenever the contents change significantly, and certainly after the initial load. I feel that we should collect stats after each data load.
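
For example, after a load into T1 we might simply run:

-- refresh optimizer statistics for the table (omit the table name to analyze the whole database)
ANALYZE T1;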

For a FACT + DIMENSIONS data model (such as in the performance layer of Oracle’s Reference Data Warehouse Architecture) it would be appropriate to distribute data on the dimension key of the largest dimension, on both the dimension and the fact tables; this will reduce the amount of data being moved between slices to facilitate joins.

For optimal performance we should always ensure we include both the distribution keys and the sort keys in any query, even if they appear to be redundant. The presence of these keys forces the optimizer to access the tables in an efficient way.

For best data load performance we insert rows in bulk and in sort key order. Amazon claims the best performance comes from using the COPY command to load from flat files, with bulk insert SQL commands such as CTAS and INSERT INTO T1 (SELECT * FROM T2) coming second. Where Redshift performs less well is when we use certain kinds of ETL steps in our process, particularly those that involve updating rows or single-row activities. In addition, loading data without respecting the sort key leads to performance problems at query time. If data update is essential we have two real options: we move our ETL processes to a conventional database hub server (perhaps using ODI) and just use Redshift to store pre-transformed data; or we revise our ETL processes to minimize update activity on the Redshift platform. There is some scope to optimize updates by distributing data on the update key, but another approach is to use temporary tables to build the result of the update and then replace the target table with the result of that merge. This requires a bit of inventiveness with the ETL design, but fortunately many of the SQL constructs we need, including analytic functions, are there to help us.
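
As a sketch of that kind of approach, assume a batch of changed rows has been staged in a table T1_UPDATES (an illustrative name) keyed on C1; a MERGE substitute is then an update of existing rows followed by an insert of the genuinely new ones:

BEGIN;

-- update rows that already exist in the target
UPDATE T1
SET C2 = U.C2, C3 = U.C3
FROM T1_UPDATES U
WHERE T1.C1 = U.C1;

-- insert rows that are not yet in the target
INSERT INTO T1
SELECT U.*
FROM T1_UPDATES U
LEFT JOIN T1 ON T1.C1 = U.C1
WHERE T1.C1 IS NULL;

COMMIT;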

Building an Amazon Redshift Data Warehouse using ODI and Attunity Part 2 : Using ODI and OBIEE with Redshift as a Target/Source

February 19th, 2014 by

In my post yesterday we stepped through the initial set-up and staging load for a data warehouse using Amazon Redshift and Attunity, for eventual use with OBIEE and ODI. Now that we have our source data in Redshift, let’s look at transforming it into a star schema using ODI, starting with how we set up the connection to Redshift in ODI’s Topology Navigator.

As I mentioned in yesterday’s post, and in a blog article we wrote on non-Oracle databases a few months ago, Redshift is based on ParAccel technology but uses PostgreSQL ODBC and JDBC drivers to connect to it. Therefore, we can create a Physical Architecture connection in ODI through to it using the PostgreSQL technology type, like this:

[Screenshot: Odi config 1a]

For the Redshift JDBC URL you’ll need, refer back to the AWS cluster summary page on the Amazon AWS Console.

[Screenshot: Odi config 2a]

We can then add our schemas:

[Screenshot: Odi config 3a]

Then, if your connection to the Redshift database is working correctly, you should see a list of schemas in the drop down menus:

[Screenshot: Odi config 4a]

Once all the schemas have been added, we then go through the familiar ODI process of logical architecture and context mapping:

[Screenshot: Odi config 5a]

Next, we can create a model and reverse-engineer in our datastores:

[Screenshot: Odi config 6a]

It’s a good idea at this point to review each datastore and ensure that all attributes have been correctly assigned a datatype. To save a little reading time, a set of dimension and fact tables were created in the performance schema, and these were also reverse engineered in:

[Screenshot: Odi config 7a]

So from here on out, the process of creating and using mappings, packages, variables etc. to populate your target tables is fairly straightforward, and one that ODI developers will be familiar with. You add your source and target datastores and implement the transformation logic.

[Screenshot: Odi config 8a]

For the initial load, the SQL Control Append IKM was used, and for the most part this KM worked well without customisation. However, one of the early issues encountered during execution was with unordered outer joins – this appears to be an issue with the PostgreSQL technology. It was resolved by amending the mapping and setting an order for the joins:

[Screenshot: Odi config 9a]

Merging / Incremental loading

One of the things to be aware of with Redshift at the time of writing is the absence of MERGE functionality (we’re so spoilt in Oracle DBMS land ;) ). You can of course achieve the same effect by issuing separate insert and update statements. There is also the SQL Incremental Update IKM in ODI, but you may want to review the KM task steps against your requirements before proceeding, as invariably some customisation will be required to improve performance. The same applies to SCD functionality – you should consider utilising analytic window functions (Redshift supports several aggregate and ranking window functions) to achieve your desired format.
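
For instance, a latest-version-per-key selection over a change-tracking table might look something like this (the table and column names are purely illustrative):

SELECT *
FROM (
  SELECT C.*,
         RANK() OVER (PARTITION BY GL_ID ORDER BY CHANGE_SEQ DESC) AS RNK
  FROM STAGING.GL_BALANCES_CT C
) X
WHERE RNK = 1;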

Additionally, as mentioned previously, Attunity supports a variety of replication tasks – you can do a one-off initial load, an initial load plus apply changes, and/or collect deltas into a change-tracking table in the Redshift database. This in itself means you have the basic building blocks to create and maintain a Foundation layer as part of your data warehouse design.

So, once we have data in our target fact and dimension tables, we can model these in OBIEE and create some analyses.

OBIEE Configuration

 As per the AWS documentation for Redshift, the recommended ODBC driver was downloaded from the Amazon AWS website, and an ODBC data source was created on the OBIEE server:

[Screenshot: Obiee config 1a]

For the server entry, you can use either the public or private leader node IP, depending on your network/security configuration. Just backtracking a little here – you may recall that the usr_dw account was created after the initial Redshift cluster build. In order for it to be able to access the tables in the performance schema, we need to grant it the required select privileges.
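
For example (the table names below are illustrative – substitute your own performance-layer tables):

grant usage on schema performance to usr_dw;
grant select on performance.sales_fact to usr_dw;
grant select on performance.product_dim to usr_dw;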

Once the ODBC connection is defined, we can create an RPD, define our database connection properties, import our tables and start modelling.

[Screenshot: Obiee config 2a]

[Screenshot: Obiee config 3a]

When it came to setting the database features, I used the query DBMS function and then went with what it set. Bear in mind that because PostgreSQL and Redshift aren’t supported source databases for OBIEE, depending on the analyses you are creating you may or may not hit functionality issues, so you may find yourself tweaking these settings.

[Screenshot: Obiee config 4a]

So if your connection is defined correctly and there are no other issues, you should now be able to import the tables:

[Screenshot: Obiee config 5a]

Once you’ve made your selection, you can then define the relationships between the fact and dimension tables (if you have primary and foreign keys defined, these should propagate through), create any aliases etc., and then you’re about ready to start building up the Business Model and Presentation areas.

[Screenshot: Obiee config 6a]

Once the RPD modelling is completed, we can upload it and create some content.

[Screenshot: Obiee config 8a]

[Screenshot: Obiee config 7a]

Admittedly the analyses created for this dashboard were fairly basic, so no functionality issues were encountered during creation and execution – you might hit some with more “edge-case” requests such as analytic functions or nested subtotals, or you might end up hitting performance issues when you’ve got lots of concurrent users, as the SQL issued by OBIEE might not be as efficient as it would be for a supported data source.

Performance Considerations for Redshift

Pete Scott will be covering the ins and outs of designing for performance in a blog post tomorrow, and I’ll add a link to it from this post once it’s up. However, one thing that should be mentioned is the importance of setting sort and distribution keys appropriately on the fact and dimension tables, as well as primary and foreign keys (whilst these constraints are not enforced, they are used by the optimiser when determining execution plans). The sort and distribution keys determine how the data is stored on disk and how it is distributed across the compute nodes, and they can make a big impact on query response times – quite apart from any issues arising from the lack of official support in OBIEE for Redshift as a data source.

So to conclude – whilst not officially supported, it is possible to create and maintain a data warehouse in Redshift and use the Oracle product set to drive ETL and reporting. Some of the features that make Redshift an attractive DB platform are its ease of management, its scalability and sizing options, and the ability to get a cluster up and running in hours rather than days. Combined with ODI and OBIEE, and therefore a transferable skills base, it makes for an intriguing DW solution.

Building an Amazon Redshift Data Warehouse using ODI and Attunity Part 1 : Configuring Redshift, and CDC Using Attunity

February 18th, 2014 by

One of our longstanding Oracle customers recently asked us to put together a proof-of-concept DW system using Amazon Redshift as the data warehouse database, rather than Oracle Database. The main driver for this was the economics of running Redshift in the cloud vs. Oracle on-premise or on Amazon RDS, and they were also interested in the potential performance benefits of running their data warehouse on a column-store database. They were also interested in trying out Attunity Replicate with CloudBeam as the CDC solution, a product conceptually similar to Oracle GoldenGate but hosted on the Amazon AWS platform, and licensed by data volume and month rather than the per-CPU, perpetual licensing of GoldenGate.

Because the client is a big user of Oracle ETL technology though – currently OWB, moving to ODI in the near future – they still wanted to use ODI to do the main data integration work, so an interesting question for us was whether ODI was a suitable tool for use with Redshift, which isn’t an officially supported source or target platform for ODI. It’s important to understand what “not supported” or “not certified” means in this context – it doesn’t mean it won’t work, it just means you’re on your own if it doesn’t. What gave us hope though was that Redshift uses Postgres-compatible ODBC and JDBC drivers, and we got it working with OBIEE a few months ago, so the premise for the PoC seemed reasonable to us. The diagram below shows the architecture we went with at the start of the project.

[Diagram: PoC architecture]

 

A little about the non-Oracle components:

  • Amazon Redshift – Amazon’s fully managed data warehouse offering in the cloud. Based on ParAccel technology (which itself has roots in PostgreSQL), the service utilises MPP, columnar storage and compression principles to enhance query performance.
  • Attunity Replicate & CloudBeam – Attunity provides log-based change data capture and replication functionality, which includes Redshift as a target database. Attunity supports ongoing source-to-target replication as well as storing change data in audit / change-tracking tables. In order to replicate data to Redshift, you need to set up an account with Attunity and then sign up for a CloudBeam subscription.
  • Amazon S3 – Amazon’s cloud storage service, which is used by Attunity to stage replication/change data prior to loading into the Redshift database.

Initial Configuration and Load

Source Database

The source RDBMS used for this proof of concept was an Oracle EBS database running on 11.1.0.7. In particular, we used the GL schema as this housed some sizeable data. The database requirements for Attunity Replicate are: 

  • Add supplemental logging (for this experiment full logging was added on the GL tables) – see the sketch after this list
  • Switch the database into archive log mode
  • Optionally create a dedicated database account for Attunity – this requires select privileges on various V$ views, the GL schema tables, and execute on the LogMiner packages. It’s at this point you may want to touch base with your resident DBA :)
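
As a sketch of the first two steps (illustrative only – the exact statements, table list and downtime window will depend on your environment), from SQL*Plus as a SYSDBA-privileged user:

-- supplemental logging on a GL table (repeat for each replicated table)
ALTER TABLE GL.GL_BALANCES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- switching into archive log mode requires a database bounce
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;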

S3 Storage

The Attunity CloudBeam service initially stages files in an S3 bucket before loading them into the target database schema. To create the bucket, browse to the S3 section of the AWS Console and click Create Bucket. Once the bucket has been created, you then need to create a folder within it; in this instance the folder was named ORA_STG.

[Screenshot: S3 bucket and ORA_STG folder]

 

Redshift Cluster

You deploy a Redshift database from the AWS management console. From the Redshift page, select a Region, and then Launch Cluster. From here, fill out the cluster details, for example:

[Screenshot: Redshift config 1a]

Leaving the database name blank will create a database named ‘dev’.  

After clicking Continue, the next page allows you to size your cluster. For our test case, we opted for a two-node cluster (which in fact has three nodes – one leader node and two compute nodes):
 

[Screenshot: Redshift config 2a]

The next configuration screen allows you to set various security and network options:

[Screenshot: Redshift config 3a]

Finally you are presented with a summary screen confirming your configuration options. The important thing to note here is that as soon as the cluster is launched and running, you will start being charged.

[Screenshot: Redshift config 4a]

After you click Launch, the cluster build will begin, and after some time the Status will switch from creating to Available. Clicking the Cluster Name hyperlink will show you a summary screen, detailing security config, cluster status, information about backups and URLs for connecting to your cluster. 

[Screenshot: Redshift config 5a]

At the bottom of the screen is a list of public and private IPs for the leader and compute nodes of the cluster:

[Screenshot: Redshift config 6a]

We used the public IP of the Leader node for our inbound client connections.

The final step in terms of cluster configuration was to configure a list of IPs that were authorised to access the cluster. This is handled from within the security screen on the Redshift Management Console.

[Screenshot: Redshift config 7a]

Once you’ve authorised your machine, you can connect to the Redshift database via the JDBC/ODBC URLs shown on the cluster summary screen previously. Amazon recommend using SQL Workbench, and also link to JDBC and ODBC drivers which you should install. So, the next step was to enter the connection details, connect to the database and create some target users and schemas.

The connection details were defined using the downloaded JDBC driver and the JDBC URL from the AWS summary screen:

[Screenshot: Redshift config 8a]

Once connected, users and schemas were created with the following statements:

create schema performance;
create schema staging;
create schema foundation;
 
create user usr_stg with password '<password>';
create user usr_fnd with password '<password>';
create user usr_dw with password '<password>';
create user odi_work with password '<password>';

NB: in Redshift, passwords must contain at least one uppercase letter, one lowercase letter and one number, and be at least 8 characters long.

Once the users and schemas are created, we can then grant privileges to the odi_work account so that it can create objects across the schemas:

grant usage,create on schema staging to odi_work;
grant usage,create on schema performance to odi_work;
grant usage,create on schema foundation to odi_work;

Of course, how you would manage security and database privileges in your own environment is entirely up to you. You can configure object level privileges as your database security guidelines dictate. 

Once your users are set up, you can create objects and grant privileges as required. 

Attunity

Once the Attunity Replicate software is installed, it’s a case of adding the source and target databases.

Source: Oracle EBS database

 

Clicking Manage Databases and then Add Database brings up the following dialogue, in which you fill in the details of the database. The Advanced tab contains details around how Attunity implements and executes CDC/log Mining. 

[Screenshot: Add db 1a]

[Screenshot: Attunity 1a]

Target: Redshift Cluster

You follow the same process as above, but select Amazon Redshift as the Type and specify it as a target. Within this screen, you enter details of the cluster endpoint address, the port (refer back to the AWS cluster summary screen), and a user name and password to access the database. If you have entered the details correctly and you can reach the cluster, you should be able to drop down the Database Name list and see the dev database. The next section of this dialog is for details of your Attunity CloudBeam subscription, and then your S3 bucket and folder. It should be noted here that in your own environment you may not want to use the master database account, but instead opt for a slightly less privileged user.

[Screenshot: Attunity 2a]

Once you have filled out each section, it is worth clicking the Test button on the bottom left of the dialog to ensure all details are correct.

So, now that we have our source and target databases configured, the next step is to create a Replication Task. The task can be a combination of the following options: 

  • Full Load – replicate a table from source to target
  • Apply Changes – combined with the above, maintains the target copy with changes that occur on source
  • Store Changes – CDC deltas stored in a separate table 

Once you’ve decided on the type of replication task, you drag and drop the source and targets onto the palette, configure which tables you want to replicate from source, and specify any transformation rules e.g. to move to a different target schema. In our test case, we created an initial load task and a rule was created to map the GL schema in the source to the staging schema in the target. 

[Screenshot: Attunity 4a]

[Screenshot: Attunity 6a]

Within the task settings, you can choose to either create new target tables, or use pre-created ones. This is useful if you want to create the tables with certain options, e.g. specifying the sort and distribution keys. 

[Screenshot: Attunity 5a]

Once you’ve finished reviewing and tweaking settings, you can run the task. Once kicked off, you can review progress from the Monitor screen:

[Screenshot: Attunity 7a]

Any errors will be flagged up and can be reviewed in the Messages section. When the load completes, you can verify the existence of the tables in the Redshift database: 

[Screenshot: Attunity 8a]

We now need to grant select privileges on the above objects to odi_work (unless you configured Attunity to use the odi_work account) before it can access them: despite residing in a schema that odi_work can create objects in, they are owned by a different user, and therefore permissions still need to be granted.

grant select on <schema>.<object> to odi_work;

 

So the first stage of getting our data from source to Redshift is complete. Tomorrow, we will look at transforming our initial load data into a target star schema using ODI.

Using Groovy instead of WLST for OBIEE Systems Management

February 17th, 2014 by

I remember some years back when Rittman Mead first received the OBIEE 11g beta and I tried installing it for the first time. It was a nightmare. If you think the 11.1.1.3 release was challenging, you should have tried working with any one of the betas. It was with these first few releases that Weblogic was injected into my world. Some time around the first month of starting the beta, I recall Mark Rittman saying something along the lines of: “all the Fusion Middleware stuff looks interesting, but I’m glad we won’t have to know anything about that.” You see… I just wanted a nice career in BI, but here I am writing a blog post on working with JMX MBeans.

JMX MBeans are complicated, meaning that careers probably exist for working with JMX MBeans and little else. I’m truly sorry to hear that. For the layperson (and by this, I mean people who work in BI), trying to understand MBeans probably ranks somewhere between getting decent service at a restaurant in the UK, and understanding why the Anomaly needs to join with the Source. It’s complex enough that most vendors provide utilities to simplify the process… which means using anything other than Java to work with them. For Weblogic, we have Weblogic Scripting Tool (WLST).

WLST is really just a specific implementation of Jython designed into a command-line utility that in many ways feels like SQL-Plus or RMAN. It’s designed to work interactively or in scripted fashion, and Rittman Mead has offered up some free WLST scripts to the community over the years via GitHub. If you take a look at our deploy_rpd.py script for example, you can make some sense of how WLST works. I’ve reworked that script slightly in this post to use hardcoded values instead of variables, to remove the exception handling, etc., to make the logic easier to follow. It’s shown here in two parts… the first part which connects to the Weblogic Administration Server and locks the domain:

This lock method is equivalent to the “Lock and Edit” button inside Fusion Middleware Control. We use the cd command to navigate to the BIDomain MBean because the lock method (as well as commit and rollback) is in that MBean. The invoke command is used to execute MBean methods in WLST and to pass parameters to those methods. To do this we use generic Object and String arrays, because invoke is generic and needs to support an ever-changing series of parameters. The rest of the pared-down script is below:

In the second part of this script, we navigate to the ServerConfiguration MBean, which contains the uploadRepository method. We populate the params array with the RPD path and RPD password, we upload the repository, and then navigate back to the BIDomain MBean to perform the commit.

WLST strikes me as an attempt to take a programmatic interface, one that is usually accessed through Java, and make it accessible to non-developers. In this regard, I see it as a major failure. The MBean methods have been abstracted to the point of being almost useless. This seems like the exact opposite of simple to me. It seems overly complex to have to construct arrays and populate them just to pass parameters to methods. I would prefer to just pass parameters as, well… parameters. Additionally… notice how we had to cd to the BIDomain MBean, cd away to go use the ServerConfiguration MBean, and then cd back again. I would prefer to use the MBeans the way they were meant to be used… as objects, so we would be able to instantiate and hold multiple MBean objects at the same time.

If WLST is indeed easier to use than Java, then this is quite an indictment of Java. It works pretty well when we are writing processes that are JMX-specific – when the commands that we need to issue are all calls to MBeans and nothing else. But what happens if we want to write a process that involves interacting with MBeans as well as other processes or APIs, such as calls to the Oracle BI Server XML API, in a unified script? WLST is terrible at this. In my first attempt at writing something like this, I used Perl to execute both validaterpd commands and MBean commands, and I ended up having to maintain a separate Jython script along with the Perl script, which didn’t sit well with me as a developer. What we really need is a language that is groovy to write and maintain, compiles to pure Java (just like Jython), and has the ability to encapsulate MBeans natively in a similar fashion to WLST. What we need is Groovy.

Groovy is a dynamic language that can be used to write scripts, or write object-oriented packages and classes, and can also be used to compile to Java byte code. Because it’s simple to write and has native MBean object handling, it’s easier (in my opinion) than WLST. How easy? Well, let’s go through our deploy_rpd.py example, and see if it’s easier to write and understand. Here’s our upload repository script in Groovy:

That’s the whole script, and there are a few things to point out here. First, establishing connectivity to the Weblogic Admin Server is slightly harder in Groovy: WLST hides a good bit of the complexity of connectivity, but the JMXConnectorFactory class in Groovy is a close second. However, once the connection is made, Groovy wins the rest of the way. Notice how we are able to configure two different MBean objects: biDomain for BIDomain and servConfig for ServerConfiguration. We don’t have to cd back and forth across the MBean hierarchy to work with the different MBeans; we can simply instantiate as many MBeans at once as we need. Also, notice that we don’t need the generic invoke command to call the different methods on the MBeans. We just call the method directly: if we want to call lock(), or commit(), or uploadRepository(), we just call it and pass the parameters that it accepts. There is no need to build generic Object and String arrays to pass in to invoke.

Because Groovy makes it very easy to execute external command-line processes (such as biserverxmlexec or biserverxmlgen), it’s a great choice for unified OBIEE processes that need to work with executables and JMX MBeans together in a single script. Also, Groovy accepts Java syntax, so if we can’t find the Groovy code samples we need on the internet, we can just plug in Java samples instead.
