Oracle Openworld 2013 Day 1 : User Group Forum, News on Oracle Database In-Memory Option
The Sunday before Oracle Openworld proper is “User Group Forum Sunday”, with each of the major user groups and councils having dedicated tracks for topics such as BI&DW, Fusion Development, Database and so on. Stewart, Venkat and I were honoured to be presenting on behalf of ODTUG, IOUG and APAC, covering topics around the new 11g release of the BI Applications, the Hyperion/EPM Suite, and agile development using OBIEE, ODI and Golden Gate. Links to the presentation PDFs from each of our sessions are listed below:
- Deep Dive into OBIA 11.1.1.7.1 – Overview (Mark Rittman)
- Deep Dive into OBIA 11.1.1.7.1 – Data Integration (Stewart Bryson)
- Agile BI Development using OBIEE, ODI and Golden Gate (Stewart Bryson)
- Hyperion Profitability & Cost Management – Integration of Standard & Detailed Profitability (Venkatakrishnan J)
All of the sessions drew a good crowd, and I was especially pleased to see the number of people that came along to the BI Apps 11.1.1.7.1 sessions, and that there were a few early adopters in the audience who’d either completed their initial implementations or had carried out pilot or PoC exercises. Feedback from those attendees was as I’d expected – some initial early-adopter issues, but generally positive comments on the simplified architecture and the use of ODI. Stewart’s session on the data integration aspects of this new release included content on its new Golden Gate integration and the new Source-Dependent Store concept in ODI that it supports, which again went down well with an audience looking for more technical detail on how this new release works.
After the user group sessions finished, it was time to go over to Moscone North for Larry Ellison’s opening keynote, where three new products were announced. First up was the new In-Memory option for the Oracle Database, which adds an in-memory, column-store capability to Oracle databases on all platforms, not just Exadata. Slated for the upcoming 12.1.2 release, this new feature will provide a column store alongside the existing on-disk row store, with the column store used for DW-style queries whilst the row store continues to be used for OLTP.
The in-memory column store will work as follows:
- The DBA will enable the in-memory feature by setting the database parameter “inmemory_size = XX GB”, with the memory then being allocated in the database’s SGA (System Global Area, one of the shared areas in the overall database memory allocation)
- Tables, partitions or sets of columns will be enabled for in-memory storage by an “alter table … in memory” DDL command
- Existing query indexes on the source tables can then be dropped
The database will then take care of copying these tables, columns or partitions into the in-memory column-store area and refreshing them on a regular basis, so that both row-store and column-store versions of the tables are available at the same time.
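Based on the syntax described in the keynote, the steps above might look something like the following SQL sketch. Note that this is illustrative only – the exact parameter name, DDL keywords and object names (the `sales` table and its index are hypothetical) could well differ in the final 12.1.2 release:

```sql
-- 1. Allocate a portion of the SGA to the in-memory column store
--    (instance-level parameter, takes effect on restart)
ALTER SYSTEM SET inmemory_size = 20G SCOPE = SPFILE;

-- 2. Enable in-memory storage for a table; partitions or column
--    subsets can reportedly be enabled in a similar way
ALTER TABLE sales INMEMORY;

-- 3. Drop analytic query indexes that the column store now
--    makes redundant (the index name here is made up)
DROP INDEX sales_cust_idx;
```

With the index gone, DW-style queries against `sales` would be served from the column store, while OLTP transactions continue to run against the row store as before.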
Oracle’s assertion is that the overhead of maintaining both the row- and column-store versions of the tables will be balanced out by no longer needing to maintain query indexes on the source tables; performance improvements of 100x to 1000x were quoted for DW-style queries, and 2x for OLTP-style queries. Unlike the Hybrid Columnar Compression feature announced a couple of years ago at Openworld, none of this is Exadata-specific, but it will be an option for the Enterprise Edition of the database and it will require the 12.1.2 release – so you’ll need to budget for it, and you’ll need to be on the most recent release to make use of it.
Other than the in-memory option, the other two product announcements in the keynote were:
- The M6-32 “Big Memory Machine”, with 32TB of DRAM and a SPARC M6 chip architecture – positioned as the ideal server for the in-memory option
- The “Oracle Database Backup, Logging and Recovery” appliance, a server designed to receive and then store incremental database backups for private and public clouds, and then restore those databases as necessary – basically a backup server optimised for database backup and recovery.
So that was it for today – more news tomorrow once the main conference sessions and keynotes start.