Analytics with Kibana and Elasticsearch through Hadoop – part 2 – Getting data into Elasticsearch

November 4th, 2014

Introduction

In the first part of this series I described how I made several sets of data relating to the Rittman Mead blog, from various sources, available through Hive. This included blog hits from the Apache webserver log, tweets, and metadata from WordPress. Having got it into Hive, I now need to get it into Elasticsearch as a pre-requisite for using Kibana, to see how it holds up as an analysis tool or as a “data discovery” option. Here's a reminder of the high-level architecture, with the parts that I've divided it into across the three blog posts indicated.


In this article we will see how to go about doing that load into ElasticSearch, before getting into some hands-on with Kibana in the final article of this series.

Loading data from Hive to Elasticsearch

We need to get the data into Elasticsearch itself since that is where Kibana requires it to be for generating the visualisations. Elasticsearch holds the data and provides the analytics engine, and Kibana provides the visualisation rendering and the generation of queries into Elasticsearch. Elasticsearch and Kibana are the ‘E’ and ‘K’ of the ELK stack, which I have written about previously (the ‘L’ being Logstash, but we're not using that here).

Using the elasticsearch-hadoop connector we can load data exposed through Hive into Elasticsearch. It’s possible to load data directly from origin into Elasticsearch (using, for example, Logstash) but here we’re wanting to bring together several sets of data using Hadoop/Hive as the common point of integration.

Elasticsearch has a concept of an ‘index’ within which data is stored, held under a schema known as a ‘mapping’. Each index can have multiple mappings. It’s dead easy to run Elasticsearch – simply download it, unpack the archive, and then run it – it really is as easy as that:

You can load data directly across into Elasticsearch from Hive without having to prepare anything on Elasticsearch – it will create the index and mapping for you. But, for it to work how we want, we do need to specify the mapping in advance because we want to tell Elasticsearch two important things:

  • To treat the date field as a date – crucial for Kibana to do its time series-based magic
  • Not to “analyze” certain fields. By default Elasticsearch will analyze each string field so that you can display the most common terms within it, etc. However, if we want to report on something like the blog title, breaking it down into individual words doesn't make sense.

This means that the process is as follows:

  1. Define the Elasticsearch table in Hive
  2. Load a small sample of data into Elasticsearch from Hive
  3. Extract the mapping and amend the date field and mark required fields as non-analysed
  4. Load the new mapping definition to Elasticsearch
  5. Do a full load from Hive into Elasticsearch

Steps 2 and 3 can be sidestepped by crafting the mapping by hand from the outset but it’s typically quicker not to.

Before we can do anything in terms of shifting data around, we need to make elasticsearch-hadoop available to Hadoop. Download it from the github site, and copy the jar file to /usr/lib/hadoop and add it to HIVE_AUX_JARS_PATH in /usr/lib/hive/conf/hive-env.sh.
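Roughly like this (the jar filename/version and paths are illustrative; adjust to wherever your Hive config lives):

    # copy the connector jar somewhere Hadoop can pick it up (version is illustrative)
    sudo cp elasticsearch-hadoop-2.0.2.jar /usr/lib/hadoop/

    # then add it to HIVE_AUX_JARS_PATH in /usr/lib/hive/conf/hive-env.sh, e.g.:
    # export HIVE_AUX_JARS_PATH=$HIVE_AUX_JARS_PATH:/usr/lib/hadoop/elasticsearch-hadoop-2.0.2.jar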

Defining the Hive table over Elasticsearch

The Hive definition for a table stored in Elasticsearch is pretty simple. Here’s a basic example of a table that’s going to hold a list of all blog posts made. Note the _es suffix, a convention I’m using to differentiate the Hive table from others with the same data and denoting that it’s in Elasticsearch (es). Also note the use of EXTERNAL as previously discussed, to stop Hive trashing the underlying data if you drop the Hive table:
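A minimal sketch of the shape of it (the table, column and host names here are illustrative; the storage handler class is the elasticsearch-hadoop 2.x one, and I've left the ROW FORMAT at its default):

    CREATE EXTERNAL TABLE all_blog_posts_es (
      ts_epoch      BIGINT,
      post_title    STRING,
      post_title_a  STRING,
      post_author   STRING,
      url           STRING
    )
    STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
    TBLPROPERTIES (
      'es.nodes'    = 'localhost',       -- Elasticsearch hostname
      'es.resource' = 'all_blog/posts'   -- index/mapping to write to
    );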

The ROW FORMAT and STORED BY are standard, but the TBLPROPERTIES values should be explained (you’ll find full details in the manual):

  1. es.nodes – this is the hostname of the Elasticsearch server. If you have multiple nodes it will discover the others from this.
  2. es.resource – this is the index and mapping where the data should be stored. We’ll see more about these later, because they’re important.

Time for a tangent …

The biggest issue I had getting data from Hive into Elasticsearch was timestamps. To cut a very long story (involving lots of random jiggling, hi Christian!) short, I found it was easiest to convert timestamps into Unix epoch (number of seconds since Jan 1st 1970), rather than prat about with format strings (and prat about I did). For timestamps already matching the ISO8601 standard, such as those in my WordPress data, I could leverage the Hive function UNIX_TIMESTAMP, which returns exactly that:
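For example, against the WordPress posts data (assuming a posts table with the standard post_date column):

    -- WordPress dates are already in 'yyyy-MM-dd HH:mm:ss' form,
    -- which is what UNIX_TIMESTAMP expects by default
    SELECT post_date,
           UNIX_TIMESTAMP(post_date) AS ts_epoch
    FROM   posts
    LIMIT  5;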

For others though that included the month name as text, such as Wed, 17 Sep 2014 08:31:20 +0000, I had to write a very kludgy CASE statement to first switch the month names for numbers and then concatenate the whole lot into an ISO8601 string that could be converted to Unix epoch. This is why I also split the apache log SerDe so that it would bring in the timestamp components (time_dayDD, time_monthMMM, etc) individually, making the epoch conversion a little bit neater:
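A sketch of the kind of kludge involved (the year/hour/minute/second column names are assumed to follow the same pattern as time_dayDD and time_monthMMM, and the table name apachelog is illustrative):

    SELECT UNIX_TIMESTAMP(
             CONCAT(time_yearYYYY, '-',
                    CASE time_monthMMM
                      WHEN 'Jan' THEN '01' WHEN 'Feb' THEN '02' WHEN 'Mar' THEN '03'
                      WHEN 'Apr' THEN '04' WHEN 'May' THEN '05' WHEN 'Jun' THEN '06'
                      WHEN 'Jul' THEN '07' WHEN 'Aug' THEN '08' WHEN 'Sep' THEN '09'
                      WHEN 'Oct' THEN '10' WHEN 'Nov' THEN '11' WHEN 'Dec' THEN '12'
                    END, '-',
                    time_dayDD, ' ',
                    time_hourHH, ':', time_minMM, ':', time_secSS)
           ) AS ts_epoch
    FROM   apachelog;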

Because if you thought this was bad, check out what I had to do to the twitter timestamp:

As with a few things here, this was all more for experimentation than streamlined production usage, so it probably could be rewritten more efficiently or solved in a better way – suggestions welcome!

So the net result of all of these is the timestamp as epoch in seconds – but note that Elasticsearch works with millisecond epoch, so they all need multiplying by 1000.

As I've noted above, this feels more complex than it needed to have been, and maybe with a bit more perseverance I could have got it to work without resorting to epoch. The issue I continued to hit with passing timestamps across as non-epoch values (i.e. as strings using the format option of the Elasticsearch mapping definition, or Hive Timestamp, and even specifying es.mapping.timestamp) was org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: TimestampParsingException, regardless of the careful format masks that I applied.

Back on track – loading a sample row into Elasticsearch

We want to send a sample row of data to Elasticsearch now for two reasons:

  1. As a canary to prove the “plumbing” – no point chucking thousands of rows across through MapReduce if it’s going to fall over for a simple problem (I learnt my lesson during the timestamp fiddling above).
  2. Automagically generate the Elasticsearch mapping, which we subsequently need to modify by hand and is easier if it’s been created for us first.

Since the table is defined in Hive, we can just run a straightforward INSERT to send some data across, making use of the LIMIT clause of HiveQL to just send a couple of rows:
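Something like this (a sketch; the source query is whatever assembles your prepared data, here a hypothetical posts_with_epoch table):

    INSERT INTO TABLE all_blog_posts_es
    SELECT ts_epoch, post_title, post_title_a, post_author, url
    FROM   posts_with_epoch
    LIMIT  2;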

Hive will generate a MapReduce job that pushes the resulting data over to Elasticsearch. You can see the log for the job – essential for troubleshooting – at /var/log/hive/hive-server2.log (by default). In this snippet you can see a successful completion:

But if you've a problem with your setup you'll most likely just see a generic error passed back to the Beeline prompt instead, meaning that you need to go to the Hive log file for the full diagnostics.

Amending the Elasticsearch mapping

So assuming the previous step worked (if you got the innocuous No rows affected from Beeline then it did) you now have an index and mapping (and a couple of “documents” of data) in Elasticsearch. You can inspect the mapping in several ways, including with kopf, a GUI for Elasticsearch administration.

You can also interrogate Elasticsearch directly with its REST API, which is what we're going to use to update the mapping, so let's use it to view the mapping too. I'm going to use curl to do the HTTP call, and then pipe the output straight to jq to prettify the resulting JSON that Elasticsearch sends back.
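For example (hostname and index name as per the illustrative table definition above):

    curl -s 'http://localhost:9200/all_blog/_mapping' | jq '.'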

We can see from this that Elasticsearch has generated the mapping to match the data that we’ve sent across from Hive (note how it’s picked up the ts_epoch type as being numeric not string, per our Hive table DDL). But, as mentioned previously, there are two things we need to rectify here:

  1. ts_epoch needs to be a date type, not long. Without the correct type, Kibana won’t recognise it as a date field.
  2. Fields that we don’t want broken down for analysis need marking as such. We’ll see the real difference that this makes when we get on to Kibana later.

To amend the mapping we just take the JSON document, make the changes, and then push it back with curl again. You can use any editor with the JSON (I’ve found Atom on the Mac to be great for its syntax highlighting, brace matching, etc). To change the type of the date field just change long to date. To mark a field not for analysis add "index": "not_analyzed" to the column definition. After these changes, the amended fields in my mapping JSON look like this:
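Roughly like this – a sketch of the relevant field fragments only, not the whole mapping document:

    "ts_epoch": {
      "type": "date"
    },
    "post_title": {
      "type": "string",
      "index": "not_analyzed"
    },
    "post_title_a": {
      "type": "string"
    }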

The particularly eagle-eyed of you will notice that I am loading post_title in twice. This is because I want to use the field both as a label and to analyse it as a field in itself, looking at which terms get used most. So in the updated mapping, only post_title is set to not_analyzed; post_title_a is left alone.

To remove the existing mapping, use this API call:
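A sketch of the delete-mapping call in Elasticsearch 1.x (the index name all_blog is the one used here; the mapping name posts is illustrative):

    curl -XDELETE 'http://localhost:9200/all_blog/_mapping/posts'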

and then the amended mapping put back. Note that the "all_blog" / "mappings" outer levels of the JSON have been removed from the JSON that we send back to Elasticsearch:
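A sketch of the PUT, assuming the amended JSON has been saved to a local file:

    curl -XPUT 'http://localhost:9200/all_blog/_mapping/posts' -d @posts_mapping.json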

Full load into Elasticsearch

Now we can go ahead and run a full INSERT from Hive, and this time the existing mapping will be used. Depending on how much data you’re loading, it might take a while but you can always tail the hive-server2.log file to monitor progress. So that we don’t duplicate the ‘canary’ data that we sent across, use the INSERT OVERWRITE statement:
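For example, reusing the illustrative names from earlier:

    INSERT OVERWRITE TABLE all_blog_posts_es
    SELECT ts_epoch, post_title, post_title_a, post_author, url
    FROM   posts_with_epoch;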

To check the data’s made it across we can do a count from Hive:
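For example, against the illustrative table from earlier:

    SELECT COUNT(*) FROM all_blog_posts_es;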

But this requires a MapReduce job to run and is fairly slow. Much faster is direct from the horse’s mouth – from Elasticsearch itself where the data is. Just as we called a REST API to get and set the mapping, Elasticsearch can also give us statistics back this way too:
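A sketch using the index stats API, with jq picking out the primary-shard document count:

    curl -s 'http://localhost:9200/all_blog/_stats' | jq '._all.primaries.docs.count'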

Here I’ve used a bit more jq to parse down the stats in JSON that Elasticsearch sends back. If you want to explore more of what jq can do, you’ll find https://jqplay.org/ useful.

Code

For reference, here is the set of three curl/DDL/DML that I used:

  • Elasticsearch index mappings

  • Hive table DDL

  • Hive DML – load data to Elasticsearch

Summary

With the data loaded into Elasticsearch we’re now ready to start our analysis against it. Stay tuned for the final part in this short blog series to see how we use Kibana to do this.

Analytics with Kibana and Elasticsearch through Hadoop – part 1 – Introduction

November 3rd, 2014

Introduction

I’ve recently started learning more about the tools and technologies that fall under the loose umbrella term of Big Data, following a lot of the blogs that Mark Rittman has written, including getting Apache log data into Hadoop, and bringing Twitter data into Hadoop via Mongodb.

What I wanted to do was visualise the data I'd brought in, looking for patterns and correlations. Obviously the de facto choice at our shop would be Oracle BI, which Mark has previously demonstrated reporting on data in Hadoop through Hive and Impala. But this was more at the “Data Discovery” phase that is discussed in the new Information Management and Big Data Reference Architecture that Rittman Mead helped write with Oracle. I basically wanted a quick and dirty way to start chucking around columns of data without yet being ready to impose the structure of the OBIEE metadata model on it. One of the tools I've worked with recently is a visualisation tool called Kibana, which is part of the ELK stack (that I wrote about previously for use in building a monitoring solution for OBIEE). In this article we'll take a look at making data available to Kibana and then the kind of analytics and visualisations you can do with it. In addition, we'll see how loading the data into Elasticsearch has the benefit of extremely fast query times compared to querying through Hive alone.

The Data

I’ve got three sources of data I’m going to work with, all related to the Rittman Mead website:

  • Website logs, from Apache webserver
  • Tweets about Rittman Mead blog articles, via Datasift
  • Metadata about blog posts, extracted from the WordPress MySQL database

At the moment I’ve focussed on just getting the data in, so it’s mostly coming from static files, with the exception of the tweets which are held in a noSQL database (MongoDB).

The Tools

This is where ‘big data’ gets fun, because instead of “Acme DI” and “Acme Database” and “Acme BI”, we have the much more interesting – if somewhat silly – naming convention of ‘the whackier, the better’. Here I'm using:

  • Kibana – data visualisation tool for Elasticsearch
  • Elasticsearch – data store & analytics / search engine
  • HDFS – Hadoop’s distributed file system
  • MongoDB – NoSQL database
  • Hive – enables querying data held in various places including HDFS (and Elasticsearch, and MongoDB) with a SQL-like query language
  • Beeline – Hive command line interface
  • Datasift – online service that streams tweets matching a given pattern to a nominated datastore (such as MongoDB)
  • mongo-hadoop – a connector for MongoDB to Hadoop including Hive
  • elasticsearch-hadoop – a connector for Elasticsearch to Hadoop including Hive

Kibana only queries data held in Elasticsearch, which acts as both the data store and the analytics engine. There are various ways to get data into Elasticsearch directly from source but I've opted not to do that here, instead bringing it all in via HDFS and Hive. I've done that because my – albeit fairly limited – experience is that Elasticsearch is great once you've settled on your data and schema, but in the same way I'm not building a full OBIEE metadata model (RPD) yet, nor did I want to design my Elasticsearch schema up front and have to reload from source if it changed. Options for reprocessing and wrangling data once in Elasticsearch seem limited and complex, and by making all my data available through Hive first I could supplement it and mash it up as I wanted, loading it into Elasticsearch only when I had a chunk of data to explore. Another approach that I haven't tried, but which could be useful if the requirement fits, would be to load the individual data elements directly into their own Elasticsearch area and then, using the elasticsearch-hadoop connector, run the required mashups with other data through Hive, loading the results back into Elasticsearch. It all depends on where you're coming from with the data.

Overview

Here’s a diagram of what I’m building:

I’ll explain it in steps as follows:

  1. Loading the data and making it accessible through Hive
  2. Loading data from Hive to Elasticsearch
  3. Visualising and analysing data in Kibana

Getting the data into Hive

Strictly speaking we’re not getting the data into Hive, so much as making it available through Hive. Hive simply enables you to define and query tables sitting on top of data held in places including HDFS. The beauty of the Hadoop ecosystem is that you can physicalise data in a bunch of tools and the components will most often support interoperability with each other. It’s only when you get started playing with it that you realise how powerful this is.

The Apache log files and WordPress metadata suit themselves fairly well to a traditional RDBMS format of [de]normalised tables, so we can store them in HDFS with simple RDBMS tables defined on top through Hive. But the twitter data comes in JSON format (like this), and if we were going to store the Twitter data in a traditional RDBMS we’d have to work out how to explode the document into a normalised schema, catering for varying structures depending on the type of tweet and data payload within it. At the moment we just want to collect all the data that looks useful, and then look at different ways to analyse it afterwards. Instead of having to compromise one way (force a structure over the variable JSON) or another (not put a relational schema over obviously relational data) we can do both, and decide at run-time how to best use it. From there, we can identify important bits of data and refactor our design as necessary. This “schema on read” approach is one of the real essences of Hadoop and ‘big data’ in general.

So with that said, let’s see how we get the data in. This bit is the easy part of the article to write, because a lot of it is pretty much what Mark Rittman has already written up in his articles, so I’ll refer to those rather than duplicate here.

Apache log data

References:

I’ve used a variation on the standard Apache log SerDe that the interwebs offers, because I’m going to need to work with the timestamp quite closely (we’ll see why later) so I’ve burst it out into individual fields.

The DDL is:
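The exact DDL isn't reproduced here, but it was along these lines: an EXTERNAL table using the RegexSerDe, with the timestamp burst out into separate fields. The regex and most column names below are illustrative (RegexSerDe columns all have to be STRING), with one capture group per column:

    CREATE EXTERNAL TABLE apachelog (
      host          STRING,
      identity      STRING,
      log_user      STRING,
      time_dayDD    STRING,
      time_monthMMM STRING,
      time_yearYYYY STRING,
      time_hourHH   STRING,
      time_minMM    STRING,
      time_secSS    STRING,
      time_tz       STRING,
      request       STRING,
      status        STRING,
      size          STRING,
      referer       STRING,
      agent         STRING
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      'input.regex' = '^(\\S+) (\\S+) (\\S+) \\[(\\d{2})/(\\w{3})/(\\d{4}):(\\d{2}):(\\d{2}):(\\d{2}) ([^\\]]+)\\] "(.*?)" (\\S+) (\\S+) "(.*?)" "(.*?)"'
    )
    STORED AS TEXTFILE
    LOCATION '/data/apache-log';   -- must be an HDFS folder, not a single file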

The EXTERNAL is important on the table definition as it stops Hive moving the HDFS files into its own area on HDFS. If Hive does move the files it is annoying if you want to also access them through another program (or Hive table), and downright destructive if you DROP the table since it’ll delete the HDFS files too – unless it’s EXTERNAL. Note the LOCATION must be an HDFS folder, even if it just holds one file.

For building and testing the SerDe regex Rubular is most excellent, but note that it’s Java regex you’re specifying in the SerDe which has its differences from Python or Ruby regex that Rubular (and most other online regex testers) support. For the final validation of Java regex I use the slightly ugly but still useful regexplanet, which also gives you the fully escaped version of your regex which you’ll need to use for the actual Hive DDL/DML.

A sample row from the apache log on disk looks like this:

and now in Hive:

Twitter data

Reference:

The twitter data we’ve got includes the Hive ARRAY datatype for the collections of hashtag(s) and referenced url(s) from within a tweet. A point to note here is that the author_followers data appears in different locations of the JSON document depending on whether it’s a retweet or not. I ended up with two variations of this table and a UNION on top.

The table is mapped onto data held in MongoDB and, as with the HDFS data above, the EXTERNAL is crucial to ensure you don't trash your data when you drop your table.
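A sketch of the shape of such a table using the mongo-hadoop storage handler (the column names and MongoDB URI are illustrative; the real tables also map Hive columns onto the nested JSON attributes via the mongo.columns.mapping serde property):

    CREATE EXTERNAL TABLE tweets (
      created_at       STRING,
      tweet_text       STRING,
      author           STRING,
      author_followers INT,
      hashtags         ARRAY<STRING>,
      ref_urls         ARRAY<STRING>
    )
    STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
    TBLPROPERTIES (
      'mongo.uri' = 'mongodb://localhost:27017/datasift.tweets'
    );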

The other point to note is that we’re now using mongo-hadoop for Hive to connect to MongoDB. I found that I had to first build the full set of jar files by running ./gradlew jar -PclusterVersion='cdh5', and also download the MongoDB java driver, before copying the whole lot into /usr/lib/hadoop/lib. This is what I had by the end of it:
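Roughly, those steps look like this (jar locations and the driver version are illustrative):

    # build the mongo-hadoop connector jars for CDH5
    cd mongo-hadoop
    ./gradlew jar -PclusterVersion='cdh5'

    # copy the built jars, plus the MongoDB Java driver, into Hadoop's lib directory
    sudo find . -name '*.jar' -path '*build/libs*' -exec cp {} /usr/lib/hadoop/lib/ \;
    sudo cp ~/mongo-java-driver-2.12.4.jar /usr/lib/hadoop/lib/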

After all that, the data as it appears in Hive looks like this:

For reference, without the mongo-hadoop connectors I was getting the error

and with them installed but without the MongoDB java driver I got:

WordPress metadata

WordPress holds its metadata in a MySQL database, so it’s easy to extract out:

  1. Run a query in MySQL to generate the CSV export files (a combined sketch of all three steps follows this list)

  2. Copy the CSV file to your Hadoop machine, and copy it onto HDFS. Make sure each type of data goes in its own HDFS folder

  3. Define the Hive table on top of it
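Here's that combined sketch, run on the Hadoop node itself. The query, columns, paths and connection details are all illustrative, and a real export obviously needs to handle things like commas in post titles:

    # 1. export the posts from the WordPress MySQL database as CSV
    mysql -u wordpress -p wordpress --batch --silent -e "
      SELECT ID, post_author, post_date, post_title, post_name, post_status, post_type
      FROM   wp_posts
      WHERE  post_status = 'publish'" | tr '\t' ',' > posts.csv

    # 2. copy the file onto HDFS, into its own folder
    hadoop fs -mkdir -p /user/wordpress/posts
    hadoop fs -put posts.csv /user/wordpress/posts/

    # 3. define an EXTERNAL Hive table over that folder
    beeline -u jdbc:hive2://localhost:10000 -e "
      CREATE EXTERNAL TABLE posts (
        post_id     INT,
        post_author INT,
        post_date   STRING,
        post_title  STRING,
        post_slug   STRING,
        post_status STRING,
        post_type   STRING
      )
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      LOCATION '/user/wordpress/posts';"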

Rinse & repeat for the category data, and post->category relationships.

The data once modelled in Hive looks like this:

The WordPress metadata quite obviously joins together, as it is already from the relational schema in which it was held in MySQL. Here is an example of where “schema on read” comes into play, because you could look at the above three tables (posts / post_cats / categories) and conclude it was redundant to export all three from WordPress, and that instead a single query listing posts and their respective category would be sufficient. But some posts have more than one category, which then leads to a design/requirements decision. Either we retain one row per post – and collapse down the categories, but in doing so lose the ability to easily treat categories as individual data – or we have one row per post/category, and end up with multiple rows per post, which complicates matters if we're doing a simple count of posts. So we bring it all in raw from source, and then decide how we're going to use it afterwards.

Bringing the data together

At this point I have six tables in Hive that I can query (albeit slowly) with HiveQL (a close relation to SQL, with a few interesting differences), running through the Hive client Beeline. The data is tweets, website visits, and details about the blog posts themselves.

As well as time, the other common element running throughout all the data is the blog article URL, whether it is a post, a visit to the website, or a tweet about it. But to join on it is not quite as simple as you’d hope, because all the following are examples of recorded instances of the data for the same blog post:

So whether it’s querying the data within Hive, or loading it joined together to another platform, we need to be able to unify the values of this field.

Tangent: RegEx

And now it’s time, if you’d not already for your SerDe against the Apache file, to really immerse yourself in Regular Expressions (RegEx). Part of the “schema on read” approach is that it can get messy. You need to juggle and wrangle and munge data in ways that it really might not want to, and RegEx is an essential tool with which to do this. Regex isn’t specific to Hadoop – it’s used throughout the computing world.

My journey with regex over quite a few years in computing has gone in stages something like this:

  1. To be a fully rounded geek, I should learn regex. Looks up regex. Hmm, looks complicated….Squirrel!
    1. To be a fully round (geddit?!) geek, I should keep eating these big breakfasts
  2. I’ve got a problem, I’ve got a feeling regex will help me. But my word it looks complicated … I’ll just do it by hand.
  3. I’ve got another problem, I need to find this text in a file but with certain patterns around it. Here’s a regex I found on google. Neat!
  4. Hmmm another text matching problem, maybe I should really learn regex instead of googling it to death each time
  5. Mastered the basic concepts of regex
  6. Still a long way to go…

If you think you'll nail RegEx overnight, you won't (or at least, you're a better geek than me). It's one of those techniques, maybe a bit like SQL, that to fully grok takes a period of exposure and gradually increasing usage, before you have an “ah hah!” moment. There's a great site explaining regex here: www.regular-expressions.info. My best advice is to take a real example text that you want to work with (match on, replace bits of, etc), stick it in one of the online regex parsers mentioned above, and experiment with the code.

Oh and finally, watch out for variations in regex – what works in a Java-based program (most of the Hadoop world) may not in Python, and vice versa. Same goes for PHP, Ruby, and so on – they all have different regex engines that may or may not behave as you'd expect.

Back on track : joining data on non-matching columns

So to recap, we want to be able to analyse our blog data across tweets, site hits and postings, using the common field of the post URL, which from the various sources can look like any of the following (and more):

So out comes the RegEx. First off, we’ll do the easy one – strip the http:// and server bit. Using the Hive function REGEXP_REPLACE we can use this in the query:
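The expression, as it appears in the SELECT list (ref_url being the column holding the URL to clean up):

    REGEXP_REPLACE(ref_url, 'http:\\/\\/www.rittmanmead.com', '')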

This means: take the ref_url column and, if you find http://www.rittmanmead.com, replace it with nothing, i.e. delete it. The two backslashes before each forward slash simply escape them, since a forward slash on its own has a special meaning in regex. Just to keep you on your toes – Java regex requires double backslash escaping, but all other regex (including the online parser I link to below) uses a single one.

So now our list of possible join candidates has shrunk by one to look like this:

The variation, as you can see, is whether there is a trailing forward slash (/) after the post ‘slug’, and whether there is additional cruft after that too (feed, foobar+foorbar, etc). So let's build it up a piece at a time. On each one, I've linked to an online parser that you can use to see it in action.

  1. We'll match on the year and month (/2014/01/) because they're a fixed pattern, so using \d to match on digits and {x} to match x repetitions: (see example on Rubular.com)

    This will match /2014/01/.

  2. Now we need to match the slug, but we’re going to ditch the forward slash suffix if there is one. This is done with two steps.

    First, we define a “match anything except x” group, which is what the square brackets (group) and the caret ^ (negate) do, and in this case x is the forward slash character, escaped.

    Secondly, the plus symbol + tells regex to match at least one repetition of the preceding group – i.e. any character that is not a forward slash. (example)

    Combined with the above regex from the first step we will now match /2014/01/automated-regression-testing-for-obiee.

  3. The final step is to turn the previous REGEXP_REPLACE on its head and, instead of replacing out content from the string that we don't want, extract the content that we do want, using a regex capture group, which is defined by regular brackets (parentheses, just like these). We've now brought in a couple of extra bits to make it hang together, seen in the completed regex here:

    1. The \S* at the beginning means match any non-whitespace character, which will replace the previous regex replace we were doing to strip out the http://www.rittmanmead.com
    2. After the capture group, which is the content from steps one and two above, surrounded by parentheses (\/\d{4}\/\d{2}\/[^\/]+), there is a final .* to match anything else that might be present (eg trailing forward slash, foobar, etc etc)

    Now all we need to do is escape it for Java regex, and stick it in the Hive REGEXP_EXTRACT function, specifying 1 as the capture group number to extract: (example)
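Putting it all together, with the double-backslash escaping for Hive/Java and capture group 1 specified, you end up with something like:

    -- e.g. turns http://www.rittmanmead.com/2014/01/automated-regression-testing-for-obiee/feed
    --      into /2014/01/automated-regression-testing-for-obiee
    REGEXP_EXTRACT(ref_url, '\\S*(\\/\\d{4}\\/\\d{2}\\/[^\\/]+).*', 1)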

So now all our URLs will look like this, regardless of whether they’re from tweet data, website hits, or wordpress:

Which is nice, because it means we can use it as the common join in our queries. For example, to look up the title of the blog post that someone has tweeted about, and who wrote the post:
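A sketch of such a query, using the illustrative table and column names from earlier and assuming the WordPress posts table carries a URL (or something a URL can be derived from) for each post:

    SELECT p.post_title,
           p.post_author,
           tw.post_url
    FROM (
           -- one row per referenced URL per tweet, conformed to /yyyy/mm/slug
           SELECT REGEXP_EXTRACT(r.ref_url, '\\S*(\\/\\d{4}\\/\\d{2}\\/[^\\/]+).*', 1) AS post_url
           FROM   tweets t
                  LATERAL VIEW EXPLODE(t.ref_urls) r AS ref_url
         ) tw
    JOIN posts p
      ON REGEXP_EXTRACT(p.post_url, '\\S*(\\/\\d{4}\\/\\d{2}\\/[^\\/]+).*', 1) = tw.post_url;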

Note here also the use of LATERAL VIEW EXPLODE () as a way of denormalising out the Hive ARRAY of referenced url(s) in the tweet so there is one row returned per value.

Summary

We’ve got our three sources of data available to us in Hive, and can query across them. Next we’ll take a look at loading the data into Elasticsearch, taking advantage of our conformed url column to join data that we load. Stay tuned!

UKOUG Partner of the Year Awards

November 3rd, 2014

A few days ago Rittman Mead won 5 awards at the UKOUG Partner of the Year Awards.

  • Business Intelligence Partner of the Year (Silver)
  • Training Partner of the Year (Silver)
  • Managed Services (Outsourcing and Operations) Partner of the Year (Silver)
  • Emerging (New Products) Partner of the Year (Silver)
  • Operating Systems, Storage and Hardware Partner of the Year (Gold)

We have consistently done well at this event and feel the awards reflect the effort we put in to both our clients and the user community alike.

The number and diversity of the awards demonstrates how much Rittman Mead has grown over the years. Enterprise Business Intelligence is still at the heart of the work we do; however, awards for Operating Systems, Storage and Hardware Partner of the Year and Emerging (New Products) Partner of the Year show the investment we have made in the engineered systems, big data and cloud markets.

Likewise the Training Partner of the Year and Managed Services (Outsourcing and Operations) Partner of the Year awards show the end-to-end services we now offer.

I would like to say thank you very much to everyone who voted for us and most of all thanks to all our staff who have put in the effort to make this happen.

Oracle BI Cloud Service for SaaS Application Reporting Part 1: Integrating BICS to Salesforce.com using REST APIs

November 3rd, 2014

Last month Mark Rittman wrote a series of posts detailing the Oracle BI Cloud Service (BICS), aimed at departmental users who want the power of OBIEE 11g without the need to stand up their own infrastructure. If you're coming in late, here's the link to the series.

Before the GA, Rittman Mead participated in the beta program for release one of Oracle's Business Intelligence Cloud Service (BICS). The global aim of the beta was both to understand the capabilities and to identify potential use cases of the new BICS platform. As Mark wrote, an excellent use case for BICS is to report on top of any SaaS application that exposes its data (stored in the cloud) using REST APIs, by taking advantage of the ApEx capabilities hosted in the Database Schema Service that comes with BICS. SaaS applications like Oracle's Fusion CRM, Taleo or Salesforce.com – the one used during the beta program – can be easily integrated and queried with BICS.

The technical goal of our beta program was to check the features, options and limitations of Oracle BICS by connecting it to a Salesforce.com instance, accessing the data exposed through REST APIs by using the ApEx native functions, storing the data in the Database Schema Service, creating the Repository using the new Model Editor, and showing the data in dashboards while keeping the same data security settings configured in the source platform.

Salesforce.com to Oracle BI Cloud Service

This series of posts is going to explain all the details of our successful PoC. Over the next few days we'll be covering the following topics, and we'll update the list with hyperlinks once the articles are published:

  • Oracle BI Cloud Service for SaaS Application Reporting Part 1: Integrating BICS to Salesforce.com using REST APIs
  • Oracle BI Cloud Service for SaaS Application Reporting Part 2: BICS ApEx components
  • Oracle BI Cloud Service for SaaS Application Reporting Part 3: BICS Repository and Front-end configurations

In the first post of the series we're going to define the steps required to set up a Salesforce.com demo environment that can be accessed with REST APIs. Later in the post we'll analyse the most interesting Salesforce.com REST APIs that are used to extract the data from the platform. Before digging into the BICS-Salesforce.com integration, some definitions may be needed:

  • RESTful APIs: A Web service API can be defined as RESTful if it conforms to the REST architectural constraints. Some more details about RESTful APIs can be found here.
  • Salesforce.com: a company specialised in software as a service (SaaS) and one of the major providers of CRM in the cloud. In addition to CRM, Salesforce.com offers Force.com, a cloud platform as a service (PaaS) that developers can use to build multitenant applications hosted on Salesforce.com servers. All Force.com applications can be accessed by using RESTful APIs.

Environments Setup

In a new BI project it is never safe to directly extract data from a live (production) environment. Thus the first step needed in order to test the BICS-Salesforce.com connection is to obtain a Salesforce.com environment that we can use without the risk of slowing it down, or even worse crashing it and impacting users. A free demo Salesforce.com platform can be obtained by signing up for the developer program, and can be customised as needed. After requesting the Salesforce.com demo environment, the creation of a security token is needed; the token is associated with the user invoking the RESTful API. To generate the token: log in to Salesforce.com with the specific user -> click on the username -> My Settings -> Reset My Security Token. The new token will then be sent to the user's email.

Salesforce Reset token Procedure

The third step in order to access Salesforce.com data with REST APIs is to define a Connected App. The creation of a Connected App generates a Consumer Key and a Consumer Secret that are going to be used later during the REST login calls. Once the Connected App is defined, the last bit missing is to populate the Salesforce.com instance, since it's empty by default; it can be populated manually or by writing population scripts based on the create REST API statements explained here. An Oracle BICS instance is needed in order to analyse the Salesforce.com data; all the info regarding Oracle BICS and how to activate an instance can be found at this link. Having covered all the basics, it's time to start analysing how the Salesforce.com data can be extracted.

Salesforce.com REST APIs

In the following sections we are going to analyse some of the REST APIs used in our process. The Salesforce.com REST APIs will be called with cURL commands; cURL is a command line tool and library for transferring data with URL syntax.

Salesforce.com Authentication

The first step in order to download the data from any Salesforce.com instance is to perform the authentication and retrieve the access token; the access token will then be used in all the following REST calls. There are various authentication mechanisms that can be used against Salesforce.com; for the beta program we used the one called “Username-Password OAuth Authentication Flow”, which is described in detail here and in the image below.

Salesforce.com User Password OAuth

The REST API authentication command is
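A sketch of the call, using Salesforce's standard OAuth2 token endpoint and the uppercase placeholders described below:

    curl https://login.salesforce.com/services/oauth2/token \
         -d "grant_type=password" \
         -d "client_id=CLIENT_ID" \
         -d "client_secret=CLIENT_SECRET" \
         -d "username=USERNAME" \
         -d "password=PASSWORD_TOKEN"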

where the following parameters must be assigned:

  • CLIENT_ID:  The client ID generated during the creation of the Connected App (step described in Basic Setup)
  • CLIENT_SECRET: The client Secret generated during the creation of the Connected App (step described in Basic Setup)
  • USERNAME: The username you want to use in order to access Salesforce.com data
  • PASSWORD_TOKEN: The concatenation of password and user security token (e.g. if the password is ABC and the security token is 123 then PASSWORD_TOKEN is ABC123)

The response (if all the parameters are correct) should be like the following
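Schematically, something like this (all values are placeholders):

    {
      "id"           : "https://login.salesforce.com/id/00D.../005...",
      "issued_at"    : "1414754017350",
      "token_type"   : "Bearer",
      "instance_url" : "https://na15.salesforce.com",
      "access_token" : "00D...!AQoAQ..."
    }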

where the most interesting parameters are:

  • instance_url: defines the salesforce instance to use.
  • access_token: defines the time-limited security token to use for all the following calls.

Salesforce.com List of Objects and Retrieve Metadata for an Object

Two of the Salesforce.com REST APIs were very useful when trying to build a general ETL that could be applied to any customised Salesforce.com instance:

  • Get list of Objects: retrieves the list of all objects available in Salesforce.com (custom or not) with some additional information for each object.
  • Retrieve Metadata for an Object: retrieves the list of columns for the selected object with related data types and additional information.

The Get list of Objects call is the following
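A sketch of the call, using the placeholders described below:

    curl INSTANCE/services/data/VERSION/sobjects/ \
         -H "Authorization: Bearer TOKEN"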

where

  • INSTANCE is the instance_url parameter retrieved from the authentication call.
  • TOKEN is the access_token parameter retrieved from the authentication call.
  • VERSION is the REST API version used; in our beta test we used v29.0.

The response should be similar to the following, in which you can see, for each object (identified by the “name” field), all the available metadata.

The Retrieve Metadata of an Object call can be made for each object listed in the Get list of Objects response; the code is the following:
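A sketch, where OBJECT is the API name of the object you want to describe (Account, for example):

    curl INSTANCE/services/data/VERSION/sobjects/OBJECT/describe/ \
         -H "Authorization: Bearer TOKEN"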

where:

  • INSTANCE, TOKEN and VERSION are the same parameters defined for the Get list of Objects call
  • OBJECT is the Salesforce.com object to retrieve

The response for the Account object is similar to the following

Interesting parameters for each column are:

  • name: the column name.
  • retrievable: if true it means that the particular column can be part of a query
  • soapType and length or byteLength: these fields provide the field type information and the length of the field.

With the three REST APIs analysed we could recreate all the objects (custom or not) in any Salesforce.com (or Force.com) instance as tables in the Oracle Database Schema Service. In the next section we will see how to extract the data from the list of objects by using the Salesforce.com query capabilities.

Salesforce.com query

Salesforce provides two methods of querying the objects: including or excluding deleted objects. The difference in the code is minimal: the first uses the /queryAll suffix while the second uses /query. A Salesforce.com REST API query call is:
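A sketch of the call; the SOQL statement goes in the q parameter, URL-encoded (hence the + signs):

    # /query excludes deleted records; swap in /queryAll to include them
    curl "INSTANCE/services/data/VERSION/query/?q=SELECT+LIST_OF_COLUMNS+FROM+OBJECT" \
         -H "Authorization: Bearer TOKEN"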

where:

  • INSTANCE and TOKEN parameters are the ones retrieved from the Authentication call
  • VERSION is the Salesforce.com REST API version used
  • OBJECT is the Salesforce.com object to query
  • LIST_OF_COLUMNS is the comma delimited list of columns to retrieve

The response of a query “select+Name+from+Account” is:
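Schematically it's along these lines (the values here are placeholders):

    {
      "totalSize" : 2,
      "done"      : true,
      "records"   : [
        {
          "attributes" : { "type" : "Account", "url" : "/services/data/v29.0/sobjects/Account/001..." },
          "Name"       : "Sample Account 1"
        },
        {
          "attributes" : { "type" : "Account", "url" : "/services/data/v29.0/sobjects/Account/001..." },
          "Name"       : "Sample Account 2"
        }
      ]
    }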

In the JSON result you can find:

  • totalSize: the number of records retrieved
  • Name: for each record the name of the Account

If the resultSet is too big, Salesforce.com will start paging results. If the initial query returns only part of the results, the end of the response will contain a field called nextRecordsUrl. For example:

In order to get the next page of data, a call like the following is needed, again passing the access token.
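A sketch of the follow-up call, where QUERY_LOCATOR is the identifier taken from the nextRecordsUrl value (itself of the form /services/data/VERSION/query/QUERY_LOCATOR):

    curl "INSTANCE/services/data/VERSION/query/QUERY_LOCATOR" \
         -H "Authorization: Bearer TOKEN"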

Salesforce.com limits

It’s important to be aware of the limit set by Salesforce.com on the number of REST API calls per day. In order to check at any time the remaining number of calls available execute the following code.
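A sketch of the call, assuming the limits resource is available in the API version you're on:

    curl INSTANCE/services/data/VERSION/limits/ \
         -H "Authorization: Bearer TOKEN"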

The DailyApiRequests -> Remaining field contained in the response shows the number of calls still available.

Resultset output format

Salesforce.com's default output format is JSON, which is a common format for most cloud applications. However, there is also the option to retrieve the result in XML format by setting the HTTP ACCEPT header to “application/xml”. The first release of BICS is based on Oracle Database 11g, which natively supports XML but not JSON; for this reason, during our beta program with Oracle BI Cloud Service we decided to use the XML output. Once BICS is bundled with Oracle Database 12c, which supports both JSON and XML natively, the resultset output format could be left at the default of JSON.

In this post we defined the steps required to set up a Salesforce.com demo environment that can be accessed with REST APIs, and looked at the most interesting Salesforce.com REST APIs that we used to extract the data in order to analyse it with the Oracle BI Cloud Service platform. In the next post we will look in more detail at the BICS ApEx part of it, by analysing how the Salesforce.com REST APIs can be called from ApEx.

OBIEE How-To: A View Selector for your Dashboard

October 30th, 2014

A common problem report developers face is user groups having different needs and preferences, and as a consequence these user groups want to see their data presented in different ways. Some users preferring to see a graph while others want a table is a classic example. So, how do we do this? It's a no brainer… we use a view selector. View selectors give us a great amount of flexibility by allowing us to swap out one analysis view for another. You might even take it a step further and use a view selector to swap out an entire compound layout for another one, giving the user an entirely different set of views to look at. Truly powerful stuff, right?

But view selectors do have one limitation… they're only available at the analysis level. What if you wanted this selector functionality at the dashboard level, so that you could swap out an analysis from one subject area for one from a different subject area? Or what if you wanted to be able to switch one dashboard prompt for another one? You're out of luck, it's just not possible…

Just kidding… of course it’s possible. As it turns out, it’s fairly straightforward to build your own dashboard level view selector using other objects already provided by OBIEE out-of-the-box.

Create a dashboard variable prompt to drive the content. We need a way for the users to select the view they want to see. View selectors have a built in dropdown prompt to accomplish this at the analysis level. To do this at the dashboard level we’re going to use a dashboard prompt.

So, the first step is to create a new dashboard prompt object and add a variable prompt. You can name the variable whatever you wish; for this example we're just going to call it P_SECTION. You can set the User Input to whatever you want, but it's important that only one option is selected at a time… multiple values should not be allowed. Let's set the user input to “Choice List” and add some custom values.

What you name these custom values isn't important but the labels should be descriptive enough so that the users understand the different options. Just keep in mind, the values you use here will need to exactly match the analysis we create in the next step. For this example, let's use 'Section1', 'Section2', and 'Section3' to keep things simple.

JFB - View Selector - P_SECTION Prompt

 Create an analysis to drive the conditional logic. We need to create an analysis that will return a set number of rows for each of the options in the prompt we just created. The number of rows returned then drives which section we see on the dashboard.

Ultimately, the logic of this analysis doesn’t matter, and there are a dozen ways to accomplish this. To keep things simple, we’re just going to use CASE statements. While not an elegant solution, it’ll work just fine for our basic example.

Add three columns to the criteria; we'll use a Time dimension and modify the column formulas with the following CASE statements. Make sure that the text strings match the Custom Values used in the prompt.

JFB - View Selector - Table

Now we need to update the filter so that the appropriate rows are shown based upon what the user selects. Basically, we need the request to return 1, 2, or 3 rows based upon our P_SECTION presentation variable.

For our example we're going to create a filter for each of the options and set them equal to the presentation variable we created earlier in our dashboard prompt. Only one filter will be true at a time, so the operator between these filters has been set to OR. Also you'll notice that the default value for the presentation variable has been set to 'Section1' across the board. If, for whatever reason, the P_SECTION variable isn't set, we want the dashboard to default to the first section.

JFB - View Selector - Filter

So, let’s quickly walk through how this works. The end user selects ’Section1’ from the dashboard prompt. That selection is stored in our P_SECTION presentation variable, which is then passed to and used by our filter. With ‘Section1’ selected only the 1st line of the filter will hold true which will result in a single row returned. When ‘Section2’ is chosen, the second row of the filter is true which returns two rows, and so on.
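The screenshots are what actually define the logic, so the snippet below is only a sketch of one way to get the 1/2/3-row behaviour just described; it assumes a Time dimension whose "Year" column contains at least the years 2012 to 2014, and the column and filter names are made up:

    -- three column formulas, each one matching progressively more year rows
    CASE WHEN "Time"."Year" >= 2014 THEN 'Section1' END
    CASE WHEN "Time"."Year" >= 2013 THEN 'Section2' END
    CASE WHEN "Time"."Year" >= 2012 THEN 'Section3' END

    -- the filter: three predicates OR'd together, each comparing one of the
    -- columns above to the P_SECTION presentation variable (defaulted to Section1)
    "Column 1" = '@{P_SECTION}{Section1}'
    OR "Column 2" = '@{P_SECTION}{Section1}'
    OR "Column 3" = '@{P_SECTION}{Section1}'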

We’re almost done, in the next step we’ll create some conditions on the individual dashboard sections and put it all together.

Create sections and set some conditions. We just need to create our sections and set some conditions so that they are shown/hidden appropriately. Create a new dashboard page. Edit the dashboard page and drag three empty sections on to the page. Create a condition on the first section using the Analysis created in the last step. The first condition we need to create should be True If Row Count is equal to 1.

JFB - View Selector - Condition

Are you beginning to see how this is going to work? The only time we’ll get a single row back is when the presentation variable is set to ‘Section1’. When P_SECTION is set to ‘Section2’ we’ll get two rows back from our analysis. Go ahead and create a second condition that is True If Row Count is equal to 2 for section 2. For section 3 create a condition that’s True If Row Count is equal to 3.

JFB - View Selector - Dashboard Editor

Since we aren’t adding content to these sections, you’ll want to make sure to enable the option to “Show Section Title” or add a couple text fields so that you can easily identify which section is rendered on the page. Lastly, drag the dashboard prompt onto the page. Save the dashboard page and let’s take a look.

When the page first renders, you should see something similar to the following screenshot. The prompt is set to ‘Section1’ and sure enough, Section 1 appears below it. If you change the selection to ‘Section2’ or ‘Section3’ and hit apply, Section 1 will be hidden and the corresponding content will appear. All that’s left now would be to go back and add content to the sections.

JFB - View Selector - Result

So, using only out-of-the-box features, we were able to create an extremely versatile and dynamic bit of functionality… and all it took was a dashboard prompt, an analysis to hold our conditional logic, and some sections and conditions.

This approach is just another tool that you can use to help deliver the dynamic content your users are looking for. It provides flexibility within the context of a single dashboard page and also limits the need to navigate (and maintain) multiple pages. Admittedly, the example we just walked through isn't all that exciting, but hopefully you can see the potential.

Some of your users want a minimalist view allowing them to filter on just the basics, while others want to slice and dice by everything under the sun? Create two prompts, a basic and an advanced, and allow the users to switch between the two.

JFB - View Selector - BasicAdv

Want to pack a large amount of charts into a page while still minimizing scrolling for those poor souls working with 1024×768? No problem, have a low-res option of the dashboard.

JFB - View Selector - LowRes

The finance department wants to see a dashboard full of bar charts, but the payroll department is being totally unreasonable and only wants to see line graphs? Well, you get the idea…
