Friday, April 2, 2010

Loading BPC Data the Old-Fashioned Way: Using SQL

Thanks to Gert Andries van den Berg on the SAP BPC forums for making sense of all this. In a nutshell, writing records to either the WB or FAC2 table updates the cube; there is no need to build an SSIS package with a Dumpload task, since it can be done with plain SQL. The FAC2 table seems to be the destination of choice, though I've yet to determine whether running an optimization is required before the data shows up in reports.
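For illustration, here is a minimal T-SQL sketch of that kind of insert. It assumes a hypothetical application named Finance, whose write-back table would be tblFactWBFinance; the dimension columns, member IDs, and the SOURCE column are placeholders, so substitute your own application's dimensions and check your actual table schema.

-- Minimal sketch, not production code: push one value into the write-back (WB) table.
-- Table and column names assume a hypothetical "Finance" application.
INSERT INTO dbo.tblFactWBFinance
    ([ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME], [SIGNEDDATA], [SOURCE])
VALUES
    ('CE0001', 'ACTUAL', 'INPUT', 'SALESUS', 'NON_INTERCO', 'LC', '2010.APR', 1500.00, 0);
    -- SOURCE = 0 for a new record is an assumption; verify against your WB table.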

From Gert Andries van den Berg.
As per the tuning doc:

WB – real-time data input (ROLAP partition)
This is the most current data sent to the system. Data from BPC for Excel data sends and Investigator browser data sends is placed in real-time storage.
FAC2 – short term and Data Manager imports (MOLAP partition)
This is data that is not real-time data, but is also not in long-term storage yet. When you load data via Data Manager (automatic data load from external data sources), it loads the data to short-term storage so that the loaded data does not affect system performance. Only the cube partition associated with this table is processed, so the system is not taken offline.
Fact – long term history (MOLAP partition)
This is the main data storage. All data eventually resides in long-term storage. Data that is not accessed very often remains in long-term storage so that the system maintains performance.
This structure allows SAP BPC to maintain the same performance over time even when there is a large increase in data volumes.
Periodically clearing real-time data greatly optimizes the performance of the system, and an “Optimization” process is required (this could be scheduled automatically based on given parameters, such as a number-of-records threshold).
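Because the cube's data is simply the union of the three tables described above (WB, FAC2, and FACT), a quick way to see how records are distributed across the storage tiers is an ordinary SQL query. The table names below again assume the hypothetical Finance application.

-- Sketch: row counts per storage tier for a hypothetical "Finance" application.
SELECT 'FACT (long-term)'  AS storage, COUNT(*) AS row_count FROM dbo.tblFactFinance
UNION ALL
SELECT 'FAC2 (short-term)', COUNT(*) FROM dbo.tblFAC2Finance
UNION ALL
SELECT 'WB (real-time)',    COUNT(*) FROM dbo.tblFactWBFinance;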

Lite Optimization:

Clears Real-time data storage (WRITEBACK) and moves it to short-term data storage (FAC2). This option doesn’t take the system offline, and can be scheduled during normal business activity.

Incremental Optimization:

Clears both real-time and Short-term data storage (WB and FAC2) and moves both to Long-term data storage (FACT).

This option is best run while the system is quiet; it will not take the system offline itself, so it should be scheduled during off-peak periods of activity.

Full Process Optimization:

Clears both real-time and short-term data storage and processes the dimensions.

This option takes the system offline and takes longer to run than the incremental optimization.

It is best scheduled for down-time periods – for example, after a month-end close.
The Compress Database option is available to rationalize the Fact Tables. “Compress” sums multiple entries for the same CurrentView into one entry so that data storage space is minimized. Compressed databases also process more quickly.
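Logically, the compression amounts to a GROUP BY over the dimension columns with SIGNEDDATA summed. Here is a rough sketch of the equivalent query, again against the hypothetical Finance application's fact table; the dimension columns are placeholders.

-- Sketch only: what a compressed view of the FACT table looks like logically.
SELECT [ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME],
       SUM([SIGNEDDATA]) AS [SIGNEDDATA]
FROM   dbo.tblFactFinance
GROUP BY [ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME];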


More info on this topic from Sorin Radulescu:
First you have to be aware of the structure of BPC cubes:
Each cube has 3 partitions:
1. FACT - MOLAP
2. FAC2 - MOLAP
3. WB - ROLAP

When you insert records into the WB table, because WB is a ROLAP partition, you will see the impact of that insert in the cube in real time.
If you insert records into either of the MOLAP partitions without processing the partition, you will not be able to see those records in the cube.
I think you now have a clear picture of the BPC cube and understand the difference between MOLAP and ROLAP partitions.
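To make the MOLAP case concrete, here is the same style of insert aimed at the FAC2 table, using the hypothetical Finance application and placeholder columns again. The key difference from the WB example earlier is that these rows stay invisible to reports until the FAC2 partition is processed (for example by a Lite or Incremental Optimization).

-- Sketch: load a row straight into the short-term (FAC2) table.
-- Because FAC2 is a MOLAP partition, the row will not appear in the cube
-- until the FAC2 partition is processed.
INSERT INTO dbo.tblFAC2Finance
    ([ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME], [SIGNEDDATA])
VALUES
    ('CE0001', 'ACTUAL', 'UPLOAD', 'SALESUS', 'NON_INTERCO', 'LC', '2010.APR', 2500.00);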

Lite Optimize is necessary just to keep the number of records in the WB table under control.
In SSAS, if a ROLAP partition has more than 100,000 records, retrieving data from the cube becomes very slow while users are also inserting into the WB table.
So Lite Optimize is usually scheduled every 15 minutes, running when the number of records exceeds 20,000.
That means every 15 minutes the dtsx package checks whether WB has more than 20,000 records.
If yes, it runs the process.
If not, it does nothing.
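The threshold check itself is easy to picture in T-SQL. The following is only a sketch of that logic, not the actual package code, using the hypothetical Finance write-back table from earlier.

-- Sketch of the scheduled check: only kick off Lite Optimize when WB crosses 20,000 rows.
IF (SELECT COUNT(*) FROM dbo.tblFactWBFinance) > 20000
BEGIN
    PRINT 'WB over threshold - run the Lite Optimize package';
END
ELSE
BEGIN
    PRINT 'WB under threshold - nothing to do';
END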

LITE Optimize process
It performs the following steps (a rough SQL sketch follows the list):
1. Copy records from WB to FAC2 and mark the WB records that were moved into FAC2
2. Create a temporary partition and process it just for the records moved from the WB table
3. When the partition processing finishes, the system does the following in a transaction:
- merge the FAC2 partition with the temp partition
- delete the marked records from WB
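A very rough SQL sketch of the bookkeeping in steps 1 and 3 might look like the following. It assumes the hypothetical Finance tables from earlier and a SOURCE flag column on the WB table for marking moved records (an assumption on my part, so check your own schema), and it is not what the delivered dtsx package actually runs.

-- Step 1: mark the current WB rows, then copy the marked rows into FAC2.
UPDATE dbo.tblFactWBFinance SET [SOURCE] = 1 WHERE [SOURCE] = 0;

INSERT INTO dbo.tblFAC2Finance
    ([ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME], [SIGNEDDATA])
SELECT [ACCOUNT], [CATEGORY], [DATASRC], [ENTITY], [INTCO], [RPTCURRENCY], [TIME], [SIGNEDDATA]
FROM   dbo.tblFactWBFinance
WHERE  [SOURCE] = 1;

-- Step 2 (processing and merging the temporary SSAS partition) happens in
-- Analysis Services, outside of SQL.

-- Step 3: once the partition merge has succeeded, remove the marked WB rows.
DELETE FROM dbo.tblFactWBFinance WHERE [SOURCE] = 1;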