
 Using Star Schema


Joozh
Posting Yak Master

145 Posts

Posted - 2004-12-27 : 01:58:55
Hi,

I have tried the following out and would appreciate feedback from experienced users on whether it is a good or bad approach:

After bringing all the data into my Data Mart, I created a view which has all the data in one big flat table (totally unnormalized). Then, based on this BIG FLAT UNNORMALIZED VIEW :) I created my various dimensions using the 1st option, i.e. Star Schema.
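To make the idea concrete, here is a rough sketch of the kind of view I mean (all table and column names below are made up purely for illustration):

```sql
-- Hypothetical sketch: everything joined into one wide, unnormalized view.
-- Table and column names are illustrative only.
CREATE VIEW vw_BigFlat AS
SELECT f.SalesAmount,
       f.Quantity,
       p.ProductName, p.Category,
       c.CustomerName, c.City, c.Country,
       d.CalendarDate, d.MonthName, d.CalendarYear
FROM   FactSales   f
JOIN   DimProduct  p ON p.ProductKey  = f.ProductKey
JOIN   DimCustomer c ON c.CustomerKey = f.CustomerKey
JOIN   DimDate     d ON d.DateKey     = f.DateKey;
```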

Based on the little testing that I have done, I seem to be getting the correct results across the various dimensions... However, could someone kindly comment on this approach and its pros/cons?

Thanks & Regards.

-J

rockmoose
SQL Natt Alfen

3279 Posts

Posted - 2004-12-27 : 06:29:37
Well, it is not necessary to construct a "BIG FLAT UNNORMALIZED VIEW".
When you edit a dimension and right-click the grey workspace -> Insert Tables, you have effectively created a "snowflake" dimension.
The only difference between "star schema" and "snowflake" in the wizard is the number of tables used to create the dimension.

Personally, I build each dimension from the underlying dimension tables in the data mart.
If I need to filter or further massage the data before building the cubes/dimensions, I find views helpful.
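To illustrate (table and column names here are hypothetical), such a view over the dimension tables might look like this:

```sql
-- Hypothetical example: "star" keeps one denormalized dimension table,
-- while "snowflake" splits Category into its own table and joins it back.
-- A view like this can also scrub and filter before the dimension is built.
CREATE VIEW vw_DimProduct AS
SELECT p.ProductKey,
       p.ProductName,
       c.CategoryName           -- joined in from the snowflaked outrigger
FROM   DimProduct  p
JOIN   DimCategory c ON c.CategoryKey = p.CategoryKey
WHERE  p.IsActive = 1;          -- example of filtering in the view
```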

Cons:
It will take much longer to process each dimension, especially if the number of rows is large.

Pros:
Conceivably you can format/scrub/filter the data in the view.

I don't recommend the "BIG FLAT UNNORMALIZED VIEW" approach.

rockmoose

Joozh
Posting Yak Master

145 Posts

Posted - 2004-12-27 : 07:01:35
Hmmm....

Thanks again. I just tried this "new" approach and thought I'd get opinions from experienced users, hence this question.

BTW, the approach that I am ACTUALLY using is pretty much like the one that you seem to be using, i.e. building each dimension from the underlying dimension tables in the data mart.

However, I am NOT using the snowflake option, and in the Cube Editor all I have is the fact table in the "diagram area" and then the needed dimensions in the upper-left panel. Does this sound like an okay approach, or should I change my way of thinking and add tables in the "diagram area" ---> which, as per your reply, would result in a snowflake schema. Right?

Or should I continue doing things the way that I currently am (assuming that my approach is okay)?

I hope my question is making sense; I tell you, I find it quite difficult to put my "BI" related questions into words :(

Thanks.

rockmoose
SQL Natt Alfen

3279 Posts

Posted - 2004-12-27 : 08:52:40
Hmmm...
Maybe we are not communicating clearly.

You should pick the dimension member names and member keys from columns in the dimension tables in your data mart,
not from columns in your fact table.
If you do that, then in the Cube Editor's "diagram area" you will see the fact table (yellow) and the dimension tables (blue).

Well anyway, keep experimenting.
I'm off on holiday for a week now (no internet).


rockmoose

Joozh
Posting Yak Master

145 Posts

Posted - 2004-12-30 : 01:26:27
Ooopppsss... Me and my silly questions.... at times I just end up posting the most absurd questions :(

My apologies for creating the confusion, and of course you are right about the following:

"If you do that, then in the Cube Editor's "diagram area" you will see the fact table (yellow) and the dimension tables (blue)."

Happy holidays, and I think one more post and I'll call it a year (and be back with more questions in 2005).



Thanks & Regards.

-J

rockmoose
SQL Natt Alfen

3279 Posts

Posted - 2005-01-02 : 16:04:53
The question was not silly at all.

When designing tables in a reporting area (such as a star schema in a data mart), I think a certain degree of denormalization is OK.
The main purpose of the denormalization is to make the data mart easier for its clients and users to consume (reduce the number of joins, preformat and filter the data).

Unfortunately, when dealing with star schemas, data marts, BI, etc., the denormalization process is often taken too far, resulting in hard-to-maintain schemas with lots of repeated data.
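To illustrate what I mean (all names here are invented), a moderately denormalized dimension might carry the category name along with the product, saving the client one join, while the measures stay in the fact table only:

```sql
-- Hypothetical illustration of moderate denormalization:
-- the category name is repeated per product (one join saved for users),
-- but measures and transaction details stay in the fact table.
CREATE TABLE DimProduct (
    ProductKey   int          NOT NULL PRIMARY KEY,
    ProductName  varchar(100) NOT NULL,
    CategoryName varchar(50)  NOT NULL  -- denormalized from DimCategory
);
```

Repeating full customer or product detail on every fact row, on the other hand, would be taking it too far.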

rockmoose
   
