
Thursday, March 8, 2012

BI Accelerator: problem with current day setting

Hi, all!
I'm experimenting with MSSABI and have run into this problem: after running the SetCurrentDay.dts package, every measure value is multiplied by 6.
Here are my steps:
1. Create the Analytical, SM, and Staging databases with the Analytics Builder utility for the SMA template (unmodified).
2. Manually run the Master Import DTS package to load sample data into the Staging DB.
3. Manually run the Master Update DTS package to load sample data into the Subject Matter DB (with gbManualProcessDim and gbManualProcessFact set to -1 to disable automatic processing).
4. Manually process the Sales cube in the Analytical DB and browse the data when it finishes. Measure Actual Invoice Count is 2843 for the year 2001 (Time.Standard dimension on rows); the calculated members in the Time.Standard dimension are empty because the current day is not set.
5. Manually run the SetCurrentDay.dts package with gsCurrentDay = '10.05.2001' (October 5, 2001).
6. Again manually process the Sales cube in the Analytical DB and browse the data when it finishes. Measure Actual Invoice Count is now 17058 for the year 2001 (Time.Standard dimension on rows), and the calculated members in the Time.Standard dimension have values.
I first saw this behaviour in my custom analytical application built with MSSABI; the standard template (Sales and Marketing) behaves the same way.
Where is the problem: is it MSSABI, or am I doing something wrong?
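For reference, a minimal sketch of running the package and its global variable from the command line (wrapped in xp_cmdshell so it can be pasted into Query Analyzer); the file path and the string type id (8) are assumptions to adapt to your own installation:

-- Hypothetical sketch: run SetCurrentDay.dts from its package file and pass
-- the gsCurrentDay global variable. The path is illustrative; point /F at
-- wherever your generated DTS package files live.
EXEC master..xp_cmdshell
    'DTSRUN /F "C:\YourProject\DTS\SetCurrentDay.dts" /A "gsCurrentDay":"8"="10.05.2001"'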
Hi,
I have already seen this problem a while ago. Check out
http://www.dbforums.com/showthread.php?t=895520
I don't really understand what the person who replied means by "setting them all to 'Current'".
What I did at the time (and of course that is not the solution) was to remove the Week.Standard dimension from the cube.
Did you find a solution?

|||Hi!
Check out this:
http://support.microsoft.com/defaul...b;en-us;834285
Data in the cubes becomes correct after deleting ALL of the date dimensions except the one with custom rollups.
"rsada" <rsada.1fzsd0@.mail.webservertalk.com> /
: news:rsada.1fzsd0@.mail.webservertalk.com...
> Hi
> I have already seen this problem a while ago. Checkout
> http://www.webservertalk.com/showthread.php?t=895520
> I don't really understand what the guy that replied means by "Setting
> the all tu current".
> What I did at the time (and of course that is not the solution) was to
> remove the Weel.Standard dimension from the cube.
> Did you found a solution.?
>
>
> Eugene Frolov wrote:
>
> --
> rsada
> ---
> Posted via http://www.webservertalk.com
> ---
> View this thread: http://www.webservertalk.com/message447789.html
>


BI Accelerator: multiple source databases

Hi All!
I'm trying to use MS BI Accelerator to create a DW which must contain data from different sources. Every source is the standard database of one of our company departments. I'm looking for the best practice for collecting fact and dimension data from these databases into the BI Accelerator staging database, i.e. creating the Source Data ETLM process using Master_Import and its sub DTS packages with minimal rewriting. For example, I need to get customers for the Dim_Customer_Std dimension table from Department_1, then Department_2, and so on for the other departments and dimensions. The fact table must be populated the same way: sales data from Department_1 must be consolidated with Department_2, and so on.
What is the best way to do this: create a separate sub-package for every department and then add an Execute Package task to the Master_Import package, or is there another way?
If you read the PAG, you will see that the Master Import packages were designed as a convenience for customers wanting to load from flat files. We fully expected that customers would need to load the staging database with their own data (possibly from multiple data sources). You need to implement that piece of the system yourself, i.e. come up with your own "Master Import" where the data comes from your own data sources. Then you plug that in place of Master Import. The PAG also discusses ways that you could make simple changes to the Master Import packages if what you want to do is similar to what Master Import does -- in your case, this does not seem to apply, so you would just replace Master Import with your own system -- then Master Update takes the data from the staging database and moves it through the system from there.
Hope that helps.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Eugene Frolov" <john@.alice.ru> wrote in message
news:%23om6%23VntEHA.2536@.TK2MSFTNGP11.phx.gbl...
> Hi All!
> I'm trying to use MS BI Accelerator for creating DW which must contain
data
> from different sources. Every source is our company departmnet standard
> database. I'm looking for best practice for collecting fact and dim data
> from this databases in BI Acc staging database, i.e. creating Source Data
> ETLM process using Master_Import and its sub - DTS packages with mimimal
> re-writing of its. For example, I need to get customers for
Dim_Customer_Std
> dimension table from Department_1, then Department_2 and so on for other
> dept's and dim's. Fact table must be populated the same way - sales data
> from Department_1 must be consolidated with Department_2 ...
> What is the best way to do so: create a different sub-packages for every
> department and then add Execute Package task into Master_Import package or
> exist another way?
>
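For illustration only, a minimal sketch of the kind of per-department consolidation described above, written as plain T-SQL that a department-specific sub-package (or your replacement "Master Import") could run. The department database names come from the question; the staging database, table, and column names are assumptions, so substitute whatever your generated staging schema actually uses.

-- Hypothetical sketch: gather customers from two department databases into
-- the BI Accelerator staging dimension table. All object names below are
-- illustrative, not the generated ones.
INSERT INTO MyProject_Staging..S_Dim_Customer_Std (Customer_Name)
SELECT d1.CustomerName
FROM Department_1..Customers AS d1
UNION            -- UNION (not UNION ALL) drops customers that appear in both departments
SELECT d2.CustomerName
FROM Department_2..Customers AS d2

Fact rows would be consolidated the same way, but with UNION ALL so sales from every department are kept; each department could get its own sub-package running a statement like this, launched from an Execute Package task as suggested in the question.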
|||Thank you for the answer, but what is a PAG? I'm reading ALL 3 guides that come with MS BI Accelerator (Development, Deployment, and Maintenance).
"Dave Wickert [MSFT]" <dwickert@.online.microsoft.com> /
: news:OHtoAPttEHA.2128@.TK2MSFTNGP11.phx.gbl...
> If you read the PAG, you will see that the Master Import packages were
> design as a convienence for customers wanting to load from flat files. We
> fully expected that customers will need to load the staging database with
> their own data (possibly from multiple data sources). You need to
implement
> that piece of the system yourself, i.e. come up with your own "Master
> Import" where the data comes from your own data sources. Then you plug
that
> in place of Master Import. The PAG also discusses ways that you could make
> simple changes to the Master Import packages if what you want to do is
> similar to what Master Import does -- in your case, this does not seem to
> apply, so you would just replace Master Import with your own system --
then
> Master Update takes the data from the staging database and moves it
through
> the system from there.
> Hope that helps.
> --
> Dave Wickert [MSFT]
> dwickert@.online.microsoft.com
> Program Manager
> BI SystemsTeam
> SQL BI Product Unit (Analysis Services)
> --
> This posting is provided "AS IS" with no warranties, and confers no
rights.[vbcol=seagreen]
> "Eugene Frolov" <john@.alice.ru> wrote in message
> news:%23om6%23VntEHA.2536@.TK2MSFTNGP11.phx.gbl...
> data
data[vbcol=seagreen]
Data[vbcol=seagreen]
> Dim_Customer_Std
or
>
|||That is what we call the PAG (Prescriptive Architecture Guides).
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Eugene Frolov" <john@.alice.ru> wrote in message
news:OMgGTkztEHA.2788@.TK2MSFTNGP09.phx.gbl...[vbcol=seagreen]
> Thank you for answer, but: what is a PAG? I'm reading ALL 3 guides comes
> with MS BI Accelerator (Development, Deployment and Maintenance).
> "Dave Wickert [MSFT]" <dwickert@.online.microsoft.com> /
> : news:OHtoAPttEHA.2128@.TK2MSFTNGP11.phx.gbl...
We[vbcol=seagreen]
with[vbcol=seagreen]
> implement
> that
make[vbcol=seagreen]
to[vbcol=seagreen]
> then
> through
> rights.
standard[vbcol=seagreen]
> data
> Data
mimimal[vbcol=seagreen]
other[vbcol=seagreen]
data[vbcol=seagreen]
every[vbcol=seagreen]
package
> or
>


BI Accelerator Version Error

I am evaluating the BI Accelerator samples with Excel 2003 SP1; however, I receive the warning "The Analytics Builder Workbook requires Excel XP, SP1 or greater" and then the Excel file closes.
Is there a workaround for this, please?
Please look at this page.
http://www.webservertalk.com/archiv...p/t-954076.html
I had the same problem.
http://blogs.sqlpassj.org/nagasaki/...03/22/8178.aspx
(in Japanese)
Tomoyoshi NAGASAKI


BI Accelerator SMA

Hello people,
I want to inspect the code corresponding to the "BI Accelerator - SMA Application", but I don't have the password to do this via MS Excel. Does anyone of you know how to do that?
Thanks. Federico Haedo
Unfortunately we don't release the source for the BI Accelerator MS Excel
code. Sorry.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Federico Haedo" <fhaedo@.harriague.com.ar> wrote in message
news:OBygmwTrEHA.324@.TK2MSFTNGP11.phx.gbl...
> Hello people,
> I want to inspect the code corresponding to the "BI Accelerator - SMA
> Application" But I dont have the password to do this va MS Excell.
Anyone
> of you know how to do that.
> Thanks. Federico Haedo
>

BI Accelerator SetCurrentDay Problem

I'm having a problem with the SetCurrentDay DTS package
of the BI Accelerator.

I'm loading a set of data using text files, therefore I'm using
Master_Import.dts and Master_Update.dts unchanged. Then I run the VBScripts that are also generated by the Accelerator (VAMPanorama1_Load_VAMPanorama.vbs).

This loads the data from text files into the Staging Database and SM Database, and also runs the Set Current Day package with a fixed date of "2001-12-31".

The problem is that data for the year of the Current Date is multiplied by a factor of six. Now, when I run the SetCurrentDay package alone to try to set a new date (say "2003-07-01"), either by opening the package in Enterprise Manager and changing the global variable or by using the DTSRUN utility, I get the same problem. The data is multiplied by a factor of 6 for the year 2003, while data for 2002 and 2001 shows up correctly at the year, quarter and month level.

I tried this on the Retail Analysis sample project that comes with Accelerator and had exactly the same problem.

Has anyone seen something like this? Am I doing something wrong here?

Any help is appreciated.
I assume you've done something else by now, but I had the same problem. After verifying that the proper number of facts were loaded, and that a join wasn't creating additional records, I stumbled on the solution: there is no problem with the cube.

The problem is that the auto-generated base time dimensions each have an 'All' level and they should not. Setting all of them (for me they are Time.Standard and Week.Standard) to 'Current' will fix the problem and show the proper number of transactions. The reason they are multiplied is that 'All' aggregates across all time periods including the actual periods and the calculated ones like Current, Last Year, Change from Last, etc.


|||It's been some time since I last saw this thing, but could you refresh my memory?
What do you mean by "setting all of them (for me they are Time.Standard and Week.Standard) to 'Current'"? Is this changing the name of the All member, or removing the All member?

Thanks.
|||It's actually constraining the dates to 'Current', but I found the real source of the problem.

The auto-generated date dimensions have custom members built into the dimension table, and one of them is called 'Current'. My original recommendation was that you could not leave the date dimensions set to 'All' (meaning that they are unconstrained) because that caused the totals to be much higher than they should be. Setting them to 'Current' or to some other specific value (a particular year, month, etc.) would fix the totals.

Here's the real problem: the unary operator property is not set by the spreadsheet generator. If you go into Analysis Manager and edit the Time.Standard dimension, you can fix it. For each level, look at the advanced properties and click the ellipsis button beside the value for Unary Operators. A dialog comes up that lets you check "Enable Unary Operators" and tell it there is an existing field of <level name>_Oper (e.g. Year_Oper). This one change (setting it for all levels) fixes all of the auto-generated date dimensions.

What's happening is that without the unary operator, calculated dates like Current, Previous, Change, %Change, etc. are treated just like any other year value (like 2004), so the cube adds together all of the values, causing data to be counted both for the actual date on which it occurs and for the calculated dates (Current, etc.). The unary operator lets the value in <level name>_Oper determine whether the values will aggregate or not, and thankfully, the attribute already has the right values.
|||Thanks a lot Barclay, I got it working fine now! :)
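As a quick sanity check before making that change in Analysis Manager, a hedged sketch of looking at the operator column on the generated time dimension table; the table name Dim_Time_Std and the Year_Name column follow the naming pattern described above but are assumptions, so substitute whatever your build actually generated:

-- Hypothetical sketch: list the Year-level members with their unary
-- operators. Real years should carry an additive operator, while calculated
-- members such as Current or Change should carry a non-additive one (e.g. '~'),
-- which is what stops the double counting once unary operators are enabled.
SELECT DISTINCT Year_Name, Year_Oper
FROM dbo.Dim_Time_Std
ORDER BY Year_Name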

BI Accelerator in SQL Server 2005?

Has the BI Accelerator tool been dropped from MS or does anyone know of
any plans to "upgrade" this tool to Sql Server 2005? We are working
on a BI project where we would like to use the BI Accelerator to help
generate the schema and data movement and would like to use Reporting
Services as the interface, but was unsure as to the future of this
tool. Any insight would be appreciated.
Most of the functionality of SSABI is now provided natively by AS2K5.
You can design a cube and then generate the RDBMS from it; or you can start
with a RDBMS and then use intellisense to generate the UDM model.
While the actual implementation is quite different (there is no
spreadsheet to hold the design), the ideas are integrated in AS2K5.
--
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
--
This posting is provided "AS IS" with no warranties, and confers no rights.


BI Accelerator generators (SSABI)

When you install the Accelerator it only has the ProClarity Generator.
Is there any other available?
That was the only one written. Obviously you can still use other client
tools. The ability of the generator was only to customize existing reports
if you rolled out changes to the names of dimensions, levels, etc. -- other
than that -- any front-end tool can be used. Plus I think that the version
of Proclarity we used is now quite old. I don't know if it will work with
the latest version.
All-in-all, I don't believe that this was a frequently used feature of the
product.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Gabriel Cor" <gabriel@.paradigma.com.uy> wrote in message
news:OLv$t$NyEHA.3808@.tk2msftngp13.phx.gbl...
> When you install the Accelerator it only has the ProClarity Generator.
> is there any other available?
>
|||Thanks Dave, I thought it was a tool to autogenerate some reports, but based
on your response, I understand it is of limited use.
Thanks again.
"Dave Wickert [MSFT]" <dwickert@.online.microsoft.com> escribi en el mensaje
news:uKcmbmRyEHA.1300@.TK2MSFTNGP14.phx.gbl...
> That was the only one written. Obviously you can still use other client
> tools. The ability of the generator was only to customize existing reports
> if you rolled out changes to the names of dimensions, levels, etc. --
other
> than that -- any front-end tool can be used. Plus I think that the version
> of Proclarity we used is now quite old. I don't know if it will work with
> the latest version.
> All-in-all, I don't believe that this was a frequently used feature of the
> product.
> --
> Dave Wickert [MSFT]
> dwickert@.online.microsoft.com
> Program Manager
> BI SystemsTeam
> SQL BI Product Unit (Analysis Services)
> --
> This posting is provided "AS IS" with no warranties, and confers no
rights.
> "Gabriel Cor" <gabriel@.paradigma.com.uy> wrote in message
> news:OLv$t$NyEHA.3808@.tk2msftngp13.phx.gbl...
>
|||It is more about autogenerating a BI solution. You enter a
multi-dimensional design, i.e. cubes, dimensions, levels, etc. into the
spreadsheet -- click on the "Generate" button and we generated the
supporting relational and Analysis services objects, DTS packages, etc. that
support that design.
It was designed as a supporting tool for quick prototyping and
proof-of-concept systems that have the basic infrastructure and best
practices to then go into production.
The tailoring of existing reports fits into that as a final step for
customization, but we found that by-hand work is probably more important, as
that is what the end-users will finally see -- thus it makes sense that most
of your time should be spent in that area. One of our basic tenets was that
infrastructure is something which soaks up too much time when initially
generating BI solutions -- that consultants and architects should spend more
time on design and front-end, end-user facing components -- which is what
the BI Accelerator tool allows them to do.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Gabriel Cor" <gabriel@.paradigma.com.uy> wrote in message
news:%234Ls1F9yEHA.1396@.tk2msftngp13.phx.gbl...
> Thanks Dave, I thought it was a tool to autogenerate some reports, but
based
> on your response, I understand it is of limited use.
> Thanks again.
> "Dave Wickert [MSFT]" <dwickert@.online.microsoft.com> escribi en el
mensaje[vbcol=seagreen]
> news:uKcmbmRyEHA.1300@.TK2MSFTNGP14.phx.gbl...
reports[vbcol=seagreen]
> other
version[vbcol=seagreen]
with[vbcol=seagreen]
the
> rights.
>


BI Accelerator experiences...?

Hi,
I'm just embarking on a data warehousing project for the first time. I have
a good understanding of the general principles - dimensional modelling, fact
tables, ETL, data staging, cubes, etc.
I am interested to get feedback from people who have used BI Accelerator for
SQL Server 2000 to automate the task of building a data warehouse and
analytical applications. Does it work as advertised? How productive can you
be compared to 'hand coding' the process?
Also, I would like to know why the production deployment of BI Accelerator
applications requires SQL Server Enterprise edition?
Thanks, in anticipation.
Chris
Among other reasons: partitioning (multiple cube partitions require Enterprise Edition).

BI Accelerator Error Table Question

I just have a question about interpreting the error tables that BI Accelerator creates. In a couple of the tables in the Staging database I have rows that are rejected during the update process. There is a field called Error_Reason that says 'Missing' for each row. How can I tell which column is actually 'Missing' in the row?

Thanks.

Best Regards,
Jim

BI Accelerator

That's a new one...what is it?

BI Accelerator Cube Reprocess Problems

We are currently trying to implement a BI solution for a data warehouse,
but can't seem to get the last few kinks worked out.
I have a cube made by BI that has monthly partitions. The problem is
that whenever I add new dimension information (which should be an
incremental update), I have to reprocess the entire cube. This is not
really an option, since it takes 57 hours to do that. So, does anyone
have a suggestion why BI is not doing an incremental update?
Thanks,
Chris
Two comments:
1) Re: "The problem is that whenever I add new dimention information (which
should be an incremental update), I have to reprocess the entire cube." Can
you be more specific what you are using? To "add" this new information are
you doing a full process of the dimension? Or an incremental process of the
dimension? I am a bit confused because you said "which should be an
incremental update" -- is it or not?
One of the things that you need to remember is that when you do a full
process of a non-changing dimension (regardless if the data changed or not),
then a full process of the cube is always required. This is because a full
process of the dimension could have caused changes in the hierarchy, which
would have likewise changed all of the aggregates -- which is a full
reprocess.
2) Re: "This is not really an option, since it takes 57 hours to do that."
You should be looking at improving that time. The general rule of thumb that
I use is that a server-class machine processing a typical partition should
be able to process about 1 million rows per minute. If your system is not
getting performance close to that then you should be looking at the general
best practices outlined in the SSAS Performance Guide at:
http://www.microsoft.com/technet/pro.../ansvcspg.mspx
see the section titled: "Optimizing Analysis Services to Improve Processing
Performance"
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Chris Timko" <ctimko@.hersheys.com> wrote in message
news:eP1Nk8lzEHA.2316@.TK2MSFTNGP15.phx.gbl...
> We are currently trying to implement a BI solution for a data warehouse,
> but can't seem to get the last few kinks worked out.
> I have a cube made by BI that has monthly partitions. The problem is
> that whenever I add new dimention information (which should be an
> incremental update), I have to reprocess the entire cube. This is not
> really an option, since it takes 57 hours to do that. So, does anyone
> have a suggestion why BI is not doing an incremental update?
> Thanks,
> Chris
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
|||It is simply adding new members to the lowest level of the hierarchy,
there were no changes being made to the rest of the structure. I guess
I am going to have to look at some of the other ways BI processes
dimensions.
And yes, we would like to get our cube running a little (actually, a
lot) faster. Right now, though, we need to make sure a full reprocess
isn't needed every time we add items.
Chris
|||Adding new members should *not* force a full reprocess.
Look in the documentation; I believe there is a way to force it to do an
incremental only.
As a worst case, you can modify the packages by hand to only do an
incremental.
However I had thought that we had this worked out correctly so that we
detected if an incremental or full was required, e.g. deleting a member
forces a full.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI SystemsTeam
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Chris Timko" <ctimko@.hersheys.com> wrote in message
news:eAe%233UL0EHA.3820@.TK2MSFTNGP11.phx.gbl...
> It is simply adding new members to the lowest level of the hierarchy,
> there were no changes being made to the rest of the structure. I guess
> I am going to have to look at some of the other ways BI processes
> dimentions.
> And yes, we would like to get our cube running a little (actually, a
> lot) faster. Right now, though, we need to make sure a full reprocess
> isn't needed every time we add items.
> Chris
> *** Sent via Developersdex http://www.codecomments.com ***
> Don't just participate in USENET...get rewarded for it!
|||I made the change to do only the incremental update for the dimensions, and
it seemed to update just fine. If a full process needed to be run
instead of the incremental, would the BI dts update package crash? If
so, then it seems like there may be a bug in the auto-update of
dimensions. I plan at some point (after some data massaging) to use the
flagged update algorithm, so some of the dimensions can be changed.
thanks for the help,
Chris


BI Accelerator 1.2 Restriction not in 1.1

Please see below a section in the BI Accelerator 1.2
documentation. This is not present in the v1.1
documentation. We would like to know why this restriction
(we are forced to go to lowest level when 2 hierarchies
are included in a cube) was imposed in version 1.2.
o When a cube includes both the primary and based-on
hierarchy of a dimension, you cannot specify the level
when including both these hierarchies in a cube; the
lowest level of the dimension will be included in the
cube.
o You cannot have a based-on hierarchy in a cube
without having the primary hierarchy.
o You can have the primary hierarchy in a cube without
the based-on hierarchy, and you can specify the level for
the cube.
Thanks
Clearly, the second and third rules make sense. I wondered whether
the first rule was simply that Microsoft's Excel front-end doesn't do
sufficient checking, i.e. that the level you choose for the primary
hierarchy
to be attached to the cube is not higher than the level at which
the primary hierarchy and the "based-on" hierarchy are joined.
It looks to me as though the underlying relational structure that is
generated would support a level higher than the lowest one.
I notice too that when defining a "based-on" level you must
"... select a level from the first hierarchy that is defined for that
dimension ..." (quoted from the development guide). This is a
restriction enforced by the front-end that the underlying database
doesn't need. Without this simplification it would need to check
for circularities - perhaps they chose the easiest option.
Maybe there is a more subtle reason, but it is not obvious.
John Bates
"Joseph Ollero" <jollero@.wizardsrgoup.com> wrote in message
news:c77301c438cf$1bb1cf40$a001280a@.phx.gbl...
> Please see below a section in the BI Accelerator 1.2
> documentation. This is not present in the v1.1
> documentation. We would like to know why this restriction
> (we are forced to go to lowest level when 2 hierarchies
> are included in a cube) was imposed in version 1.2.
>
> o When a cube includes both the primary and based-on
> hierarchy of a dimension, you cannot specify the level
> when including both these hierarchies in a cube; the
> lowest level of the dimension will be included in the
> cube.
> o You cannot have a based-on hierarchy in a cube
> without having the primary hierarchy.
> o You can have the primary hierarchy in a cube without
> the based-on hierarchy, and you can specify the level for
> the cube.
>
> Thanks
>


BI Accelerator 1.1 and Bulk Insert

Dear Anyone,

I have been using BI Accelerator 1.1 for 2 years now. In our current project, I encountered the following error during the importing phase of BI.

I traced the problem to the bulk insert statement that BI Accelerator is executing which is ...

BULK INSERT [Project88_Staging]..[S_Fact_BrdcastAlert]
FROM 'C:\Program Files\Microsoft SQL Server Accelerator for BI\data\Project88\Import\S_Fact_BrdcastAlert.txt'
WITH ( FORMATFILE = 'C:\Program Files\Microsoft SQL Server Accelerator for BI\data\Project88\DTS\Import\Format\S_Fact_BrdcastAlert.fmt',
       DATAFILETYPE = 'widechar', BATCHSIZE = 100000, MAXERRORS = 100 )

The error that it generates is as follows:

Bulk Insert: DataFileType was incorrectly specified as widechar. DataFileType will be assumed to be char because the data file does not have a Unicode signature.

We export data to text files that BI Accelerator picks up. All the other files import properly without problems except for this one fact table text file. We use the same format file that BI Accelerator uses when exporting the data to file, to make sure that there would be no incompatibilities. The file size is about 2.3 GB+, which is a standard size compared with our other fact table files that don't have any errors.

We also checked for data errors and found none. We also checked the txt file we generate; Notepad confirms that it is a Unicode file.

Can anyone please shed light on what is happening? Is this a bug? As much as possible I don't want to put a workaround in place for this one, since our entire framework for loading data is the same and has already been set up. Any help would be appreciated.

Thank you.
|||Any reply would be appreciated.

Joseph Ollero
jollero@.wizardsgroup.com
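For reference, the "Unicode signature" the error message refers to is the UTF-16 byte-order mark (0xFF 0xFE) at the very start of the data file; a file can contain genuine UTF-16 data (and be reported as Unicode by Notepad) and still lack that two-byte mark, in which case BULK INSERT falls back to 'char' exactly as the message says. A rough workaround sketch, assuming a two-byte helper file named bom.bin that contains only 0xFF 0xFE:

-- Hypothetical sketch: prepend the UTF-16 byte-order mark to the export file
-- before the import runs. bom.bin is an assumed helper file holding just the
-- two bytes 0xFF 0xFE; the directory is the one quoted in the post above.
EXEC master..xp_cmdshell
    'cd /d "C:\Program Files\Microsoft SQL Server Accelerator for BI\data\Project88\Import" && copy /b bom.bin + S_Fact_BrdcastAlert.txt S_Fact_BrdcastAlert.tmp && move /y S_Fact_BrdcastAlert.tmp S_Fact_BrdcastAlert.txt'

A cleaner fix is to have whatever writes the export emit the mark itself, so no extra step is needed.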

Wednesday, March 7, 2012

BI Accelerator + Applications + Report Services

Hi All,
this seems like the best place for this question.
I've had cause to review the MSFT position/tools in the BI area. I'm
surprised!!! I feel like MSFT have made MUCH more progress than generally
talked about in newsgroups and customers I work with. All I ever hear from
MSFT with respect to DW/BI is 'the next release is going to be great.' Not,
'Hey, take a look at BIA/Reporting Services'...?
Q1. In BI accelerator I installed and tested out the shopfloor application
(manufacturing) and I generated the shopfloor database from the XL Sheet.
But I can't see where it got the table definitions for the staging table and
the underlying dimensional database from. For example, where are the
table/column/datatype definitions of the staging area and the dimensional
database in the XL sheets? I'm sure I looked at all the sheets in the
spreadsheet AnalyticsBuilderWB_ShopFloorPerformance.v.0.2.xls. (But maybe
I'm going crazy and didn't). To me it only seems like the analytical model
is in the spreadsheet.
Q2. I see it generates a ton of .dts files. And when I open them up they
seem to me (a zero level skilled DTS person) to be very complex. The manual
says 'don't change the packages' which I gather means don't look, don't
worry, just run it and it will all be ok....again, I gather that the DTS
packages are somehow generated by BIA into their binary format and I believe
the source to target mappings are defined by the 'mappings' spreadsheet. But
in this spreadsheet I don't seem to find enough columns for all the columns
in the dimensional model. For example I can't find mappings for dim_emp_std
in the spreadsheet but I would have thought it should be there. (Or am I
missing something)?
Q3. In BIA I see lots about sharepoint portal and office objects inside web
pages and all that as a presentation layer. But I just took a look at
Reporting Services. Reporting Services looks absolutely fantastic for what
it is trying to do, and it is very obvious how easy it will be to extend
reporting services to do MUCH, MUCH more very, very easily. RDL (Report
Definition Language) is an idea long overdue and I do believe MSFT are first
with that one. I have not heard any other vendor talk about an RDL yet. I
was amazed that a report can just be exposed as a web service and you can
call it from anywhere with anything. Now THAT is a useful thing to have.
So the question is, how come BIA seems to completely ignore Reporting
Services? I would have thought they would be very closely 'joined'. For
example, why not do the front ends for BIA apps in reporting services? I am
assuming this is possible. I'm assuming reporting services can get data out
of Analysis Server because it can get data out of any ado.net server.
(I must say I thought MSFT had hidden how good Reporting Services is very
well...unless I'm greatly mistaken, I've only had a few hours to look at it,
it looks like a really, really useful product!!!)
Anyway, thanks in advance if anyone can let me know what I am missing in
BIA.
Best Regards
Peter Nolan
www.peternolan.com
Wow. There is a lot of info here. A couple of comments:
Q1) datatypes used in BIA -- well, we make several guesses. First we know
that all measures are numeric (that is a requirement of Analysis Services).
Second we know that fields that we generate have specific uses -- and from
that we know their datatypes. For example, the surrogate keys are integers
(as you would expect because they are identity). For those fields which are
user supplied, e.g. member names -- we just treat them as "names", i.e.
varchars. If you have something different, e.g. names which are really
integers, you will have to re-do things by hand. For those things which
could be any datatype, e.g. member properties, you can choose the
appropriate datatypes.
If there are specific tables and columns you are wondering about just tell
me and I'd be glad to explain why we did something one way or the other.
What you are seeing is the whole point of the BI Accelerator. You lay out the
logical multidimensional design and we auto-generate a relational staging
area, final data mart, OLAP structures, and DTS packages to move the data
through it. Logical data model to final app in one click :-)
And that final app should have all of the tips and tricks that you would see
in a production system; not just a rough proof-of-concept system.
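Purely as an illustration of the conventions described above (the table and column names here are invented for the example, not taken from any BIA template), the generated relational structures end up looking roughly like this -- integer identity surrogate keys, varchar member names, and numeric measures:

-- Hypothetical sketch only; real generated names, widths, and constraints differ.
CREATE TABLE dim_Product_Std (
    Product_Key  int IDENTITY(1,1) NOT NULL PRIMARY KEY,   -- surrogate key: integer identity
    Product_Name varchar(100)      NOT NULL,                -- user-supplied member name, treated as a "name" (varchar)
    Color        varchar(50)       NULL                     -- member property: datatype is whatever the designer chooses
)

CREATE TABLE fact_Sales_Std (
    Product_Key   int            NOT NULL REFERENCES dim_Product_Std (Product_Key),
    Invoice_Count numeric(18, 0) NOT NULL,                  -- measures must be numeric (Analysis Services requirement)
    Sales_Amount  numeric(18, 2) NOT NULL
)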
Q2) DTS packages in BIA -- thank you. I wrote the generator code and I am
quite proud of them.
It generates a ton of packages because of the type of schema we chose to
implement. Since we generate a snowflake schema, there are tables for every
dimension and level in the dimensional structure.
We had several goals with the DTS packages. First, we wanted them to be
data-driven, because we don't expect everyone to be an expert on DTS. Thus,
rather than having to make changes to them, most of them have variables whose
values you can change to have the package do something different. All of this is
documented in the PAG (online doc set). One of the challenges of this was
that DTS with SQL Server 2000 doesn't have the control flow tasks needed to
make this declarative -- with 2005, we put that all in natively, and similar
packages built with 2005 would be quite a bit less complex.
Second, we wanted the packages to be visible and extensible by
knowledgeable users. Nothing is hidden -- it is all up front and in your
face. Yes, a novice will look at them and despair -- but don't give up!
There is documentation in the PAG provided for all of them! Lastly, you might
be interested in this white paper which talks about the DTS packages and
provides various tips and tricks beyond what the PAG has in it.
http://msdn.microsoft.com/library/de..._dts_ssabi.asp
If there is a specific step you have questions about, I'd be glad to help
also.
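To make the "data-driven" point concrete: a generated package can be run with one of its global variables overridden from the outside, for example with dtsrun. The sketch below is hypothetical -- the server, package, and variable names are invented, and the real names (and which variables exist) come from the generated application and its PAG documentation:

-- Minimal sketch, assuming you have permission to run xp_cmdshell and use a trusted connection (/E).
-- typeid 8 = string; check the dtsrun documentation for the exact /A quoting rules.
EXEC master.dbo.xp_cmdshell
    'dtsrun /S MYSERVER /E /N MyGeneratedPackage /A gsSomeOption:8=SomeValue'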
Q3) I am glad you like Reporting Services. I agree it is a fantastic tool.
However, it is a totally different product. BI Accelerator is a BI
application generator; not a report generator. It has a client component
only for customization. The idea was this: suppose you had a report, called a
"template", which was exactly the same from client to client, but one
customer called products "items" and another called products "books".
Wouldn't it be nice to ship the template along with the multidimensional
design and have the system automatically rename products to items or
products to books? That is what the client generator does with the BI
Accelerator. It ships with a Proclarity component which knows how to go
inside a Proclarity briefing book and replace one tag with another. A
similar facility is available from Panorama and there is an API available to
other vendors if they would like to plug into the BI Accelerator client
generator.
Hope this helps.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Peter Nolan" <peter@.peternolan.com> wrote in message
news:uCEWHDlBFHA.2876@.TK2MSFTNGP12.phx.gbl...
|||Hi Dave,
wow, nice to see the guy who 'wrote the code' is watching the forum.
Thanks for your feedback. Makes sense and I can see where BIA could
head in the next release.
You'll be pleased to hear this...the reason for my interest is some
colleagues and I are bringing to market a product that will be based
solely on MSFT BI technologies and part of that is we are researching
everything we can find about MSFT BI and the way forward with MSFT
BI.....I haven't seen any kind of comprehensive suite of slides that
is public in this matter...if you happen to know where one can be
downloaded from that would be great.
I still have to figure out if BIA can fit into what we are doing. We
shall see.
My other comment...
"Q2) DTS packages in BIA -- thank you. I wrote the generator =ADcode and
I am
quite proud of them."
Let's just say I recognise a good idea and a smart guy when I see
one...;-)
One of my specialities is ETL and reducing the cost of writing ETL for
my clients. I've been doing DW for 14 years now and ETL consumes so
much of the money it's the obvious thing to keep working on to reduce.
About 2.5 years ago I investigated the possibility of writing a
generator for ETL jobs in Informatica and DataStage. I found it was
certainly possible to do so. The 'problem' was that it would be
impossible for an 'independent' to make any money out of it. If the
idea did well it would be trivial for the vendor to write the same
functionality and release it. My generator was planned to produce XML
as both INFA/DS can export/import jobs in XML format.
So I was impressed that you have come up with the same idea and from
looking at the time frames you must have had that idea before me...well
done..;-)...And that you are able to produce the DTS jobs in binary.
(Though I'm sure being MSFT you can see the source code/structures for
DTS.)
Me, I changed course after my evaluation and wrote my own ETL tool. It
is very productive. Rather than mapping fields at field level, it maps at
table level and moves fields between tables by matching column names.
So it is not actually necessary to ever define source to target
mappings anywhere. It discovers the column names at run time so that
when more columns are added there is no code to regenerate or change
and it is the changing of ETL code that is expensive in DW maintenance.
This is another reason why I was impressed with what you have
done....yours is the first example I have seen where it might be
possible to get away with ETL changes without re-testing in the
non-production environment while still using a 'real' ETL tool.
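As a rough sketch of that run-time column-matching idea (this is not Peter's tool, and the table names are invented for the example), the equivalent in plain T-SQL is to build an INSERT...SELECT over whichever column names the source and target tables share:

-- Illustrative sketch only: move data from a staging table to a dimension table,
-- copying exactly those columns whose names appear in both tables.
DECLARE @cols varchar(8000), @sql varchar(8000)

SELECT @cols = ISNULL(@cols + ', ', '') + s.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS AS s
JOIN INFORMATION_SCHEMA.COLUMNS AS t
  ON t.COLUMN_NAME = s.COLUMN_NAME
WHERE s.TABLE_NAME = 'stg_Customer'    -- source table (hypothetical)
  AND t.TABLE_NAME = 'dim_Customer'    -- target table (hypothetical)

SET @sql = 'INSERT INTO dim_Customer (' + @cols + ') '
         + 'SELECT ' + @cols + ' FROM stg_Customer'
EXEC (@sql)

Because the same generated column list is used on both sides, a column added to both tables flows through on the next run with no code to regenerate -- which is exactly the maintenance point being made above.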
We have also settled on a spreadsheet as the way to record all
mappings. But we had not automated the generation of jobs/code because
even on very large projects the amount of time taken is very small. (I
recently built the ETL for a staging area+DW with 100 tables and 3,000
fields in 2 weeks so there didn't seem to be a need to speed it up even
further.) So your idea of putting a 'generate' button in the
spreadsheet and a whole set of ticks on what to generate out of the
spreadsheet also intrigued me. It looks like a really 'neat trick'. I
am an 'XL-dummy' so that thought had not crossed my mind before. I must
ask one of my colleagues how that was done......The thought has
crossed my mind that our spreadsheet could be extended to record other
information required for ETL generation, and we could cut out even the 2
weeks' work we do now by generating what we need to generate...
So, like I said, I just recognised a good idea and a smart guy...;-)
All the best...I'll be investigating BIA some more and look forward to
seeing what happens next...
Best Regards
Peter Nolan
www.peternolan.com
|||Actually the DTS generator uses the normal SQL Server APIs to create its
packages. All I did was to create the packages that I wanted; then saved the
package as VB. Then I reverse engineered the VB back into production-quality
code. We never had to look at the source code for DTS at all.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Peter Nolan" <peter@.peternolan.com> wrote in message
news:1107346392.279399.284200@.f14g2000cwb.googlegroups.com...
|||Hi Dave,
interesting...I am not up on DTS and I had never heard that the
packages could be saved to VB...I must look into DTS when the next
release comes out.....
I've asked one of my XL-knowledgeable guys how to put something like a
'generate' button into the spreadsheets we use to write our mappings
and to call some C++ code we have......In fact, we type the
definitions into a spreadsheet and then drop them into the database to
run the code to generate the tables/views!!! (LOL) This is when we
aren't using a data modelling tool. Most places have a standard tool
that must be used for table definitions...
Best Regards
Peter Nolan