
Tuesday, March 20, 2012

Big performance problem with reports

Hello everyone!
I have major performance problems with reports that I have deployed in Reporting Services.
I have a SQL database with millions of rows, and I have created a cube in Analysis Services that runs against a view in the database.
My reports get their data from the cube, and the dataset is created with SQL syntax.
I need to filter the report, so I had to create parameters, but only parameters that give the option of choosing "All", i.e. optional parameters. The only way I can do that, as far as I know, is by creating the filter-parameter datasets in SQL syntax, so that I can use a union, as in:

select 'All'
union
select <the real dataset>
group by value

It seems that I can't use UNION when running the SQL statement against a cube, so I have to run the statement against the original database view.
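
For reference, a parameter dataset of the kind described above usually looks something like this (the view and column names here are placeholders, not from the actual report):

```sql
-- Parameter dataset: prepend an 'All' option to the distinct values.
-- dbo.SalesView and Region are hypothetical names.
SELECT 'All' AS Region
UNION
SELECT Region
FROM dbo.SalesView
GROUP BY Region;
```

The main dataset can then treat the parameter as optional with a predicate such as `WHERE (@Region = 'All' OR Region = @Region)`.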
Now the problem I have is that it takes about half an hour or more to view the report after choosing values in some of the filter parameters!
As far as I can tell, the GROUP BY is the part that takes so long, but that can't be the only problem.
So... does anyone know what I can do to reduce the time it takes to process a report, and what performance enhancements are available?
I also have a problem with a specific report that throws a System.OutOfMemoryException when I try to view it in the preview pane, even though running the dataset that generates the report data causes no problems. Does anybody know what the cause might be?
I'm in a tight spot and would appreciate any fast responses.
Thanks a lot,
Tara

Do you have a stored procedure, or is the SQL code inside the RDL?
If you put as much as possible on the SQL Server, you can run it through Query Analyzer and look at the query plans, etc. It does sound like you are passing around a lot more data than you need, so get all the SQL code out of the RDL and see where that gets you.
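
As a sketch of what "getting the SQL out of the RDL" might look like, the dataset query can be wrapped in a stored procedure that the report calls instead (all object and parameter names here are hypothetical):

```sql
-- Hypothetical wrapper procedure; the report's dataset becomes
-- a call to this proc, so the plan can be examined server-side.
CREATE PROCEDURE dbo.rpt_SalesByRegion
    @Region varchar(50) = 'All'
AS
BEGIN
    SET NOCOUNT ON;

    SELECT Region, SUM(Amount) AS Total
    FROM dbo.SalesView
    WHERE (@Region = 'All' OR Region = @Region)  -- optional "All" parameter
    GROUP BY Region;
END
```

With the query in a procedure, it can be run directly in Query Analyzer with representative parameter values to inspect the execution plan.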

Friday, February 10, 2012

Best way of dumping the rows of a table to another table

Hi,

I have two tables as follows:

Table Cat
{
    ID
    Update Datetime
}

Table Master
{
    UniqueId
    Cat Datetime
}

I want to insert millions of rows from the Cat table into the Master table, which can be achieved with the following query:

Code Snippet

INSERT INTO Master (UniqueId, Cat)
SELECT ID, [Update] FROM Cat

Question: Is there a better way to achieve the same operation with better performance? That is, can the execution take less time than the statement above?

Thanks for your time

~Mohan

There are some elaborate things you might do (looping, BCP, etc.), but the time to develop and test them will MORE than eat up the time it takes to run a probably-quite-efficient insert like the one you've created above.

You might experiment with disabling the indexes on the target table and then rebuilding them after the insert is complete (if your application will tolerate the indexes being offline for a period). Indexes generally slow down inserts. One caveat: if Master has a clustered index, disabling it makes the table inaccessible, so in that case disable only the nonclustered indexes.

Code Snippet

ALTER INDEX ALL ON Master DISABLE;

INSERT INTO Master (UniqueId, Cat)
SELECT ID, [Update] FROM Cat;

ALTER INDEX ALL ON Master REBUILD;

|||

Hello Mohan,

An 'INSERT INTO' statement is fully logged and will take a long time to insert millions of rows, whereas 'SELECT ... INTO' can be minimally logged. I would use the following to make it quicker...

1. Drop any existing second table; in your case that is the 'Master' table.

2. Execute this query: SELECT * INTO Master FROM Cat

3. This will create the Master table and populate it with the rows. Note that 'Master' will have the same column names as Cat.

4. You can rename the columns using 'sp_rename'; look it up in BOL. Also check whether you can rename the columns in the designer.
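
Applied to the tables in this thread, the sp_rename step might look like this (assuming step 2 carried over the column names ID and Update from Cat):

```sql
-- Rename the columns that SELECT ... INTO copied from Cat
-- so they match the intended Master definitions.
EXEC sp_rename 'Master.ID', 'UniqueId', 'COLUMN';
EXEC sp_rename 'Master.[Update]', 'Cat', 'COLUMN';
```

Update is a reserved word, hence the brackets in the second call.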

Regards....

|||

Hi ,

If we do not have indexes on the table, then the query will be the same as the one I mentioned. In that case it will still be time consuming to insert millions of rows... right?

~Mohan

|||

Hello AsianIndian,

I can't drop the table, and the column names are different; I cannot change the column names either, since the Master table serves a different purpose.

Can you please let me know if there is any other way of doing it?

~Mohan

|||

Hello Mohan,

1. Do you have any rows in the Master table? If yes, approximately how many?

2. Did you try BCP OUT and BCP IN?

regards......

|||

Hello,

BCP is a command-line utility that reads from or writes to a file. I don't think we can use it to dump data from one table to another table.

~Mohan

|||

Disabling the indexes is a very good idea. You might also see if you can partition the rows you are inserting into separate batches of some size (based on testing).

The problem is that each insert requires a log write, so inserting one row at a time in a normal RDBMS operation can take longer than inserting multiples. However, if you try to insert too many rows in a single transaction, nothing can be committed along the way and more memory and log space are used, so it can end up taking a lot more time.
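
A minimal sketch of the batching idea for this thread's tables might look like the following. It assumes Cat.ID values are unique and not already present in Master; the batch size is something to tune by testing, as noted above.

```sql
-- Copy Cat into Master in bounded chunks so each transaction's
-- log activity stays small. Assumes Cat.ID maps to Master.UniqueId.
DECLARE @BatchSize int;
SET @BatchSize = 100000;  -- tune by testing

WHILE EXISTS (SELECT 1 FROM Cat c
              WHERE NOT EXISTS (SELECT 1 FROM Master m
                                WHERE m.UniqueId = c.ID))
BEGIN
    INSERT INTO Master (UniqueId, Cat)
    SELECT TOP (@BatchSize) c.ID, c.[Update]
    FROM Cat c
    WHERE NOT EXISTS (SELECT 1 FROM Master m
                      WHERE m.UniqueId = c.ID);
END
```

TOP with a variable requires SQL Server 2005 or later; on 2000 the batch size would have to be hard-coded or set via SET ROWCOUNT.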

However, along the lines the other poster mentioned, I would consider writing a simple SSIS (2005 and up) or DTS (2000 and earlier) package to bulk insert the data (the option may be labeled "fast load"), along with disabling/dropping the indexes. I have used this technique on several occasions to reduce an insert process that took over an hour to just a few minutes.

|||

Hi,

I agree with rusag2 and Louis Davidson.

I would also suggest using SSIS or DTS to transfer the huge volume of data, and running the package on the server through a scheduled job (instead of on the client), which makes it perform much faster than running it in Management Studio or Query Analyzer.