Showing posts with label columns. Show all posts

Sunday, March 25, 2012

BINARY_INTEGER in CRXI

Hi...
Could anyone clarify a doubt for me?

I am using BINARY_FLOAT columns in my oracle tables.
In Crystal Reports XI, when you connect to Oracle and browse the list of tables and columns, the columns of BINARY_FLOAT type are not shown.

What could be the reason? Does CRXI not support this type?

Please help.

rgds
salai|||See if you find an answer here:
http://support.businessobjects.com/
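One workaround that may be worth trying (an untested sketch only - the table and column names below are made up, not from the thread): expose the BINARY_FLOAT column through a view that casts it to NUMBER, a type older reporting tools reliably understand, and point the report at the view instead of the table.

```sql
-- Hypothetical base table "readings" with a BINARY_FLOAT column "val".
-- Casting to NUMBER in a view should make the column visible to tools
-- that do not recognise the 10g binary floating-point types.
CREATE OR REPLACE VIEW readings_v AS
SELECT id,
       CAST(val AS NUMBER) AS val
FROM   readings;
```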

Thursday, March 22, 2012

binary checksum

Hi,
Can anyone provide me with the syntax for comparing rows of two tables using binary checksum? The tables A and B have 8 & 9 columns respectively. The PK in both cases is Col1 & Col2. I want checksum on Columns 1 to 8.
Thanks|||Try this draft:

drop table test
drop table test2
go
create table test(id int, col1 int,col2 varchar(5),col3 datetime)
create table test2(id int, col1 int,col2 varchar(5),col3 datetime)
go
insert test values(1,1,'a','02/03/2004')
insert test values(2,2,'b','02/04/2004')
insert test values(3,3,'c','02/05/2004')
insert test values(4,4,'d','02/06/2004')
insert test2 values(1,1,'a','02/03/2004')
insert test2 values(2,2,'b','02/04/2004')
insert test2 values(3,3,'f','02/05/2004')
insert test2 values(4,4,'d','02/01/2004')
go
select t.*
from test t
join test2 t2 on t2.id=t.id
where CHECKSUM(t.col2,t.col3)<>CHECKSUM(t2.col2,t2.col3)|||Hi,

The code doesn't work for this case.

drop table test
drop table test2
go
create table test(id int, col1 int,col2 varchar(5),col3 datetime)
create table test2(id int, col1 int,col2 varchar(5),col3 datetime)

insert test values(4,4,'d','02/06/2004')
insert test values(4,4,'e','02/06/2004')

insert test2 values(4,4,'d','02/06/2004')
insert test2 values(4,4,'e','02/06/2004')

select *
from test

select *
from test2

select t.*
from test t
join test2 t2 on t2.id=t.id
where CHECKSUM(t.col2,t.col3)<>CHECKSUM(t2.col2,t2.col3)

--

All my data is like this. The rows are the same but still checksum selects the rows. Please help.|||drop table test
drop table test2
go
create table test(id int, col1 int,col2 varchar(5),col3 datetime)
create table test2(id int, col1 int,col2 varchar(5),col3 datetime)

insert test values(4,4,'d','02/06/2004')
insert test values(4,4,'e','02/06/2004')

insert test2 values(4,4,'d','02/06/2004')
insert test2 values(4,4,'e','02/06/2004')

select *
from test

select *
from test2

select t.*
from test t
join test2 t2 on t2.col1=t.col1 AND t2.col2=t.col2 -- Join on PK
where BINARY_CHECKSUM(t.id,t.col3) <> BINARY_CHECKSUM(t2.id,t2.col3)

-- A much more complex approach, but it's the way I do it.

SELECT t.*
from test t
JOIN
(
SELECT t1.col1, t1.col2, BINARY_CHECKSUM(*) as bin_ck_sum
from test t1
) AS X1 ON t.col1 = X1.col1 AND t.col2 = X1.col2
JOIN
(
SELECT t2.col1, t2.col2, BINARY_CHECKSUM(*) as bin_ck_sum
from test2 t2
) AS X2 ON t.col1 = X2.col1 AND t.col2 = X2.col2
WHERE X1.bin_ck_sum <> X2.bin_ck_sum|||TimS, your method is the most appropriate when comparing one entire record to another, but vivek_vdc only wants to check on 8 of the 9 columns in one table against the 8 columns in the other table. Binary_Checksum(*) on both tables will not do this comparison.|||--
TimS, your method is the most appropriate when comparing one entire record to another, but vivek_vdc only wants to check on 8 of the 9 columns in one table against the 8 columns in the other table. Binary_Checksum(*) on both tables will not do this comparison.
--

I agree, but I gave him two different solutions.

The second is how I would compare tables, and it can be adapted to his problem; I have found that it works best for me. There is no reason he can't replace the star with whatever columns he wishes to compare.

THE PROBLEM was no one was joining on the PK of the tables.

Tim S|||You have to join on the primary keys. Change the join to use the columns in your primary key!

select t.*
from test t
-- Join on PK1 & PK2 ( the Primary Key COLUMNS )
join test2 t2 on t2.PK1=t.PK1 AND t2.PK2=t.PK2
where BINARY_CHECKSUM(t.id,t.col3) <> BINARY_CHECKSUM(t2.id,t2.col3)

If you want any more help, please respond to the newsgroup. Please give CREATE TABLE and INSERT statements that MATCH your tables.

Tim S

--Original Message--
From: vivek_vdc
Sent: Monday, February 09, 2004 3:46 PM
To:
Subject: binary checksum post help

TimS - The code you have given assumes col1 & col2 are the PK, but I have id & Col1 as the PK. The code doesn't work in this case. Please advise on how I should proceed.
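Adapting the join-on-the-PK advice above to the schema described here (PK = id and Col1; the non-key column names below are placeholders, since the real tables were never posted), the comparison might be sketched as:

```sql
SELECT a.*
FROM tableA AS a
JOIN tableB AS b
  ON  b.id   = a.id
  AND b.col1 = a.col1                           -- join on the actual PK: id + Col1
WHERE BINARY_CHECKSUM(a.col2, a.col3, a.col4)   -- list only the columns to compare
   <> BINARY_CHECKSUM(b.col2, b.col3, b.col4)
```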

Tuesday, March 20, 2012

Big table(?) or split between tables?

Hi Guys

I have an application that runs on several sites and has a table with 36 columns, mostly ints and small varchars.

I currently have only one table that stores the data, plus five indexes, and since the table at one location (and others soon) has about 18 million rows, I have been trying to come up with a better solution (but only if needed; I don't think I have to tell you that I am a programmer and not a DBA).
The db file size with all the indexes is more than 10 GB, which in itself is not a problem, but is it a bad solution to have it that way?

The questions are:

Are there any big benefits if I split it into several smaller tables, or even smaller databases, and make the SPs that get the data aware that, say, 2006's data is in table A and so on?
It's quite important that SELECTs are fast; that need is far more important than decreasing the size of the database file.

How many rows are okay to have in one table (with 25 columns) before it's too big?

Thanks in advance.

Best regards
Johan, Sweden.

CREATE TABLE [dbo].[Cdr](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Abandon] [varchar](7) NULL,
[Bcap] [varchar](2) NULL,
[BlId] [varchar](16) NULL,
[CallChg] [varchar](6) NULL,
[CallIdentifier] [uniqueidentifier] NULL,
[ChgInfo] [varchar](5) NULL,
[ClId] [varchar](16) NULL,
[CustNo] [smallint] NULL,
[Digits] [varchar](32) NULL,
[DigitType] [varchar](1) NULL,
[Dnis1] [varchar](6) NULL,
[Dnis2] [varchar](6) NULL,
[Duration] [int] NULL,
[FgDani] [varchar](13) NULL,
[HoundredHourDuration] [varchar](3) NULL,
[Name] [varchar](40) NULL,
[NameId] [int] NOT NULL,
[Npi] [varchar](2) NULL,
[OrigAuxId] [varchar](11) NULL,
[OrigId] [varchar](7) NULL,
[OrigMin] [varchar](16) NULL,
[Origten0] [varchar](3) NULL,
[RecNo] [int] NULL,
[RecType] [varchar](1) NOT NULL,
[Redir] [varchar](1) NULL,
[TerId] [varchar](7) NOT NULL,
[TermAuxId] [varchar](11) NULL,
[TermMin] [varchar](16) NULL,
[Termten0] [varchar](3) NULL,
[Timestamp] [datetime] NOT NULL,
[Ton] [varchar](1) NULL,
[Tta] [int] NULL,
[Twt] [int] NULL,
[DateValue] [int] NULL,
[TimeValue] [int] NULL,
[Level] [varchar](50) NOT NULL CONSTRAINT [DF_Cdr_Level] DEFAULT ('x:'),
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]|||http://databases.about.com/od/specificproducts/a/firstnormalform.htm

http://en.wikipedia.org/wiki/First_normal_form|||Thanks for the quick reply.
Okay, I've read the articles, but the data in this table is records from a machine and is not duplicated. The Name column is duplicated, but I figured it was overkill to move one column out; all the other fields differ from time to time. And every column always has only one value or NULL, so there is no need for a parent-child relation to another table, or?

The actual question was not whether I should split the columns into multiple tables, but whether I should split the rows into multiple tables.|||What's the justification for splitting the table data into multiple tables of the exact same structure?
EDIT: That should probably read: "What are your thoughts behind doing this?"|||It's called horizontal table partitioning; it can be used to create partitioned views, and it works best for increasing performance when you spread it over multiple filegroups on multiple disks.

If I ask it like this: are there any problems related to a table with 18 million rows?
If not, I am quite happy with the current solution; but if I were to split the data into several tables, maybe the last month's data would go in one "active" table (since most queries are on the most recent data) and all the rest in an "archive" table.

And if a split would be a good choice, what is the best way to look in the two tables when querying several months? Should one SP handle all this, or should my data layer handle the access? E.g. two SPs, one for active and one for historic data, combined with two calls if needed?|||"If I ask it like this: are there any problems related to a table with 18 million rows?" No - the number of rows is irrelevant. You need to see how things perform and act accordingly based on that. And although partitioning is an option, there are hundreds of things you'd want to consider first. Just to give you perspective, I work with non-partitioned tables with 1/2 billion rows, and I expect a lot of the other posters here do also.

Also Don is right, your table would benefit from normalisation, and not just first normal form. Remember that it is the number of pages read/written from disk, not the number of rows, that counts. So if you want a performance justification (rather than a logical justification) for normalising: a normalised database will distribute your data across multiple tables, reducing the row size and leading to more rows per page. YMMV of course depending on your queries.

What sort of queries are running on it? Single row lookups or reports pulling in lots of data or a combination?|||Woah - this is probably terrible:
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]
Unless your rows get a lot of updates that make the data in the columns larger after the initial insert, the fillfactor on a monotonically increasing clustered index should be approaching 100. As it stands, all your clustered index leaf pages are 90% empty (depending on data modifications).

This is the sort of thing I mean by partitioning being one of your last options - lots of things to consider first :)|||Fancy scripting out the other indexes?|||Thanks guys, seems like I really need to sit down and study some database architecture.

I currently have only one SP that handles the queries used in reporting, so once a row is in the db it "never" changes. But since this table stores telephone call data records that are collected at runtime, it gets a lot of small inserts quite often, say 10-200 records every five minutes. The insert interval differs between sites. But as I said, once it's in there it's "never" gonna change.
So the queries return everything from a couple of million rows down to about 100, depending on how long a period your report covers.

How would you suggest I normalize this table, then? Or point me to some good resource where I can find out.

I'm sorry if these are maybe newbie questions, but that's really what I am, so :)
Really appreciate your help.

Here are the rest of the indexes:

/****** Object: Index [IX_DateValue] Script Date: 11/21/2007 15:22:20 ******/
CREATE NONCLUSTERED INDEX [IX_DateValue] ON [dbo].[Cdr]
(
[DateValue] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_Level] Script Date: 11/21/2007 15:22:38 ******/
CREATE NONCLUSTERED INDEX [IX_Level] ON [dbo].[Cdr]
(
[Level] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_OrigId] Script Date: 11/21/2007 15:22:48 ******/
CREATE NONCLUSTERED INDEX [IX_OrigId] ON [dbo].[Cdr]
(
[OrigId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_TerId] Script Date: 11/21/2007 15:22:56 ******/
CREATE NONCLUSTERED INDEX [IX_TerId] ON [dbo].[Cdr]
(
[TerId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_TimeValue] Script Date: 11/21/2007 15:23:08 ******/
CREATE NONCLUSTERED INDEX [IX_TimeValue] ON [dbo].[Cdr]
(
[TimeValue] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]|||Bump fillfactor for the clustered index to 100.

This is my fave db design article at the mo:
http://www.tonymarston.net/php-mysql/database-design.html
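Applying the fillfactor advice above is a one-off rebuild; a sketch against the table from this thread (SQL Server 2005 syntax):

```sql
-- On a monotonically increasing IDENTITY key, new rows always land on the
-- last page, so the 90%-free space bought by FILLFACTOR = 10 is wasted.
ALTER INDEX [PK_Cdr] ON [dbo].[Cdr] REBUILD WITH (FILLFACTOR = 100);
```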

Please post your proc. If that is literally all you do with this table then we can tune it rather well. I suspect we will want to change your clustered index....|||I have some reading to do:)

This is the quite simple SP. The table function SplitDelimitedVarChar takes a varchar and a split char and creates a table that I use to get the correct dates and [Level]s (called exchanges in the SP).

The way I came up with this code was by testing different solutions, and this has been the fastest. As you can see, it's quite a simple SELECT query. The columns [DateValue] and [TimeValue] are ints created at insert time from the DATETIME column [Timestamp]; they are just the int representations of the date and the time, and exist because it's easier.
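For reference - the real SplitDelimitedVarChar is never shown in the thread - a minimal table-valued function of the same shape might be sketched like this:

```sql
-- Sketch of a delimited-list splitter; the real function may differ.
CREATE FUNCTION dbo.SplitDelimitedVarChar
    (@List VARCHAR(MAX), @Delim CHAR(1))
RETURNS @Items TABLE (Item VARCHAR(50))
AS
BEGIN
    DECLARE @Pos INT
    SET @Pos = CHARINDEX(@Delim, @List)
    WHILE @Pos > 0
    BEGIN
        -- take everything before the delimiter as one item
        INSERT @Items VALUES (LEFT(@List, @Pos - 1))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(@Delim, @List)
    END
    IF LEN(@List) > 0 INSERT @Items VALUES (@List)  -- trailing item
    RETURN
END
```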

CREATE PROCEDURE [dbo].[spStudio_Get_Cdr]
@.beginDate DATETIME,
@.endDate DATETIME,
@.beginTime INT,
@.endTime INT,
@.subscribers VARCHAR(MAX),
@.exchanges VARCHAR(MAX) = '1:'
AS
BEGIN
SET NOCOUNT ON;

DECLARE @.exch TABLE(Item Varchar(50))
INSERT INTO @.exch
SELECT Item FROM [SplitDelimitedVarChar] (@.exchanges, '|') ORDER BY Item

DECLARE @.subs TABLE(Item Varchar(19))
INSERT INTO @.subs
SELECT Item FROM [SplitDelimitedVarChar] (@.subscribers, '|') ORDER BY Item

SELECT [id]
,[Abandon]
,[Bcap]
,[BlId]
,[CallChg]
,[CallIdentifier]
,[ChgInfo]
,[ClId]
,[CustNo]
,[Digits]
,[DigitType]
,[Dnis1]
,[Dnis2]
,[Duration]
,[FgDani]
,[HoundredHourDuration]
,[Name]
,[NameId]
,[Npi]
,[OrigAuxId]
,[OrigId]
,[OrigMin]
,[Origten0]
,[RecNo]
,[RecType]
,[Redir]
,[TerId]
,[TermAuxId]
,[TermMin]
,[Termten0]
,[Timestamp]
,[Ton]
,[Tta]
,[Twt]
,[Level]
FROM
[dbo].[Cdr] AS C
INNER JOIN @.exch AS E
ON
C.[Level] = E.[Item]
WHERE
(C.[DateValue] BETWEEN FLOOR(CAST(@.beginDate AS FLOAT)) AND FLOOR(CAST(@.endDate AS FLOAT)))
AND
(C.[TimeValue] BETWEEN @.beginTime AND @.endTime)
AND
(EXISTS(SELECT * FROM @.subs WHERE [Item] = C.[OrigId])
OR
EXISTS(SELECT * FROM @.subs WHERE [Item] = C.[TerId]))

END

thanks in advance|||Changing from a datetime to 2 different float values is NOT easier than doing a datediff or a dateadd.|||Agreed.

Jeff has some pertinent comments here - at least two or three articles apply. http://weblogs.sqlteam.com/jeffs/category/283.aspx
More reading ;)|||The idea of your sproc then is that you don't search for a range of "from 2:00pm 1st of Jan 2007 to 4:00pm 8th of April" but "1st of Jan 2007 to 8th of April, between the times of 2:00pm to 4:00pm only". Correct?|||Yes, that's correct: the report can be, say, the whole year of 2006 but only between 07:00 and 16:00, not 20060101 07:00 to 20061231 16:00.
Exactly as you described.

If there is a better way to get this result I'm of course going to try it.|||You can use Display Estimated Execution Plan to analyze your query performance.
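For what it's worth, the same "date range plus time-of-day window" filter can be expressed against the [Timestamp] column directly, without the helper int columns (an untested sketch; it assumes the int time values encode hours and minutes as HHmm):

```sql
SELECT [Id], [Timestamp], [Duration], [Level]
FROM [dbo].[Cdr]
WHERE [Timestamp] >= @beginDate
  AND [Timestamp] <  DATEADD(day, 1, @endDate)   -- whole days, end date inclusive
  AND DATEPART(hour,   [Timestamp]) * 100
    + DATEPART(minute, [Timestamp])
      BETWEEN @beginTime AND @endTime            -- e.g. 700 and 1600
```

Note the trade-off: the date-range predicate can still seek on an index over [Timestamp], but the DATEPART expression cannot, which is presumably the reason the int helper columns were introduced in the first place.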

cheers
iful|||If you do decide to partition (which is probably not necessary) and you are on 2005, have a look at partition functions and partition schemes. You don't need to use partitioned views anymore.

This approach to horizontal partitioning is more convenient IMO because you only have one set of indexes, etc, to manage. It behaves as a single table spread among multiple filegroups.
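On 2005 that looks roughly like the following (a sketch; the monthly boundary dates and the single filegroup are placeholders):

```sql
-- One partition per month; RANGE RIGHT puts each boundary date into the
-- partition to its right.
CREATE PARTITION FUNCTION pfCdrByMonth (datetime)
AS RANGE RIGHT FOR VALUES ('20071001', '20071101', '20071201');

-- Map all partitions to one filegroup here for simplicity; spreading them
-- over several filegroups on separate disks is where the performance win is.
CREATE PARTITION SCHEME psCdrByMonth
AS PARTITION pfCdrByMonth ALL TO ([PRIMARY]);

-- The table is then created ON the scheme, keyed on the partitioning column:
-- CREATE TABLE dbo.Cdr (...) ON psCdrByMonth ([Timestamp]);
```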

Big queries return empty result set

Hello guys,

MS SQL Server 2000 behaves strangely with big queries that involve a relatively large number of tables:
If I just enumerate the columns I want to receive, the result set is empty. Adding * to the column list, without making any change to the WHERE clause, seems to solve the problem, but I guess it's not best practice.
The most amazing part is that this behavior is not stable, so I suppose it's something to do with the server itself, not the application.
Has anybody suffered this problem and what solution was adopted?
Thanks for any information you can provide.|||Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query, with the exception of one purchased package that sometimes can create monsters.

Can you give me a rough idea of how big your database is (in gigabytes) and how big your result set is (rows, columns, and megabytes)? I'll try to reproduce the behavior here with a known good server.

-PatP|||Definitely my queries never get this volume. I have a join of about 15 tables with 10 columns in each. Empty recordset is an ocasional behavior, i mean today the query may work perfectly, tomorrow absolutely exact query with absolutely exact parameters without any modification made to the database fails.
I was told that this could be caused by the particular localized version of SQL Server that we use (SQL Server 2000 SP3 Spanish, 8.00.760). Could you confirm that?

Originally posted by Pat Phelan
Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query with the exception of one purchased package that sometimes can create monsters.

Can you give me a rough idea of how big your database is (in gigabytes) and how big your result set is (rows, columns, and megabytes)? I'll try to reproduce the behavior here with a known good server.

-PatP|||Originally posted by Pat Phelan
Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query with the exception of one purchased package that sometimes can create monsters.


:D

Why not POST the query...|||Originally posted by Brett Kaiser
:D

Why not POST the query...

As you wish:
--NON-EMPTY RESULTSET--

select bb.i_object_id as bb_aa, bb.i_object_type_id as bb_ab,
<skipped about 50 columns>
bl.i_operation_id as bl_ak, bl.integra_operation_id as bl_al,
* <-- this allows me to receive data
from i_sysobject bb, part_list_item bc,
i_relation bd,
i_sysobject be, part_list bf,
i_relation bg, i_sysobject bh,
production_order bi, i_relation bj,
i_sysobject bk, operation bl
where bc.i_part_list_item_id = bb.i_object_id and
bf.state = ? and
bf.i_part_list_id = be.i_object_id and
bi.i_production_order_id = bh.i_object_id and
bl.i_operation_id = bk.i_object_id and
bc.i_part_list_item_id=bd.i_relation_child_object_id and
bd.i_relation_parent_object_id=bf.i_part_list_id and
bf.i_part_list_id=bg.i_relation_child_object_id and
bg.i_relation_parent_object_id=bi.i_production_order_id and
bf.i_part_list_id=bj.i_relation_child_object_id and
bj.i_relation_parent_object_id=bl.i_operation_id
order by bf.i_part_list_id asc

--EMPTY RESULTSET--
select bb.i_object_id as bb_aa, bb.i_object_type_id as bb_ab,
<skipped about 50 columns>
bl.i_operation_id as bl_ak, bl.integra_operation_id as bl_al
from i_sysobject bb, part_list_item bc,
i_relation bd,
i_sysobject be, part_list bf,
i_relation bg, i_sysobject bh,
production_order bi, i_relation bj,
i_sysobject bk, operation bl
where bc.i_part_list_item_id = bb.i_object_id and
bf.state = ? and
bf.i_part_list_id = be.i_object_id and
bi.i_production_order_id = bh.i_object_id and
bl.i_operation_id = bk.i_object_id and
bc.i_part_list_item_id=bd.i_relation_child_object_id and
bd.i_relation_parent_object_id=bf.i_part_list_id and
bf.i_part_list_id=bg.i_relation_child_object_id and
bg.i_relation_parent_object_id=bi.i_production_order_id and
bf.i_part_list_id=bj.i_relation_child_object_id and
bj.i_relation_parent_object_id=bl.i_operation_id
order by bf.i_part_list_id asc

bf.state = ? receives the SAME parameter in both cases.|||Everything is identity, isn't it...

Is one of these tables a driver? Like what you want to base your result set on?

Lots of self-referencing...

Maybe you can use derived tables..

SELECT * FROM (SELECT * FROM ...join the table that relate) AS A
LEFT JOIN (SELECT * FROM ..same thing) AS B
ON A.key = B.Key

Here's your code cleaned up some... Not sure if it's doing the same thing...

I think it is...and easier to see what you're trying to do

SELECT *
FROM i_sysobject bb
-- joins reordered so every alias is defined before it is referenced
LEFT JOIN part_list_item bc ON bc.i_part_list_item_id = bb.i_object_id
LEFT JOIN i_relation bd ON bc.i_part_list_item_id = bd.i_relation_child_object_id
LEFT JOIN part_list bf ON bd.i_relation_parent_object_id = bf.i_part_list_id
LEFT JOIN i_sysobject be ON bf.i_part_list_id = be.i_object_id
LEFT JOIN i_relation bg ON bf.i_part_list_id = bg.i_relation_child_object_id
LEFT JOIN production_order bi ON bg.i_relation_parent_object_id = bi.i_production_order_id
LEFT JOIN i_sysobject bh ON bi.i_production_order_id = bh.i_object_id
LEFT JOIN i_relation bj ON bf.i_part_list_id = bj.i_relation_child_object_id
LEFT JOIN operation bl ON bj.i_relation_parent_object_id = bl.i_operation_id
LEFT JOIN i_sysobject bk ON bl.i_operation_id = bk.i_object_id
WHERE bf.state = ?

Big Mistake? - Indexes with includes on partition tables

I recently tried adding include columns to two indexes, one on each of two
partitioned tables. The tables are modest-sized, fact tables used for
aggregation. Each week, we slide the partitions.
The slide process is normally very fast. However, after adding include
columns to the indexes, the command to split the range in the partition
functions resulted in what appears to be runaway processes that gobbled up
system resources and maxxed out the transaction log. In fact, even after
emptying the tables of data and dropping the indexes, this happened. It was
only after I deleted the index partition scheme that the problem went away.
So... is there a problem with creating indexes with include columns on
partitioned tables where the partitions will slide? Is there something
exponential that goes on?
Thanks
Thanks,
CGW
I can't think of why simply including columns would slow down the split.
Can you please post the DDL, including partition schemes and functions?
Hope this helps.
Dan Guzman
SQL Server MVP
"CGW" <CGW@.discussions.microsoft.com> wrote in message
news:2CDF1207-375E-4ABD-9723-D69A9BE1B010@.microsoft.com...
>I recently tried adding include columns to two indexes, one on each of two
> partitioned tables. The tables are modest-sized, fact tables used for
> aggregation. Each week, we slide the partitions.
> The slide process is normally very fast. However, after adding include
> columns to the indexes, the command to split the range in the partition
> functions resulted in what appears to be runaway processes that gobbled up
> system resources and maxxed out the transaction log. In fact, even after
> emptying the tables of data and dropping the indexes, this happened. It
> was
> only after I deleted the index partition scheme that the problem went
> away.
> So... is there a problem with creating indexes with include columns on
> partitioned tables where the partitions will slide? Is there something
> exponential that goes on?
> Thanks
> --
> Thanks,
> CGW
|||That could be a lot of work for both of us. Are you just mildly curious, or
do you think you could really puzzle it out?
Thanks,
CGW
"Dan Guzman" wrote:

> I can't think of why simply including columns would slow down the split.
> Can you please post the DDL, including partition schemes and functions?
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "CGW" <CGW@.discussions.microsoft.com> wrote in message
> news:2CDF1207-375E-4ABD-9723-D69A9BE1B010@.microsoft.com...
>
|||> That could be a lot of work for both of us. Are you just mildly curious,
> or
> do you think you could really puzzle it out?
I'm quite interested in this problem and would like to try to help you
figure out what's going on. However, I'll need DDL to do so since I can't
think of anything obvious based on the information given. The only causes I
can think of aren't related to included columns.
One gotcha I've seen is that splitting a partition function will change all
the related partition schemes. If you have multiple tables using the same
scheme, all corresponding partitions will be split. I've run into an issue
where a staging table (with same scheme) wasn't empty when the function was
split so a lot of data needed to be moved in the staging table partitions
even though no data needed to be moved into the newly created partition of
the main table.
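One cheap guard against that gotcha is to look at the row counts per partition for every table before issuing the SPLIT (a sketch using the 2005 catalog views; adjust the filtering to your own schema):

```sql
-- Rows per partition for heaps (index_id 0) and clustered indexes (index_id 1);
-- a non-empty partition in a "staging" table shows up here before the SPLIT.
SELECT OBJECT_NAME(p.object_id) AS table_name,
       p.partition_number,
       p.rows
FROM sys.partitions AS p
WHERE p.index_id IN (0, 1)
ORDER BY table_name, p.partition_number;
```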
Hope this helps.
Dan Guzman
SQL Server MVP
"CGW" <CGW@.discussions.microsoft.com> wrote in message
news:00984490-298D-43AD-B820-BAA95680012F@.microsoft.com...
> That could be a lot of work for both of us. Are you just mildly curious,
> or
> do you think you could really puzzle it out?
> --
> Thanks,
> CGW
>
> "Dan Guzman" wrote:

Monday, March 19, 2012

Big Mistake? - Indexes with includes on partition tables

I recently tried adding include columns to two indexes, one on each of two
partitioned tables. The tables are modest-sized, fact tables used for
aggregation. Each week, we slide the partitions.
The slide process is normally very fast. However, after adding include
columns to the indexes, the command to split the range in the partition
functions resulted in what appears to be runaway processes that gobbled up
system resources and maxed out the transaction log. In fact, even after
emptying the tables of data and dropping the indexes, this happened. It was
only after I deleted the index partition scheme that the problem went away.
So... is there a problem with creating indexes with include columns on
partitioned tables where the partitions will slide? Is there something
exponential that goes on?
Thanks
--
Thanks,
CGW|||I can't think of why simply including columns would slow down the split.
Can you please post the DDL, including partition schemes and functions?
Hope this helps.
Dan Guzman
SQL Server MVP
"CGW" <CGW@.discussions.microsoft.com> wrote in message
news:2CDF1207-375E-4ABD-9723-D69A9BE1B010@.microsoft.com...
>I recently tried adding include columns to two indexes, one on each of two
> partitioned tables. The tables are modest-sized, fact tables used for
> aggregation. Each week, we slide the partitions.
> The slide process is normally very fast. However, after adding include
> columns to the indexes, the command to split the range in the partition
> functions resulted in what appears to be runaway processes that gobbled up
> system resources and maxed out the transaction log. In fact, even after
> emptying the tables of data and dropping the indexes, this happened. It
> was
> only after I deleted the index partition scheme that the problem went
> away.
> So... is there a problem with creating indexes with include columns on
> partitioned tables where the partitions will slide? Is there something
> exponential that goes on?
> Thanks
> --
> Thanks,
> CGW|||That could be a lot of work for both of us. Are you just mildly curious, or
do you think you could really puzzle it out?
--
Thanks,
CGW
"Dan Guzman" wrote:

> I can't think of why simply including columns would slow down the split.
> Can you please post the DDL, including partition schemes and functions?
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "CGW" <CGW@.discussions.microsoft.com> wrote in message
> news:2CDF1207-375E-4ABD-9723-D69A9BE1B010@.microsoft.com...
>|||> That could be a lot of work for both of us. Are you just mildly curious,
> or
> do you think you could really puzzle it out?
I'm quite interested in this problem and would like to try to help you
figure out what's going on. However, I'll need DDL to do so since I can't
think of anything obvious based on the information given. The only causes I
can think of isn't related to included columns.
One gotcha I've seen is that splitting a partition function will change all
the related partition schemes. If you have multiple tables using the same
scheme, all corresponding partitions will be split. I've run into an issue
where a staging table (with same scheme) wasn't empty when the function was
split so a lot of data needed to be moved in the staging table partitions
even though no data needed to be moved into the newly created partition of
the main table.
Hope this helps.
Dan Guzman
SQL Server MVP
"CGW" <CGW@.discussions.microsoft.com> wrote in message
news:00984490-298D-43AD-B820-BAA95680012F@.microsoft.com...[vbcol=seagreen]
> That could be a lot of work for both of us. Are you just mildly curious,
> or
> do you think you could really puzzle it out?
> --
> Thanks,
> CGW
>
> "Dan Guzman" wrote:
>
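Dan's staging-table gotcha can be sketched in T-SQL; the function, scheme, and boundary value below are assumptions for illustration, not CGW's actual objects:

```sql
-- Splitting the function changes EVERY partition scheme built on it, so
-- make sure any staging table sharing the scheme is empty (or has been
-- switched out) before the split.
ALTER PARTITION SCHEME psWeekly NEXT USED [PRIMARY];          -- filegroup for the new partition
ALTER PARTITION FUNCTION pfWeekly() SPLIT RANGE ('20120326'); -- add the new weekly boundary
```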

Big clustered index rebuild

I need to establish the storage requirements for a clustered index rebuild with SQL 05. The table is made up of the following columns

[ProductID] [int] NOT NULL,

[RegionID] [int] NOT NULL,

[TimeID] [int] NOT NULL,

[FactID] [int] NOT NULL,

[Value] [decimal](14, 4) NOT NULL

This is the clustered index :-

[RegionId] ASC,

[FactId] ASC,

[TimeId] ASC,

[ProductId] ASC

This is the result of a sp_spaceused on this particular table

name    rows          reserved       data           index_size   unused
Table   16910379278   868107368 KB   863579184 KB   3869848 KB   658336 KB

The database where this table is stored is in Simple recovery mode.

What I would really like to know is: what additional storage would I require to run the following rebuild index command?

Alter Index blah on table Rebuild

Thanks

There is a nice explanation here.

WesleyB

Visit my SQL Server weblog @. http://dis4ea.blogspot.com
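As a rough sketch of the space question (index and table names below are made up): an offline rebuild keeps the old index until the new one is complete, so you need roughly the size of the index free in the data files, plus sort space, which can be redirected away from the user database:

```sql
-- Offline rebuild; SORT_IN_TEMPDB moves the sort runs to tempdb instead
-- of the user database, shifting part of the space requirement there.
ALTER INDEX IX_Fact_Clustered ON dbo.FactTable
REBUILD WITH (SORT_IN_TEMPDB = ON);
```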

Wednesday, March 7, 2012

between dates and times

Hello,
I am new to SQL and am having a problem with dates and times.
I have a table with 4 columns: Date, Time, Material, Qty.
example:
01/12/05 04:00 steel 35
01/12/05 06:00 steel 60
01/12/05 23:40 steel 40
01/12/05 08:00 iron 25
02/12/05 05:00 steel 20
02/12/05 06:00 steel 50
I want to be able to look for matching rows that have a start & end date and
start & end time.
i.e start date 01/12/05 time >= 06:00
end date 02/12/05 time <= 05:59
The times are always static. Ideally I would like to see one row returned that
summed the last column and combined the different dates, like:
01/12/05 steel 120
01/12/05 iron 25
However, if I searched for 01/12/05 and 03/12/05 I would see:
01/12/05 steel 120
01/12/05 iron 25
02/12/05 steel 50
Any help or guidance will be appreciated.|||What are the datatypes of the date and time columns? The ideal solution
would be to represent them as a single datetime value.
In any case, as a short term solution, check out the CONVERT function in SQL
Server Books Online. There are style arguments that can help you convert a datetime
value to a date-only string and a time-only string.
Anith
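A sketch of the kind of query Anith is pointing at, assuming the two columns have been combined into a single datetime column named Dt (all names here are illustrative). Shifting the value back six hours makes the 06:00-to-05:59 window fall on one calendar date, which can then be grouped:

```sql
SELECT CONVERT(char(8), DATEADD(hour, -6, Dt), 3) AS ReportDate, -- dd/mm/yy string
       Material,
       SUM(Qty) AS TotalQty
FROM dbo.Production
WHERE Dt >= '20051201 06:00' AND Dt < '20051203 06:00'
GROUP BY CONVERT(char(8), DATEADD(hour, -6, Dt), 3), Material
```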

Saturday, February 25, 2012

Better way to solve my RS problem...

Hello all,
I have a sales table with the following columns:
Product
Date
Price
SalesAmount
Quantity
I need to write a report in Rs that has the following information:
Products sold in a month and during the year...
Currently I use a sub-report with different date parameters, i.e. date between
1/1/04 and 1/1/05 for the year's sales data, and Date = @.Month for the month's
sales.
However, sub-reports seem to slow the report down significantly. Is there a
better way to do this using only one report?
Thanks,
Clint|||Take a look at the [Employee Sales Summary.rdl] sample report that ships
with the product.
--
Ravi Mumulla (Microsoft)
SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"AshVsAOD" <.> wrote in message news:en4Pm2NqEHA.340@.TK2MSFTNGP10.phx.gbl...
> Hello all,
> I have a sales table with the following columns:
> Product
> Date
> Price
> SalesAmount
> Quantity
> I need to write a report in Rs that has the following information:
> Products sold in a month and during the year...
> Currently I use a sub report with different date parameters ie date
between
> 1/1/04 and 1/1/05 for The year sold data, and Date = @.Month for the month
> sales.
> However, sub-reports seem to slow the report down significantly. Is there
a
> better way to do this using only one report?
> Thanks,
> Clint
>|||Thanks,
However, I need one table; this solution seems to require multiple
tables/charts.
Any other way?
"Ravi Mumulla (Microsoft)" <ravimu@.online.microsoft.com> wrote in message
news:eP2I7cWqEHA.3464@.tk2msftngp13.phx.gbl...
> Take a look at the [Employee Sales Summary.rdl] sample report that ships
> with the product.
> --
> Ravi Mumulla (Microsoft)
> SQL Server Reporting Services
> This posting is provided "AS IS" with no warranties, and confers no
rights.
> "AshVsAOD" <.> wrote in message
news:en4Pm2NqEHA.340@.TK2MSFTNGP10.phx.gbl...
> > Hello all,
> >
> > I have a sales table with the following columns:
> > Product
> > Date
> > Price
> > SalesAmount
> > Quantity
> >
> > I need to write a report in Rs that has the following information:
> > Products sold in a month and during the year...
> >
> > Currently I use a sub report with different date parameters ie date
> between
> > 1/1/04 and 1/1/05 for The year sold data, and Date = @.Month for the
month
> > sales.
> > However, sub-reports seem to slow the report down significantly. Is
there
> a
> > better way to do this using only one report?
> >
> > Thanks,
> > Clint
> >
> >
>|||This looks to me to be a good candidate for drill down. Your query should
summarize by year, month. Then in Reporting Services you add a grouping for
year and a grouping for month. You want the month grouping to be hidden and
toggled by the year. Then if they click on the year it expands and you get the
month. In the group header for the year, put in a sum expression so the
year total shows in the heading. Hope that makes sense to you.
Bruce L-C
MVP Reporting Services
"AshVsAOD" <.> wrote in message
news:eCvnWabqEHA.1300@.TK2MSFTNGP12.phx.gbl...
> Thanks,
> However, I need one table, this solution seems to require multiple
> tables/charts.
> Any other way?
> "Ravi Mumulla (Microsoft)" <ravimu@.online.microsoft.com> wrote in message
> news:eP2I7cWqEHA.3464@.tk2msftngp13.phx.gbl...
>> Take a look at the [Employee Sales Summary.rdl] sample report that ships
>> with the product.
>> --
>> Ravi Mumulla (Microsoft)
>> SQL Server Reporting Services
>> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>> "AshVsAOD" <.> wrote in message
> news:en4Pm2NqEHA.340@.TK2MSFTNGP10.phx.gbl...
>> > Hello all,
>> >
>> > I have a sales table with the following columns:
>> > Product
>> > Date
>> > Price
>> > SalesAmount
>> > Quantity
>> >
>> > I need to write a report in Rs that has the following information:
>> > Products sold in a month and during the year...
>> >
>> > Currently I use a sub report with different date parameters ie date
>> between
>> > 1/1/04 and 1/1/05 for The year sold data, and Date = @.Month for the
> month
>> > sales.
>> > However, sub-reports seem to slow the report down significantly. Is
> there
>> a
>> > better way to do this using only one report?
>> >
>> > Thanks,
>> > Clint
>> >
>> >
>>
>|||The Company Sales.rdl sample report, that shipped with SQL Server 2000
Reporting Services, is good example of using drilldown.
--
Bruce Johnson [MSFT]
Microsoft SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"Bruce Loehle-Conger" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:Oj0OQIiqEHA.2764@.TK2MSFTNGP11.phx.gbl...
> This looks to me to be a good candidate for drill down. Your query should
> summarize by year, month. Then in Reporting Services you add a grouping for
> year and a grouping for month. You want the month grouping to be hidden
> and toggled by the year. Then if they click on the year it expands and you
> get the month. In the group header for the year, put in a sum
> expression so the year total shows in the heading. Hope that makes sense
> to you.
> Bruce L-C
> MVP Reporting Services
> "AshVsAOD" <.> wrote in message
> news:eCvnWabqEHA.1300@.TK2MSFTNGP12.phx.gbl...
>> Thanks,
>> However, I need one table, this solution seems to require multiple
>> tables/charts.
>> Any other way?
>> "Ravi Mumulla (Microsoft)" <ravimu@.online.microsoft.com> wrote in message
>> news:eP2I7cWqEHA.3464@.tk2msftngp13.phx.gbl...
>> Take a look at the [Employee Sales Summary.rdl] sample report that ships
>> with the product.
>> --
>> Ravi Mumulla (Microsoft)
>> SQL Server Reporting Services
>> This posting is provided "AS IS" with no warranties, and confers no
>> rights.
>> "AshVsAOD" <.> wrote in message
>> news:en4Pm2NqEHA.340@.TK2MSFTNGP10.phx.gbl...
>> > Hello all,
>> >
>> > I have a sales table with the following columns:
>> > Product
>> > Date
>> > Price
>> > SalesAmount
>> > Quantity
>> >
>> > I need to write a report in Rs that has the following information:
>> > Products sold in a month and during the year...
>> >
>> > Currently I use a sub report with different date parameters ie date
>> between
>> > 1/1/04 and 1/1/05 for The year sold data, and Date = @.Month for the
>> month
>> > sales.
>> > However, sub-reports seem to slow the report down significantly. Is
>> there
>> a
>> > better way to do this using only one report?
>> >
>> > Thanks,
>> > Clint
>> >
>> >
>>
>>
>
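For what it's worth, the query behind Bruce's drill-down approach might look like this sketch (the column names come from the original post; the table name dbo.Sales is an assumption):

```sql
SELECT YEAR([Date])  AS SaleYear,
       MONTH([Date]) AS SaleMonth,
       Product,
       SUM(SalesAmount) AS TotalSales,
       SUM(Quantity)    AS TotalQuantity
FROM dbo.Sales
GROUP BY YEAR([Date]), MONTH([Date]), Product
```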

Better way to build a stored proc for an INSERT...

I've built a stored procedure where I'm inserting a row into two tables.
Both tables have a number of columns, so I have to pass a rather large
number of parameters to the stored proc, like the following:
INSERT INTO MyTable1 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm1, @.cParm2,
... @.cParm25)
INSERT INTO MyTable2 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm26,
@.cParm27, ... @.cParm50)
For any one row, however, at least a third of the columns are going to be
NULL, based on the value of one of the columns (a category column).
There's no opportunity to modify the table structure - it is what it is.
What I have "works", but I'm curious if there's a way that doesn't involve
as many parameters.
Thanks,
Kevin|||Kevin@.test.com wrote:
> I've built a stored procedure where I'm inserting a row into two
> tables.
> Both tables have a number of columns - and so I have to pass a rather
> larger number of parameters to the stored proc. Like follows
> INSERT INTO MyTable1 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm1,
> @.cParm2, ... @.cParm25)
> INSERT INTO MyTable2 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm26,
> @.cParm27, ... @.cParm50)
>
> For any one row, however, at least a third of the columns are going
> to be NULL, based on the value of one of the columns (a category
> column).
> There's no opportunity to modify the table structure - it is what it
> is. What I have "works", but I'm curious if there's a way that
> doesn't involve as many parameters.
>
> Thanks,
> Kevin
Write separate stored procedures for each "insert" type. So, let's say
your table has three logical implementations (design-issues aside), you
can write three separate insert SPs that only require the user to pass
the parameters that are needed.
The other option is to use defaults on the parameters, so if they are
not passed they default to an appropriate value:
Create Proc Test
@.Param1 INT = NULL,
@.Param2 INT = NULL
as
Select @.Param1, @.Param2 -- proc body goes here
go
Exec dbo.Test @.Param2 = 5
Exec dbo.Test @.Param1 = 3
Exec dbo.Test 1, 3
Exec dbo.Test
You may have to add some validation to the SP in the case where a user
leaves out a logically incorrect number of columns.
David Gugick
Imceda Software
www.imceda.com|||David,
Thanks!...you've given me some good ideas to seriously consider, especially
having 3 stored procs.
Kevin
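A sketch of David's two suggestions combined for one category (the category rule, procedure name, and parameters below are invented for illustration):

```sql
CREATE PROC dbo.MyTable1_Insert_CategoryA
    @cParm1 int,                 -- required for category A (assumed rule)
    @cParm2 varchar(50),
    @cParm3 datetime = NULL      -- unused by category A, defaults to NULL
AS
-- Validation: reject a logically incorrect combination of parameters
IF @cParm1 IS NULL
BEGIN
    RAISERROR('cParm1 is required for category A.', 16, 1)
    RETURN
END
INSERT INTO MyTable1 (MyCol1, MyCol2, MyCol3)
VALUES (@cParm1, @cParm2, @cParm3)
```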

Tuesday, February 14, 2012

Best way to insert large amounts of data from a webform to SQL Server 2005

Hi

I have a VB.net web page which generates a datatable of values (3 columns and on average about 1000-3000 rows).

What is the best way to get this data table into SQL Server? I can create a table on SQL Server no problem, but I've found simply looping through the datatable and doing 1000-3000 insert statements is slow (a few seconds). I'd like to make this as streamlined as possible, so I was wondering if there is a native way to insert all records in a batch via ADO.net or something.

Any ideas?
Thanks
Ed

Hi,

If you are going to insert or update a single table, use a SqlDataAdapter. But the table needs a primary or unique key for the SqlCommandBuilder to create the update query; otherwise it just inserts all rows into your table.

If you want to update for above scenario there are two ways.

1. Create a staging table in your database and insert your records using the SqlBulkCopy class, which can insert hundreds of thousands of rows quickly (I'm not sure about the exact rate), then create a stored procedure to update or insert rows from the staging table into the main table

2. You can write your combined update and insert command in the SqlDataAdapter's UpdateCommand property.

for example

Create a table testtable with an id (number) column and a name (text) column.

The SQL query will be like this:

update testtable
set name = @.namecolumn
Where id = @.idColumn
If @.@.Rowcount = 0
Insert into testtable (id,name) values(@.idColumn,@.nameColumn)

then add SqlParameters (@.idColumn, @.nameColumn) mapped to your data table column names

then the adapter will execute the query when you call the Update method

hope it helps
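A sketch of the staging-table procedure from option 1 (dbo.Staging, dbo.MainTable, and their columns are assumptions): bulk copy into the staging table first, then run an update-then-insert like this:

```sql
-- Update rows that already exist in the main table
UPDATE m
SET m.name = s.name
FROM dbo.MainTable AS m
JOIN dbo.Staging   AS s ON s.id = m.id

-- Insert rows that are new
INSERT INTO dbo.MainTable (id, name)
SELECT s.id, s.name
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT * FROM dbo.MainTable AS m WHERE m.id = s.id)

-- Empty the staging table for the next batch
TRUNCATE TABLE dbo.Staging
```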

Sunday, February 12, 2012

best way to get metadata

I have a query that gets the tables, columns, indexes, and index_columns for all tables in a database.

But I need to be able to select any database on the server
and then drill down into the tables, columns, etc.

I looked at sys.databases but can't see how to relate that to the
sys.tables to extend my query.

Is there a way?

I've looked at the information_schema area and catalog views
but I don't see a solution yet.

|||Using three-part names, add the database name to the object, e.g., Northwind.sys.tables.|||

Being in the database you want to inspect use the

sys.tables
sys.all_columns
sys.indexes
sys.index_columns

The information schema will provide you with most of the objects you want to inspect, but is not suitable for the indexes used.

Jens K. Suessmeyer.

http://www.sqlserver2005.de
--
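Putting Jens's list together, the drill-down query might look like this sketch (run it in the target database, or prefix each view with a database name as suggested above):

```sql
SELECT t.name AS table_name,
       i.name AS index_name,
       c.name AS column_name,
       ic.key_ordinal
FROM sys.tables AS t
JOIN sys.indexes AS i
     ON i.object_id = t.object_id
JOIN sys.index_columns AS ic
     ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.all_columns AS c
     ON c.object_id = ic.object_id AND c.column_id = ic.column_id
ORDER BY t.name, i.name, ic.key_ordinal
```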

|||

This has been answered in your identical post here.

It is best to not 'multi-post' like this. Post in one forum and wait for a response. That shows respect for the folks here since they don't have to spend time answering a question that was answered in another forum.

|||sorry for the multi-post but I thought maybe it was the wrong
forum to start with

Friday, February 10, 2012

best way to combine columns

Hi all,

I have a table with multiple rows with the same ID.

a) How do I combine all rows with the same ID into one row?

b) Is it better to do the combining in a stored procedure/trigger, or in a sub invoked by some sort of DataRepeater/DataGrid control?

Since I am trying to brush up my SQL, I'd appreciate any examples on top of a conceptual solution.

Thanks

I take it that my question was not so clear, so here is an example of what I want to do:

I have a sql 2000 table like this:

ID col1 col2

7777 itemx 12/02/07 00:00:10

7777 itemy 12/02/07 10:00:00

7777 itemz 12/02/07 12:10:60

8888 itemA 12/02/07 01:01:00

8888 itemB 12/02/07 02:00:00

..........................................

I'd like to combine all rows with the same ID together, like the following:

7777 itemx itemy itemz 12/02/07

8888 itemA itemB...................

The question has 3 parts:

1) What is the best way to do this: using a trigger/stored procedure, or just a VB sub with some sort of DataRepeater/DataList control?

2) Can you show me some examples of each way, if any?

3) Can Crystal Reports do something like this?

I am open to creating another table or just plain writing them out on the page.

Thanks

|||

Someone mentioned to me that 'trigger' could cause some confusion in reading the question. So, please skip the trigger part.

|||

Depending on how many records will be pulled out, you could do it at the report level or at the DB level. You could write a function that takes the ID as parameter and returns a concatenated string for items and use the function in the SELECT.

SELECT Id, dbo.fnGetItems(ID), col2...

FROM ...

and create the function with a SELECT as

SELECT @.val = ISNULL(@.val,'') + ' ' + Convert(Varchar, col2) FROM YourTable WHERE ID = @.Id

and return the @.val.
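Assembled from the fragments above, the function might look like this sketch (fnGetItems, YourTable, and the ItemName column are placeholder names):

```sql
CREATE FUNCTION dbo.fnGetItems (@Id int)
RETURNS varchar(1000)
AS
BEGIN
    DECLARE @val varchar(1000)
    SELECT @val = ISNULL(@val + ' ', '') + ItemName  -- concatenate one item per row
    FROM YourTable
    WHERE ID = @Id
    RETURN @val
END
```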

|||

Thanks ndinakar. Does anyone have any other solutions/ideas, or a clearer/more complete solution? Thanks

|||

Hi tvfoto,

This depends on whether this table has a primary key.

If yes, I would suggest you read everything into a DataSet, process the data combination within the DataSet, and update it back to the server, because in the .NET code you will have better flexibility for the combination logic.

If the table doesn't have a primary key, as ndinakar suggested, you can use a stored procedure to achieve this.

HTH. If this does not answer your question, please feel free to mark it as Not Answered and post your reply. Thanks!

|||

This table does not have a primary key. The following code works; however, I don't know how to separate them out as individual columns so I can give them a proper heading. Any suggestion? Thanks

//my user function.

create function GetItems(@.mId int)
returns varchar(1000)
As
Begin
    declare @.values varchar(1000)
    Set @.values = ''
    Select @.values = @.values + ',' + myItemColumnName from myTable where ID = @.mId
    return @.values
End
Go

//my aspx code

<body>
<form id="form1" runat="server">
<div>
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:myTableConnectionString %>"
    SelectCommand="SELECT DISTINCT [ID], dbo.GetItems(ID) as [xyzValues] FROM [myTable]">
</asp:SqlDataSource>

<asp:Repeater ID="Repeater1" runat="server" DataSourceID="SqlDataSource1">
<HeaderTemplate>
    <table style="background:#ebebeb">
    <tr><td colspan="5"> </td></tr>
    <tr style="text-align:left; background:yellow">
        <th>ID</th><th> </th>
        <th>item1,item2,item3,item4, item5...</th> ///////header for different items here...not a good way
    </tr>
</HeaderTemplate>
<ItemTemplate>
    <tr>
        <td style="width:25px"><%#DataBinder.Eval(Container.DataItem,"ID")%></td>
        <td> </td>
        <td><%#DataBinder.Eval(Container.DataItem,"xyzValues")%></td>.................................

</div></form></body>