
Tuesday, March 27, 2012

binding COUNT

Hi guys,

I want to count the number of times a particular value occurs in a table and display that data in a listbox.

Initially I planned to count the occurrences and bind that number to a variable for further manipulation, but I have no idea how to do the binding part.

How do I solve this?

Thanks

Are you trying to update a db column that contains the count? If you're trying to do that, you could use a trigger or daily/nightly processing to do this based on how often you want it updated, how frequently the table is accessed, etc. If you're trying to do it in code, you could just set your variable equal to "select count ... where..." and mess with it in the code.
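For the in-code route, the count can be read straight into a variable. A minimal sketch (the table, column, and value here are hypothetical; substitute your own):

```sql
-- TBL_Orders and Status are made-up names for illustration.
DECLARE @occurrences int

SELECT @occurrences = COUNT(*)
FROM TBL_Orders
WHERE Status = 'Shipped'

-- @occurrences is now an ordinary variable you can manipulate
-- before displaying it alongside the listbox data.
SELECT @occurrences AS OccurrenceCount
```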

Thanks,
Sam Lester (MSFT)

|||

Thanks, Sam!

Bind multi-table dataset to a datagrid

I have created a SQL procedure that returns a dataset with a varying number of tables like the following example:

RepID  PhoneUPS
-----  --------
3      3

RepID  PhoneUPS
-----  --------
4      0

RepID  PhoneUPS
-----  --------
5      2

No more results.
(9 row(s) returned)
@.RETURN_VALUE = 0

All of the tables have the same header row, but each is a separate table. From my experience, I am not able to bind more than one table to a datagrid. Does anyone have any suggestions on how I can go about displaying this full dataset on my aspx page? I've been going in circles on this for two days, so I'm open to any suggestions :)

Cheers,
Andrew

Can you create a UNION query in the stored procedure, rather than separate resultsets?

Failing that, you can manually add each row of each resultset to the Items collection of the Datagrid.|||Thanks for the tip. I've been working on my SQL procedure trying to incorporate the UNION in the SELECT statement. I am having problems with the logic of working it in, though. Here is the original SQL procedure I've created:

/* BEGIN Procedure */

ALTER PROCEDURE Rep_Status_Array

AS

/* Build array of ID numbers to cycle through */
DECLARE RepIDs_cursor CURSOR
FOR SELECT UserID
FROM TBL_Users
WHERE (TBL_Users.DepartmentID > 0)

OPEN RepIDs_cursor
DECLARE @userID int
FETCH NEXT FROM RepIDs_cursor INTO @userID

/* Begin WHILE loop to collect data for each ID */
WHILE (@@FETCH_STATUS <> -1)
BEGIN
SELECT
RepID = @userID,

PhoneUPS = (SELECT Count(*) FROM TBL_UpEntry WHERE (TypeID = 11) AND (SalesmanID = @userID)),

LOTUPS = (SELECT Count(*) FROM TBL_UpEntry WHERE (TypeID = 1) AND (SalesmanID = @userID)),

CVR = (SELECT Count(*) FROM TBL_UpEntry WHERE (TypeID = 1) AND (SalesmanID = @userID))

FETCH NEXT FROM RepIDs_cursor INTO @userID
END
/* END WHILE loop */

CLOSE RepIDs_cursor
DEALLOCATE RepIDs_cursor

/* END Procedure */

The problem I'm having with this is that each time through the WHILE loop creates a new table in the dataset that is returned by the procedure. Any suggestions?|||Is it even possible to use UNION or UNION ALL within a WHILE loop?|||To the best of my knowledge, it is not possible to use the UNION statement in conjunction with a WHILE loop.

I'm going to go with Douglas' second recommendation and manually create a table within my page, adding each row one at a time.|||Another alternative:

Create a temporary table, and then INSERT all selected rows into the temp table, and then at the end of the SP,

SELECT * FROM #temp

(with any required ORDER BY).|||Thanks for the help, Douglas, much appreciated!

Andrew
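For reference, applying the temp-table idea to the Rep_Status_Array procedure from this thread might look roughly like this sketch: the same cursor, but each iteration INSERTs into #temp, so the procedure returns a single resultset.

```sql
ALTER PROCEDURE Rep_Status_Array
AS
CREATE TABLE #temp (RepID int, PhoneUPS int, LOTUPS int, CVR int)

DECLARE RepIDs_cursor CURSOR
FOR SELECT UserID FROM TBL_Users WHERE TBL_Users.DepartmentID > 0

OPEN RepIDs_cursor
DECLARE @userID int
FETCH NEXT FROM RepIDs_cursor INTO @userID

WHILE (@@FETCH_STATUS <> -1)
BEGIN
    -- Collect one row per rep instead of emitting a separate resultset
    INSERT INTO #temp (RepID, PhoneUPS, LOTUPS, CVR)
    SELECT @userID,
        (SELECT COUNT(*) FROM TBL_UpEntry WHERE TypeID = 11 AND SalesmanID = @userID),
        (SELECT COUNT(*) FROM TBL_UpEntry WHERE TypeID = 1  AND SalesmanID = @userID),
        (SELECT COUNT(*) FROM TBL_UpEntry WHERE TypeID = 1  AND SalesmanID = @userID)  -- CVR repeats TypeID = 1 as in the original post

    FETCH NEXT FROM RepIDs_cursor INTO @userID
END

CLOSE RepIDs_cursor
DEALLOCATE RepIDs_cursor

SELECT * FROM #temp ORDER BY RepID
```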

Tuesday, March 20, 2012

bigint field vs int

I am creating an application in which the number of records will be beyond 100,000, so I need to set the primary keys to bigint instead of int.

I need to know about the performance and the differences.

Will it affect performance much, and will I have to change any code such as delete and update statements?

A standard Int ranges from -2,147,483,648 through 2,147,483,647 so you can use them to store values much larger than 100000.

If you do use bigints, you would need to use Int64 in your code

|||

Hi,

There will be some performance difference between int and bigint. Bigint is slower on a 32-bit processor, but on a 64-bit processor the performance should not be a problem.

|||

Int will give you up to 2,147,483,647 while Bigint gives up to 9,223,372,036,854,775,807. The only difference in your stored procedure is that you will need to change the parameter declarations from INT to BIGINT.

>i need to know about the performance and difference
Obviously more working memory will be required for BIGINT than for INT (8 bytes instead of 4 for each value); however, SQL Server will still work very efficiently provided there is enough RAM. If you only have enough RAM for INT, changing to BIGINT will put you at a disadvantage, so make sure you have enough RAM.
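As a minimal sketch of the parameter change described above (the procedure and table names here are hypothetical):

```sql
-- Before: parameter sized for int keys
CREATE PROCEDURE Delete_Record
    @RecordID int          -- change this...
AS
DELETE FROM TBL_Records WHERE RecordID = @RecordID
GO

-- After: the only change needed is the parameter (and column) type
ALTER PROCEDURE Delete_Record
    @RecordID bigint       -- ...to bigint
AS
DELETE FROM TBL_Records WHERE RecordID = @RecordID
```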


Bigint autoincrement question

Hi All,
I wonder if I can get a bigint autoincrement field where the number begins with the current year + an autonumber.
Does someone know if it is possible and, if yes, how?
Thanks in advance.
Cheers, Wim

create table test_t
(col1 bigint identity (2004, 1), col2 char (5)
)
go

declare @counter tinyint

set @counter = 1

while @counter <> 5
begin
insert into test_t
select 'test'
set @counter = @counter + 1
end
go

select * from test_t
go|||Originally posted by aldo_2003
create table test_t
(col1 bigint identity (2004, 1), col2 char (5)
)
go

declare @counter tinyint

set @counter = 1

while @counter <> 5
begin
insert into test_t
select 'test'
set @counter = @counter + 1
end
go

select * from test_t
go

But it doesn't change the year automatically|||How do you mean automatically?

Can you clarify ?|||Originally posted by aldo_2003
How do you mean automatically ?

Can you clarify ?

The id should always start with the current year, so when it is 2005 the id should start with 2005 1 + increment|||Example please?|||Originally posted by aldo_2003
Example please?

It should be something like
create table test_t
(col1 bigint identity (datepart(yy, getdate())1, 1), col2 char (5)
)
go|||An example of what it would look like in a table.|||You should store your year value and your ID value separately, and then use a trigger to increment the ID value.

[YearColumn] defaults to year(getdate())

[IDColumn] is set by a trigger (Insert only!) to:

set [IDColumn] = isnull((select Max([IDCOLUMN]) from YourTable where [YearColumn] = Year(Getdate())), 0) + 1

Then you can combine the two values in your SQL code, or create a calculated column that combines the two.|||I like your solution, but the problem is that I can't set that column to primary key.|||You can add a unique index to a computed column. You may be able to make it a primary key, though I'm not certain about that. In any case, you could set the Year column and the ID column together as a composite primary key.
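For illustration, the year-plus-sequence scheme above might be sketched like this (YourTable, YearColumn, IDColumn, and Payload are placeholder names; like the Max()-based suggestion, the trigger assumes single-row inserts):

```sql
-- Sketch only: placeholder table and column names.
CREATE TABLE YourTable (
    YearColumn int NOT NULL DEFAULT (YEAR(GETDATE())),
    IDColumn   int NULL,
    Payload    varchar(50) NOT NULL
)
GO

-- Assign the next per-year sequence number after each insert.
-- Assumes single-row inserts; a multi-row INSERT would need extra care.
CREATE TRIGGER trg_YourTable_SetID ON YourTable
AFTER INSERT
AS
UPDATE y
SET IDColumn = ISNULL((SELECT MAX(t.IDColumn)
                       FROM YourTable t
                       WHERE t.YearColumn = y.YearColumn), 0) + 1
FROM YourTable y
WHERE y.IDColumn IS NULL
GO

-- Combine the two parts when displaying the key, e.g. 2004-1, 2004-2, ...
SELECT CAST(YearColumn AS varchar(4)) + '-' + CAST(IDColumn AS varchar(10)) AS FullID
FROM YourTable
```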

Big queries return empty result set

Hello guys,

MS SQL Server 2000 behaves strangely with big queries that involve a relatively large number of tables:
If I just enumerate the columns I want to receive, the result set is empty. Adding * to the column list, without making any change to the where clause, seems to solve the problem, but I guess that's not best practice.
The most puzzling part is that this behavior is not stable, so I suppose it has something to do with the server itself, not the application.
Has anybody suffered this problem, and what solution was adopted?
Thanks for any information you can provide.

Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query with the exception of one purchased package that sometimes can create monsters.

Can you give me a rough idea of how big your database is (in gigabytes) and how big your result set is (rows, columns, and megabytes)? I'll try to reproduce the behavior here with a known good server.

-PatP|||Definitely my queries never get this volume. I have a join of about 15 tables with 10 columns in each. Empty recordset is an ocasional behavior, i mean today the query may work perfectly, tomorrow absolutely exact query with absolutely exact parameters without any modification made to the database fails.
I was told that this could be caused by particular, localized version of SQL server that we use (sql server 2000 SP3 spanish, 8.00.760). Could you confirm that?

Originally posted by Pat Phelan
Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query with the exception of one purchased package that sometimes can create monsters.

Can you give me a rough idea of how big your database is (in gigabytes) and how big your result set is (rows, columns, and megabytes)? I'll try to reproduce the behavior here with a known good server.

-PatP|||Originally posted by Pat Phelan
Most of my queries are small (less than 200 columns and 500,000 rows), so maybe I'm not hitting the kind of volume that you are, but I've never heard of that behavior! I also rarely push beyond 30-50 tables in a single query with the exception of one purchased package that sometimes can create monsters.


:D

Why not POST the query...|||Originally posted by Brett Kaiser
:D

Why not POST the query...

As you wish:
--NON-EMPTY RESULTSET--

select bb.i_object_id as bb_aa, bb.i_object_type_id as bb_ab,
<skipped about 50 columns>
bl.i_operation_id as bl_ak, bl.integra_operation_id as bl_al,
* <-- this allows me receive data
from i_sysobject bb, part_list_item bc,
i_relation bd,
i_sysobject be, part_list bf,
i_relation bg, i_sysobject bh,
production_order bi, i_relation bj,
i_sysobject bk, operation bl
where bc.i_part_list_item_id = bb.i_object_id and
bf.state = ? and
bf.i_part_list_id = be.i_object_id and
bi.i_production_order_id = bh.i_object_id and
bl.i_operation_id = bk.i_object_id and
bc.i_part_list_item_id=bd.i_relation_child_object_id and
bd.i_relation_parent_object_id=bf.i_part_list_id and
bf.i_part_list_id=bg.i_relation_child_object_id and
bg.i_relation_parent_object_id=bi.i_production_order_id and
bf.i_part_list_id=bj.i_relation_child_object_id and
bj.i_relation_parent_object_id=bl.i_operation_id
order by bf.i_part_list_id asc

--EMPTY RESULTSET--
select bb.i_object_id as bb_aa, bb.i_object_type_id as bb_ab,
<skipped about 50 columns>
bl.i_operation_id as bl_ak, bl.integra_operation_id as bl_al
from i_sysobject bb, part_list_item bc,
i_relation bd,
i_sysobject be, part_list bf,
i_relation bg, i_sysobject bh,
production_order bi, i_relation bj,
i_sysobject bk, operation bl
where bc.i_part_list_item_id = bb.i_object_id and
bf.state = ? and
bf.i_part_list_id = be.i_object_id and
bi.i_production_order_id = bh.i_object_id and
bl.i_operation_id = bk.i_object_id and
bc.i_part_list_item_id=bd.i_relation_child_object_id and
bd.i_relation_parent_object_id=bf.i_part_list_id and
bf.i_part_list_id=bg.i_relation_child_object_id and
bg.i_relation_parent_object_id=bi.i_production_order_id and
bf.i_part_list_id=bj.i_relation_child_object_id and
bj.i_relation_parent_object_id=bl.i_operation_id
order by bf.i_part_list_id asc

bf.state = ? receives the SAME parameter in both cases.|||Everything is identity, isn't it...

Is one of these tables a driver? Like what you want to base your result set on?

Lots of self-referencing...

Maybe you can use derived tables..

SELECT * FROM (SELECT * FROM ...join the table that relate) AS A
LEFT JOIN (SELECT * FROM ..same thing) AS B
ON A.key = B.Key

Here's your code cleaned up some..Not sure if it's doing the same thing...

I think it is...and easier to see what you're trying to do

SELECT *
FROM i_sysobject bb
LEFT JOIN part_list_item bc ON bc.i_part_list_item_id = bb.i_object_id
LEFT JOIN i_relation bd ON bc.i_part_list_item_id=bd.i_relation_child_object_id
LEFT JOIN i_sysobject be ON bf.i_part_list_id = be.i_object_id
LEFT JOIN part_list bf ON bd.i_relation_parent_object_id=bf.i_part_list_id
LEFT JOIN i_relation bg ON bf.i_part_list_id=bg.i_relation_child_object_id
LEFT JOIN i_sysobject bh ON bi.i_production_order_id = bh.i_object_id
LEFT JOIN production_order bi ON bg.i_relation_parent_object_id=bi.i_production_order_id
LEFT JOIN i_relation bj ON bf.i_part_list_id=bj.i_relation_child_object_id
LEFT JOIN i_sysobject bk ON bl.i_operation_id = bk.i_object_id
LEFT JOIN operation bl ON bj.i_relation_parent_object_id=bl.i_operation_id
WHERE bf.state = ?

Saturday, February 25, 2012

Better way to build a stored proc for an INSERT...

I've built a stored procedure where I'm inserting a row into two tables.
Both tables have a number of columns - and so I have to pass a rather larger
number of parameters to the stored proc. Like follows
INSERT INTO MyTable1 (MyCol1, MyCol2, ... MyCol25) VALUES (@cParm1, @cParm2,
... @cParm25)
INSERT INTO MyTable2 (MyCol1, MyCol2, ... MyCol25) VALUES (@cParm26,
@cParm27, ... @cParm50)
For any one row, however, at least a third of the columns are going to be
NULL, based on the value of one of the columns (a category column).
There's no opportunity to modify the table structure - it is what it is.
What I have "works", but I'm curious if there's a way that doesn't involve
as many parameters.
Thanks,
Kevin

Kevin@.test.com wrote:
> I've built a stored procedure where I'm inserting a row into two
> tables.
> Both tables have a number of columns - and so I have to pass a rather
> larger number of parameters to the stored proc. Like follows
> INSERT INTO MyTable1 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm1,
> @.cParm2, ... @.cParm25)
> INSERT INTO MyTable2 (MyCol1, MyCol2, ... MyCol25) VALUES (@.cParm26,
> @.cParm27, ... @.cParm50)
>
> For any one row, however, at least a third of the columns are going
> to be NULL, based on the value of one of the columns (a category
> column).
> There's no opportunity to modify the table structure - it is what it
> is. What I have "works", but I'm curious if there's a way that
> doesn't involve as many parameters.
>
> Thanks,
> Kevin
Write separate stored procedures for each "insert" type. So, let's say
your table has three logical implementations (design-issues aside), you
can write three separate insert SPs that only require the user pass
those that are asked.
The other option is to use defaults on the parameters, so if they are
not passed they default to an appropriate value:
Create Proc Test
@Param1 INT = NULL,
@Param2 INT = NULL
as
Exec dbo.Test @Param2 = 5
Exec dbo.Test @Param1 = 3
Exec dbo.Test 1, 3
Exec dbo.Test
You may have to add some validation to the SP in the case where a user
leaves out a logically incorrect number of columns.
David Gugick
Imceda Software
www.imceda.com|||David,
Thanks!...you've given me some good ideas to seriously consider, especially
having 3 stored procs.
Kevin

Better query?

I have a table from a 3rd party application that stores the time that
events occurred as the number of seconds since Jan 1 1970. I want to
perform an aggregate function on the values stored in each event, grouped
by date and present the sorted results. Here's what I have:
SELECT
DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') AS PointDate,
Avg(Value)
FROM Data
GROUP BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM')
ORDER BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') ASC
Is there an easier way to write this without duplicating the DateAdd
calculation three times? I can't use Stored Procedures (I'm not allowed
to modify the database in any way.)
-mdb>> have a table from a 3rd party application that stores the time that events occurred as the number of seconds since 1970-01-01 <<
Oh yeah, that makes sense :) !! I always hated the term "legacy data"
when what we meant was "family curse from Hell!" instead.
Build a table like this:
CREATE TABLE FoobarDates
(foobar_date CHAR (10) NOT NULL,
start_second INTEGER NOT NULL,
finish_second INTEGER NOT NULL,
PRIMARY KEY (start_second, finish_second));
1) Avoid math in SQL -- it is designed to do JOINs, not algebra.
2) An ORDER BY on a SELECT is **meaningless** -- read any basic SQL
book. Indexes matter.
3) Use a spreadsheet to do the computations for the table. Easy and
faster than code.|||you can use a derived table Michael...
SELECT PointDate, Avg_Value = Avg( Value )
FROM (
SELECT PointDate, DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
Value
FROM Data ) AS dt
GROUP BY PointDate
ORDER BY PointDate ASC
Tony
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Michael Bray" <mbray@.makeDIntoDot_ctiusaDcom> wrote in message
news:Xns9739BF2454556mbrayctiusacom@.207.46.248.16...
>I have a table from a 3rd party application that stores the time that
> events occurred as the number of seconds since Jan 1 1970. I want to
> perform an aggregate function on the values stored in each event, grouped
> by date and present the sorted results. Here's what I have:
> SELECT
> DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') AS PointDate,
> Avg(Value)
> FROM Data
> GROUP BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM')
> ORDER BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') ASC
> Is there an easier way to write this without duplicating the DateAdd
> calculation three times' I can't use Stored Procedures (I'm not allowed
> to modify the database in any way.)
> -mdb|||Sorry, that should be...
SELECT PointDate, Avg_Value = Avg( Value )
FROM (
SELECT PointDate = DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
Value
FROM Data ) AS dt
GROUP BY PointDate
ORDER BY PointDate ASC
Tony.
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Tony Rogerson" <tonyrogerson@.sqlserverfaq.com> wrote in message
news:eaoeBd4CGHA.532@.TK2MSFTNGP15.phx.gbl...
> you can use a derived table Michael...
> SELECT PointDate, Avg_Value = Avg( Value )
> FROM (
> SELECT PointDate, DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
> Value
> FROM Data ) AS dt
> GROUP BY PointDate
> ORDER BY PointDate ASC
> Tony
> --
> Tony Rogerson
> SQL Server MVP
> http://sqlserverfaq.com - free video tutorials
>
> "Michael Bray" <mbray@.makeDIntoDot_ctiusaDcom> wrote in message
> news:Xns9739BF2454556mbrayctiusacom@.207.46.248.16...
>|||"--CELKO--" <jcelko212@.earthlink.net> wrote in
news:1135732983.997588.94360@.o13g2000cwo.googlegroups.com:
> Oh yeah, that makes sense :) !! I always hated the term "legacy data"
> when what we meant was "family curse from Hell!" instead.
As a matter of fact, this is NOT legacy Data. In fact, it is an
application that would not have existed 10 years ago, because the
technology that is represented by the data in the database didn't exist.
(I'm not going to mention the vendor, but it is an IP Telephony monitoring
system.)
The rest of your post sounds like a troll, so I'm not going to comment,
except for this comment.
-mdb|||"Tony Rogerson" <tonyrogerson@.sqlserverfaq.com> wrote in
news:eaoeBd4CGHA.532@.TK2MSFTNGP15.phx.gbl:

> you can use a derived table Michael...
>
Wow... you know I had always tried to do something similar, but I never
could get it to work because I didn't know that I had to include the 'AS
<alias>'. Thanks! Now I get to go back and re-write a bunch of stored
procedures in other databases I've worked on that I used memory tables in!
:)
-mdb

Friday, February 24, 2012

Best Way to transfer tables via Terminal Server ?

I have updated a number of tables at home (size is around
200MB) and I am able to connect to office via Terminal
Server.
I would like to know what is the best tools I can use to
transfer the tables to my office and update them.
Should I transfer the whole database (around 1 GB) and use
DTS on Terminal Server to update the whole database ?
Thanks
If you have any backup encryption type of software (SQL LiteSpeed) then that
would be the way to go. A gig may be painful, really a call for your
company.
"Peter" <anonymous@.discussions.microsoft.com> wrote in message
news:054e01c593f5$1d51fc30$a401280a@.phx.gbl...
>I have updated a number of tables at home (size is around
> 200MB) and I am able to connect to office via Terminal
> Server.
> I would like to know what is the best tools I can use to
> transfer the tables to my office and update them.
> Should I transfer the whole database (around 1 GB) and use
> DTS on Terminal Server to update the whole database ?
> Thanks
|||Dear Chris,
Is there any way that I only transfer those tables to the Terminal Server
and use DTS there to update the SQL Server ?
Thanks
Peter
"ChrisR" <noemail@.bla.com> wrote in message
news:eMhOclFlFHA.3260@.TK2MSFTNGP10.phx.gbl...
> If you have any backup encryption type of software (SQL LiteSpeed) then
> that would be the way to go. A gig may be painful, really a call for your
> company.
>
> "Peter" <anonymous@.discussions.microsoft.com> wrote in message
> news:054e01c593f5$1d51fc30$a401280a@.phx.gbl...
>
|||Hi
If you don't have SQL Server port access, or file access to the remote
server, you can't transfer the data from your home machine.
Terminal server does not allow local copy to a remote server.
Regards
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Peter" <anonymous@.discussions.microsoft.com> wrote in message
news:eq4SdMKlFHA.2852@.TK2MSFTNGP14.phx.gbl...
> Dear Chris,
> Is there any way that I only transfer those tables to the Terminal Server
> and use DTS there to update the SQL Server ?
> Thanks
> Peter
> "ChrisR" <noemail@.bla.com> wrote in message
> news:eMhOclFlFHA.3260@.TK2MSFTNGP10.phx.gbl...
>
|||Dear Mike,
I am able to access the Terminal Server. From the Terminal Server, I am
able to connect to the SQL Server via Remote Access. On the other hand, we
have installed Enterprise Manager on the Terminal Server.
Thanking you in anticipation.
Peter
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:Oe9ZlbNlFHA.1412@.TK2MSFTNGP09.phx.gbl...
> Hi
> If you don't have SQL Server port access, or file access to the remote
> server, you can't transfer the data from your home machine.
> Terminal server does not allow local copy to a remote server.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Peter" <anonymous@.discussions.microsoft.com> wrote in message
> news:eq4SdMKlFHA.2852@.TK2MSFTNGP14.phx.gbl...
>


Monday, February 13, 2012

Best way to get the next auto number

Hi all, let's say I have a table:
CREATE TABLE test (
[MyID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [varchar] (50) NOT NULL
)
How do I know the next value that MyID is going to have before inserting a
new record?
What is the best way to do that?
Get the max number plus one? Wouldn't it be expensive to do it this
way?
Thanks
ky
Hi
CREATE TABLE test (
[MyID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [varchar] (50) NOT NULL
)
DECLARE @nextid INT
INSERT INTO test([Name]) VALUES ('John')
SELECT @nextid = SCOPE_IDENTITY()
PRINT 'Next id is ' + CAST(@nextid + 1 AS VARCHAR(10))
DROP TABLE Test
--Or not using an identity property
CREATE PROC Get_nextID
@ID char(20),
@nextID numeric(18) output
as
update NextIDs
set @nextID = NextID = NextID + 1
where ID = @ID
<nonamehkg@.hotmail.com> wrote in message
news:1141275178.710563.65560@.i40g2000cwc.googlegroups.com...
> Hi all, let say I have a table:
> CREATE TABLE test (
> [MyID] [int] IDENTITY (1, 1) NOT NULL ,
> [Name] [varchar] (50) NOT NULL
> )
> How do I know the next value that MyID gonna have before inserting a
> new record?
> What is the best way to do that?
> Get the max number plus one? Wouldn't it be expensive to do it this
> way?
>
> Thanks
> --
> ky
>
|||nonamehkg@.hotmail.com wrote:
> Hi all, let say I have a table:
> CREATE TABLE test (
> [MyID] [int] IDENTITY (1, 1) NOT NULL ,
> [Name] [varchar] (50) NOT NULL
> )
> How do I know the next value that MyID gonna have before inserting a
> new record?
> What is the best way to do that?
>
You don't. You retrieve the IDENTITY value afterwards. Take a look at
the SCOPE_IDENTITY function in Books Online.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
|||To add to the other responses, if you need to assign a known surrogate key
value before insertion, you need to use something other than identity. One
method is to use uniqueidentifier so that you can assign the value with
NEWID() or via application code. Another method is to create a table that
contains a row with the last (or next) assigned key value and use that table
to update and retrieve the key value prior to insertion.
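A minimal sketch of both approaches; the table and column names here (Employees, KeyTable, etc.) are hypothetical illustrations, not from the thread:

```sql
-- Approach 1 (sketch): assign a uniqueidentifier key before the insert.
DECLARE @NewKey uniqueidentifier
SET @NewKey = NEWID()                      -- value is known prior to insertion
INSERT INTO Employees (EmpKey, EmpName) VALUES (@NewKey, 'John')

-- Approach 2 (sketch): one row per table in a key table,
-- incremented and captured in a single atomic UPDATE.
DECLARE @NextVal int
UPDATE KeyTable
SET @NextVal = LastValue = LastValue + 1   -- grab and increment in one statement
WHERE TableName = 'Employees'
```

Because the UPDATE in approach 2 both increments and reads the value in one statement, two concurrent callers cannot receive the same number.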
Hope this helps.
Dan Guzman
SQL Server MVP
<nonamehkg@.hotmail.com> wrote in message
news:1141275178.710563.65560@.i40g2000cwc.googlegro ups.com...
> Hi all, let say I have a table:
> CREATE TABLE test (
> [MyID] [int] IDENTITY (1, 1) NOT NULL ,
> [Name] [varchar] (50) NOT NULL
> )
> How do I know the next value that MyID gonna have before inserting a
> new record?
> What is the best way to do that?
> Get the max number plus one? Wouldn't it be expensive to do it this
> way?
>
> Thanks
> --
> ky
>
|||Thanks everyone, I think SCOPE_IDENTITY is what I should use.
I actually need to know the id of a newly inserted record, and then
create a folder using that id as its name.
It helps a lot, thanks again.
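The folder-naming use case above can be sketched as follows; the procedure name and the way the id is handed back are illustrative assumptions, not from the thread:

```sql
CREATE PROC AddRecordAndGetID
    @Name varchar(50),
    @NewID int OUTPUT
AS
SET NOCOUNT ON
INSERT INTO test ([Name]) VALUES (@Name)
SET @NewID = SCOPE_IDENTITY()   -- id of the row just inserted, in this scope
GO

-- Hypothetical caller:
-- DECLARE @id int
-- EXEC AddRecordAndGetID 'John', @id OUTPUT
-- The application then creates a folder named CAST(@id AS varchar(10)).
```

SCOPE_IDENTITY() is preferred over @@IDENTITY here because it ignores identity values generated by triggers in other scopes.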

Sunday, February 12, 2012

Best way to generate next sequence number of a table.

Hi,
Can you tell me the best way to get the next unique sequence
integer for a table. My application supports multiple users,
and at a single point in time users may add a new row to the table, but it
should still generate a unique next available integer value.
For eg: my table contains EMPID int, EMP_Name varchar(50)
EMPID is a primary key.
Can anyone suggest the best way to do it?
Thanks in advance.
Venkat

Hi,
create table Employees
(
EmpID int identity(1, 1) not null, -- this will increment by one with each insert
Emp_Name varchar(50)
)
HTH
Peter
"Venkat" <stammana@.palantirsolutions.com> wrote in message
news:eb$gxjFSGHA.4300@.TK2MSFTNGP14.phx.gbl...
> Hi,
>
> Can you tell me the best way to get a next unique sequence
> integer number which fits in a table. My application supports multi users
> and at a single point of time users may add a new row to the table. But it
> should fit to generate a unique next available integer value.
>
> For eg: my table contains EMPID int, EMP_Name varchar(50)
> EMPID is a primary key.
>
> Can any one suggest me the best way to do it?
>
> Thanks in advance.
>
> --
> Venkat
>|||After doing an insert, you can get the most recently inserted id with:
SELECT SCOPE_IDENTITY()|||I would re-think this design. Where is the check digit so I can
validate it? Why is the employee identifier a varying length integer
issued internally where it cannot be verified?|||> Why is the employee identifer a varying length integer
> issued internally where it cannot be verified?
can you suggest a better alternative?

Best way to do batch inserts

All -
I've read up quite a bit on doing batch inserts where the scenario
involves inserting a good number of records (maybe 1000?) at a single time.
I definitely do not want to use bulk inserts, so let's rule that out
immediately. It is also my opinion that it is not efficient to pass the
"values" as a string (or whatever) and have a loop in a stored proc that
parses the string and then inserts repeatedly (if anyone disagrees with this
statement, please let me know).
Having concluded the above, it leaves me with two options:
1. Concatenate a string of insert/values statements
2. Concatenate a string of insert/select/union all statements where the
"select/union all" represents my values.
First of all, does anyone have any metrics as to which of the approaches
above performs better?
Secondly, in a multi-user, concurrent environment, which is better?
a) Putting all inserts in a single transaction
b) Putting each insert in its own transaction
I understand that there is a lot more I/O with option (b), but I'm of the
opinion that option (a) can lead to table locking etc. Which should I worry
about more?
For now, these are my questions. I'm not looking for the magic bullet here;
I will do some testing myself, but I wanted to poll the minds of the smart
folk who read these posts to see if anyone had already done this or had
opinions.
Thanks.
ice|||A 1000 rows at a time is not much of a challenge. I don't understand why
you've ruled out bulk inserts. They can be very fast. If you don't want to
insert your data directly into the target tables via bulk insert, you could
load staging tables that way and then transfer over via SQL.
As for larger inserts, it boils down to how much concurrency do you need?
If you can take some downtime, then it is faster to do things in a single
chunk (some exceptions do apply). However, if you can't afford to be down
for too long, then you may have to do a bunch of mini-transactions with,
100 - 1000 rows at a time.
HTH
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com
.
"Ice" <ice@.nospam.com> wrote in message
news:uR0lcN9JFHA.3500@.TK2MSFTNGP14.phx.gbl...
All -
I've read up quite a bit on simply doing batch inserts where the scenario
involves inserting a good number of records (maybe a 1000?) at single time.
I definitely do not want to use bulk inserts, so let's rule that out
immediately. It is also my opinion that is not efficient to pass the
"values" as string (or whatever) and have a loop in a stored proc that
parses the string an the insert repeatedly (if anyone disagrees with this
statement, please let me know).
Having concluded the above, it leaves me with two options:
1.Concate a string of insert/values statements
2.Concate a string of insert/select/union all statements where the
"select/union all" represents my values.
First of all does anyone have any metrics as to which of the approaches
above peforms better?
Secondly, in a multi-user, concurrent environment, which is better?
a)Putting all inserts in a single transaction
b)Putting each insert in a single transaction
I understand that there is a lot more I/O with option (b), but I'm of the
opinion that option (a) can lead to table locking etc. Which should I worry
about more?
For now, these are my questions. I'm not looking for the magic bullet here,
I will do some testing myself, but I wanted to poll the minds of the smart
folk who read these posts to see if anyone had already done this or had
opinions
Thanks.
ice|||Thanks for your quick reply.
I know that "Bulk Inserts" will get this done fast. But I'm looking for a
generic solution that would work for different databases without having to
use their specific implementations to solve the problem. I think using the
"staging" idea in this scenario is probably adding more work than might be
necessary. When you say "via SQL" what are you suggesting I use?
With regards to concurrency and downtime, I want to operate under the assumption
that there will be no downtime. So based on that, it would seem that you
are suggesting that inserting ~1000 records in a single transaction is
better than ~1000 individual transactions. Am I correct in this
deduction?
ice
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:#4LF2W9JFHA.1280@.TK2MSFTNGP09.phx.gbl...
> A 1000 rows at a time is not much of a challenge. I don't understand why
> you've ruled out bulk inserts. They can be very fast. If you don't want
to
> insert your data directly into the target tables via bulk insert, you
could
> load staging tables that way and then transfer over via SQL.
> As for larger inserts, it boils down to how much concurrency do you need?
> If you can take some downtime, then it is faster to do things in a single
> chunk (some exceptions do apply). However, if you can't afford to be down
> for too long, then you may have to do a bunch of min-transactions with,
say,
> 100 - 1000 rows at a time.
> HTH
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinnaclepublishing.com
> .
> "Ice" <ice@.nospam.com> wrote in message
> news:uR0lcN9JFHA.3500@.TK2MSFTNGP14.phx.gbl...
> All -
> I've read up quite a bit on simply doing batch inserts where the scenario
> involves inserting a good number of records (maybe a 1000?) at single
time.
> I definitely do not want to use bulk inserts, so let's rule that out
> immediately. It is also my opinion that is not efficient to pass the
> "values" as string (or whatever) and have a loop in a stored proc that
> parses the string an the insert repeatedly (if anyone disagrees with this
> statement, please let me know).
> Having concluded the above, it leaves me with two options:
> 1.Concate a string of insert/values statements
> 2.Concate a string of insert/select/union all statements where the
> "select/union all" represents my values.
> First of all does anyone have any metrics as to which of the approaches
> above peforms better?
> Secondly, in a multi-user, concurrent environment, which is better?
> a)Putting all inserts in a single transaction
> b)Putting each insert in a single transaction
> I understand that there is a lot more I/O with option (b), but I'm of the
> opinion that option (a) can lead to table locking etc. Which should I
worry
> about more?
> For now, these are my questions. I'm not looking for the magic bullet
here,
> I will do some testing myself, but I wanted to poll the minds of the smart
> folk who read these posts to see if anyone had already done this or had
> opinions
> Thanks.
> ice
>|||> First of all does anyone have any metrics as to which of the approaches
> above peforms better?
To add to Tom's response, I ran a quick test from Query Analyzer and found
the separate inserts outperformed the UNION ALL method when SET NOCOUNT ON
was specified.
Hope this helps.
Dan Guzman
SQL Server MVP
"Ice" <ice@.nospam.com> wrote in message
news:uR0lcN9JFHA.3500@.TK2MSFTNGP14.phx.gbl...
> All -
> I've read up quite a bit on simply doing batch inserts where the scenario
> involves inserting a good number of records (maybe a 1000?) at single
> time.
> I definitely do not want to use bulk inserts, so let's rule that out
> immediately. It is also my opinion that is not efficient to pass the
> "values" as string (or whatever) and have a loop in a stored proc that
> parses the string an the insert repeatedly (if anyone disagrees with this
> statement, please let me know).
> Having concluded the above, it leaves me with two options:
> 1.Concate a string of insert/values statements
> 2.Concate a string of insert/select/union all statements where the
> "select/union all" represents my values.
> First of all does anyone have any metrics as to which of the approaches
> above peforms better?
> Secondly, in a multi-user, concurrent environment, which is better?
> a)Putting all inserts in a single transaction
> b)Putting each insert in a single transaction
> I understand that there is a lot more I/O with option (b), but I'm of the
> opinion that option (a) can lead to table locking etc. Which should I
> worry
> about more?
> For now, these are my questions. I'm not looking for the magic bullet
> here,
> I will do some testing myself, but I wanted to poll the minds of the smart
> folk who read these posts to see if anyone had already done this or had
> opinions
> Thanks.
> ice
>|||Doing 1000 individual transactions will take longer end-to-end. However,
each little insert is likely to take not much time by itself. A 1000 row
insert would take longer than a single-row insert. However, a 1000 row
insert should be of the order of a second or two. If you want to insert,
say, a million rows, then you may want to do that in 1000 row batches.
Using SQL, you would be doing an INSERT SELECT from one table into another.
How are these data arriving into your system - DTS, Linked server, flat
file?
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com
.
"Ice" <ice@.nospam.com> wrote in message
news:%23MY27B%23JFHA.484@.TK2MSFTNGP15.phx.gbl...
Thanks for your quick reply.
I know that "Bulk Inserts" will get this done fast. But I'm looking for a
generic solution that would work for different databases without having to
use their specific implementations to solve the problem. I think using the
"staging" idea in this scenario is probably adding more work than might be
necessary. When you say "via SQL" what are you suggesting I use?
With regards to concurrency and downtime, I want to operate under assumption
that there will be no downtime. So based on that, it would seem that you
are suggesting that inserting ~1000 records in a single transaction is
better than ~1000 individual transactions. I am correct with this
deduction?
ice
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:#4LF2W9JFHA.1280@.TK2MSFTNGP09.phx.gbl...
> A 1000 rows at a time is not much of a challenge. I don't understand why
> you've ruled out bulk inserts. They can be very fast. If you don't want
to
> insert your data directly into the target tables via bulk insert, you
could
> load staging tables that way and then transfer over via SQL.
> As for larger inserts, it boils down to how much concurrency do you need?
> If you can take some downtime, then it is faster to do things in a single
> chunk (some exceptions do apply). However, if you can't afford to be down
> for too long, then you may have to do a bunch of min-transactions with,
say,
> 100 - 1000 rows at a time.
> HTH
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinnaclepublishing.com
> .
> "Ice" <ice@.nospam.com> wrote in message
> news:uR0lcN9JFHA.3500@.TK2MSFTNGP14.phx.gbl...
> All -
> I've read up quite a bit on simply doing batch inserts where the scenario
> involves inserting a good number of records (maybe a 1000?) at single
time.
> I definitely do not want to use bulk inserts, so let's rule that out
> immediately. It is also my opinion that is not efficient to pass the
> "values" as string (or whatever) and have a loop in a stored proc that
> parses the string an the insert repeatedly (if anyone disagrees with this
> statement, please let me know).
> Having concluded the above, it leaves me with two options:
> 1.Concate a string of insert/values statements
> 2.Concate a string of insert/select/union all statements where the
> "select/union all" represents my values.
> First of all does anyone have any metrics as to which of the approaches
> above peforms better?
> Secondly, in a multi-user, concurrent environment, which is better?
> a)Putting all inserts in a single transaction
> b)Putting each insert in a single transaction
> I understand that there is a lot more I/O with option (b), but I'm of the
> opinion that option (a) can lead to table locking etc. Which should I
worry
> about more?
> For now, these are my questions. I'm not looking for the magic bullet
here,
> I will do some testing myself, but I wanted to poll the minds of the smart
> folk who read these posts to see if anyone had already done this or had
> opinions
> Thanks.
> ice
>|||Ok, thanks for the information.
I thought about the INSERT SELECT option, but I don't think it'll get me
much.
I will do some additional testing.
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:uLpaQFAKFHA.3340@.TK2MSFTNGP14.phx.gbl...
> Doing 1000 individual transaction will take longer end-to-end. However,
> each little insert is likely to take not much time by itself. A 1000 row
> insert would take longer than a single-row insert. However, a 1000 row
> insert should be of the order of a second or two. If you want to insert,
> say, a million rows, then you may want to do that in 1000 row batches.
> Using SQL, you would be doing an INSERT SELECT from one table into
another.
> How are these data arriving into your system - DTS, Linked server, flat
> file?
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinnaclepublishing.com
> .
> "Ice" <ice@.nospam.com> wrote in message
> news:%23MY27B%23JFHA.484@.TK2MSFTNGP15.phx.gbl...
> Thanks for your quick reply.
> I know that "Bulk Inserts" will get this done fast. But I'm looking for a
> generic solution that would work for different databases without having to
> use their specific implementations to solve the problem. I think using
the
> "staging" idea in this scenario is probably adding more work than might be
> necessary. When you say "via SQL" what are you suggesting I use?
> With regards to concurrency and downtime, I want to operate under
assumption
> that there will be no downtime. So based on that, it would seem that you
> are suggesting that inserting ~1000 records in a single transaction is
> better than ~1000 individual transactions. I am correct with this
> deduction?
> ice
> "Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
> news:#4LF2W9JFHA.1280@.TK2MSFTNGP09.phx.gbl...
>|||Interesting. Were all the inserts in one transaction or were you setting
"nocount" per individual insert?
ice
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:#orP1F#JFHA.2764@.tk2msftngp13.phx.gbl...
> To add to Tom's response, I ran a quick test from Query Analyzer and found
> the separate inserts outperformed the UNION ALL method when SET NOCOUNT ON
> was specified.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Ice" <ice@.nospam.com> wrote in message
> news:uR0lcN9JFHA.3500@.TK2MSFTNGP14.phx.gbl...
>|||The inserts were done as individual statements/transactions. The single
batch of 1000 insert statements was a script like:
SET NOCOUNT ON
INSERT INTO MyTable VALUES(1)
INSERT INTO MyTable VALUES(2)
--etc.
The UNION ALL script was basically:
SET NOCOUNT ON
INSERT INTO MyTable
SELECT 1
UNION ALL SELECT 2
--etc.
I ran this on my home PC so the write caching IDE controller mitigates the
effect of multiple transactions and log I/O. I'd expect a single
transaction to be a little faster.
Hope this helps.
Dan Guzman
SQL Server MVP
"Ice" <ice@.nospam.com> wrote in message
news:%23QENHJBKFHA.1476@.TK2MSFTNGP09.phx.gbl...
> Interesting. Were all the inserts in one transaction or were you setting
> "nocount" per individual insert?
> ice
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:#orP1F#JFHA.2764@.tk2msftngp13.phx.gbl...
|||Is "SET NOCOUNT" a connection-specific setting? If I send 10 different
individual statements using ADO.NET and set "no count" on before sending the
1st of the 10, is it on for all 10? Any ideas?
ice
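For what it's worth, SET NOCOUNT is a session-level option: once set on a connection it stays in effect for subsequent batches on that same connection until changed (though connection pooling can reset session options between logical connections). A sketch, reusing the hypothetical MyTable from Dan's scripts:

```sql
SET NOCOUNT ON    -- suppresses "n row(s) affected" messages for this session
INSERT INTO MyTable VALUES (1)   -- no row-count message is sent back
INSERT INTO MyTable VALUES (2)   -- still suppressed: the setting persists
SET NOCOUNT OFF   -- restore the default behavior
```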
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:u2c#OWBKFHA.2752@.TK2MSFTNGP10.phx.gbl...
> The inserts were done as individual statements/transactions. The single
> batch was 1000 insert statements was a script like:
> SET NOCOUNT ON
> INSERT INTO MyTable VALUES(1)
> INSERT INTO MyTable VALUES(2)
> --etc.
> The UNION ALL script as basically:
> SET NOCOUNT ON
> INSERT INTO MyTable
> SELECT 1
> UNION ALL SELECT 2
> --etc.
> I ran this on my home PC so the write caching IDE controller mitigates the
> effect of multiple transactions and log I/O. I'd expect a single
> transaction to be a little faster.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Ice" <ice@.nospam.com> wrote in message
> news:%23QENHJBKFHA.1476@.TK2MSFTNGP09.phx.gbl...
>|||ignore my last post, just found some documentation on this in the framework
SDK.
ice
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:u2c#OWBKFHA.2752@.TK2MSFTNGP10.phx.gbl...
> The inserts were done as individual statements/transactions. The single
> batch was 1000 insert statements was a script like:
> SET NOCOUNT ON
> INSERT INTO MyTable VALUES(1)
> INSERT INTO MyTable VALUES(2)
> --etc.
> The UNION ALL script as basically:
> SET NOCOUNT ON
> INSERT INTO MyTable
> SELECT 1
> UNION ALL SELECT 2
> --etc.
> I ran this on my home PC so the write caching IDE controller mitigates the
> effect of multiple transactions and log I/O. I'd expect a single
> transaction to be a little faster.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Ice" <ice@.nospam.com> wrote in message
> news:%23QENHJBKFHA.1476@.TK2MSFTNGP09.phx.gbl...
>

Best way to create next sequence number

Hi,
Can you tell me the best way to get the next unique sequence
integer for a table. My application supports multiple users,
and at a single point in time users may add a new row to the table, but it
should still generate a unique next available integer value.
For eg: my table contains EMPID int, EMP_Name varchar(50)
EMPID is a primary key.
The main constraint here is that I am using ADO.NET, so I need to
know the integer (primary key) value before accepting the changes to the
database. I need to get the next sequence number because there are some child
objects for which I need to use this id. Once all the data is set, the user
can select to save the changes, which commits the data in the data table with
the db.
Can anyone suggest the best way to do it?
Thanks in advance.
Venkat
Hi
CREATE PROC Get_nextID
@ID char(20),
@nextID numeric(18) output
AS
UPDATE NextIDs
SET @nextID = NextID = NextID + 1
WHERE ID = @ID
"Venkat" <stammana@.palantirsolutions.com> wrote in message
news:ucR8AgPSGHA.5908@.TK2MSFTNGP10.phx.gbl...
> Hi,
> Can you tell me the best way to get a next unique sequence
> integer number which fits in a table. My application supports multi users
> and at a single point of time users may add a new row to the table. But it
> should fit to generate a unique next available integer value.
>
> For eg: my table contains EMPID int, EMP_Name varchar(50)
> EMPID is a primary key.
> The main constraint here is i am using Ado.net. So i need to
> know the the integer (primary key) value before accpeting the changes to
> database. I need to get the next sequence number as there are some child
> objects for which i need to use this id. Once all the data is set the user
> can select to save the changes which commits the data in the data table
> with the db.
> Can any one suggest me the best way to do it?
>
> Thanks in advance.
>
> --
> Venkat
>
>|||You can use IDENTITY, which is the internal MS SQL Server mechanism and
has recently been standardized in the SQL:2003 ISO standard.
Or you can use your own scheme built on a key table like this:
CREATE PROCEDURE P_SYS_DB_CALC_NEW_KEY
@NOM_TABLE SYSNAME,
@NEW_VAL INT OUTPUT
AS
-- *******************************************************
-- STORED PROCEDURE TO COMPUTE AN AUTO-INCREMENT KEY //
-- ///////////////////////////////////////////////////////
-- Frédéric BROUARD - 2001-09-14 //
-- ///////////////////////////////////////////////////////
-- query string
DECLARE @STR_QRY VARCHAR(8000)
-- value to be returned
DECLARE @maxcle integer
SET @NEW_VAL = NULL
SET NOCOUNT ON
-- start the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION SET_KEY
-- check whether the key table exists
IF NOT EXISTS(SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'SYS_DB_MASTERKEY')
BEGIN
-- if it does not exist, create it
CREATE TABLE SYS_DB_MASTERKEY
(MSK_TABLE_NAME CHAR(128) NOT NULL,
MSK_LAST_VALUE INTEGER NOT NULL DEFAULT 0,
CONSTRAINT MSK_KEY PRIMARY KEY
(MSK_TABLE_NAME))
IF @@ERROR <> 0
GOTO LBL_ERROR
-- create the index
CREATE INDEX MSK_PK ON SYS_DB_MASTERKEY
(MSK_TABLE_NAME)
END
-- check whether the target table already has a row
IF NOT EXISTS(SELECT *
FROM SYS_DB_MASTERKEY
WHERE MSK_TABLE_NAME = @NOM_TABLE)
BEGIN
-- it does not: insert it
INSERT INTO SYS_DB_MASTERKEY VALUES (@NOM_TABLE, 0)
IF @@ERROR <> 0
GOTO LBL_ERROR
END
-- compute the new key
SELECT @NEW_VAL = MSK_LAST_VALUE + 1
FROM SYS_DB_MASTERKEY
WHERE MSK_TABLE_NAME = @NOM_TABLE
IF @@ERROR <> 0
GOTO LBL_ERROR
-- store the new key back in the key table
UPDATE SYS_DB_MASTERKEY
SET MSK_LAST_VALUE = @NEW_VAL
WHERE MSK_TABLE_NAME = @NOM_TABLE
IF @@ERROR <> 0
GOTO LBL_ERROR
COMMIT TRANSACTION SET_KEY
GOTO LBL_RESUME
LBL_ERROR:
ROLLBACK TRANSACTION SET_KEY
LBL_RESUME:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT OFF
GO
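Hypothetical usage of the key-table procedure above, assuming a row for the table name either already exists in SYS_DB_MASTERKEY or will be created on the first call (the table name 'Employees' is illustrative):

```sql
DECLARE @NewKey INT
EXEC P_SYS_DB_CALC_NEW_KEY 'Employees', @NewKey OUTPUT
SELECT @NewKey   -- the key value just reserved for table 'Employees'
```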
A +
Venkat wrote:
> Hi,
> Can you tell me the best way to get a next unique sequence
> integer number which fits in a table. My application supports multi users
> and at a single point of time users may add a new row to the table. But it
> should fit to generate a unique next available integer value.
>
> For eg: my table contains EMPID int, EMP_Name varchar(50)
> EMPID is a primary key.
> The main constraint here is i am using Ado.net. So i need to
> know the the integer (primary key) value before accpeting the changes to
> database. I need to get the next sequence number as there are some child
> objects for which i need to use this id. Once all the data is set the user
> can select to save the changes which commits the data in the data table wi
th
> the db.
> Can any one suggest me the best way to do it?
>
> Thanks in advance.
>
> --
> Venkat
>
>
Frédéric BROUARD, SQL Server MVP, databases and SQL language expert
The site on the SQL language and RDBMSs: http://sqlpro.developpez.com
Audit, consulting, expertise, training, modeling, tuning, optimization
********************* http://www.datasapiens.com ***********************|||Uri Dimant wrote:
> Hi
> CREATE PROC Get_nextID
> @ID char(20),
> @nextID numeric(18) output
> as
> update NextIDs
> set @nextID = NextID = NextID + 1
> where ID = @ID
>
Never do this! Over time you will inevitably get deadlocks / duplicate key
values, because it does not respect isolation principles.
A +

>
> "Venkat" <stammana@.palantirsolutions.com> wrote in message
> news:ucR8AgPSGHA.5908@.TK2MSFTNGP10.phx.gbl...
>
Frédéric BROUARD, SQL Server MVP, databases and SQL language expert
The site on the SQL language and RDBMSs: http://sqlpro.developpez.com
Audit, consulting, expertise, training, modeling, tuning, optimization
********************* http://www.datasapiens.com ***********************|||SQLpro
This is only one of many options, I have seen that by using a technique from
my previous post and having an index on the column we don't get any
deadlocks. But I agree that the OP may use the much better script below:
DECLARE @maxid INT
BEGIN TRAN
SELECT @maxid = COALESCE(MAX(Mycol), 0) FROM Table WITH (UPDLOCK, HOLDLOCK)
WHERE.....
INSERT INTO Table(mycol) VALUES (@maxid + 1)
COMMIT TRAN
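A self-contained version of that pattern, with illustrative table and column names (the WHERE clause is elided in the post, so the whole table is treated as one sequence here):

```sql
-- Illustrative: UPDLOCK + HOLDLOCK hold the lock on the scanned range until COMMIT,
-- so two concurrent callers cannot both read the same MAX value
DECLARE @maxid INT
BEGIN TRAN
    SELECT @maxid = COALESCE(MAX(EmpID), 0)
    FROM Employee WITH (UPDLOCK, HOLDLOCK)

    INSERT INTO Employee (EmpID, EmpName)
    VALUES (@maxid + 1, 'New employee')
COMMIT TRAN
```

COALESCE handles the empty-table case, so the first allocated id is 1.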
"SQLpro [MVP]" <brouardf@.club-internet.fr> wrote in message
news:u4Ncq2PSGHA.1728@.TK2MSFTNGP11.phx.gbl...
> Uri Dimant wrote:
> Never do this ! You will fatally have some deadlock / telescoping key
> values over the time due to a lack in the isolation principles.
> A +
>
>
> --
> Frédéric BROUARD, SQL Server MVP, databases and SQL language expert
> The site on the SQL language and RDBMSs: http://sqlpro.developpez.com
> Audit, consulting, expertise, training, modeling, tuning, optimization
> ********************* http://www.datasapiens.com ***********************|||Uri Dimant wrote:
> SQLpro
> This is only one of many options, I have seen that by using the technique from
> my previous post and having an index on the column we don't get any
> deadlocks. But I agree that the OP may use the much better script below
>
> DECLARE @maxid INT
> BEGIN TRAN
> SELECT @maxid = COALESCE(MAX(Mycol), 0) FROM Table WITH (UPDLOCK, HOLDLOCK)
> WHERE.....
> INSERT INTO Table(mycol) VALUES (@maxid + 1)
> COMMIT TRAN
Effectively this lock, or an isolation level such as SERIALIZABLE, can guarantee that.
Which, in terms of cost, gets pretty ugly as the table grows!
A +

> "SQLpro [MVP]" <brouardf@.club-internet.fr> wrote in message
> news:u4Ncq2PSGHA.1728@.TK2MSFTNGP11.phx.gbl...
>
Frédéric BROUARD, SQL Server MVP, databases and SQL language expert
The site on the SQL language and RDBMSs: http://sqlpro.developpez.com
Audit, consulting, expertise, training, modeling, tuning, optimization
********************* http://www.datasapiens.com ***********************

Best way to copy large number of databases to new server?

We're in the process of migrating to a new database server, and have a large
number of databases to move across (approx 100). I found the 'copy database
wizard' which seemed to be exactly what I needed, only it didn't work
because the servers aren't on the same domain (or even the same network).
Is there any similar solution, or am I going to have to use backup/restore
on every single database individually?
Thanks in advance!
Chris|||If they aren't on the same network and you can't reach them from the one SQL
Server, you should make a backup. You can make a hot backup or, depending on
the uptime of your database and the size of the database files, do a service
shutdown, copy the files and restart the service. This "clone" can be
attached to the other server.
HTH, Jens Suessmeyer.
--
http://www.sqlserver2005.de
--
"Chris Ashley" <chris.ashley@.SPAMblueyonder.co.uk> wrote in message
news:427a3eb6.0@.entanet...
> We're in the process of migrating to a new database server, and have a
> large number of databases to move across (approx 100). I found the 'copy
> database wizard' which seemed to be exactly what I needed, only it didn't
> work because the servers aren't on the same domain (or even the same
> network).
> Is there any similar solution, or am I going to have to use backup/restore
> on every single database individually?
> Thanks in advance!
> Chris
>|||That's odd, I have done the same thing and it works fine on mine.
A couple of other solutions for you:
Detach the database and copy the data and log file over (you will need to be
careful with server permissions on the files for this).
Create the structure and copy the data over using DTS.
Have fun
Peter
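Peter's detach/copy route can be sketched with the SQL 2000-era procedures (database name and file paths are illustrative):

```sql
-- On the old server: take the database offline and release its files
EXEC sp_detach_db 'MyDb'

-- ...copy MyDb.mdf and MyDb_log.ldf across, then on the new server:
EXEC sp_attach_db 'MyDb',
    N'D:\Data\MyDb.mdf',
    N'D:\Data\MyDb_log.ldf'
```

The service account on the new server needs read/write access to the copied files, which is the permissions caveat mentioned above.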
"Chris Ashley" wrote:
> We're in the process of migrating to a new database server, and have a large
> number of databases to move across (approx 100). I found the 'copy database
> wizard' which seemed to be exactly what I needed, only it didn't work
> because the servers aren't on the same domain (or even the same network).
> Is there any similar solution, or am I going to have to use backup/restore
> on every single database individually?
> Thanks in advance!
> Chris
>
>|||Hi,
An easy method to copy all the databases (including the system DBs) is:
1. Stop the active SQL Server service and copy all the MDF, NDF and LDF files to
tape.
2. Install SQL Server and the same service packs (as the old server) in the
identical folder (same as the existing server).
3. Stop SQL Server.
4. Copy the .MDF, .NDF and .LDF files (taken in step 1) from tape to the
same folders (same as the existing active server).
5. Start SQL Server.
Now log in to Query Analyzer or Enterprise Manager and confirm all the
databases are online.
Thanks
Hari
SQL Server MVP
"Chris Ashley" <chris.ashley@.SPAMblueyonder.co.uk> wrote in message
news:427a3eb6.0@.entanet...
> We're in the process of migrating to a new database server, and have a
large
> number of databases to move across (approx 100). I found the 'copy
database
> wizard' which seemed to be exactly what I needed, only it didn't work
> because the servers aren't on the same domain (or even the same network).
> Is there any similar solution, or am I going to have to use backup/restore
> on every single database individually?
> Thanks in advance!
> Chris
>|||In article <OsLpXcaUFHA.3176@.TK2MSFTNGP12.phx.gbl>,
hari_prasad_k@.hotmail.com says...
> Hi,
> Easy method to copy all the databases (including system DB) is :-
> 1. Stop active SQL Server service and copy all the MDF , NDF and LDF into a
> Tape.
> 2. Install SQL server and same Service packs (as old) in the identical
> folder (Same as existing server)
> 3. Stop the SQL server
> 4. Copy the .MDF , NDF and .LDF files (took in step 1 ) from Tape to the
> same folders (Same as existing active server).
> 5. Start SQL server
>
> Now login to query analyzer or enterprise manager and confirm all the
> databases are online.
Don't forget about account maintenance - you will have to check all the
logon/permissions - which can be done in a script.
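One common part of that cleanup is re-linking orphaned database users to logins on the new server. A sketch using the SQL 2000-era procedure, run in each restored database (the user name is illustrative):

```sql
-- List users whose SIDs no longer match any login on the new server
EXEC sp_change_users_login 'Report'

-- Re-link one of them; Auto_Fix maps the user to the login of the same name
EXEC sp_change_users_login 'Auto_Fix', 'SomeUser'
```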
--
--
spam999free@.rrohio.com
remove 999 in order to email me|||Can't you simply write a T-SQL query that lists all databases and backs each
one up, so that on the other end you can restore it in a similar fashion?
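That suggestion can be sketched as a script that emits one BACKUP statement per user database (the backup path is illustrative; paste the output into a new query window to run it):

```sql
SELECT 'BACKUP DATABASE [' + name + '] TO DISK = N''D:\Backups\' + name + '.bak'''
FROM master.dbo.sysdatabases          -- SQL 2000-era catalog table
WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb')
```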
"Chris Ashley" <chris.ashley@.SPAMblueyonder.co.uk> wrote in message
news:427a3eb6.0@.entanet...
> We're in the process of migrating to a new database server, and have a
> large number of databases to move across (approx 100). I found the 'copy
> database wizard' which seemed to be exactly what I needed, only it didn't
> work because the servers aren't on the same domain (or even the same
> network).
> Is there any similar solution, or am I going to have to use backup/restore
> on every single database individually?
> Thanks in advance!
> Chris
>

Friday, February 10, 2012

Best way for generating a sequence number

Hi
I'm interested to know what the best way would be to create a sequence
number.
How can I prevent a number from being created twice? What are your
thoughts?
Thanks
Michel|||How about using an IDENTITY column?
Aaron Bertrand
SQL Server MVP
http://www.sqlblog.com/
http://www.aspfaq.com/5006
"Michel" <michel_mueller@.bluewin.ch> wrote in message
news:usU97m5oHHA.4424@.TK2MSFTNGP03.phx.gbl...
> Hi
> I'm interested to know what the besrt way would be to create a sequenze
> number.
> How can I prevent that a number will be created twice? What are your
> thoughts?
> Thank's
> Michel
>|||The number should start from 0, and we should be able to reset it to 0
again in a few years.
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:uIxOgr5oHHA.4132@.TK2MSFTNGP02.phx.gbl...
> How about using an IDENTITY column?
> --
> Aaron Bertrand
> SQL Server MVP
> http://www.sqlblog.com/
> http://www.aspfaq.com/5006
>
>
> "Michel" <michel_mueller@.bluewin.ch> wrote in message
> news:usU97m5oHHA.4424@.TK2MSFTNGP03.phx.gbl...
>|||Michel,
declare your column as MyKey int IDENTITY(0,1) and it will start at zero and
count by one.
TRUNCATE TABLE MyTable will clear all the data from the table and reset the
IDENTITY counter back to zero.
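A minimal sketch of that (table and column names are illustrative):

```sql
-- IDENTITY(0,1): seed 0, increment 1
CREATE TABLE MyTable
(
    MyKey   int IDENTITY(0, 1) NOT NULL PRIMARY KEY,
    Payload varchar(50) NOT NULL
)

INSERT INTO MyTable (Payload) VALUES ('first')   -- gets MyKey = 0
INSERT INTO MyTable (Payload) VALUES ('second')  -- gets MyKey = 1

-- Removes all rows AND resets the IDENTITY counter, so the next insert gets 0 again
TRUNCATE TABLE MyTable
```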
Thank you,
Daniel Jameson
SQL Server DBA
Children's Oncology Group
www.childrensoncologygroup.org
"Michel" <michel_mueller@.bluewin.ch> wrote in message
news:%239QX245oHHA.4032@.TK2MSFTNGP02.phx.gbl...
> The number should start from 0 and we should be able to change the number
> in some years to 0 again.
>
> "Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> schrieb im
> Newsbeitrag news:uIxOgr5oHHA.4132@.TK2MSFTNGP02.phx.gbl...
>|||> The number should start from 0
IDENTITY can start at 0.

> and we should be able to change the number in some years to 0 again.
Do you mean in combination with deleting all of the existing data? If so,
then yes, you can truncate the table (which will also reseed the identity
back to 0) or you can delete manually (which you will need to do, and in a
certain order, if other tables reference this identity value through an FK
relationship) and then use DBCC CHECKIDENT with RESEED and/or drop the table
and re-create it.
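For the delete-then-reseed route, a sketch (table name illustrative; with an increment of 1, reseeding to -1 makes the next inserted row get 0):

```sql
DELETE FROM MyTable          -- unlike TRUNCATE, this works with FK references

-- RESEED sets the current identity value; the next row gets -1 + 1 = 0
DBCC CHECKIDENT ('MyTable', RESEED, -1)
```

Note that TRUNCATE resets the counter to the seed directly, whereas after a DELETE the next value is the reseeded value plus the increment.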
If you need to keep the existing data, please explain how anyone can
distinguish between the first row with a value of 0 and a new row when you
start over.
Aaron Bertrand
SQL Server MVP
http://www.sqlblog.com/
http://www.aspfaq.com/5006