Tuesday, March 20, 2012
big update problem!
I have a mini .net application that I have created to make some pretty
complex transformations to the database.
All the code, .net datatables and logic etc. takes 3 minutes to run on the
entire database. However, when I put in the update statements the same
process takes 3 1/2 hours!
So I know it's not because I am using datatables or that my select statements
are poor etc. It's 100% the update statements.
Any clues? Is this normal? Are update statements the most expensive in time?|||ADDED:
I am also able to create and populate a fairly large table in a matter of
minutes, however updating this one table is costing a lot of time.
Now granted, this table in question is much larger than most tables in the DB,
and larger than the tables I create, but not 10 times larger!
Also, this table gets a lot of activity. Could our problem be the table
itself?
"Sean" wrote:
> I have a mini .net application that I have created to make some pretty
> complex transformations to the database.
> All the code, .net datatables and logic etc. takes 3 minutes to run on the
> entire database. However, when I put in the update statements the same
> process takes 3 1/2 hours!
> So I know it's not because I am using datatables or that my select statements
> are poor etc. It's 100% the update statements.
> Any clues? Is this normal? Are update statements the most expensive in time?|||Yes, and it could be a hardware problem or the phase of the moon. With no
information about what the table looks like, what you are trying to do and
what else is running, there's no way to tell. This is like calling your
mechanic and saying "it takes too long for me to get to work, what should I
do?" The main difference between update statements and select statements is
that they take update locks and write to disk. I would look first at blocking
(waiting to lock something that's in use), but that's just idle speculation.
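Roger's suggestion to check for blocking can be tried directly; a minimal sketch for the SQL Server 2000 era (it uses the old sysprocesses view, which later versions replace with dynamic management views):

```sql
-- List sessions that are currently blocked, and who is blocking them.
-- Run this while the slow update batch is executing.
SELECT spid, blocked, waittype, lastwaittype, waitresource
FROM master..sysprocesses
WHERE blocked <> 0;
```

If this returns no rows while the updates crawl, blocking is probably not the culprit and the time is going into the writes themselves.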
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Sean" <Sean@.discussions.microsoft.com> wrote in message
news:6E7394B4-A544-4111-A35B-4661DF0C459C@.microsoft.com...
> ADDED:
> I am also able to create and populate a fairly large table in a matter of
> minutes, however updating this one table is costing a lot of time.
> Now granted, this table in question is much larger than most tables in the
> DB, and larger than the tables I create, but not 10 times larger!
> Also, this table gets a lot of activity. Could our problem be the table
> itself?
> "Sean" wrote:
>> I have a mini .net application that I have created to make some pretty
>> complex transformations to the database.
>> All the code, .net datatables and logic etc. takes 3 minutes to run on the
>> entire database. However, when I put in the update statements the same
>> process takes 3 1/2 hours!
>> So I know it's not because I am using datatables or that my select
>> statements are poor etc. It's 100% the update statements.
>> Any clues? Is this normal? Are update statements the most expensive in
>> time?|||Well, there are currently only 3 people accessing the server and only one (me)
accessing this particular database.
By my estimation it is taking about 1-3 seconds to update a row, and in this
context each row has to be updated separately.
Having said all that, I think the functionality we are gaining might not be
worth the dev time, which is a subject I will raise at our meeting.
"Roger Wolter[MSFT]" wrote:
> Yes, and it could be a hardware problem or the phase of the moon. With no
> information about what the table looks like, what you are trying to do and
> what else is running, there's no way to tell. This is like calling your
> mechanic and saying "it takes too long for me to get to work, what should I
> do?" The main difference between update statements and select statements is
> that they take update locks and write to disk. I would look first at blocking
> (waiting to lock something that's in use), but that's just idle speculation.
> --
> This posting is provided "AS IS" with no warranties, and confers no rights.
> Use of included script samples are subject to the terms specified at
> http://www.microsoft.com/info/cpyright.htm
> "Sean" <Sean@.discussions.microsoft.com> wrote in message
> news:6E7394B4-A544-4111-A35B-4661DF0C459C@.microsoft.com...
> > ADDED:
> >
> > I am also able to create and populate a fairly large table in a matter of
> > minutes, however updating this one table is costing a lot of time.
> >
> > Now granted, this table in question is much larger than most tables in the
> > DB, and larger than the tables I create, but not 10 times larger!
> >
> > Also, this table gets a lot of activity. Could our problem be the table
> > itself?
> >
> > "Sean" wrote:
> >
> >> I have a mini .net application that I have created to make some pretty
> >> complex transformations to the database.
> >> All the code, .net datatables and logic etc. takes 3 minutes to run on the
> >> entire database. However, when I put in the update statements the same
> >> process takes 3 1/2 hours!
> >> So I know it's not because I am using datatables or that my select
> >> statements are poor etc. It's 100% the update statements.
> >>
> >> Any clues? Is this normal? Are update statements the most expensive in
> >> time?
>
>|||Sean wrote:
> Well, there are currently only 3 people accessing the server and only one (me)
> accessing this particular database.
> By my estimation it is taking about 1-3 seconds to update a row, and in this
> context each row has to be updated separately.
> Having said all that, I think the functionality we are gaining might not be
> worth the dev time, which is a subject I will raise at our meeting.
>
1-3 seconds to update a record? How many indexes are on this table? Is
there a clustered index? Is the column you're updating part of the
clustered index key? Are there update triggers on the table?|||Basically none of the above, and it's safe to assume no optimization strategies
have been applied to this database.
We have a primary key on this table, but that is about it.
I have studied clustering a little because I wanted to go from MCAD to MCSD,
but I dropped that database study, so in short I am database-stupid outside of
basic T-SQL statements.
If you guys think clustering etc. will help a lot, I will look into it, because
this table is getting big.
Thanks
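For what it's worth, the experiment Tracy is hinting at is cheap to try. A minimal sketch, with hypothetical table and column names since the real schema isn't shown: cluster the table on the key the UPDATE statements use to locate rows.

```sql
-- Hypothetical names: cluster the big table on the lookup key so each
-- single-row update becomes a clustered-index seek instead of a scan.
CREATE CLUSTERED INDEX IX_BigTable_KeyColumn
    ON dbo.BigTable (KeyColumn);
```

On a table with no useful index on the update's WHERE clause, each UPDATE may be scanning the whole table, which would explain seconds per row.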
"Tracy McKibben" wrote:
> Sean wrote:
> > Well, there are currently only 3 people accessing the server and only one (me)
> > accessing this particular database.
> > By my estimation it is taking about 1-3 seconds to update a row, and in this
> > context each row has to be updated separately.
> >
> > Having said all that, I think the functionality we are gaining might not be
> > worth the dev time, which is a subject I will raise at our meeting.
> >
> 1-3 seconds to update a record? How many indexes are on this table? Is
> there a clustered index? Is the column you're updating part of the
> clustered index key? Are there update triggers on the table?
>
And a correction: it's only 0.4 seconds per record for updates.
It's just that at the point of production we will have 157,000 records, which,
if this was all we were doing, would be fine. But this one item is taking up
more than 50% of the time for our data conversion.
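To put those numbers together: 157,000 rows at 0.4 seconds each is about 62,800 seconds, or roughly 17 hours, so per-row updates clearly won't scale. One common fix, sketched here with hypothetical names since the actual statements aren't shown, is to stage the computed values and apply them in a single set-based UPDATE instead of one statement per row:

```sql
-- Hypothetical sketch: bulk-load the computed values into a staging
-- table first, then apply them all in one set-based statement.
UPDATE t
SET t.SomeColumn = s.NewValue
FROM dbo.BigTable AS t
INNER JOIN dbo.StagedValues AS s
    ON s.KeyColumn = t.KeyColumn;
```

This pays the per-statement and logging overhead once instead of 157,000 times.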
"Sean" wrote:
> Basically none of the above, and it's safe to assume no optimization strategies
> have been applied to this database.
> We have a primary key on this table, but that is about it.
> I have studied clustering a little because I wanted to go from MCAD to MCSD,
> but I dropped that database study, so in short I am database-stupid outside of
> basic T-SQL statements.
> If you guys think clustering etc. will help a lot, I will look into it, because
> this table is getting big.
> Thanks
>
> "Tracy McKibben" wrote:
> > Sean wrote:
> > > Well, there are currently only 3 people accessing the server and only one (me)
> > > accessing this particular database.
> > > By my estimation it is taking about 1-3 seconds to update a row, and in this
> > > context each row has to be updated separately.
> > >
> > > Having said all that, I think the functionality we are gaining might not be
> > > worth the dev time, which is a subject I will raise at our meeting.
> > >
> > 1-3 seconds to update a record? How many indexes are on this table? Is
> > there a clustered index? Is the column you're updating part of the
> > clustered index key? Are there update triggers on the table?
> >
Sunday, March 11, 2012
Bidirectional Replication
I am trying to perform 2 way replication in SQL 2000.
SERVERA = OrdersTable -> I want data to replicate to this
server from SERVERB on UPDATE only.
SERVERB = OrdersTable -> I want data to replicate to this
server from SERVERA on INSERT only.
This is what I have done.
I created a publication on SERVERA and in the article
properties, I checked only INSERT.
Then I created a push subscription to SERVERB.
I then created a publication on SERVERB and in the article
properties, I checked only UPDATE. Then I created a push
subscription to SERVERA.
Is this the correct way to achieve my goal?
Thanks in advance.
I don't think so. First, are you using merge replication? It sounds like you are. I think you are talking about the merging
changes tab options on the article's properties tab.
This verifies that a merge agent has rights to insert, update or delete. So you can have one pull subscriber which can do all three depending on his/her rights, and another that can't, or that has a subset of rights.
Bi-directional transactional replication might be the way to go with this, as you can write custom stored procedures to incorporate your business logic.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Wednesday, March 7, 2012
Beware installing SQL Server 2005 BOL (May 2007)
Beware installing this update; it can seriously damage your health. I clicked 'install' on my Vista 64 machine and let it trundle off by itself while I was doing some Visual Studio work. It runs for a while and then ... RESTARTs my PC. Without asking and without allowing me to save my work. Fresh, new work, gone. Thanks a bunch guys; installing some new docs really merits a restart.
This event is in the system log:
The process msiexec.exe has initiated the restart of computer BLUE on behalf of user NT AUTHORITY\SYSTEM for the following reason: No title for this reason could be found
Reason Code: 0x80030002
Shutdown Type: restart
Comment: The Windows Installer initiated a system restart to complete or continue the configuration of 'Microsoft SQL Server 2005 Books Online (English) (May 2007)'.
Thanks very much for bringing this issue to our attention. It is sometimes necessary to reboot after a Books Online update - because Books Online, Visual Studio, and SQL Server Management Studio share components - but the expected behavior is a prompt for reboot rather than an automatic reboot. We are actively investigating the issue, and will re-release the update after we identify a solution, so that other customers will not have a similar experience.
|||Bryan,
Thank you for replying to my post -- I am a bit calmer now. I have had about four reboots (on various machines) in the last week because of SQL Server patches, so I was a bit fractious.
Saturday, February 25, 2012
better way to update then select
Since I can't update before I declare the cursor, is there a better way so I can first update SIR_REQUEST_SENT and then use a cursor to SELECT * and then write to the text file? (using Oracle Forms)
If I don't include the update, then a text file can be created. If I include the update, the column is updated but the text file isn't created.
I'm not sure what to do... any suggestions?
A WHEN-BUTTON-PRESSED trigger calls this procedure:
PROCEDURE SEND_BY_DATE(fromdate in date, todate in date) IS
CURSOR dateProcess IS
SELECT *
FROM SIR
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate
AND SIR_REQUEST_SENT = 'N';
rProcess dateProcess%ROWTYPE;
cOut VARCHAR2(2000);
myfile VARCHAR2(255);
mypath VARCHAR2(255);
N_FILE text_io.file_type;
BEGIN
UPDATE SIR
SET SIR_REQUEST_SENT = 'Y'
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate;
COMMIT;
OPEN dateProcess;
myfile := sysdate||'_'||'batch'||fromdate||'to'||todate||'_'||rProcess.SIR_COMPANY;
--myfile := sysdate||'_'||rProcess.SIR_COMPANY;
mypath := 'C:\request';
N_FILE := TEXT_IO.FOPEN(mypath||'\'||myfile||'.TXT', 'W');
LOOP
FETCH dateProcess INTO rProcess;
EXIT WHEN dateProcess%NOTFOUND;
cOut := rProcess.SIR_TRANS_NO || ';'
|| rProcess.SIR_COMPANY || ';'
|| rProcess.SIR_PROJECT || ';'
|| rProcess.SIR_APPL || ';'
|| rProcess.SIR_BUS_FUN || ';'
|| rProcess.SIR_REPORTED_BY || ';'
|| rProcess.SIR_HANDLED_BY || ';'
|| rProcess.SIR_PHASE || ';'
|| rProcess.SIR_TYPE || ';'
|| rProcess.SIR_CAUSE || ';'
|| rProcess.SIR_CLASSIFICATION || ';'
|| rProcess.SIR_DESCRIPTION || ';'
|| rProcess.SIR_REASON || ';'
|| rProcess.SIR_REMARKS || ';'
|| rProcess.SIR_STATUS || ';'
|| rProcess.SIR_REQUEST_DATE || ';'
|| rProcess.SIR_RECEIVED_DATE || ';'
|| rProcess.SIR_START_DATE || ';'
|| rProcess.SIR_CLOSE_DATE || ';'
|| rProcess.SIR_TARGET_DATE || ';'
|| rProcess.SIR_ESTIMATED_MANHRS || ';'
|| rProcess.SIR_ACTUAL_MANHRS || ';'
|| rProcess.SIR_BILLABLE_MANHRS || ';'
|| rProcess.SIR_ATTACHMENT || ';'
|| rProcess.SIR_REQUEST_SENT || ';'
|| rProcess.SIR_CRE_USERID || ';'
|| rProcess.SIR_CRE_DATE || ';'
|| rProcess.SIR_UPD_USERID || ';'
|| rProcess.SIR_UPD_DATE;
TEXT_IO.PUT_LINE(N_FILE, cOut);
END LOOP;
TEXT_IO.FCLOSE(N_FILE);
CLOSE dateProcess;
EXCEPTION
WHEN OTHERS THEN
IF dateProcess%ISOPEN THEN
CLOSE dateProcess;
END IF;
END;|||I'm no expert on Oracle Forms, but have you tried to create the file before the commit, not after? Do the update, select the data and create the file, then commit. You are in the same transaction, so you will be able to see your newly updated data.
Hope it helps.
Originally posted by alram
Since I can't update before I declare the cursor, is there a better way so I can first update SIR_REQUEST_SENT and then use a cursor to SELECT * and then write to the text file? (using Oracle Forms)
|||Some alternatives:
1) You could create the file BEFORE you do the update. The problem you have is that if someone else inserts and commits another record into the table between your opening the cursor and doing the update, then you could end up updating some records that you didn't actually put in the file. This can be overcome using "Alter session set isolation_level=serializable" before opening the cursor.
2) You could do the update first as you do now, but in the update also set a new column e.g. batch_id to identify the records you updated. Then use this batch_id in the cursor:
DECLARE
v_batch_id INTEGER;
CURSOR dateProcess( p_batch_id INTEGER ) IS
SELECT *
FROM SIR
WHERE batch_id = p_batch_id;
BEGIN
SELECT batch_seq.NEXTVAL INTO v_batch_id FROM DUAL;
UPDATE SIR
SET SIR_REQUEST_SENT = 'Y',
BATCH_ID = v_batch_id
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate;
COMMIT;
FOR rec IN dateProcess( v_batch_id )
LOOP
-- Output record to file
...
END LOOP;
...
3) You could read and update them at the same time:
PROCEDURE ... IS
CURSOR dateProcess IS
SELECT *
FROM SIR
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate
AND SIR_REQUEST_SENT = 'N'
FOR UPDATE OF SIR_REQUEST_SENT;
...
BEGIN
FOR rec IN dateProcess LOOP
-- Output record to file
...
UPDATE SIR
SET SIR_REQUEST_SENT = 'Y'
WHERE CURRENT OF dateProcess;
END LOOP;
...
END;|||Thank you for the replies! Very helpful!
sir andrewst,
I was just wondering, is there a way to update first without creating the batch i.d. column?|||Originally posted by alram
Thank you for the replies! Very helpful!
sir andrewst,
I was just wondering, is there a way to update first without creating the batch i.d. column?
Yes, if you really want to you can do this:
declare
cursor c is select * from t where status='N';
r c%ROWTYPE;
begin
open c;
update t set status='Y';
loop
fetch c into r;
exit when c%NOTFOUND;
dbms_output.put_line('=='||r.id);
end loop;
close c;
end;
The key point is to OPEN the cursor BEFORE doing the UPDATE.
You should still use "Alter session set isolation_level=serializable" before opening the cursor to avoid possibility of updating a different set of records to the set selected.|||sir andrewst, thanks again for your help!!|||Originally posted by andrewst
You should still use "Alter session set isolation_level=serializable" before opening the cursor to avoid possibility of updating a different set of records to the set selected.
Sir andrewst, I can update successfully, but I tried to do the alter session and Oracle Forms does not recognize isolation_level. It only recognizes the following:
nls_language
nls_territory
nls_date_format
nls_date_language
nls_numeric_characters
nls_iso_currency
nls_iso_currency
nls_sort
nls_calendar
i've been looking for other commands that i can use to ensure serializability but no such luck so far.|||If you put spaces arounf equal sign this command will be recognised:
alter session set isolation_level = serializable;
Originally posted by alram
Sir andrewst, I can update successfully, but I tried to do the alter session and Oracle Forms does not recognize isolation_level. It only recognizes the following:
nls_language
nls_territory
nls_date_format
nls_date_language
nls_numeric_characters
nls_iso_currency
nls_iso_currency
nls_sort
nls_calendar
i've been looking for other commands that i can use to ensure serializability but no such luck so far.|||Originally posted by sjacek
If you put spaces arounf equal sign this command will be recognised:
alter session set isolation_level = serializable;
i gave that a try too, before opening the cursor but the error that comes out during compilation is this:
Encountered the symbol 'alter' when expecting one of the following:
begin declare end exception exit for goto if....(etc.)|||You cannot use 'alter session' directly in PL/SQL. Use native dynamic sql:
execute immediate 'alter session set isolation_level = serializable';
This should work.
Originally posted by alram
i gave that a try too, before opening the cursor but the error that comes out during compilation is this:
Encountered the symbol 'alter' when expecting one of the following:
begin declare end exception exit for goto if....(etc.)|||Originally posted by sjacek
You cannot use 'alter session' directly in PL/SQL. Use native dynamic sql:
execute immediate 'alter session set isolation_level = serializable';
This should work.
sir, i tried just that. but i still encountered an error:
Encountered the symbol "IMMEDIATE" when expecting one of the ff:
:= . [ @. % ;
The symbol ":=" was inserted before "IMMEDIATE" to continue.
what am i doing wrong?|||i have this problem: i need to check if the SIR_REQUEST_SENT flag is 'Y'. if it is, then the text/dat file should not be created anymore.
my question is, is this statement possible and where should i put it?
IF rProcess.SIR_REQUEST_SENT = 'N' THEN
-- retrieve the records
-- create the file
ELSE
CLOSE dateProcess;
here is my code:
-------
PROCEDURE SEND_BY_DATE(fromdate in date, todate in date) IS
CURSOR dateProcess IS
SELECT *
FROM SIR
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate
AND SIR_REQUEST_SENT = 'N'
ORDER BY SIR_TRANS_NO;
rProcess dateProcess%ROWTYPE;
cOut LONG;
myfile VARCHAR2(255);
mypath VARCHAR2(255);
N_FILE text_io.file_type;
sir_columns VARCHAR2(2000) := 'SIR_TRANS_NO,SIR_COMPANY,SIR_PROJECT,SIR_APPL,SIR_BUS_FUN,SIR_REPORTED_BY,
SIR_TYPE,SIR_CLASSIFICATION,SIR_DESCRIPTION,SIR_REASON,
SIR_REQUEST_DATE,SIR_ATTACHMENT,SIR_REQUEST_SENT,
SIR_CRE_USERID,SIR_CRE_DATE';
BEGIN
OPEN dateProcess;
myfile := sysdate||'_'||'from'||'_'||fromdate||'_'||'to'||'_'||todate||'_'||rProcess.SIR_COMPANY;
mypath := 'C:\request';
N_FILE := TEXT_IO.FOPEN(mypath||'\'||myfile||'.dat', 'W');
LOOP
UPDATE SIR
SET SIR_REQUEST_SENT = 'Y'
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate;
:SYSTEM.MESSAGE_LEVEL := 5;
COMMIT;
:SYSTEM.MESSAGE_LEVEL := 0;
FETCH dateProcess INTO rProcess;
EXIT WHEN dateProcess%NOTFOUND;
cOut := rProcess.SIR_TRANS_NO || ';'
|| rProcess.SIR_COMPANY || ';'
|| rProcess.SIR_PROJECT || ';'
|| rProcess.SIR_APPL || ';'
|| rProcess.SIR_BUS_FUN || ';'
|| rProcess.SIR_REPORTED_BY || ';'
|| rProcess.SIR_TYPE || ';'
|| rProcess.SIR_CLASSIFICATION || ';'
|| rProcess.SIR_DESCRIPTION || ';'
|| rProcess.SIR_REASON || ';'
|| rProcess.SIR_REQUEST_DATE || ';'
|| rProcess.SIR_ATTACHMENT || ';'
|| rProcess.SIR_REQUEST_SENT || ';'
|| rProcess.SIR_CRE_USERID || ';'
|| rProcess.SIR_CRE_DATE || ';';
TEXT_IO.PUT_LINE(N_FILE, cOut);
END LOOP;
TEXT_IO.FCLOSE(N_FILE);
CLOSE dateProcess;
-- CREATE A CONTROL FILE
n_file := TEXT_IO.FOPEN(mypath||'\'||myfile||'.ctl','W');
-- INSERT THE TEXT ENTRY TO THE CONTROL FILE
TEXT_IO.PUT_LINE( N_FILE ,'LOAD DATA' );
TEXT_IO.PUT_LINE( N_FILE ,'INFILE '||''''||myfile||'''');
TEXT_IO.PUT_LINE( N_FILE ,'APPEND'||' INTO TABLE '||'SIR');
TEXT_IO.PUT_LINE( N_FILE ,'FIELDS TERMINATED BY '';'' OPTIONALLY ENCLOSED BY ''"''
TRAILING NULLCOLS
('||sir_columns||' )
');
if text_io.is_open( n_file ) then
TEXT_IO.FCLOSE(N_FILE);
end if;
EXCEPTION
WHEN OTHERS THEN
IF dateProcess%ISOPEN THEN
CLOSE dateProcess;
END IF;
END;
------
i have tried putting the IF statement right after i open the cursor, but even if there are records with SIR_REQUEST_SENT = 'N', the file is not created anymore even though it should. i figure that it is only checking the first record.
any help would be appreciated! thank you in advance!|||I'm not following your requirement here. You say "i need to check if the SIR_REQUEST_SENT flag is 'Y'". What - on any record? On one particular record? On ALL records?
Guessing that you probably mean: if ALL records have the flag='Y' then there is no need to create a file. In that case the logic should be:
FOR rProcess IN dateProcess LOOP
IF NOT TEXT_IO.IS_OPEN(n_file) THEN
-- Open the file
n_file := TEXT_IO.FOPEN(mypath||'\'||myfile||'.dat', 'W');
END IF;
-- Output record to file
...
END LOOP;
-- If file was opened, then complete process
IF TEXT_IO.IS_OPEN(n_file) THEN
TEXT_IO.FCLOSE(n_file);
-- Update records
UPDATE SIR
SET SIR_REQUEST_SENT = 'Y'
WHERE SIR_REQUEST_DATE BETWEEN fromdate AND todate
AND SIR_REQUEST_SENT = 'N';
-- Create control file
...
--
END IF;|||Originally posted by andrewst
I'm not following your requirement here. You say "i need to check if the SIR_REQUEST_SENT flag is 'Y'". What - on any record? On one particular record? On ALL records?
Guessing that you probably mean: if ALL records have the flag='Y' then there is no need to create a file. In that case the logic should be:
sir andrewst, sorry about that..
to make things clearer, these are my steps:
- Input a from_date and to_date.
- check for records that have SIR_REQUEST_SENT = 'N' that fall between the from_date and to_date. If there are records found, then update the SIR_REQUEST_SENT flag of the found records to 'Y' and create the text file.
- If the SIR_REQUEST_SENT flag is 'Y' for all records between the 2 dates, then I should not create the text file at all.
again, thank you for your help! i really appreciate it!|||Originally posted by alram
sir andrewst, sorry about that..
to make things clearer, these are my steps:
- Input a from_date and to_date.
- check for records that have SIR_REQUEST_SENT = 'N' that fall between the from_date and to_date. If there are records found, then update the SIR_REQUEST_SENT flag of the found records to 'Y' and create the text file.
- If the SIR_REQUEST_SENT flag is 'Y' for all records between the 2 dates, then I should not create the text file at all.
again, thank you for your help! i really appreciate it!
OK, in that case the logic I showed should work for you.
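The lazy-open pattern in that reply can be tried end-to-end. Below is a small sketch in Python, with SQLite and an invented miniature SIR table standing in for the Oracle Forms / TEXT_IO code: the output file is created only when the first unsent row is actually fetched, and the flags are flipped only after the file has been written.

```python
import sqlite3, os, tempfile

# Miniature stand-in for the SIR table, to illustrate the "open the
# file lazily" pattern: no file is created when every row is already 'Y'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sir (trans_no INTEGER, company TEXT, sent TEXT)")
conn.executemany("INSERT INTO sir VALUES (?, ?, ?)",
                 [(1, "ACME", "N"), (2, "ACME", "Y"), (3, "ACME", "N")])

outpath = os.path.join(tempfile.mkdtemp(), "request.dat")
outfile = None  # not opened yet
for trans_no, company, sent in conn.execute(
        "SELECT trans_no, company, sent FROM sir WHERE sent = 'N' ORDER BY trans_no"):
    if outfile is None:
        outfile = open(outpath, "w")  # first qualifying row: create the file
    outfile.write(f"{trans_no};{company};{sent};\n")

if outfile is not None:
    outfile.close()
    # only now is it safe to flip the flags that were just exported
    conn.execute("UPDATE sir SET sent = 'Y' WHERE sent = 'N'")
    conn.commit()

print(os.path.exists(outpath))  # True here; False when every row is already 'Y'
```

The same shape carries back to the PL/SQL version: test TEXT_IO.IS_OPEN inside the loop instead of `outfile is None`, and run the UPDATE only after the loop, restricted to `SIR_REQUEST_SENT = 'N'`.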
Friday, February 24, 2012
Best way to update/insert/delete in sqlserver database
I've spent the last few days going around with xml, sqlxml, updategrams,
diffgrams, transactions, sp's, everything, I suppose, but I still don't
have a clear line of thought...
I'm new to the info systems world. I'm currently developing a
Windows app in vb.net running over sqlserver 2000. Till now I was
using simple inserts, updates and deletes in db tables (not much so
far) but now that performance and data consistency issues are being
discussed, I was trying to understand the best way to pass data
to/from the database.
Clearly I want to have the business logic on the server side, being
executed by stored procedures, but in large updates, inserts and
deletes I must pass a lot of data, and that's a problem.
For an update I was doing something like this:
Try
loTransaction = lcnPlada.BeginTransaction()
lcmActualiza.Transaction = loTransaction
For Each loLinha In tdtActualizaHistorico.Rows
lcmActualiza.CommandText = "insert into
stetiquetahistorico (encomenda,produto,tamanho,cor,etiqueta,etiquetaemissor,dataemissaoetiqueta,etiquetaqtd,ficheiroemissao)
" & _
"values (" & CType(loLinha(0), Integer) &
"," & CType(loLinha(1), Integer) & ",'" & loLinha(2).ToString & "'," &
CType(loLinha(3), Integer) & "," & CType(loLinha(4), Integer) & ",'" &
Environment.UserName & "','" & Now & "', " & CType(loLinha(5),
Integer) & "," & tnNumeroEmissao + 1 & ")"
lcmActualiza.ExecuteNonQuery()
Next
loTransaction.Commit()
Return True
end try
But having logic (a for each) inside a transaction is not a good idea,
right?
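Separate from the transaction question, the loop above builds each INSERT by concatenating values into the SQL text, which breaks on embedded apostrophes and invites SQL injection. As a sketch of the alternative, here is the same style of batch insert using parameter placeholders inside a single transaction, with Python's sqlite3 standing in for the VB.NET/SqlClient code (the table and column names are simplified, invented stand-ins for stetiquetahistorico):

```python
import sqlite3

# Batch insert with parameter placeholders instead of string-built SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE historico
                (encomenda INTEGER, produto INTEGER, tamanho TEXT, etiquetaqtd INTEGER)""")

rows = [(100, 1, "M", 5), (100, 2, "L", 3), (101, 1, "S", 7)]
with conn:  # one transaction around the whole batch; rolls back on error
    conn.executemany(
        "INSERT INTO historico (encomenda, produto, tamanho, etiquetaqtd) VALUES (?, ?, ?, ?)",
        rows)

print(conn.execute("SELECT COUNT(*) FROM historico").fetchone()[0])  # 3
```

With SqlClient the equivalent is one SqlCommand with SqlParameters, updating the parameter values and calling ExecuteNonQuery per row, rather than rebuilding CommandText each time.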
I've seen many different examples of adodb objects constructing a
stream with the xml statements that executes a sp and maps the table
columns to perform updates or inserts and then
adodb.command.commandstream = stream
adodb.command.execute
and in server side creates a sp that
sp_xml_preparedocument
...
insert...
sp_xml_removedocument
Sqlserver Books Online has an example of it that works correctly in the
northwind database, but what about using sqlcommand? or
sqlxmlcommand?
Can I use a sqlxmlcommand to execute a stream that was written in
code, instead of using templates?
Are there other ways than sqlxml to perform this kind of task?
Anybody can help me, please?
Thanks in advance
sp_xml_preparedocument/OpenXML is a T-SQL feature. So it is independent of
what client you are using to connect to the database. Steps are:
1. Write your stored proc to do the logic
2. Use the normal way that your client requires to pass arguments and
execute the stored proc.
You only need the command stream object if you are using FOR XML to get XML
back from the server.
HTH
Michael
"?scar Martins" <subdueme@.hotmail.com> wrote in message
news:229f0a23.0410141120.1efff3b8@.posting.google.com...
best way to update row by row?
I am (dangerously) new to SQL but alas, have been appointed as the new
DB-everything (admin, programmer, designer, etc.). I have been reading
up on SQL scripting and am not sure about the most efficient way to
proceed and was hoping someone could point me in the right direction.
If anyone could point me to fairly simple, good examples of looping in
SQL script for the purposes of processing and updating data from one
database (or table(s)) to another I would appreciate it - I've been
slogging around for a while and haven't been able to find anything that
compares the methods and/or provides a clear example for emulation.
I know exactly how I would do it using a recordset in ASP to handle the
logic of looping through each record in a table but have not been able
to find "the correct way" to do this strictly in SQL - it seems people
recommend against cursors. So what do you do?
I have an example in a book that utilizes a WHILE loop based on a
variable that stores RECORDCOUNT.
There seems to be a lot of anti-cursor sentiment - though cursors are
made specifically for looping through each record (fetch)
Perhaps the answer is that it's better to update row by row using
ADO/VB.
Any input, suggestions and/or links to good tutorials are much
appreciated. I've been doing a lot of reading on my own but each new
tutorial seems to contradict the last.
mail at mahalie dot com|||The Best Practice in a relational database is to use set-based processing
instead of procedural 'cursor' processing whenever possible. In other
words, do all work at once instead of 'row by row'. A simple set-based
UPDATE example
--move all employees in department 12345 to department 98765
UPDATE Employees
SET Department = '98765'
WHERE Department = '12345'
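The set-based example above can be run end-to-end; here is a sketch using Python's sqlite3 module, with SQLite standing in for SQL Server and a toy Employees table invented for the demo:

```python
import sqlite3

# Runnable version of the set-based UPDATE above: one statement moves
# every matching row at once, with no loop and no cursor.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (Name TEXT, Department TEXT)")
conn.executemany("INSERT INTO Employees VALUES (?, ?)",
                 [("Ann", "12345"), ("Bob", "12345"), ("Cal", "55555")])

conn.execute("UPDATE Employees SET Department = '98765' WHERE Department = '12345'")

print(conn.execute(
    "SELECT COUNT(*) FROM Employees WHERE Department = '98765'").fetchone()[0])  # 2
```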
> I have an example in a book that utilizes a WHILE loop based on a
> variable that stores RECORDCOUNT.
>
This technique is probably nothing more than a pseudo-cursor. Although
technically not a cursor, performance is often similar to cursor processing.
> There seems to be a lot of anti-cursor sentiments - though cursors are
> made specifically for looping through each record (fetch)
There are times when one must resort to individual row processing. Cursors
(client or server) are appropriate in that case. However, there are many
situations where cursors can and should be avoided.
> Perhaps the answer is that it's better to update row by row using
> ADO/VB.
This is just a client-side cursor. The point being that it is often
possible to achieve the desired result in a single SQL statement on the
server side rather than any sort of server or client looping construct.
Set-based processing allows the database engine to optimize the statement as
a unit.
Hope this helps.
Dan Guzman
SQL Server MVP
"mahalie" <mahalie@.gmail.com> wrote in message
news:1134446568.289271.282450@.g43g2000cwa.googlegroups.com...
|||Thank you for the input!
I should have pointed out that set processing is not really an option.
That is, this example wouldn't apply.
--move all employees in department 12345 to department 98765
UPDATE Employees
SET Department = '98765'
WHERE Department = '12345'
Each row in the table(s) needs to be matched to a specific row in the
other existing table, and updated (conditionally, of course!) using all
sorts of fun look ups to boot!
So it sounds like a cursor is the way to go, especially if other
methods like the row count loop are just pseudo-cursors creating the
same overhead and there's no major benefit to the ASP pages carrying
some of the logic processing burden?
For what it's worth - I'll be able to do this update without lock
issues as it's an internal set of databases and run the script when
everyone's gone!|||Please post DDL + sample data
Look at David Portas's example , how UPDATE...FROM syntax may affect your
result.
CREATE TABLE Countries
(countryname VARCHAR(20) NOT NULL PRIMARY KEY,
capitalcity VARCHAR(20));
CREATE TABLE Cities
(cityname VARCHAR(20) NOT NULL,
countryname VARCHAR(20) NOT NULL
REFERENCES Countries (countryname),
CONSTRAINT PK_Cities
PRIMARY KEY (cityname, countryname));
INSERT INTO Countries (countryname, capitalcity) VALUES ('USA', NULL);
INSERT INTO Countries (countryname, capitalcity) VALUES ('UK', NULL);
INSERT INTO Cities VALUES ('Washington', 'USA');
INSERT INTO Cities VALUES ('London', 'UK');
INSERT INTO Cities VALUES ('Manchester', 'UK');
The MS-syntax makes it all too easy for the developer to slip-up by
writing ambiguous UPDATE...FROM statements where the JOIN criteria is
not unique on the right side of the join.
Try these two identical UPDATE statements with a small change to the
primary key in between.
UPDATE Countries
SET capitalcity = cityname
FROM Countries JOIN Cities /* evil UPDATE... FROM syntax */
ON Countries.countryname = Cities.countryname;
SELECT * FROM Countries;
ALTER TABLE Cities DROP CONSTRAINT PK_Cities;
ALTER TABLE Cities ADD CONSTRAINT PK_Cities PRIMARY KEY (countryname,
cityname);
UPDATE Countries
SET capitalcity = cityname
FROM Countries JOIN Cities /* don't do this! */
ON Countries.countryname = Cities.countryname;
SELECT * FROM Countries;
You get this from the first SELECT statement:
countryname capitalcity
----------- -----------
UK London
USA Washington
and this from the second:
countryname capitalcity
----------- -----------
UK Manchester
USA Washington
(though these results aren't guaranteed - that's part of the problem).
Why did the result change? The physical implementation has affected the
meaning of the code, with serious, potentially disastrous consequences.
How can you even test your code if its results are subject to change
due to the vagaries of storage, indexing and caching?
With the ANSI syntax there is no ambiguity. The UPDATE statement
compels the programmer to design an unambiguous assignment subquery
that returns no more than a single value.
UPDATE Countries
SET capitalcity =
(SELECT MIN(cityname)
FROM Cities
WHERE Countries.countryname = Cities.countryname);
At the very least this forces the developer to reconsider whether the
UPDATE statement makes logical sense. You might want to make an effort
to learn Standard SQL instead of a dialect that can change at any time,
which will not port, cannot be understood by other programmers, etc.
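The ANSI-style UPDATE from this post can be tried directly; SQLite accepts the same correlated-subquery form, and it is unambiguous by construction because the subquery must reduce to a single value per row. A runnable version of the Countries/Cities example:

```python
import sqlite3

# The correlated-subquery UPDATE from the post, run against the same
# Countries/Cities sample data (SQLite standing in for SQL Server).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Countries (countryname TEXT PRIMARY KEY, capitalcity TEXT);
CREATE TABLE Cities (cityname TEXT, countryname TEXT REFERENCES Countries);
INSERT INTO Countries VALUES ('USA', NULL), ('UK', NULL);
INSERT INTO Cities VALUES ('Washington','USA'), ('London','UK'), ('Manchester','UK');
""")
conn.execute("""
UPDATE Countries
SET capitalcity = (SELECT MIN(cityname)
                   FROM Cities
                   WHERE Countries.countryname = Cities.countryname)
""")
print(dict(conn.execute("SELECT countryname, capitalcity FROM Countries")))
# {'USA': 'Washington', 'UK': 'London'}
```

MIN() picks 'London' for the UK here only because it sorts first, not because it is the capital; the point is that the result is deterministic, and writing the subquery forces you to decide what a single correct value per row even means.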
|||
Nothing you have said implies that a cursor is the best or only
solution. There's no fundamental reason why you can't write set based
code that will "match to a specific row in the other table and update
conditionally using look ups". There are many reasons to choose a set
based solution and not all of them are to do with performance and
scalability. Simplicity, ease of maintenance and portability are
usually advantages of set based code as well.
The best way to get help with your question is to post DDL, sample data
and expected results. See: http://www.aspfaq.com/etiquette.asp?id=5006
David Portas
SQL Server MVP
--|||Mahalie,
Cursors are often discouraged mainly due to the conflict with rolling back
transactions. Just as a safety precaution, use aliases instead of the
tablename.fieldname approach. This will allow you to easily replace UPDATE
with SELECT to check the result set before you commit the cursor.
Just my twist on it,
Adam Turner
|||> I should have pointed out that set processing is not really an option.
> That is, this example wouldn't apply.
> --move all employees in department 12345 to department 98765
> UPDATE Employees
> SET Department = '98765'
> WHERE Department = '12345'
> Each row in the table(s) needs to be matched to a specific row in the
> other existing table, and updated (conditionally, of course!) using all
> sorts of fun look ups to boot!
This can easily be done in a single UPDATE statement as Uri pointed out in
his examples. I wouldn't go so far as to say that the proprietary
UPDATE...FROM syntax is 'evil' since I find it especially handy when
multiple columns are to be updated. Note that the gotchas with poorly
formulated queries that Uri pointed out apply to cursors as well.
As David suggested, please post your DDL and script so that we can perhaps
demonstrate a set-based approach for your situation.
Hope this helps.
Dan Guzman
SQL Server MVP
|||This is a small piece of the overall project. I thought I was asking a
more theoretical question - didn't want to bother you all with the
grimy details. But you asked and I'll take any help I can get, so here
we go with the DDL. Sorry if this is more info than needed - I thought
it would be easiest for anyone actually wanting to look at this to be
able to cut and paste the whole shebang at once...so below includes
create db, tables and sample data inserts for both databases.
Note I cannot make ANY structural changes to DB_2 - it's the backend of
a large application, I can only write to it. I can, however, make any
modifications to DB_1 necessary that would make porting the data from
it to DB_2 easier.
What I'm trying to do in pseudo-pseudocode:
FOR EACH EMPLOYEE IN DB_1..Employees
Check to see if employee exists in DB_2 using email address to match
(empID in DB_1 is just an ID field and doesn't correspond to DB_2) -
If they don't exist, return an error message, move to next employee
If the employee exists, get employee number (DB_2..EM.Employee)
Look up all degrees for that employee matching DB_1..Employees.empID =
DB_1..Education.empID
For each employee, add degree
lookup DB_2 degree code by matching DB_1..Education.eduDegree to
DB_2..CFGEMDegree.Description
(if the Description does not exist, add it - the code doesn't auto
increment so create number by getting last CFGEMDegree.Code, cast as
int, add 1)
add degree info DB_1..Education to DB_2..Degrees (using
DB_2..CFGEMDegree.Code value instead of DB_1..Education.eduDegree, and
DB_2..EM.Employee instead of DB_1..Education.empID).
As I mentioned, I can figure out how to do this using recordsets and
ASP, and even scripting in SQL using cursors (I think, haven't tried it
yet but it looks straight forward). But if it's both better and
possible to use set based code, please enlighten me because I don't
understand how!
I'm going to have to do similar processing for other employee related
tables - lists of project experience and licenses, but once I'm on the
path with this it should be transferable.
Many, MANY, thanks in advance!
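To show the set-based shape the repliers are describing, here is a hedged sketch of the degree-copy step from the pseudocode above, using Python's sqlite3 and heavily simplified stand-in tables (the real DDL below has more columns; the table names `src_emp`, `src_edu`, `dst_em`, `dst_codes`, `dst_degree` are invented for the demo): employees are matched on email via a join, and the degree rows are copied with one INSERT ... SELECT that translates the description through the code table.

```python
import sqlite3

# Set-based sketch: join source degrees -> source employee -> destination
# employee (on email) -> degree-code lookup (on description), in one statement.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src_emp (empID TEXT PRIMARY KEY, empEmail TEXT);
CREATE TABLE src_edu (empID TEXT, eduDegree TEXT, eduYear INTEGER);
CREATE TABLE dst_em  (Employee TEXT PRIMARY KEY, EMail TEXT);
CREATE TABLE dst_codes (Code TEXT PRIMARY KEY, Description TEXT);
CREATE TABLE dst_degree (Employee TEXT, Degree TEXT, YearEarned INTEGER);

INSERT INTO src_emp VALUES ('DorisD', 'dorisd@x.com');
INSERT INTO src_edu VALUES ('DorisD', 'BS', 1985), ('DorisD', 'Masters', 1999);
INSERT INTO dst_em  VALUES ('00101', 'dorisd@x.com');
INSERT INTO dst_codes VALUES ('1', 'BS'), ('2', 'Masters');
""")

conn.execute("""
INSERT INTO dst_degree (Employee, Degree, YearEarned)
SELECT em.Employee, c.Code, ed.eduYear
FROM src_edu ed
JOIN src_emp e  ON e.empID = ed.empID
JOIN dst_em  em ON LOWER(em.EMail) = LOWER(e.empEmail)
JOIN dst_codes c ON c.Description = ed.eduDegree
""")
print(conn.execute(
    "SELECT Employee, Degree, YearEarned FROM dst_degree ORDER BY YearEarned").fetchall())
# [('00101', '1', 1985), ('00101', '2', 1999)]
```

The other steps decompose the same way: unmatched employees are a SELECT with a LEFT JOIN WHERE Employee IS NULL (the error report), and missing degree codes are one INSERT of the distinct unmatched descriptions before the copy.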
/*create DB_1*/
CREATE DATABASE [DB_1] ON (NAME = N'DB_1_dat', FILENAME = N'C:\Program
Files\Microsoft SQL Server\MSSQL\data\DB_1.mdf' , SIZE = 2, FILEGROWTH
= 10%) LOG ON (NAME = N'DB_1_log', FILENAME = N'C:\Program
Files\Microsoft SQL Server\MSSQL\data\DB_1.ldf' , SIZE = 2, FILEGROWTH
= 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
GO
exec sp_dboption N'DB_1', N'autoclose', N'false'
GO
exec sp_dboption N'DB_1', N'bulkcopy', N'true'
GO
exec sp_dboption N'DB_1', N'trunc. log', N'true'
GO
exec sp_dboption N'DB_1', N'torn page detection', N'true'
GO
exec sp_dboption N'DB_1', N'read only', N'false'
GO
exec sp_dboption N'DB_1', N'dbo use', N'false'
GO
exec sp_dboption N'DB_1', N'single', N'false'
GO
exec sp_dboption N'DB_1', N'autoshrink', N'false'
GO
exec sp_dboption N'DB_1', N'ANSI null default', N'false'
GO
exec sp_dboption N'DB_1', N'recursive triggers', N'false'
GO
exec sp_dboption N'DB_1', N'ANSI nulls', N'false'
GO
exec sp_dboption N'DB_1', N'concat null yields null', N'false'
GO
exec sp_dboption N'DB_1', N'cursor close on commit', N'false'
GO
exec sp_dboption N'DB_1', N'default to local cursor', N'false'
GO
exec sp_dboption N'DB_1', N'quoted identifier', N'false'
GO
exec sp_dboption N'DB_1', N'ANSI warnings', N'false'
GO
exec sp_dboption N'DB_1', N'auto create statistics', N'true'
GO
exec sp_dboption N'DB_1', N'auto update statistics', N'true'
GO
USE DB_1
GO
/* tables in DB_1 (origin of data) */
CREATE TABLE [Employees] (
[empID] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[empEmail] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
CONSTRAINT [PK_Employees] PRIMARY KEY CLUSTERED
(
[empID]
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [Education] (
[eduID] [int] IDENTITY (1, 1) NOT NULL ,
[empID] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[eduDegree] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[eduSpecialty] [varchar] (60) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL ,
[eduInstitution] [varchar] (60) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL ,
[eduYear] [smallint] NOT NULL CONSTRAINT [DF_Education_eduYear]
DEFAULT (0),
CONSTRAINT [FK_Education_Employees] FOREIGN KEY
(
[empID]
) REFERENCES [Employees] (
[empID]
) ON DELETE CASCADE ON UPDATE CASCADE
) ON [PRIMARY]
GO
/* Create Database DB_2 - destination of data */
CREATE DATABASE [DB_2] ON (NAME = N'ART_Data', FILENAME = N'C:\Program
Files\Microsoft SQL Server\MSSQL\Data\DB_2.mdf' , SIZE = 100,
FILEGROWTH = 10%) LOG ON (NAME = N'ART_Log', FILENAME = N'C:\Program
Files\Microsoft SQL Server\MSSQL\Data\DB_2_log.ldf' , SIZE = 9,
FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
GO
exec sp_dboption N'DB_2', N'autoclose', N'false'
GO
exec sp_dboption N'DB_2', N'bulkcopy', N'false'
GO
exec sp_dboption N'DB_2', N'trunc. log', N'true'
GO
exec sp_dboption N'DB_2', N'torn page detection', N'true'
GO
exec sp_dboption N'DB_2', N'read only', N'false'
GO
exec sp_dboption N'DB_2', N'dbo use', N'false'
GO
exec sp_dboption N'DB_2', N'single', N'false'
GO
exec sp_dboption N'DB_2', N'autoshrink', N'false'
GO
exec sp_dboption N'DB_2', N'ANSI null default', N'false'
GO
exec sp_dboption N'DB_2', N'recursive triggers', N'false'
GO
exec sp_dboption N'DB_2', N'ANSI nulls', N'false'
GO
exec sp_dboption N'DB_2', N'concat null yields null', N'false'
GO
exec sp_dboption N'DB_2', N'cursor close on commit', N'false'
GO
exec sp_dboption N'DB_2', N'default to local cursor', N'false'
GO
exec sp_dboption N'DB_2', N'quoted identifier', N'false'
GO
exec sp_dboption N'DB_2', N'ANSI warnings', N'false'
GO
exec sp_dboption N'DB_2', N'auto create statistics', N'true'
GO
exec sp_dboption N'DB_2', N'auto update statistics', N'true'
GO
/*corresponding tables in DB_2 (destination of data) */
USE DB_2
GO
CREATE TABLE [EM] (
[Employee] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[LastName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[FirstName] [varchar] (25) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[MiddleName] [varchar] (30) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
,
[EMail] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
CONSTRAINT [EMPK] PRIMARY KEY NONCLUSTERED
(
[Employee]
) WITH FILLFACTOR = 90 ON [PRIMARY]
)
GO
/*Degrees Table*/
CREATE TABLE [EMDegree] (
[RecordID] [varchar] (32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[Employee] [varchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[Seq] [smallint] NOT NULL CONSTRAINT [DF__EMDegree__Seq__04659998]
DEFAULT (0),
[Degree] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Specialty] [varchar] (60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[Institution] [varchar] (60) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
,
[YearEarned] [smallint] NOT NULL CONSTRAINT
[DF__EMDegree__YearEa__0559BDD1] DEFAULT (0),
CONSTRAINT [EMDegreePK] PRIMARY KEY NONCLUSTERED
(
[RecordID]
) WITH FILLFACTOR = 90 ON [PRIMARY] ,
CONSTRAINT [FK_EMDegree_Employee_EM] FOREIGN KEY
(
[Employee]
) REFERENCES [EM] (
[Employee]
)
) ON [PRIMARY]
GO
/*Degree Codes Table */
CREATE TABLE [CFGEMDegree] (
[Code] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[Description] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
CONSTRAINT [CFGEMDegreePK] PRIMARY KEY NONCLUSTERED
(
[Code]
) WITH FILLFACTOR = 90 ON [PRIMARY]
) ON [PRIMARY]
GO
USE DB_1
GO
/*sample data, for DB_1 */
/*employees*/
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('CathyG','cathyg@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('CharlesM','charlesm@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('DennisC','dennisc@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('DorisD','DorisD@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('jenniferp','jenniferp@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('MahalieP','mahaliep@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('mercedesd','mercedesd@.domain.com')
INSERT INTO [Employees] ([empID],[empEmail]) VALUES ('TeresaN','teresan@.domain.com')
/*degrees*/
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('CharlesM','MFA','Birds','Yale Design School',99)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('mercedesd','BS','Gymnastics','Yale',2000)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('DorisD','BS','comp sci','uw',1985)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('DorisD','Masters','architecture','uw',1999)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('jenniferp','AA','arch','yale',2004)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('TeresaN','BA','Urban Planning','U of W',1975)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('TeresaN','BA','Architecture','U of W',1985)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('MahalieP','BA','Jargon','U of W',1998)
INSERT INTO [Education] ([empID],[eduDegree],[eduSpecialty],[eduInstitution],[eduYear]) VALUES ('MahalieP','Masters','Bologne','MIT',2002)
USE DB_2
GO
/*sample data, for DB_2 */
/*employees*/
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('01416','Goldmeir','Cathy','M','CathyG@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00001','Maple','Charles','R','charlesm@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00002','Carter','Dennis','Richard','dennisc@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00101','Day','Doris','E','DorisD@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00201','Peterson','Jennifer','May','jenniferp@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00005','Porter','Mahalie','M','MahalieP@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00203','Davidson','Mercedes','Lynn','mercedesd@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00007','Spencer','Thomas','John','ThomasS@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00302','Norton','Teresa','Ann','TeresaN@domain.com')
INSERT INTO [EM] ([Employee],[LastName],[FirstName],[MiddleName],[EMail]) VALUES ('00009','Smithfield','Herbert','James','HerbertS@domain.com')
/*degrees*/
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('002011','01416','01','Architecture','University of Virginia',1985)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('002012','00001','01','Interior Design','University of North Carolina',1996)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('000021','00002','01','Art History','Univeristy of San Diego',1962)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('000022','00002','02','Architecture','Pennsylvannia State University',1970)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('000031','00101','01','Design','George Mason University',1992)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('ADMIN1024963974101','00101','03','Architecture','Virginia State University',1995)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('SANDERSON1012487091374','00101','02','Civil Engineering','University of Florida',1982)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('SANDERSON1012487091424','00201','01','Structural Engineering','University of Florida',1985)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('SANDERSON1012487091474','00302','01','Structural Engineering','College of William and Mary',1988)
INSERT INTO [EMDegree] ([RecordID],[Employee],[Degree],[Specialty],[Institution],[YearEarned]) VALUES ('SANDERSON1012854189480','00009','01','Sociology','Old Dominion University',1987)
/*degrees code table*/
INSERT INTO [CFGEMDegree] ([Code],[Description]) VALUES ('01','Bachelor of Arts')
INSERT INTO [CFGEMDegree] ([Code],[Description]) VALUES ('02','Bachelor of Science')
INSERT INTO [CFGEMDegree] ([Code],[Description]) VALUES ('03','Bachelor of Architecture')
INSERT INTO [CFGEMDegree] ([Code],[Description]) VALUES ('04','Masters of Arts')
INSERT INTO [CFGEMDegree] ([Code],[Description]) VALUES ('05','Masters of Science')|||mahalie wrote:
> This is a small piece of the overall project. I thought I was asking a
> more theoretical question - didn't want to bother you all with the
> grimy details. But you asked and I'll take any help I can get, so here
> we go with the DDL. Sorry if this is more info than needed - I thought
> it would be easiest for anyone actually wanting to look at this to be
> able to cut and paste the whole shebang at once...so below includes
> create db, tables and sample data inserts for both databases.
> Note I cannot make ANY structural changes to DB_2 - it's the backend of
> a large application, I can only write to it. I can, however, make any
> modifications to DB_1 necessary that would make porting the data from
> it to DB_2 easier.
> What I'm trying to do in pseudo-pseudocode:
> FOR EACH EMPLOYEE IN DB_1..Employees
> Check to see if employee exists in DB_2 using email address to match
> (empID in DB_1 is just an ID field and doesn't correspond to DB_2) -
> If they don't exist, return an error message, move to next employee
> If the employee exists, get employee number (DB_2..EM.Employee)
> Look up all degrees for that employee matching DB_1..Employees.empID =
> DB_1..Education.empID
> For each employee, add degree
> lookup DB_2 degree code by matching DB_1..Education.eduDegree to
> DB_2..CFGEMDegree.Description
> (if the Description does not exist, add it - the code doesn't auto
> increment so create the number by getting the last CFGEMDegree.Code,
> casting it as int, and adding 1)
> add degree info from DB_1..Education to DB_2..EMDegree (using the
> DB_2..CFGEMDegree.Code value instead of DB_1..Education.eduDegree, and
> DB_2..EM.Employee instead of DB_1..Education.empID).
> As I mentioned, I can figure out how to do this using recordsets and
> ASP, and even scripting in SQL using cursors (I think, haven't tried it
> yet but it looks straightforward). But if it's both better and
> possible to use set-based code, please enlighten me because I don't
> understand how!
> I'm going to have to do similar processing for other employee-related
> tables - lists of project experience and licenses, but once I'm on the
> path with this it should be transferable.
>
> Many, MANY, thanks in advance!
>
This is an excellent example of how important it is to "think in SQL".
You've conceptualized the problem in terms of procedural steps and that
mindset unfortunately doesn't help you to write efficient code in SQL's
declarative language. That's why you thought of using a cursor. My
advice is always that only experts should write cursors. Only when you
master "real" SQL do you gain the knowledge and experience to judge
when a cursor makes sense. Until then, it's probably safest to assume
that a cursor is the wrong solution. OK, that's the sermon over with...
:-).
You have at least two steps it seems. 1) Insert the missing degree
codes in DB2. 2) Return a result set containing employee id, email and
the DB2 degree code. We know there are two steps because INSERTs are
always separate operations from SELECTs.
The INSERT is the only tricky part. Given that your degree names in DB1
and DB2 aren't very comparable, I'm not clear how you want to handle
them. Possibly you'll need to review the data by eye and decide exactly
which codes are new and which already exist. Given that you want to
automate it, I'll leave the precise logic to you, but here's a partial
example to prove that it's possible. Notice that the temp table version
doesn't use a cursor - it just helps us generate the code. (In SQL 2005
there's a much neater solution without a temp table, but I'm assuming
2000 here.)
DECLARE @t TABLE (description VARCHAR(50) NOT NULL PRIMARY KEY) ;
/* Pull the missing degrees */
INSERT INTO @t (description)
SELECT DISTINCT D1.edudegree
FROM DB_1.dbo.education AS D1
LEFT JOIN DB_2.dbo.cfgemdegree AS D2
ON D2.description =
CASE D1.edudegree
WHEN 'BS' THEN 'Bachelor of Science'
WHEN 'BA' THEN 'Bachelor of Arts'
/* ... etc */
ELSE D1.edudegree
END
WHERE D2.code IS NULL ;
/* Generate the code and insert */
INSERT INTO DB_2.dbo.cfgemdegree (description, code)
SELECT T1.description,
RIGHT('0'+CAST(COUNT(*)+
(SELECT CAST(MAX(code) AS INT)
FROM DB_2.dbo.cfgemdegree) AS VARCHAR(2)),2) AS code
FROM @t AS T1, @t AS T2
WHERE T1.description >= T2.description
GROUP BY T1.description ;
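(As an aside, the neater SQL 2005 alternative mentioned above would use a ranking function; this is only a sketch, assuming the same table variable, with ROW_NUMBER() replacing the triangular self-join.)

```sql
/* Hypothetical SQL 2005 rewrite of the code-generating INSERT.
   ROW_NUMBER() replaces the COUNT(*) over the triangular self-join;
   table and column names match the example above. */
INSERT INTO DB_2.dbo.cfgemdegree (description, code)
SELECT T1.description,
       RIGHT('0' + CAST(ROW_NUMBER() OVER (ORDER BY T1.description)
                        + (SELECT CAST(MAX(code) AS INT)
                           FROM DB_2.dbo.cfgemdegree) AS VARCHAR(2)), 2) AS code
FROM @t AS T1 ;
```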
Once you've sorted out the codes, the main part of the query is easy.
I'm assuming that you have matching descriptions for the degrees in
each database at this stage. If not, then you'll want to repeat
something like the CASE expression in the above INSERT. Alternatively
it might be easier to create an intermediate table to perform the
translation degree code -> description.
SELECT E1.empid, E1.empemail,
CASE WHEN E2.employee IS NULL
THEN 'Error!'
ELSE 'OK'
END AS emp_exists,
E2.employee,
D2.code
FROM DB_1.dbo.employees AS E1
LEFT JOIN DB_2.dbo.em AS E2
ON E1.empemail = E2.email
LEFT JOIN DB_1.dbo.Education AS D1
ON E1.empid = D1.empid
LEFT JOIN DB_2.dbo.cfgemdegree AS D2
ON D2.description = D1.edudegree ;
Hope this helps.
David Portas
SQL Server MVP
--|||David, thanks - sermon appreciated and your examples are exactly what I
was trying to get at without knowing what I was looking for. My
scripting background must be pretty obvious!
I'd just finished writing a temp table with new codes and distinct
degree types to UNION with the cfgemdegree table when I read your
response, so I do think I'm not beyond hope.
All the aliases and joins have my head spinning but I think I
understand!
THANKS!!
Would it be worth creating a view that includes the
DB_2.dbo.cfgemdegree description in the DB_2.dbo.emdegree table to make
the joins a little more comprehensible or would that be more of a
maintenance issue? E.g. what's the best practice in a case like this?
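For what it's worth, such a lookup view is only a few lines; a sketch (the view name vw_EMDegree is hypothetical, the tables come from the DDL above):

```sql
/* Hypothetical view translating the degree code to its description,
   so joins against EMDegree can read the description directly. */
CREATE VIEW vw_EMDegree
AS
SELECT D.RecordID,
       D.Employee,
       D.Degree,
       C.Description AS DegreeDescription,
       D.Specialty,
       D.Institution,
       D.YearEarned
FROM dbo.EMDegree AS D
JOIN dbo.CFGEMDegree AS C
  ON C.Code = D.Degree ;
GO
```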
Best Way To Update Existing Rows
Updating existing rows in a SQL Server 2005 destination table
-- I have a CSV file and every row needs to perform an update on a production table
The most straightforward is to use an OLEDB Command Data Flow Transformation Object that sends one SQL statement per row coming in. How do I configure this object?
That might be okay if you've only got a small number of updates; however if you've got a large set, I prefer landing this set to a temporary table and then doing set-based updates in a following Execute SQL task. Can you please tell me how I can set up the Execute SQL Task to do set-based updates in the Control Flow?
thanks in advance
Dave
Mr Pro Tools wrote:
Updating existing rows in a SQL Server 2005 destination table
-- I have a CSV file and every row needs to perform an update on a production table
The most straightforward is to use an OLEDB Command Data Flow Transformation Object that sends one SQL statement per row coming in. How do I configure this object?
I assume you can write SQL, right?
All explained here: http://msdn2.microsoft.com/en-us/library/ms141138.aspx
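As a rough sketch (table and column names are placeholders): the statement in the OLE DB Command's SqlCommand property is parameterised with one ? per input column, which you then bind on the transformation's column-mapping tab:

```sql
/* Placeholder example for the OLE DB Command's SqlCommand property.
   Each ? is bound to an input column of the data flow;
   dbo.ProductionTable, SomeColumn and KeyColumn are made-up names. */
UPDATE dbo.ProductionTable
SET    SomeColumn = ?
WHERE  KeyColumn  = ? ;
```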
Mr Pro Tools wrote:
That might be okay if you've only got a small number of updates; however if you've got a large set, I prefer landing this set to a temporary table and then doing set-based updates in a following Execute SQL task. Can you please tell me how I can set up the Execute SQL Task to do set-based updates in the Control Flow?
thanks in advance
Dave
Again, I assume you can write SQL. Just paste your SQL into the 'SQL Statement' property of the Execute SQL Task.
-Jamie
|||
Thanks Jamie
First method works -- Profiler is telling me it's taking about a second per update,
but that is too slow, as I have 900,000 records to update across two tables each having 7 million records, which will take:
889,641 records in the file supplied / 3600 per hour (1 per second)
= 250 hours
= 10 days
The Execute SQL task looks like another option, and sure, I can paste my sproc into the SQL Statement window: EXEC dbo.MySproc ?,?
How do I map ?,? to each row in the Execute SQL Task -- in other words, what are the main properties I have to set for row-by-row processing, and is it much faster than the OLE DB Command Transformation method -- which is nice and simple and slow?
thanks in advance
Dave
|||Mr Pro Tools wrote:
Thanks Jamie
First method works -- Profiler is telling me it's taking about a second per update,
but that is too slow, as I have 900,000 records to update across two tables each having 7 million records, which will take:
889,641 records in the file supplied / 3600 per hour (1 per second)
= 250 hours
= 10 days
The Execute SQL task looks like another option, and sure, I can paste my sproc into the SQL Statement window: EXEC dbo.MySproc ?,?
How do I map ?,? to each row in the Execute SQL Task -- in other words, what are the main properties I have to set for row-by-row processing, and is it much faster than the OLE DB Command Transformation method -- which is nice and simple and slow?
thanks in advance
Dave
I'm confused (seems to happen a lot these days). Why are you trying to map any columns at all using '?' ?
All you have to do is load the data into a temporary staging area (using OLE DB Destination) and then issue a SQL statement along the lines of:
UPDATE t
SET t.<col> = te.<col> ,......
FROM target t INNER JOIN temp te ON <some join columns>
Also, if the OLE DB Command really is taking one second per row then your bottleneck is not the OLE DB Command - it's something else.
-Jamie
|||Mr Pro Tools wrote:
How do I map ?,? to each row in the Execute SQL Task--in other words what are the main properties I have to set for row by row processing and is it much faster than the OLE DB Command Transformation method --which is nice and simple and slow
Dave,
The OLEDB command is slow because it performs the same command on a row-by-row basis. The intention of using an Execute SQL task is to be able to execute a SQL command over a set of rows at once. What Jamie is suggesting is to push the rows in your CSV file to a staging table and then, in the control flow, to issue an UPDATE statement against your target table using the staging table rows. That would be a one-time update.
|||UPDATE t
SET t.<col> = te.<col> ,......
FROM target t INNER JOIN temp te ON <some join columns>
Yeah, that is simple enough. What I'm getting at is: can the Execute SQL Task do row-by-row processing like the DTS DDQ task, and if so, how do I set the properties?
In my case the updates will only occur if one of the columns in the temp staging table has a particular value.
If it has -- we do the update, otherwise we do a delete.
That's why I had ? as parameters for a stored proc.
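For reference, that conditional logic can still be done in two set-based statements once the file has been landed in a staging table; a sketch, where staging, target, KeyCol, FlagCol and Col1 are all placeholder names:

```sql
/* Set-based sketch of "update if the flag matches, otherwise delete".
   staging, target, KeyCol, FlagCol and Col1 are placeholders. */
UPDATE t
SET    t.Col1 = s.Col1
FROM   target AS t
INNER JOIN staging AS s ON s.KeyCol = t.KeyCol
WHERE  s.FlagCol = 'U' ;

DELETE t
FROM   target AS t
INNER JOIN staging AS s ON s.KeyCol = t.KeyCol
WHERE  s.FlagCol <> 'U' ;
```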
|||No, the Execute SQL Task does not do row-by-row processing. As already established, that's what the OLE DB Command does.
-Jamie
Best way to update different servers
We are looking at combining some systems that would allow our Sql Server to
update the other system's Sql Server when certain events happen -
names change for instance.
Looking at using Web Services as one way of handling the Data movement
between the 2 systems.
But if they are both Sql Servers, is there some way, through the use of
Triggers for instance, that we could call a stored procedure on another
server (not at our location) to do the updates. Web Services are fine, but
we still have the push/pull/polling problem of knowing when to update System
B when System A's data changes and vice versa.
Is it best to do this outside of the Sql world or would it be reasonable to
do this from Sql to Sql which I assume would be more efficient?
Thanks,
Tom|||There is nothing to stop you from updating another SQL Server as a result of
an event on your SQL Server. Triggers seem to be the most obvious.
Ideally, you would have a stored proc on the linked server and then execute
something like:
EXEC MyLinkedServer.MyDB.dbo.MyProc @ID, 'The New Name'
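A minimal sketch of such a trigger, reusing the proc from the example above (the trigger, table, and column names are hypothetical, and for simplicity it assumes single-row updates):

```sql
/* Hypothetical trigger pushing a name change to the linked server.
   dbo.SourceTable, ID and Name are placeholder names. */
CREATE TRIGGER trg_NameSync ON dbo.SourceTable
AFTER UPDATE
AS
IF UPDATE(Name)
BEGIN
    DECLARE @ID int, @NewName varchar(100) ;
    -- assumes single-row updates; a production version would
    -- iterate or batch over the inserted pseudo-table
    SELECT @ID = ID, @NewName = Name FROM inserted ;
    EXEC MyLinkedServer.MyDB.dbo.MyProc @ID, @NewName ;
END
```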
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:O5kXts3oFHA.1148@.TK2MSFTNGP12.phx.gbl...
We are looking at combining some systems that would allow our Sql Server
update the other systems Sql Server system when certain events happens -
names change for instance.
Looking at using Web Services as one way of handling the Data movement
between the 2 systems.
But if they are both Sql Servers, is there some way, through the use of
Triggers for instance, that we could call a stored procedure on another
server (not at our location) to do the updates. Web Services are fine, but
we still have the push/pull/polling problem of knowing when to update System
B when System A's data changes and vice versa.
Is it best to do this outside of the Sql world or would it be reasonable to
do this from Sql to Sql which I assume would be more efficient?
Thanks,
Tom|||"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OtY0lv3oFHA.1444@.TK2MSFTNGP10.phx.gbl...
> There is nothing to stop you from updating another SQL Server as a result
> of
> an event on your SQL Server. Triggers seem to be the most obvious.
> Ideally, you would have a stored proc on the linked server and then
> execute
> something like:
> EXEC MyLinkedServer.MyDB.dbo.MyProc @ID, 'The New Name'
Would you have to set up each server as a linked server on each others
machines?
Would this be the best way even if we are doing this over the Internet?
Thanks,
Tom
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> .
>|||Would you have to set up each server as a linked server on each others
machines?
Would this be the best way even if we are doing this over the Internet?
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.