Showing posts with label working. Show all posts

Tuesday, March 27, 2012

Bind Variable in CURSOR

SQL Server 2000 SP4 with AWE hotfix. Windows 2003 SP1.

I have a stored procedure which is not working the way I think it
should.

I have a CURSOR which has a variable in the WHERE clause:

DECLARE get_tabs CURSOR local fast_forward FOR
SELECT distinct tablename, id, shcontig1dt, shcontig2dt
FROM db_ind
WHERE dbname = @dbname
ORDER BY tablename

It won't return anything, even when I verify that @dbname has a value,
and if I run the query in Query Analyzer with the value, it returns
rows:

SELECT distinct tablename, id, shcontig1dt, shcontig2dt
FROM db_ind
WHERE dbname = 'Archive'
ORDER BY tablename

DB_Rpt_Fragmentation11575791622006-03-29 09:52:11.777  2006-03-29 09:52:11.823
DtsAdtStdArchive_DataSourceType5175768822006-03-29 09:52:11.870  2006-03-29 09:52:11.887
DtsADTstdArchiveNotUsed3575763122006-03-29 09:52:11.887  2006-03-29 09:52:12.103

I've taken out most of the guts for simplicity, but here's what I've
got:

--CREATE TABLE dbo.db_ind
--(
-- db_ind_tk int IDENTITY,
-- id int NULL,
-- tablename sysname NOT NULL,
-- indid int NULL,
-- indexname sysname NOT NULL,
-- shcontig1dt datetime NULL,
-- defragdt datetime NULL,
-- shcontig2dt datetime NULL,
-- reindexdt datetime NULL
--)

ALTER PROCEDURE IDR
(@hours int
)
AS

--SET NOCOUNT ON
--SET ANSI_WARNINGS OFF

DECLARE @tabname varchar(100),
@indname varchar(100),
@dbname varchar(50),
@vsql varchar(1000),
@v_hours varchar(4),
@shcontig1dt datetime,
@shcontig2dt datetime,
@defragdt datetime,
@reindexdt datetime,
@id int,
@indid int,
@rundbcursor int,
@runtabcursor int,
@runindcursor int

DECLARE get_dbs CURSOR local fast_forward FOR
SELECT dbname
FROM db_jobs
WHERE idrdate < getdate() - 4
or idrdate is null
ORDER BY dbname

DECLARE get_tabs CURSOR local fast_forward FOR
SELECT distinct tablename, id, shcontig1dt, shcontig2dt
FROM db_ind
WHERE dbname = @dbname
ORDER BY tablename

DECLARE get_inds CURSOR local fast_forward FOR
SELECT indid, indexname, defragdt, reindexdt
FROM db_ind
WHERE dbname = @dbname
AND tablename = @tabname
ORDER BY indexname

OPEN get_dbs
FETCH NEXT FROM get_dbs
INTO @dbname

IF @@FETCH_STATUS = 0
SELECT @rundbcursor = 1
ELSE
SELECT @rundbcursor = 0

SELECT @v_hours = CONVERT(varchar, @hours)

--================================================== ================================================== =====
--================================================== ================================================== =====
--================================================== ================================================== =====

WHILE @rundbcursor = 1
BEGIN -- db while

PRINT '============================='
PRINT @dbname
PRINT '============================='

--================================================== ================================================== =====
--================================================== ================================================== =====

OPEN get_tabs

FETCH NEXT FROM get_tabs
INTO @tabname, @id, @shcontig1dt, @shcontig2dt

IF @@FETCH_STATUS = 0
BEGIN
PRINT 'table: ' + @tabname
SELECT @runtabcursor = 1
END
ELSE
BEGIN
PRINT 'not getting any tables! ' -- <<<<< THIS IS WHERE IT HITS
SELECT @runtabcursor = 0
END

WHILE @runtabcursor = 1
BEGIN
PRINT @dbname
PRINT @tabname

--================================================== ================================================== =====

OPEN get_inds
FETCH NEXT FROM get_inds
INTO @indid, @indname, @defragdt, @reindexdt

IF @@FETCH_STATUS = 0
SELECT @runindcursor = 1
ELSE
SELECT @runindcursor = 0

WHILE @runindcursor = 1
BEGIN
PRINT 'Index: ' + @dbname + '.' + @tabname + '.' + @indname

FETCH NEXT FROM get_inds
INTO @indid, @indname, @defragdt, @reindexdt

IF @@FETCH_STATUS = 0
SELECT @runindcursor = 1
ELSE
SELECT @runindcursor = 0

END -- 1st loop through indexes
CLOSE get_inds

--================================================== ================================================== =====

--==========
PRINT 'db.tab: ' + @dbname + '.' + @tabname

--==========

--================================================== ================================================== =====

OPEN get_inds
FETCH NEXT FROM get_inds
INTO @indid, @indname, @defragdt, @reindexdt

IF @@FETCH_STATUS = 0
SELECT @runindcursor = 1
ELSE
SELECT @runindcursor = 0

WHILE @runindcursor = 1
BEGIN

PRINT 'dbname: ' + @dbname
PRINT 'tabname: ' + @tabname
PRINT 'indname: ' + @indname

FETCH NEXT FROM get_inds
INTO @indid, @indname, @defragdt, @reindexdt

IF @@FETCH_STATUS = 0
SELECT @runindcursor = 1
ELSE
SELECT @runindcursor = 0

END -- 2nd loop through indexes
CLOSE get_inds

--================================================== ================================================== =====

FETCH NEXT FROM get_tabs
INTO @tabname, @id, @shcontig1dt, @shcontig2dt

IF @@FETCH_STATUS = 0
SELECT @runtabcursor = 1
ELSE
SELECT @runtabcursor = 0

END -- loop through tables
CLOSE get_tabs

--================================================== ================================================== =====
--================================================== ================================================== =====

PRINT 'Index Maintenance complete. Job report in [DB_Rpt_Fragmentation]'
PRINT ''

FETCH NEXT FROM get_dbs
INTO @dbname

IF @@FETCH_STATUS = 0
SELECT @rundbcursor = 1
ELSE
SELECT @rundbcursor = 0

END -- loop through databases
CLOSE get_dbs
DEALLOCATE get_dbs
DEALLOCATE get_tabs
DEALLOCATE get_inds

--================================================== ================================================== =====
--================================================== ================================================== =====
--================================================== ================================================== =====

GO

And this is what I'm getting:

=============================
Archive
=============================

(0 row(s) affected)

not getting any tables!
Index Maintenance complete. Job report in [DB_Rpt_Fragmentation]

..
..
..
etc.

Am I missing something obvious?

Thank you for any help you can provide!!

One of my fellow employees got it - apparently the cursor needed to be
declared within the loop, right before I opened it.

I moved the get_tabs and get_inds cursor declarations and all is well.
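
This matches the documented DECLARE CURSOR behavior: variables used in the cursor's SELECT are read when the cursor is declared, and later changes to them are not picked up. A minimal sketch of the reordering that fixed it, using the table and variable names from the procedure above:

```sql
-- The outer cursor references no variables, so it can stay at the top.
DECLARE get_dbs CURSOR local fast_forward FOR
SELECT dbname FROM db_jobs ORDER BY dbname

OPEN get_dbs
FETCH NEXT FROM get_dbs INTO @dbname

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Declare get_tabs only AFTER @dbname has been fetched,
    -- because the variable's value is captured at DECLARE time.
    DECLARE get_tabs CURSOR local fast_forward FOR
    SELECT DISTINCT tablename, id, shcontig1dt, shcontig2dt
    FROM db_ind
    WHERE dbname = @dbname
    ORDER BY tablename

    OPEN get_tabs
    -- ... fetch loop over tables ...
    CLOSE get_tabs
    DEALLOCATE get_tabs   -- deallocate so it can be redeclared next pass

    FETCH NEXT FROM get_dbs INTO @dbname
END
CLOSE get_dbs
DEALLOCATE get_dbs
```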

Thursday, March 22, 2012

Big-IP and SSL

Hi! I am working on some issues related to Reporting Services and SSL, and I
wanted to know:
- If I install an NLB cluster and I add "n" servers, how many SSL
certificates will I need? 1 or "n"?
- Does anyone know if there are any issues related to MRS, SSL and the
Big-IP Load Balancer?
Thank you very much in advance :-)
Jose
--
------
Jose Ignacio Rodas, A+, CCEA, MCSE

I use BigIPs in production, LVSes in test and development. If I understand
your first question about NLB clusters, you will need one certificate per
domain name. If your NLB cluster, comprised of "n" servers, answers up to
www.mydomain.com then you will need a certificate on www.mydomain.com. I
worked with NLB in 2000 and do not remember certificate management as being a
part of it, I could be wrong, not sure about 2003. As far as the BigIP and
SQL RS, good luck. It will work, if configured properly. Supposedly SP2 of
SQL RS added a feature to support SSL termination prior to the web server if
the proper HTTP headers are passed. I was able to get Reports working, but
not ReportServer. The information is available in the SP2 update
documentation. Hope this helps.
"Jose Ignacio Rodas" wrote:
> [...]

http://download.microsoft.com/download/5/1/3/513534ae-a0e7-44e6-9a04-ba3c549a5f5f/sp2Readme_EN.htm#_http_headers
That is the SP2 readme that talks about SSL termination
"Brian" wrote:
> [...]

Tuesday, March 20, 2012

big problem with sql server 2000 msde sp 4 (stops working by accident)

Hi,
I get the following error message from SQL Server 2000 (MSDE edition),
and I have no idea what is going wrong.
I was only able to get back to a working state by
DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS
(or by restoring a backup).
But I do not want the db to crash at my customers ...
In the knowledge base I found:
"An assertion or Msg 7987 may occur when an operation is performed on an
instance of SQL Server"
without any reason why inconsistencies can occur.
This bug has been known at least since 5th April 2005 ... and is still present in
SQL Server 9 (2005, I think).
I hope I overlooked something ... or does SQL Server really stop working by
chance?
please help!
2006-07-26 03:43:04.00 spid51 ex_raise2: Exception raised, major=79,
minor=87, severity=22, attempting to create symptom dump
2006-07-26 03:43:04.28 spid51 Using 'dbghelp.dll' version '4.0.5'
*Dump thread - spid = 51, PSS = 0x414491a8, EC = 0x414494d8
Event Type: Error
Event Source: MSSQLSERVER
Event Category: (2)
Event ID: 17052
Date: 7/26/2006
Time: 5:32:26 AM
User: N/A
Computer: HLX02
Description:
Error: 7987, Severity: 22, State: 3
A possible database consistency problem has been detected on database
'faroer'. DBCC CHECKDB and DBCC CHECKCATALOG should be run on database
'faroer'.
For more information, see Help and Support Center at
http://go.microsoft.com/fwlink/events.asp.
Data:
0000: 33 1f 00 00 16 00 00 00 3......
0008: 06 00 00 00 48 00 4c 00 ...H.L.
0010: 58 00 30 00 32 00 00 00 X.0.2...
0018: 07 00 00 00 66 00 61 00 ...f.a.
0020: 72 00 6f 00 65 00 72 00 r.o.e.r.
0028: 00 00 ..

Sascha Bohnenkamp wrote:
> [...]
No, SQL Server does not "stop working by accident". Most likely you
have flaky hardware that is causing corruption in your database file.
Tracy McKibben
MCDBA
http://www.realsqlguy.com

Tracy McKibben wrote:
> No, SQL Server does not "stop working by accident". Most likely you
> have flaky hardware that is causing corruption in your database file.
well ... is there any chance to find something in the server's log files (or in
the dumps) to help find the problem?
From the stack dump I cannot see any I/O problems ...
* Short Stack Dump
* 009BA08C Module(sqlservr+005BA08C) (GetOSErrString+00004F68)
* 009BA9B5 Module(sqlservr+005BA9B5) (GetOSErrString+00005891)
* 006EE757 Module(sqlservr+002EE757) (SQLExit+00186C60)
* 005FA686 Module(sqlservr+001FA686) (SQLExit+00092B8F)
* 006BA785 Module(sqlservr+002BA785) (SQLExit+00152C8E)
* 004040DA Module(sqlservr+000040DA)
* 0042ADAE Module(sqlservr+0002ADAE)
* 0042A3D3 Module(sqlservr+0002A3D3)
* 0043638B Module(sqlservr+0003638B)
* 0043288B Module(sqlservr+0003288B)
* 00432542 Module(sqlservr+00032542)
* 00434980 Module(sqlservr+00034980)
* 00432542 Module(sqlservr+00032542)
* 00434980 Module(sqlservr+00034980)
* 00432542 Module(sqlservr+00032542)
* 00861732 Module(sqlservr+00461732) (GetIMallocForMsxml+0006CBB2)
* 0086116B Module(sqlservr+0046116B) (GetIMallocForMsxml+0006C5EB)
* 00434980 Module(sqlservr+00034980)
* 00432542 Module(sqlservr+00032542)
* 005B810A Module(sqlservr+001B810A) (SQLExit+00050613)
* 005B8266 Module(sqlservr+001B8266) (SQLExit+0005076F)
* 004398A5 Module(sqlservr+000398A5)
* 004398A5 Module(sqlservr+000398A5)
* 0041D396 Module(sqlservr+0001D396)
* 00861732 Module(sqlservr+00461732) (GetIMallocForMsxml+0006CBB2)
* 0086116B Module(sqlservr+0046116B) (GetIMallocForMsxml+0006C5EB)
* 005B810A Module(sqlservr+001B810A) (SQLExit+00050613)
* 005B8266 Module(sqlservr+001B8266) (SQLExit+0005076F)
* 00434980 Module(sqlservr+00034980)
* 00432542 Module(sqlservr+00032542)
* 004194B9 Module(sqlservr+000194B9)
* 004193E4 Module(sqlservr+000193E4)
* 00429EAA Module(sqlservr+00029EAA)
* 00415D04 Module(sqlservr+00015D04)
* 00416214 Module(sqlservr+00016214)
* 00415F28 Module(sqlservr+00015F28)
* 0076C7D0 Module(sqlservr+0036C7D0) (SQLExit+00204CD9)
* 007704EF Module(sqlservr+003704EF) (SQLExit+002089F8)
* 0077099C Module(sqlservr+0037099C) (SQLExit+00208EA5)
* 006332EC Module(sqlservr+002332EC) (SQLExit+000CB7F5)
* 0043D005 Module(sqlservr+0003D005)
* 0042598D Module(sqlservr+0002598D)
* 41075309 Module(ums+00005309) (UmsThreadScheduler::ExitUser+00000459)

Sascha Bohnenkamp wrote:
> [...]
Doesn't have to be an I/O problem - it could be bad RAM, a bad CPU, any
number of things. You should run some hardware diagnostics on the
machine, perhaps even open an incident with Microsoft - they can help
you decipher the logs.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
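
For reference, the repair option mentioned at the top of this thread is spelled REPAIR_ALLOW_DATA_LOSS, and it requires the database to be in single-user mode. A hedged sketch against the database named in the log (take a backup first, since this repair can discard data):

```sql
-- Put the database in single-user mode, run the repair, then restore access.
-- REPAIR_ALLOW_DATA_LOSS may discard corrupt pages, so back up first.
ALTER DATABASE faroer SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('faroer', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE faroer SET MULTI_USER
```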

Big Problem on Merge Replication

Hi, I'm working on a project that involves SQL Server 2000 and SQL Server CE.
We have about 50 remote terminals that synchronize daily with one server, but
each day something goes wrong.
The last one in order of time is this error:
Error number 80004005
Native number 29006
Error description: A call to SQL Server reconciler failed. [,,,,,]
SSCE_M_SQLRECONCILERFAILED
Followed by:
Source: Merge replication provider
Error number 80045063
Error description: Failed to enumerate changes in the filtered articles
and then by this:
Source: srtv
Error number: 0
Native number: 0
Error description: Call sp_MSsetupbelongs (?,?,?,?,?,0,?,?,1,?,?,?,?,?,?)
Usually at this point the only possibility is to delete the sdf database and
to resynchronize the remote terminal.
I have already found an article in KB 328314 - FIX: Cannot Upload Null
Values in Sql_variant Data Type by Using SQL Server CE Merge Replication -
but without a possible solution, and the strange thing is that I have no
sql_variant data in my DB.
Please help me because the project is at high risk of failure!!!
Thanx in advance
Best Regards
Gianluca Padovani
what happens if you restart your application that runs the merge pull?
Normally this will clear this error.
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"Gianluca Padovani" <Gianluca Padovani@.discussions.microsoft.com> wrote in
message news:2F7758AC-A8C6-40B9-8EE1-3C2B0ACEBD1F@.microsoft.com...
> [...]

Hi Hilary,
thanks for your help. I'm waiting for the response to your test request, and
ASAP I'll give you the answer.
In the meanwhile I can tell you another thing. In my previous post I put the
Pocket PC error, but at the same time I have a corresponding one in the SQL Server
Merge Agent.
The error is:
Last Command: {call sp_MSsetupbelongs(?,?,?,?,?,0,?,?,1,?,?,?,?,?,?)}
Failed to enumerate changes in the filtered articles.
(Source: Merge Replication Provider (Agent); Error number: -2147200925)
The merge process timed out while executing a query. Reconfigure the
QueryTimeout parameter and retry the operation.
(Source: srvtv (Data source); Error number: 0)
I made a quick search in the NG and I found another couple of articles
regarding this error.
In one of those articles Paul Ibison suggests a FIX
(http://support.microsoft.com/?id=814916) to correct this error. May I use
it too?
Again thanks a lot for your help
Best Regards
Gianluca Padovani
Hi Hilary,
I have got confirmation that if I reset the application and run it again, the
error is cleared. This is a goal, but it is not the solution, because two sessions
of synchronization mean a time overhead that is not acceptable for our situation.
I hope for your answer about my previous message regarding the hotfix.
Thanx again
Gianluca Padovani
"Hilary Cotter" wrote:

> [...]
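
The timeout in the second error message points at the merge agent's QueryTimeout setting. As a hedged sketch (the server, database, and publication names below are placeholders; -QueryTimeout is the agent switch the message refers to), the value can be raised on the merge agent command line, or equivalently in the agent profile in Replication Monitor:

```
replmerg.exe -Publisher PUBSERVER -PublisherDB srtv -Publication srtv_pub
    -Subscriber SUBSERVER -SubscriberDB srtv_sub -QueryTimeout 600
```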

Monday, March 19, 2012

BIDS autohide not always working

has anyone else experienced this where you have areas in BIDS set to auto
hide (like dataset or solution or properties) and they DON'T always autohide
... I end up having to toggle autohide off and on again to make it hide ...
this is getting really annoying.

I have it all the time. I have come across some really bad bugs in this
version. Auto-hide not working is one of them. Seems small at first, but
when you have to do it about 100 times a day, it becomes very frustrating.
They need to get this fixed. I don't remember having it in VS2003.
"MJT" wrote:
> [...]

I haven't even mentioned how many times I have received the message that
Visual Studio cannot close because there is a modal dialog active ... when I
have closed absolutely EVERY WINDOW in the tool ... there is nothing but a
SHELL left ... so I end up having to force the task to end with Task Manager
... not a good thing. I am talking about the scaled-down version of VS2005
that comes with Reporting Services ... not the full-blown install of VS2005.
Seems it should be addressed and fixed.
"bsod55" wrote:
> [...]

Wednesday, March 7, 2012

between not betweening inclusive

In the SQL statements below, both BETWEEN and (>= and <=) give unexpected results. I am working with monthly data where in one table the time part of [DateTime] is 09:31:00 and the result is correct, but if the time is 16:00:00 (in another table) the statement returns nothing. Also, if I change either SQL statement to use 16:01:00, it returns the second table correctly, but that should not be necessary because both statements below are supposed to be inclusive. Anybody know why this is happening and what to do about it?

select distinct * from [clean].[table2] where

convert(varchar, [DateTime],126) between '2005-11-30' and '2005-12-01' and

convert(varchar, [DateTime],114) between '09:30:00' and '16:00:00'

order by [DateTime]

The following statement has the same problem:

select distinct * from [clean].[table2] where

(convert(varchar,[DateTime],126) >= '2006-06-05' and

convert(varchar,[DateTime],126) <= '2006-11-05') and

(convert(varchar,[DateTime],114) >= '09:30:00' and

convert(varchar,[DateTime],114) <= '16:00:00')

order by [DateTime]

Time also has a milliseconds value. It is quite likely that your time values are in actuality greater than exactly 16:00:00, perhaps 16:00:00.001.

If you wish to include the times that are between 16:00:00 and 16:00:01 (meaning all times with any millisecond value), then use 16:00:01.

If you wish to ignore the milliseconds value, you may be better served using the smalldatetime datatype.

rwbogosian wrote:
> [...]
Why can't you just use

SELECT DISTINCT
*
FROM
clean.table2
WHERE
DateTime >= '2006-06-05 09:30:00' AND DateTime <= '2006-11-05 16:00:00'
Your query is doing string comparisons, which compare character by character rather than chronologically, so BETWEEN will not behave as you expect.

You need to do this:

SELECT DISTINCT * FROM [clean].[table2] WHERE
[DateTime] between CAST('2005-11-30' AS DATETIME) AND CAST('2005-12-01' AS DATETIME) AND
CONVERT(DATETIME,CONVERT(varchar,[DateTime],114)) BETWEEN CONVERT(datetime, '09:30:00') AND CONVERT(datetime, '16:00:00')

This will convert the dates to 1900-01-01 and compare the times correctly.

There is no reason to convert the dates to strings.

Changing the statement to use 16:00:01 does work. The actual value in the table is 16:00:00.000, but not even 16:00:00.9999 works; it has to be a full second after the actual time in the database table.

I know in the near future I'm going to have problems when we start analyzing even smaller time frames. While I am happy that changing one second works, shouldn't BETWEEN and (<= or >=) be inclusive? Is this a bug or a feature?

Can't use your statement because I can't use anything before 09:30:00 or after 16:00:00 on any of the dates in between (inclusive).

Thanks Tom; that appears to be working.
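
For what it's worth, the one-second gap follows from the string comparison: style 114 renders the value as 'hh:mi:ss:mmm', so a stored 16:00:00.000 becomes the string '16:00:00:000', which sorts after '16:00:00'. A small illustrative sketch using literal values only (no table needed):

```sql
-- As strings, '16:00:00:000' > '16:00:00': the prefix matches and the
-- longer string keeps going, so the "inclusive" endpoint excludes the row.
SELECT CASE WHEN '16:00:00:000' <= '16:00:00'
            THEN 'included' ELSE 'excluded' END AS string_compare

-- As datetimes, the same two values compare equal,
-- so the endpoint stays inclusive.
SELECT CASE WHEN CONVERT(datetime, '16:00:00:000') <= CONVERT(datetime, '16:00:00')
            THEN 'included' ELSE 'excluded' END AS datetime_compare
```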

Friday, February 24, 2012

beta headache

Hi Folks,

I'm still using the SQL Server 2005 June CTP to develop. I'm working for a client that intends to upgrade, but has not done so yet. We believed that we had 365 days from the date of installation before it would expire (late July). But Visual Studio 2005 Beta 2 just expired, which is used by SQL Server Business Intelligence Development Studio, so now we can no longer develop SSIS packages or Reporting Services reports.

What can I do?

I thought that maybe installing Visual Studio 2005 express might work, but when I went there, it seems that it is split up into modules (e.g. Visual Basic 2005 express, Visual C++ express, etc...). I don't know if just one of these would be sufficient and if so, which one. Even if it would, it says I'll need to completely remove the SQL Server 2005 beta, as well as the Visual Studio 2005 beta, and I think the .net framework.

My contract is up at the end of June, and it was understood that the client was going to take care of the upgrade. Is there a quick fix here? What can I do with minimal effort? (I don't really have time to wait for the client to get the upgrade approved and done. I know, sniff, sniff! lol.)

Uninstalling and reinstalling really shouldn't be terribly time consuming. You should definitely get off the betas and onto released software. For SQL, you can go straight to Express at SP1 level (i.e., don't bother installing Express RTM because the remastered version with SP1 bits is now available, and better). If you don't have stuff in master like special logins, a simple uninstall/reinstall will be easiest. If you do feel a need to retain the Master database, use the special /savesysdb switch when you uninstall and then you can carry all that stuff over to the current build.

Sorry, I don't have any answers for you on the Visual Studio Express bits to use. I'm sure they've got a forum up here where you can ask that question.

Bestpractice for ReportingServices 2005

Hello All,

I am working on Reporting Services 2005 and embedding the reports in ASP.NET using the ReportViewer.

I would like to know Best Practice for this above.

Like header, footer, logo, report run time, report name, start and end date, width (in the browser and while printing),
page count, and master pages for Reporting Services 2005.

Also, on the web site (ASP.NET with AJAX), how do I navigate to these reports - with a Menu or a TreeView?

Is there any BestPractice or StarterKit for .Net 2.0?

Here are useful links

http://www.microsoft.com/technet/prodtechnol/sql/2005/rsdesign.mspx

http://msdn2.microsoft.com/en-us/library/ms159162.aspx

http://msdn2.microsoft.com/en-us/library/ms170246.aspx

Thanks for the reply.

Could you please point me to where I can see a live demo / working copy / starter kit to get a feel for the best practice.

Thanks in Advance.

|||

Hi,

Check this link too -

http://www.ssw.com.au/ssw/Standards/Rules/RulesToBetterSQLReportingServices.aspx#underline4HyperlinkAction

Best way to synch data without admin rights?

I have a local copy of SQL Server 2000 installed with full admin
rights. I am working with a database on a remote server and I only have
permissions to that DB. I cannot set up
backup/synchro/replication/push/pull/etc. due to not having the proper
permissions.
I have been using DTS to transfer the data to my local DB. But, is there
a good way to automate this process? Basically I just want to copy the
data locally everyday as a backup in case the remote server dies. Would
it be possible to create a local job that transfers the data, or am I stuck
doing this manually?
____________________________
Steven K. Lewis
Website Nation, LLC
Website Development Services
www dot Website Nation dot com

Schedule your DTS package to run every day.
I believe you just right-click on the package to schedule it.|||Thanks. I will settle with doing that. I was also considering a 3rd
party tool if this does not do the job.
____________________________
Steven K. Lewis
Website Nation, LLC
Website Development Services
www dot Website Nation dot com
On 4/24/2005 3:13 PM, Greg Gilman wrote:
> Schedule your DTS package to run every day.
> I believe you just right-click on the package to scedule it.
>|||So far it is working fine. But, I remember the reason I did not use the
schedule was because sometimes tables would be added and skipped by the
DTS since I am not copying all objects. I find that copying tables is
more reliable than objects/data. If only DTS could also add any new
table created. But, to make a long story short, this now seems to be the
best option, considering it is the only option I have to transfer data.
Thanks
____________________________
Steven K. Lewis
Website Nation, LLC
Website Development Services
www dot Website Nation dot com
On 4/24/2005 3:13 PM, Greg Gilman wrote:
> Schedule your DTS package to run every day.
> I believe you just right-click on the package to scedule it.
>|||Well I have to copy objects anyway, just realized copying table data
fails when the table object already exists in the source database...
____________________________
Steven K. Lewis
Website Nation, LLC
Website Development Services
www dot Website Nation dot com
On 4/24/2005 3:13 PM, Greg Gilman wrote:
> Schedule your DTS package to run every day.
> I believe you just right-click on the package to scedule it.
>|||You may consider using DB Ghost (http://www.dbghost.com) which can be
scheduled and as it does a comparison - it will only get the data that it
needs, whereas DTS will have to get all the data. In fact I synchronize my
local database with the production system every day using this script:
http://www.innovartis.co.uk/downloa...Job.zip
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Build, Comparison and Synchronization from Source Control = Database change
management for SQL Server
"Steve Lewis - Website Nation" wrote:

> Thanks. I will settle with doing that. I was also considering a 3rd
> party tool if this does not do the job.
> ____________________________
> Steven K. Lewis
> Website Nation, LLC
> Website Development Services
> www dot Website Nation dot com
>
> On 4/24/2005 3:13 PM, Greg Gilman wrote:
>|||Thanks for the script Mark. I will also checkout your software. So far
the DTS object/data transfer is working fine.
____________________________
Steven K. Lewis
Website Nation, LLC
Website Development Services
www dot Website Nation dot com
On 4/25/2005 3:58 AM, mark baekdal wrote:
> You may consider using DB Ghost (http://www.dbghost.com) which can be
> scheduled and as it does a comparison - it will only get the data that it
> needs where as DTS will have to get all the data. In fact I syncrhonize my
> local database with the production system every day using this script:
> http://www.innovartis.co.uk/downloa...rJob.zip
>
> regards,
> Mark Baekdal
> http://www.dbghost.com
> http://www.innovartis.co.uk
> +44 (0)208 241 1762
> Build, Comparison and Synchronization from Source Control = Database change
> management for SQL Server
>
> "Steve Lewis - Website Nation" wrote:
>

Sunday, February 19, 2012

Best way to split data in a table

I am working on a stored procedure that ultimately puts data into an excel report. What I'm looking for is the most efficient way to "split" the data in the table if the table has more than 65535 rows. I have already coded for more than one report if this happens, but I still need to split the data. I could do "SELECT TOP 65535..." for the first report, but that leaves me with the problem of getting the next 65535 into the next report, etc. Also, I thought about using a cursor, but as far as I know a SQL Server cursor can only return 1 row at a time. This is the code I have so far for this part of the stored procedure:

DECLARE @RecordCount FLOAT, @RowCap FLOAT, @Counter INT, @NumberOfReports INT
DECLARE @AttachmentList NVARCHAR(MAX)

SET @RowCap = 65535
SELECT @RecordCount = COUNT(*) FROM ##MyTempTable
SET @NumberOfReports = CAST(ROUND(CEILING(@RecordCount/@RowCap), 0) AS INT)
SET @Counter = 1

WHILE @Counter <= @NumberOfReports
BEGIN
SET @rptLongDesc = '<h5><center>Report From ' + CONVERT(varchar, @@ServerName) + '</center></h5>'
SET @rptName = 'Report_' + CONVERT(char(3), DATENAME(Month, GETDATE())) + CONVERT(char(3), DATENAME(day, GETDATE())) +
CONVERT(varchar(4), YEAR(GETDATE())) + CAST(@Counter AS CHAR(1))
SET @rptPath = '\\Reports\'
SET @rptOutputFile = @rptPath + @rptName + '.xls'
IF @Counter = 1
SET @AttachmentList = @rptOutputFile
ELSE
SET @AttachmentList = @AttachmentList + ';' + @rptOutputFile
EXEC sp_makewebtask
@htmlheader = 3,
@outputfile = @rptOutputFile,
@query = 'SELECT * FROM ##MyTempTable', --This needs to be split into groups of 65535 for each report.
@resultstitle = @rptLongDesc,
@webpagetitle = @rptName
SET @Counter = @Counter + 1
END

As you can see, the @.query parameter for sp_makewebtask needs to be fixed. I would appreciate any suggestions. Thanks.

Dave

DECLARE
@RecordCount FLOAT,
@RowCap FLOAT,
@Counter INT,
@NumberOfReports INT

DECLARE @AttachmentList NVARCHAR(MAX)

SET @RowCap = 65535

SELECT @RecordCount = COUNT(*) FROM ##MyTempTable
SET @NumberOfReports = CAST(ROUND(CEILING(@RecordCount/@RowCap), 0) AS INT)
SET @Counter = 1
WHILE @Counter <= @NumberOfReports
BEGIN
SET @rptLongDesc = '<h5><center>Report From ' + CONVERT(varchar, @@ServerName) + '</center></h5>'
SET @rptName = 'Report_' + CONVERT(char(3), DATENAME(Month, GETDATE())) + CONVERT(char(3), DATENAME(day, GETDATE())) +
CONVERT(varchar(4), YEAR(GETDATE())) + CAST(@Counter AS CHAR(1))
SET @rptPath = '\\Reports\'
SET @rptOutputFile = @rptPath + @rptName + '.xls'
IF @Counter = 1
SET @AttachmentList = @rptOutputFile
ELSE
SET @AttachmentList = @AttachmentList + ';' + @rptOutputFile
EXEC sp_makewebtask
@htmlheader = 3,
@outputfile = @rptOutputFile,
@query = 'SELECT TOP (@Counter*65535) * FROM ##MyTempTable
EXCEPT
SELECT TOP ((@Counter-1) * 65535) * FROM ##MyTempTable', --Splits the data into groups of 65535 for each report.
@resultstitle = @rptLongDesc,
@webpagetitle = @rptName
SET @Counter = @Counter + 1
END

Note: If you think of performance issues, I would suggest creating an identity column on your temp table and splitting the data based on that.|||

Hi,

Another possibility is using partitioned tables. You set an identity column in the table and create a partition function on that column with intervals of 65535.

Reference: http://msdn2.microsoft.com/en-us/library/ms188730.aspx

Greetz,

Geert

Geert Verhoeven
Consultant @ Ausy Belgium

My Personal Blog
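The identity-column idea suggested in the replies above can be sketched as follows. This is only a sketch under assumptions: it presumes ##MyTempTable was created with an IDENTITY column (the name RowNum is hypothetical), and it builds the query string before the call, since variable names inside a string literal passed to sp_makewebtask's @query are not substituted.

```sql
-- Assumes the temp table carries a sequential column, e.g.:
--   SELECT IDENTITY(int, 1, 1) AS RowNum, * INTO ##MyTempTable FROM SourceTable
-- RowNum is a hypothetical name; adjust to your schema.
DECLARE @vQuery NVARCHAR(4000)

-- Build the per-report slice: rows (Counter-1)*65535+1 .. Counter*65535
SET @vQuery = 'SELECT * FROM ##MyTempTable WHERE RowNum BETWEEN '
    + CAST((@Counter - 1) * 65535 + 1 AS VARCHAR(12)) + ' AND '
    + CAST(@Counter * 65535 AS VARCHAR(12))

EXEC sp_makewebtask
    @htmlheader = 3,
    @outputfile = @rptOutputFile,
    @query = @vQuery,
    @resultstitle = @rptLongDesc,
    @webpagetitle = @rptName
```

Because each slice is a simple range seek on the identity column, this avoids the repeated TOP ... EXCEPT scans of the whole table.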

Thursday, February 16, 2012

best way to order results sequentially starting from somewhere in the middle

I'm working with SQL Server 2005, and I'm trying to sort the results based on a user selected letter. Say the user selects 'D' to filter his results. I'd like to return the results starting from D followed by E, F, G...Z, A, B, C. What I'm getting is the results for the D entries at the top of the result set, followed by A, B, C, E...Z.

A solution comes to mind that would be very long and db intensive: querying on LIKE 'D%', followed by LIKE 'E%', followed by LIKE 'F%', etc., but I'm sure that there is a much more efficient way to do this. Below is the code that I'm using now.

' where @SortString = 'd' and @Test is a temp table

BEGIN

Insert into @Test

Select CompanyName, ContactId, CompanyId

from vContacts where CompanyName like @SortString + '%'

Order by CompanyName

Insert into @Test

Select CompanyName, ContactId, CompanyId

from vContacts where CompanyName not like @SortString + '%'

Order by CompanyName

END

Thanks in advance for your help

Code Snippet


SELECT
OrderSet = 1,
CompanyName,
ContactId,
CompanyId
FROM vContacts
WHERE CompanyName >= @SortString

UNION ALL


Select
2,
CompanyName,
ContactId,
CompanyId
FROM vContacts
WHERE CompanyName < @SortString


ORDER BY
OrderSet,

CompanyName


|||Thank you Arnie. Right on with the answer
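For reference, the same wrap-around ordering can also be expressed in a single pass with a CASE expression in the ORDER BY, avoiding the second scan of vContacts. A sketch, assuming @SortString holds the user-selected letter as above:

```sql
-- Single scan of vContacts: rows at or after the chosen letter sort
-- first; the wrap-around rows (A..C when @SortString = 'D') sort after.
SELECT CompanyName, ContactId, CompanyId
FROM vContacts
ORDER BY
    CASE WHEN CompanyName >= @SortString THEN 0 ELSE 1 END,
    CompanyName
```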

Tuesday, February 14, 2012

Best way to insert data into tables without primary keys

I am working on a SQL Server database in which there are no primary
keys set on the tables. I can tell what they are using for a key. It
is usually named ID, has a data type of int and does not allow nulls.
However, since it is not set as a primary key you can create a
duplicate key.
This whole thing was created by someone who is long gone. I don't
know how long I will be here and I don't want to break anything. I
just want to work with things the way they are.
So if I want to insert a new record, and I want the key, which is
named ID, to be the next number in the sequence, is there something I
can do in an insert sql statement to do this?

Often, the quality of the responses received is related to our ability to
'bounce' ideas off of each other. In the future, to prevent folks from
wasting time on already answered questions, please:
Don't post to multiple newsgroups. Choose the one that best fits your
question and post there. Only post to another newsgroup if you get no answer
in a day or two (or if you accidentally posted to the wrong newsgroup), and
indicate that you've already posted elsewhere.
If you really think that a question belongs into more than one newsgroup,
then use your newsreader's capability of multi-posting, i.e., posting one
occurrence of a message into several newsgroups at once. If you multi-post
appropriately, answers 'should' appear in all the newsgroups. Folks
responding in different newsgroups will see responses from each other, even
if the responses were posted in a different newsgroup.
See .programming
Arnie Rowland, Ph.D.
Westwood Consulting, Inc
Most good judgment comes from experience.
Most experience comes from bad judgment.
- Anonymous
"tom c" <tomcarr1@.gmail.com> wrote in message
news:1157578232.173886.295680@.m79g2000cwm.googlegroups.com...
>I am working on a SQL Server database in which there are no primary
> keys set on the tables. I can tell what they are using for a key. It
> is usually named ID, has a data type of int and does not allow nulls.
> However, since it is not set as a primary key you can create a
> duplicate key.
> This whole thing was created by someone who is long gone. I don't
> know how long I will be here and I don't want to break anything. I
> just want to work with things the way they are.
> So if I want to insert a new record, and I want the key, which is
> named ID, to be the next number in the sequence, is there something I
> can do in an insert sql statement to do this?
>|||Will do Arnie. Sorry for the double post. I found .programming after I
had already posted here and then realized it was a better place for the
question and reposted it there. I won't do it again. Thanks for your
help.
Arnie Rowland wrote:
> Often, the quality of the responses received is related to our ability to
> 'bounce' ideas off of each other. In the future, to prevent folks from
> wasting time on already answered questions, please:
> Don't post to multiple newsgroups. Choose the one that best fits your
> question and post there. Only post to another newsgroup if you get no answer
> in a day or two (or if you accidentally posted to the wrong newsgroup), and
> indicate that you've already posted elsewhere.
> If you really think that a question belongs into more than one newsgroup,
> then use your newsreader's capability of multi-posting, i.e., posting one
> occurrence of a message into several newsgroups at once. If you multi-post
> appropriately, answers 'should' appear in all the newsgroups. Folks
> responding in different newsgroups will see responses from each other, even
> if the responses were posted in a different newsgroup.
>
> See .programming
>
> --
> Arnie Rowland, Ph.D.
> Westwood Consulting, Inc
> Most good judgment comes from experience.
> Most experience comes from bad judgment.
> - Anonymous
>
> "tom c" <tomcarr1@.gmail.com> wrote in message
> news:1157578232.173886.295680@.m79g2000cwm.googlegroups.com...
> >I am working on a SQL Server database in which there are no primary
> > keys set on the tables. I can tell what they are using for a key. It
> > is usually named ID, has a data type of int and does not allow nulls.
> > However, since it is not set as a primary key you can create a
> > duplicate key.
> >
> > This whole thing was created by someone who is long gone. I don't
> > know how long I will be here and I don't want to break anything. I
> > just want to work with things the way they are.
> >
> > So if I want to insert a new record, and I want the key, which is
> > named ID, to be the next number in the sequence, is there something I
> > can do in an insert sql statement to do this?
> >
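Since the thread never answers the technical question itself, here is one common approach, as a sketch only: the table and column names (SomeTable, ID, Name) are placeholders. Computing MAX(ID) + 1 is race-prone, so the lock hints keep two concurrent inserts from picking the same value; adding a real IDENTITY column or PRIMARY KEY remains the better long-term fix.

```sql
-- Placeholder schema: SomeTable(ID int NOT NULL, Name varchar(50))
BEGIN TRANSACTION

INSERT INTO SomeTable (ID, Name)
SELECT ISNULL(MAX(ID), 0) + 1, 'New row'
FROM SomeTable WITH (TABLOCKX, HOLDLOCK)  -- serialize readers of MAX(ID)

COMMIT TRANSACTION
```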


Sunday, February 12, 2012

Best Way to Empty Transaction Log

Hi All,
What is the best way to empty (shrink) a database transaction log? I am
working with huge databases - 10GB to 100GB - and basically don't care
about the transaction log much when data is being deleted by a task. I have a
deletion task consisted of many stored procedures which is running every
night. In some of the client's database, when those stored procedure
deleted data in different tables, the transaction log getting to big and SQL
appears to hang up. Sometimes a deletion task will take a whole weekend.
Note that the stored procedures do use TRUNCATE TRANSACTION and CHECK POINT
trying to reduce the size of the transaction log.
My question is, how can I tell SQL not to log the transactions when my
deletion procedures are running. If there is no way to to tell SQL not to
log the transactions, is there a better way to control the size of the
transaction log file, other than using the CHECKPOINT?
Thanks.

A CHECKPOINT followed by a DBCC SHRINKFILE works for us. We shrink our log
before backup everynight, the log itself will hit 100GB(400GB+ DB) over
weekend maintenace. Shrinkfile usually goes quick on the log.
" David N" <dq.ninh@.netiq.com> wrote in message
news:en54Y$qWDHA.2328@.TK2MSFTNGP12.phx.gbl...
> Hi All,
> What is a best way to empty (shrink) a database transaction log. I am
> working with huge database - 10GB to 100GB- and basically doesn't care
> about transaction log much when data is being deleted by a task. I have a
> deletion task consisted of many stored procedures which is running every
> night. In some of the client's database, when those stored procedure
> deleted data in different tables, the transaction log getting to big and
SQL
> appears to hang up. Sometimes a deletion task will take a whole weekend.
> Note that the stored procedures do use TRUNCATE TRANSACTION and CHECK
POINT
> trying to reduce the size of the transaction log.
> My question is, how can I tell SQL not to log the transactions when my
> deletion procedures are running. If there is no way to to tell SQL not to
> log the transactions, is the a better way to control the size of the
> transaction log file, other than using the CHECK POINT?
> Thanks.
>|||Did not read well, sry. All your DELETE FROMs are going to be logged, that
is a good thing. If you are clearing out the entire table, you can use the
TRUNCATE TABLE statement, which is minimally logged. If you cannot use TRUNCATE TABLE and
must use DELETE FROM, you can try to optimize it to run more efficiently.
The DELETE might run quicker if you can base the delete off the CLUSTERED
INDEX(use temp tables if needed to achieve) or if this table is index heavy
you might drop some indexes(non-clustered only) and recreate them when the
delete process is finished.
My personal way, if it is a large delete filling up your log (i.e. one big logged
transaction): I would suck all the CLUSTERED INDEX values into a temp table
and use a WHILE with TOP functionality in it to loop the delete in smaller
digestible chunks. This will allow the log backups to clear out the
transaction log(make sure these are running in good intervals, ours go every
5mins). We have even added functionality to some of our bigger maint procs
to check logsize during loops and run a log backup to keep it in check.
HTH.
"Kevin Brooks" <kbrooks@.sagetelecom.net> wrote in message
news:OsdM$DrWDHA.2032@.TK2MSFTNGP11.phx.gbl...
> A CHECKPOINT followed by a DBCC SHRINKFILE, works for us. We shrink our
log
> before backup everynight, the log itself will hit 100GB(400GB+ DB) over
> weekend maintenace. Shrinkfile usually goes quick on the log.
>
> " David N" <dq.ninh@.netiq.com> wrote in message
> news:en54Y$qWDHA.2328@.TK2MSFTNGP12.phx.gbl...
> > Hi All,
> >
> > What is a best way to empty (shrink) a database transaction log. I am
> > working with huge database - 10GB to 100GB- and basically doesn't care
> > about transaction log much when data is being deleted by a task. I have
a
> > deletion task consisted of many stored procedures which is running every
> > night. In some of the client's database, when those stored procedure
> > deleted data in different tables, the transaction log getting to big and
> SQL
> > appears to hang up. Sometimes a deletion task will take a whole
weekend.
> > Note that the stored procedures do use TRUNCATE TRANSACTION and CHECK
> POINT
> > trying to reduce the size of the transaction log.
> >
> > My question is, how can I tell SQL not to log the transactions when my
> > deletion procedures are running. If there is no way to to tell SQL not
to
> > log the transactions, is the a better way to control the size of the
> > transaction log file, other than using the CHECK POINT?
> >
> > Thanks.
> >
> >
>
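The chunked delete described in the last reply (delete in digestible pieces so log backups can clear the log between chunks) can be sketched like this for SQL Server 2000; BigTable and the WHERE clause are placeholders for your own purge criteria:

```sql
-- Chunked delete sketch (SQL Server 2000: SET ROWCOUNT caps DML row counts).
SET ROWCOUNT 10000                     -- delete at most 10,000 rows per pass
WHILE 1 = 1
BEGIN
    DELETE FROM BigTable
    WHERE PurgeDate < GETDATE() - 30   -- placeholder purge condition

    IF @@ROWCOUNT = 0 BREAK            -- nothing left to delete
    -- run BACKUP LOG here (or CHECKPOINT under simple recovery)
    -- so each chunk's log space can be reused
END
SET ROWCOUNT 0                         -- restore default behavior
```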

Best way to duplicate almost whole database (same data for next period scenario)

Hi there
I'm working on a budgeting app which will be used to prepare a budget
for a given period. In the beginning of the next period all data from
the previous one should be duplicated and inserted into a new period so
they would become a base for preparing new data (by updating old
values).
I think first thing should be adding PERIOD column to each table. Or
not each one? For example it wouldn't be necessary to put this column
into intermediary tables, since period value for each row could be
identified by their parent tables. But I'm afraid it could make things
less transparent.
Also, I think that it's more convenient to have natural composite
primary keys for such use instead of single surrogates, since I could
just duplicate each row and just update PERIOD column which would be
part of a composite key (and composite foreign key, hence). But not
always natural keys are possible, so I will have to find a solution to
preserve relationships anyway.
Do you have any suggestions, or could you show me some resources on the
scenario? Should I (in one big transaction) turn off check constraints
(for tables where foreign keys cannot be null), duplicate all tables
with updating the PERIOD column and then update foreign keys basing on
data from previous period (ie. insert into current row's foreign key
column(s) the id of current version of row which was related to
previous version of the current row)?
thanks for any suggestion and sorry for not being too clear.
greets
hp

HP
Which version are you using? I think you need to read about partitioning in
SQL Server 2000/2005.
"HP" <ha5en1@.gmail.com> wrote in message
news:1164677515.886061.267820@.l12g2000cwl.googlegroups.com...
> Hi there
> I'm working on a budgeting app which will be used to prepare a budget
> for a given period. In the beginning of the next period all data from
> the previous one should be duplicated and inserted into a new period so
> they would become a base for preparing new data (by updating old
> values).
> I think first thing should be adding PERIOD column to each table. Or
> not each one? For example it wouldn't be necessary to put this column
> into intermediary tables, since period value for each row could be
> identified by their parent tables. But I'm afraid it could make things
> less transparent.
>
> Also, I think that it's more convenient to have natural composite
> primary keys for such use instead of single surrogates, since I could
> just duplicate each row and just update PERIOD column which would be
> part of a composite key (and composite foreign key, hence). But not
> always natural keys are possible, so I will have to find a solution to
> preserve relationships anyway.
> Do you have any suggestions, or could you show me some resources on the
> scenario? Should I (in one big transaction) turn off check constraints
> (for tables where foreign keys cannot be null), duplicate all tables
> with updating the PERIOD column and then update foreign keys basing on
> data from previous period (ie. insert into current row's foreign key
> column(s) the id of current version of row which was related to
> previous version of the current row)?
> thanks for any suggestion and sorry for not being too clear.
> greets
> hp
>|||> What is the version are you using? I think you need read about partition in
> the SQL Server 2000/2005
it's 2k, sorry.
thanks for the info, I'm reading about it at the moment. isn't it a
warehouse solution? wouldn't it be overkill for a normal db where
performance isn't so important? or is there a way in which partitions
would make the abovementioned operations more convenient?
thanks a lot
hp
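A sketch of the roll-forward copy itself, under assumptions: BudgetLine, its columns, and the numeric period values are hypothetical. The point is that when PERIOD is part of the composite key, an INSERT...SELECT that rewrites only the period duplicates a whole period's data in one statement:

```sql
-- Hypothetical table: BudgetLine(Period int, AccountCode varchar(20), Amount money)
-- with PRIMARY KEY (Period, AccountCode).
INSERT INTO BudgetLine (Period, AccountCode, Amount)
SELECT 2007, AccountCode, Amount   -- new period value, same data
FROM BudgetLine
WHERE Period = 2006                -- period being rolled forward
```

Tables with surrogate keys need an extra step (capture the old-to-new ID mapping and update the copied foreign keys from it), which is where the composite natural keys mentioned above save work.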

Best way to duplicate almost whole database (same data for next period scenario)

Hi there
I'm working on a budgeting app which will be used to prepare a budget
for a given period. In the beginning of the next period all data from
the previous one should be duplicated and inserted into a new period so
they would become a base for preparing new data (by updating old
values).
I think first thing should be adding PERIOD column to each table. Or
not each one? For example it wouldn't be necessary to put this column
into intermediary tables, since period value for each row could be
identified by their parent tables. But I'm afraid it could make things
less transparent.
Also, I think that it's more convenient to have natural composite
primary keys for such use instead of single surrogates, since I could
just duplicate each row and just update PERIOD column which would be
part of a composite key (and composite foreign key, hence). But not
always natural keys are possible, so I will have to find a solution to
preserve relationships anyway.
Do you have any suggestions, or could you show me some resources on the
scenario? Should I (in one big transaction) turn off check constraints
(for tables where foreign keys cannot be null), duplicate all tables
with updating the PERIOD column and then update foreign keys basing on
data from previous period (ie. insert into current row's foreign key
column(s) the id of current version of row which was related to
previous version of the current row)?
thanks for any suggestion and sorry for not being too clear.
greets
hp
HP
What is the version are you using? I think you need read about partition in
the SQL Server 2000/2005
"HP" <ha5en1@.gmail.com> wrote in message
news:1164677515.886061.267820@.l12g2000cwl.googlegr oups.com...
> Hi there
> I'm working on a budgeting app which will be used to prepare a budget
> for a given period. In the beginning of the next period all data from
> the previous one should be duplicated and inserted into a new period so
> they would become a base for preparing new data (by updating old
> values).
> I think first thing should be adding PERIOD column to each table. Or
> not each one? For example it wouldn't be necessary to put this column
> into intermediary tables, since period value for each row could be
> identified by their parent tables. But I'm afraid it could make things
> less transparent.
>
> Also, I think that it's more convenient to have natural composite
> primary keys for such use instead of single surrogates, since I could
> just duplicate each row and just update PERIOD column which would be
> part of a composite key (and composite foreign key, hence). But not
> always natural keys are possible, so I will have to find a solution to
> preserve relationships anyway.
> Do you have any suggestions, or could you show me some resources on the
> scenario? Should I (in one big transaction) turn off check constraints
> (for tables where foreign keys cannot be null), duplicate all tables
> with updating the PERIOD column and then update foreign keys basing on
> data from previous period (ie. insert into current row's foreign key
> column(s) the id of current version of row which was related to
> previous version of the current row)?
> thanks for any suggestion and sorry for not being too clear.
> greets
> hp
>
|||
> What is the version are you using? I think you need read about partition in
> the SQL Server 2000/2005
it's 2k, sorry.
thanks for info, I'm reading about it at the moment. isn't it a
warehose solution? wouldn't it be an overkill for normal db where
performance isn't so important? or is there a way in which partitions
would make abovementioned operations more convenient?
thanks a lot
hp

Best way to duplicate almost whole database (same data for next period scenario)

Hi there
I'm working on a budgeting app which will be used to prepare a budget
for a given period. In the beginning of the next period all data from
the previous one should be duplicated and inserted into a new period so
they would become a base for preparing new data (by updating old
values).
I think first thing should be adding PERIOD column to each table. Or
not each one? For example it wouldn't be necessary to put this column
into intermediary tables, since period value for each row could be
identified by their parent tables. But I'm afraid it could make things
less transparent.
Also, I think that it's more convenient to have natural composite
primary keys for such use instead of single surrogates, since I could
just duplicate each row and just update PERIOD column which would be
part of a composite key (and composite foreign key, hence). But not
always natural keys are possible, so I will have to find a solution to
preserve relationships anyway.
Do you have any suggestions, or could you show me some resources on the
scenario? Should I (in one big transaction) turn off check constraints
(for tables where foreign keys cannot be null), duplicate all tables
with updating the PERIOD column and then update foreign keys basing on
data from previous period (ie. insert into current row's foreign key
column(s) the id of current version of row which was related to
previous version of the current row)?
thanks for any suggestion and sorry for not being too clear.
greets
hpHP
What is the version are you using? I think you need read about partition in
the SQL Server 2000/2005
"HP" <ha5en1@.gmail.com> wrote in message
news:1164677515.886061.267820@.l12g2000cwl.googlegroups.com...
> Hi there
> I'm working on a budgeting app which will be used to prepare a budget
> for a given period. In the beginning of the next period all data from
> the previous one should be duplicated and inserted into a new period so
> they would become a base for preparing new data (by updating old
> values).
> I think first thing should be adding PERIOD column to each table. Or
> not each one? For example it wouldn't be necessary to put this column
> into intermediary tables, since period value for each row could be
> identified by their parent tables. But I'm afraid it could make things
> less transparent.
>
> Also, I think that it's more convenient to have natural composite
> primary keys for such use instead of single surrogates, since I could
> just duplicate each row and just update PERIOD column which would be
> part of a composite key (and composite foreign key, hence). But not
> always natural keys are possible, so I will have to find a solution to
> preserve relationships anyway.
> Do you have any suggestions, or could you show me some resources on the
> scenario? Should I (in one big transaction) turn off check constraints
> (for tables where foreign keys cannot be null), duplicate all tables
> with updating the PERIOD column and then update foreign keys basing on
> data from previous period (ie. insert into current row's foreign key
> column(s) the id of current version of row which was related to
> previous version of the current row)?
> thanks for any suggestion and sorry for not being too clear.
> greets
> hp
>|||
> What version are you using? I think you need to read about
> partitioning in SQL Server 2000/2005.
it's 2k, sorry.
thanks for the info, I'm reading about it at the moment. isn't it a
warehouse solution? wouldn't it be overkill for a normal db where
performance isn't so important? or is there a way in which partitions
would make the above-mentioned operations more convenient?
thanks a lot
hp

Friday, February 10, 2012

Best way to connect from VB6

I'm currently working on a project that is going to read/write a lot of data to SQL Server 2005 Express.

What is the best way (fastest and most reliable) to connect, read, write, and update from VB6?

The code below works, except that RecordCount always returns -1.

I tried doing a MoveLast first, but that produces this error --> rowset does not support fetching backward

Private ObjConn As New ADODB.Connection
Private ObjRS As New ADODB.Recordset

...

in a function....
Dim i As Integer, j As Integer

With ObjConn
    .ConnectionTimeout = 30
    .CommandTimeout = 30
    .Provider = "SQLOLEDB"
    ' Note: the Driver= keyword belongs in ODBC connection strings; with the
    ' SQLOLEDB provider, specify only Server/Database/UID/PWD.
    .ConnectionString = "Server=MACHINE\SQLEXPRESS;Database=DSD;UID=sa;PWD=MYPass;"
End With
ObjConn.Open
strRequest = "SELECT * FROM dbo.Site"
ObjRS.Open strRequest, ObjConn
j = 0
MsgBox ObjRS.RecordCount ' How to get NUMBER OF RECORDS BEFORE LOOPING ?

Do While Not ObjRS.EOF
    task(j).SiteID = ObjRS.Fields(ObjRS.Fields(0).Name).Value
    ' THIS SECTION WORKS
    ObjRS.MoveNext
    j = j + 1
Loop
ObjConn.Close
test = task
End Function

Thanks for helping! I'm stuck in the middle...

Thanks!

For MsgBox ObjRS.RecordCount ' How to get NUMBER OF RECORDS BEFORE LOOPING ?

Try one of the following

1) ObjRS.CursorLocation = adUseClient

2) ObjRS.Open strRequest, ObjConn, adOpenKeyset

Hope this helps
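Putting option 1 together with the original snippet, here is a minimal sketch (assuming the same ObjConn setup and dbo.Site table from the question; the key detail is that CursorLocation must be set before Open):

```vb
' Sketch of option 1: a client-side cursor caches all rows locally,
' so RecordCount is populated as soon as the Recordset is open.
' Assumes ObjConn is already configured as in the original code.
Dim strRequest As String
strRequest = "SELECT * FROM dbo.Site"

ObjConn.Open
ObjRS.CursorLocation = adUseClient      ' must be set BEFORE Open
ObjRS.Open strRequest, ObjConn, adOpenStatic, adLockReadOnly

MsgBox ObjRS.RecordCount                ' actual row count, not -1
```

With a client-side cursor, ADO fetches the whole result set into local memory up front, so RecordCount is accurate immediately; the trade-off is higher memory use for large result sets.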

|||

The reason you cannot move backwards or get the record count is that you are getting a SQL Server firehose cursor as a result. You cannot get the record count until you reach the last record of your query, and firehose cursors do not support moving backwards. The advantage of firehose cursors is that they are the fastest way to fetch SQL Server data; however, they do not support updates. If you want to perform updates, you have to request a different cursor type when calling Open on the Recordset object. For example, if you use a static cursor by passing the adOpenStatic parameter to Open, you will get the RecordCount correctly. Read the section titled "Understanding Cursors and Locks" in the ADO documentation in MSDN.

Thanks

Waseem Basheer

|||

RecordCount returns -1 when you are using a forward-only cursor or a dynamic cursor (see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/ado270/htm/mdprorecordcount.asp). Since the default is a forward-only cursor, that explains the behavior you're seeing.

As Waseem mentions, this cursor type would also explain the behavior you're seeing with MoveLast.

If you switch to using a static cursor, this property should have the row count value that you were expecting (the code snippet doesn't appear to have any need to update data, so this would presumably be sufficient).
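As a sketch of the static-cursor variant (reusing strRequest, ObjConn, and ObjRS from the original post; this assumes the connection is already open):

```vb
' Sketch of the static-cursor approach: adOpenStatic returns a scrollable
' snapshot of the result set, so RecordCount and MoveLast both work.
ObjRS.Open strRequest, ObjConn, adOpenStatic, adLockReadOnly

MsgBox ObjRS.RecordCount   ' row count is available immediately
ObjRS.MoveLast             ' scrolling backwards is supported
ObjRS.MoveFirst            ' return to the first row before looping
```

A static cursor is a snapshot: changes other users make after Open are not visible, which is fine here since the code only reads dbo.Site.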