
Sunday, March 11, 2012

bi-directional vs updatable subscriptions

Strictly speaking, bi-directional replication is any topology where data
modifications on any node are propagated to all other nodes in the system. It
includes merge replication, bi-directional transactional replication, snapshot
replication with immediate updating, snapshot replication with queued
updating, transactional replication with immediate updating, transactional
replication with queued updating, and peer-to-peer replication.
(I am ignoring immediate updating with queued failover here.)
Now, let me climb down from my soap box.
Bi-directional transactional replication doesn't scale well to multiple
nodes, and it locks you into a schema: no schema modifications are allowed
once it is set up.
Immediate updating uses MS DTC (distributed transactions), which is not
resilient to network hiccups and adds latency to all DML originating on the
subscriber. It adds a GUID column to each table you are replicating, it is not
really scalable beyond 10 subscribers, and it will log conflicts but not allow
you to roll them back. It is best used when the majority of the DML originates
on the publisher.
Queued updating uses a queue to replicate DML. It is likewise not scalable
beyond 10 subscribers, adds a GUID column to every table you are replicating,
adds latency to all DML originating on the subscriber, and is best used when
the majority of the DML originates on the publisher. Conflicts are logged but
cannot be rolled back.
Merge replication is designed for large numbers of subscribers, works well
whether the majority of the DML originates on the publisher or on the
subscribers, has rich conflict logging and resolution, and adds a GUID column
to every table you are replicating.
I use bi-directional replication in DR environments, where it can work well,
keeping in mind that there is no automatic failover and you are locked into a
schema.
HTH
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Rob" <Rob@.discussions.microsoft.com> wrote in message
news:9C060D84-80E7-4754-B1B0-6280E64ADDFE@.microsoft.com...
> What is the difference between Bi-directional and Updatable Subscriptions
> Transactional Replication? Conceptually, they seem to be the same... or
> are
> they?
No, most of it is experience.
The caveats about where the majority of the DML occurs are documented,
though. Have a look at these links for more info.
http://msdn2.microsoft.com/en-us/library/aa237486(SQL.80).aspx
http://msdn2.microsoft.com/en-US/library/aa237271(sql.80).aspx
http://msdn2.microsoft.com/en-US/library/aa237346(SQL.80).aspx
http://msdn2.microsoft.com/en-US/library/aa256086(SQL.80).aspx
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Rob" <Rob@.discussions.microsoft.com> wrote in message
news:8E39CB8A-06F6-4318-A3A2-00948098CC07@.microsoft.com...
> Wow! Thank you for taking the time to explain this so well.
> Just wondering on the some of the limitation that you had mentioned. For
> instance, "...not scalable beyond 10 subscribers"; "...locks you into a
> schema, but there are no schema modifications". Is that documented
> anywhere
> in BOL and/or on Microsoft's website where more info on those limitations
> are
> discussed in depth?
> Thanks again.
> "Hilary Cotter" wrote:

BI Studio: Poor Performance When Opening a Solution

When I am opening a Business Intelligence Solution it takes up to 6 minutes
until the system is ready to work.
The first 4 minutes I see Visual Studio with the grey background. In the
remaining time the packages appear but don't accept any input. Meanwhile
Visual Studio is reporting that it is busy and is waiting for an internal
operation to complete.
Any idea how to speed up this process?
Stefan Bublitz
Is your cube metadata big?
My metadata is about 10 MB and takes a few minutes to open.
"SBublitz" <SBublitz@.discussions.microsoft.com> wrote in message
news:095B0A32-78DE-418B-854B-243798CAF2C0@.microsoft.com...
> When I am opening a Business Intelligence Solution it takes up to 6
> minutes
> until the system is ready to work.
> The first 4 minutes I see Visual Studio with the grey background. In the
> remaining time the packages appear but don't accept any input. Meanwhile
> Visual Studio is reporting that it is busy and is waiting for an internal
> operation to complete.
> Any idea how to speed up this process?
> Stefan Bublitz
I browsed the newsgroups in the meantime and found a hint that I tried out:
I uninstalled SSIS SP1 and re-installed SSIS without SP1.
I don't know if this was the cause of the problem, but it obviously
solved it...
(I found entries in the event log regarding DCOM and MsDtsServer).
Stefan Bublitz
"Jeje" wrote:

> does your cube metadata is big?
> My metadata takes 10Mb and required few minutes to be opened.
> "SBublitz" <SBublitz@.discussions.microsoft.com> wrote in message
> news:095B0A32-78DE-418B-854B-243798CAF2C0@.microsoft.com...
>
Oh, you are talking about an SSIS solution, not an SSAS one.
Sorry for the mistake.
And what about SP2?
With SSIS I only see slow opening if the solution was closed while some
packages were still open. When I reopen the solution, it reopens all of those
packages, which takes a while because SSIS validates each package.
You can improve this by changing the validate-metadata option of your
connections to false (it is true by default up through SP2, and after a
package is created you have to change it yourself).
You can also check the "Work Offline" option before opening your big
solution.
"SBublitz" <SBublitz@.discussions.microsoft.com> wrote in message
news:3A16F4D2-5655-43DD-ACD7-1035EFBBA835@.microsoft.com...
>I browsed the newsgroups in the meantime and I found a hint that I tried
>out:
> I uninstalled the SSIS SP1 and I re-installed SSIS without SP1.
> I don't know if this has been the cause of the problem but obviously it
> solved it...
> (I found entries in the event log regarding DCOM and MsDtsServer).
> --
> Stefan Bublitz
>
> "Jeje" wrote:


Wednesday, March 7, 2012

Beyond 8 GB memory

We upgraded our server to Windows 2003 Enterprise Edition and added an extra
8 GB of memory to the machine. The operating system sees all 16 GB, and
Enterprise Manager's properties show all 16 GB of memory. I assigned 14 GB to
SQL Server, but Performance Monitor (Total Server Memory) only shows 7 GB.
Any reason for this?

Did you restart your SQL service?
--
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
"Sam Moayedi" <anonymous@.discussions.microsoft.com> wrote in message
news:CF7CA24D-D1FD-4C6F-A891-F9BBE6559DAC@.microsoft.com...
> We upgraded our server to Windows 2003 Ent edition and added extra 8 GB
memory to the machine, operating system sees all 16 GB also ent mgr
properties shows all 16 GB memory, I assigned 14 GB to SQL server but perf
monitor (total server memory) only shows 7GB memory, any reason for this?
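For reference, assigning memory to SQL Server like this is done with sp_configure; a rough sketch follows. Note this assumes 32-bit SQL Server, where AWE must also be enabled before the instance can address memory beyond 4 GB (the thread does not say whether AWE was configured, so treat this as a hypothetical checklist rather than the confirmed fix):

```sql
-- Make the advanced memory options visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- On 32-bit SQL Server, AWE must be on to use memory beyond 4 GB
EXEC sp_configure 'awe enabled', 1;

-- Cap SQL Server at 14 GB (the value is in MB)
EXEC sp_configure 'max server memory', 14336;
RECONFIGURE;

-- 'awe enabled' only takes effect after the SQL Server service is restarted,
-- which is why "did you restart?" is the first question to ask.
```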

Saturday, February 25, 2012

Better to back up all server databases or back up system databases separately from databases

Gurus,
Running SQL Server 2005 SP2 and using named instances. I have lots of
important application databases on my server these days. My question is
this: is it better to back up all server databases in one maintenance plan,
or to back up the system databases separately from the user databases?
Spin
I don't think it matters one way or another. But then, it depends on what
issues you may be struggling with.
If you have scheduling issues running a single maintenance plan, break it up
into multiple plans. Personally, I prefer dealing with the system databases
separately. In particular, when you are dealing with relatively large user
databases, you may not want to use the MS maintenance plans, and may choose
to schedule their backups in your own scheduler for better flexibility.
Linchi
"Spin" wrote:

> Gurus,
> Running SQL Server 2005 SP2 and using named instances. I am having lots of
> important application databases on my server these days. My question is
> this. Is it better to back up all server databases in one maintenance plan
> or backup system databases separately from databases?
> --
> Spin
>
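Scheduling backups yourself, as Linchi suggests, boils down to a few BACKUP statements run from SQL Server Agent jobs on whatever schedules suit each database. A minimal sketch (the database name and backup path are placeholders):

```sql
-- Full backup of a user database, run from its own Agent job
-- (name and path are illustrative)
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb_full.bak'
WITH INIT, CHECKSUM;

-- System databases can live in a separate job on their own schedule
BACKUP DATABASE master
TO DISK = N'D:\Backups\master_full.bak'
WITH INIT, CHECKSUM;
```

Splitting user and system databases into separate jobs like this gives the flexibility Linchi mentions: each group gets its own schedule, retention, and failure alerting.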


Friday, February 24, 2012

Best way to update different servers

We are looking at combining some systems in a way that would allow our SQL
Server to update the other system's SQL Server when certain events happen
(names change, for instance).
We are looking at using Web Services as one way of handling the data movement
between the two systems.
But if they are both SQL Servers, is there some way, through the use of
triggers for instance, that we could call a stored procedure on another
server (not at our location) to do the updates? Web Services are fine, but
we still have the push/pull/polling problem of knowing when to update System
B when System A's data changes, and vice versa.
Is it best to do this outside of the SQL world, or would it be reasonable to
do this from SQL to SQL, which I assume would be more efficient?
Thanks,
Tom

There is nothing to stop you from updating another SQL Server as a result of
an event on your SQL Server. Triggers seem to be the most obvious choice.
Ideally, you would have a stored proc on the linked server and then execute
something like:
EXEC MyLinkedServer.MyDB.dbo.MyProc @ID, 'The New Name'
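Wiring that up involves registering the remote server once, then calling the remote proc from a trigger. A rough sketch, where every object name (the linked server, table, columns, and remote proc) is hypothetical:

```sql
-- One-time setup: register the remote machine as a linked server
-- (RPC Out must be enabled on the linked server for remote proc calls)
EXEC sp_addlinkedserver @server = N'MyLinkedServer', @srvproduct = N'SQL Server';

-- Trigger that forwards name changes; a trigger fires once per statement,
-- so the inserted table can hold many rows and needs row-by-row handling
CREATE TRIGGER trg_PushNameChange ON dbo.Person
AFTER UPDATE
AS
BEGIN
    IF NOT UPDATE(Name) RETURN;
    DECLARE @ID int, @NewName varchar(100);
    DECLARE c CURSOR LOCAL FOR SELECT ID, Name FROM inserted;
    OPEN c;
    FETCH NEXT FROM c INTO @ID, @NewName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Four-part name call to the stored proc on the linked server
        EXEC MyLinkedServer.MyDB.dbo.MyProc @ID, @NewName;
        FETCH NEXT FROM c INTO @ID, @NewName;
    END
    CLOSE c;
    DEALLOCATE c;
END
```

Keep in mind the trade-off this creates: the original UPDATE now depends on the remote server being reachable, so if the link is down the local change fails too, which is why queuing or replication comes up later in the thread.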
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:O5kXts3oFHA.1148@.TK2MSFTNGP12.phx.gbl...
We are looking at combining some systems that would allow our Sql Server
update the other systems Sql Server system when certain events happens -
names change for instance.
Looking at using Web Services as one way of handling the Data movement
between the 2 systems.
But if they are both Sql Servers, is there some way, through the use of
Triggers for instance, that we could call a stored procedure on another
server (not at our location) to do the updates. Web Services are fine, but
we still have the push/pull/polling problem of knowing when to update System
B when System A's data changes and vice versa.
Is it best to do this outside of the Sql world or would it be reasonable to
do this from Sql to Sql which I assume would be more efficient?
Thanks,
Tom

"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OtY0lv3oFHA.1444@.TK2MSFTNGP10.phx.gbl...
> There is nothing to stop you from updating another SQL Server as a result
> of
> an event on your SQL Server. Triggers seem to be the most obvious.
> Ideally, you would have a stored proc on the linked server and then
> execute
> something like:
> MyLinkedServer.MyDB.dbo.MyProc @.ID, 'The New Name'
Would you have to set up each server as a linked server on each other's
machines?
Would this be the best way even if we are doing this over the Internet?
Thanks,
Tom
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> .
> "tshad" <tscheiderich@.ftsolutions.com> wrote in message
> news:O5kXts3oFHA.1148@.TK2MSFTNGP12.phx.gbl...
> We are looking at combining some systems that would allow our Sql Server
> update the other systems Sql Server system when certain events happens -
> names change for instance.
> Looking at using Web Services as one way of handling the Data movement
> between the 2 systems.
> But if they are both Sql Servers, is there some way, through the use of
> Triggers for instance, that we could call a stored procedure on another
> server (not at our location) to do the updates. Web Services are fine,
> but
> we still have the push/pull/polling problem of knowing when to update
> System
> B when System A's data changes and vice versa.
> Is it best to do this outside of the Sql world or would it be reasonable
> to
> do this from Sql to Sql which I assume would be more efficient?
> Thanks,
> Tom
>

Would you have to set up each server as a linked server on each other's
machines?

Would this be the best way even if we are doing this over the Internet?
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"tshad" <tscheiderich@.ftsolutions.com> wrote in message
news:elYgqz3oFHA.708@.TK2MSFTNGP09.phx.gbl...
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:OtY0lv3oFHA.1444@.TK2MSFTNGP10.phx.gbl...
> There is nothing to stop you from updating another SQL Server as a result
> of
> an event on your SQL Server. Triggers seem to be the most obvious.
> Ideally, you would have a stored proc on the linked server and then
> execute
> something like:
> MyLinkedServer.MyDB.dbo.MyProc @.ID, 'The New Name'
Would you have to set up each server as a linked server on each others
machines?
Would this be the best way even if we are doing this over the Internet?
Thanks,
Tom
> --
> Tom
> ----
> Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
> SQL Server MVP
> Columnist, SQL Server Professional
> Toronto, ON Canada
> www.pinpub.com
> .
> "tshad" <tscheiderich@.ftsolutions.com> wrote in message
> news:O5kXts3oFHA.1148@.TK2MSFTNGP12.phx.gbl...
> We are looking at combining some systems that would allow our Sql Server
> update the other systems Sql Server system when certain events happens -
> names change for instance.
> Looking at using Web Services as one way of handling the Data movement
> between the 2 systems.
> But if they are both Sql Servers, is there some way, through the use of
> Triggers for instance, that we could call a stored procedure on another
> server (not at our location) to do the updates. Web Services are fine,
> but
> we still have the push/pull/polling problem of knowing when to update
> System
> B when System A's data changes and vice versa.
> Is it best to do this outside of the Sql world or would it be reasonable
> to
> do this from Sql to Sql which I assume would be more efficient?
> Thanks,
> Tom
>

best way to tolerate an unreliable WAN connection?

I'm looking at a problem two sites have: they both run the same ERP system
on one server, and their WAN and internet connections are unreliable. What
is the preferred method to keep people from sitting on their thumbs when the
WAN is unavailable? What about if you want to be able to change data at both
locations while the connection is down?
Paul

I would check with the ERP vendor to see if they support
replication, either through SQL Server or through their own processes.
Many ERP/CRM vendors have a "remote server" concept built in for just
such a situation.
--
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
www.experts-exchange.com - experts compete for points to answer your
questions
"Paul" <noone@.executespammers.org> wrote in message
news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> I'm looking at a problem two sites have - the both run the same ERP system
> on one server, and their WAN and internet connections are unreliable. What
> is the preferred method to keep people from sitting on their thumbs when
> the
> WAN is unavailable? What about if you want to be able to change data at
> both
> locations while the connection is down?
> Paul
>

I messed with replication a while back. It made making changes to the
database (new columns, SPs, etc.) a pain. I was hoping there might be an
alternative, maybe a third-party product.
The ERP system is a custom one. I'm trying to find a solution for a problem
my friend is having. Definitely anything he settles on to fix it would be of
interest to me, because I do custom ERP systems as well. We're fortunate
enough here, though, that WAN downtime hasn't been an issue thus far.
Paul
"Kevin3NF" <KHill@.NopeIDontNeedNoSPAM3NF-inc.com> wrote in message
news:u6VabyujFHA.2852@.TK2MSFTNGP15.phx.gbl...
> I would cehck with the ERP vendor to see if they support
> replication...either through SQL Server or through their own processes.
> Many ERP/CRM vendors do have a "remote server" concept built-in for just
> such a situation
> --
> Kevin Hill
> President
> 3NF Consulting
> www.3nf-inc.com/NewsGroups.htm
> www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
> www.experts-exchange.com - experts compete for points to answer your
> questions
>
> "Paul" <noone@.executespammers.org> wrote in message
> news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> > I'm looking at a problem two sites have - the both run the same ERP
system
> > on one server, and their WAN and internet connections are unreliable.
What
> > is the preferred method to keep people from sitting on their thumbs when
> > the
> > WAN is unavailable? What about if you want to be able to change data at
> > both
> > locations while the connection is down?
> >
> > Paul
> >
> >
If you want the system usable in two places that are not connected (during a
WAN outage), you are definitely looking at replication, unless one of the
systems is going to be read-only. In that case you may be able to look into
log shipping, or third-party products like Double-Take from
www.nsisoftware.com (no affiliation).
--
Kevin Hill
President
3NF Consulting
www.3nf-inc.com/NewsGroups.htm
www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
www.experts-exchange.com - experts compete for points to answer your
questions
"Paul" <noone@.executespammers.org> wrote in message
news:%231ALn%23ujFHA.3348@.tk2msftngp13.phx.gbl...
>I messed with replication while back. It made making changes to the
>database
> (new columns, SP's etc) a pain. I was hoping there might be an
> alternative,
> maybe a third-party product.
> The ERP system is a custom one. I'm trying to find a solution for a
> problem
> my friend is having. Definitely anything he settles on to fix it would be
> of
> interest to me, because I do custom ERP systems as well. We're fortunate
> enough here though where WAN downtime hasn't been an issue thus far.
> Paul
> "Kevin3NF" <KHill@.NopeIDontNeedNoSPAM3NF-inc.com> wrote in message
> news:u6VabyujFHA.2852@.TK2MSFTNGP15.phx.gbl...
>> I would cehck with the ERP vendor to see if they support
>> replication...either through SQL Server or through their own processes.
>> Many ERP/CRM vendors do have a "remote server" concept built-in for just
>> such a situation
>> --
>> Kevin Hill
>> President
>> 3NF Consulting
>> www.3nf-inc.com/NewsGroups.htm
>> www.DallasDBAs.com/forum - new DB forum for Dallas/Ft. Worth area DBAs.
>> www.experts-exchange.com - experts compete for points to answer your
>> questions
>>
>> "Paul" <noone@.executespammers.org> wrote in message
>> news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
>> > I'm looking at a problem two sites have - the both run the same ERP
> system
>> > on one server, and their WAN and internet connections are unreliable.
> What
>> > is the preferred method to keep people from sitting on their thumbs
>> > when
>> > the
>> > WAN is unavailable? What about if you want to be able to change data at
>> > both
>> > locations while the connection is down?
>> >
>> > Paul
>> >
>> >
>>
Hi

Why not get the network sorted out? If the WAN goes down, hit ISDN/dial-up
and keep the location working.
Implementing workarounds because the network guys can't keep their act
together is a difficult and costly thing.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"Paul" <noone@.executespammers.org> wrote in message
news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> I'm looking at a problem two sites have - the both run the same ERP system
> on one server, and their WAN and internet connections are unreliable. What
> is the preferred method to keep people from sitting on their thumbs when
> the
> WAN is unavailable? What about if you want to be able to change data at
> both
> locations while the connection is down?
> Paul
Yeah, that's the advice I gave.
According to the guy suffering from this, one site is out in the sticks and
has only Verizon for a provider and (apparently) one option for
connectivity. I told him to try to set up a failover connection over a
regular cable broadband connection. Never seen it done, but I don't see why
it couldn't work. People may turn their noses up at residential connections
but it's better than nothing, and some of them are damn fast. I've been
looking at Verizon's FIOS and salivating. 30Mb/s!
Paul
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:u8tKsY3jFHA.3144@.TK2MSFTNGP12.phx.gbl...
> Hi
> Why not get the network sorted out. If the WAN goes down, hit ISDN/Dial Up
> and keep the location working.
> Implementing workarounds because the network guys can't keep their act
> together is a difficult and costly thing.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Paul" <noone@.executespammers.org> wrote in message
> news:OhI52pujFHA.3288@.TK2MSFTNGP09.phx.gbl...
> > I'm looking at a problem two sites have - the both run the same ERP
system
> > on one server, and their WAN and internet connections are unreliable.
What
> > is the preferred method to keep people from sitting on their thumbs when
> > the
> > WAN is unavailable? What about if you want to be able to change data at
> > both
> > locations while the connection is down?
> >
> > Paul
> >
> >
>

Sunday, February 19, 2012

Best way to setup data structure

I have an online application that works and functions like an accounting,
payroll, scheduling, and billing system. In the past, each center that
signed up would be set up in its own database on SQL 2000. As this grows,
more and more multisite organizations are using this service. It will be
possible to have one organization with 2,000 centers using this service. My
question is: should I continue setting up individual databases for each
center, or should I combine the 2,000 centers for the one company into one
database and have a center key to distinguish the data? The only issue with
the multisite organizations is reporting across the 2,000 or so centers. 95%
of the application is center-specific; the only time the data needs to be
combined is for some reports that are company-wide.
Thanks,
Mark
Hi
I would do one database, but it depends on how well normalised the DB is and
what a stress test would show.
Managing 2,000 DBs becomes a nightmare.
How big are the DBs, and how big do you expect them to grow?
Regards
Mike
"Mark" wrote:

> I have an online application that works and functions like an accounting,
> payroll, scheduliing and billing system. In the past, each center that
> signed up would be setup in thier own database on SQL 2000. As this grows
> more and more multisite organizations are using this service. It will
> possible to have one organization with 2,000 centers using this service. My
> question is: Should I continue setting up individual databases for each
> center or should I combine the 2,000 centers for the one company into one
> database and have a center key to distinguse the data. The only issue with
> the mulit sites are reporting across the 2,000 or so centers. 95% of the
> application is center specific the only time the data needs to be combined is
> for some reports that are company wide.
> Thanks,
> Mark
The database is normalized, and where it is not, we are in the process of
fixing that now. The database size after 2 years of use is about 3 GB; I
would imagine each database could grow to 10 GB apiece, at which time we
will get rid of some of the data. The size of the database is really
dependent on the activity of the centers.
Thanks,
Mark
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> I would do one database, but it depends on how well normalised the Db is and
> what a stress test would show up.
> Managing 2000 DB's becomes a nightmare.
> How big ar the DB's and how big do you expect them to grow?
> Regards
> Mike
> "Mark" wrote:
I would put everything in one DB. One of our companies has around 400
law firms, and everything is in one database. The main thing that you have
to think about is security. In our case each law firm has a role, and many
tables contain a field that specifies to which law firm the row belongs
(row-level security). This is the basic idea, but it gets more complex when
you have multiple groups within each law firm (this is based on the
requirements and something that you have to analyze). The
process might be painful, but you have a lot to gain as a DBA:
-less work for you and the developers ($$$)
-more simplified disaster recovery
-for reporting, you can replicate your database to a different server and let
your clients hit the reporting box instead of your prod
-you will not need as many connection strings!
-avoiding an administrative nightmare
Sasan Saidi, MSc in CS
Senior DBA
Brascan Business Services
"I saw it work in a cartoon once so I am pretty sure I can do it."
"Mark" wrote:
> The databse is normalized and where it is not, we are in the process of
> fixing that now. The database size after 2 years of use is about 3 gigs - I
> would imagine each database could grow to 10 gigs a piece at which time we
> will get rid of some of the data. The size of the database is really
> dependant on the activity of the centers.
> Thanks,
> Mark
> "Mike Epprecht (SQL MVP)" wrote:
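The row-level security idea above can be sketched as a center key on each shared table plus a filtered view. All object names here (the tables, the login-to-center mapping, and the role) are hypothetical:

```sql
-- Every shared table carries the key of the center that owns the row
CREATE TABLE dbo.Invoice (
    InvoiceID int   NOT NULL PRIMARY KEY,
    CenterID  int   NOT NULL,  -- which center this row belongs to
    Amount    money NOT NULL
);

-- Assumed helper table mapping logins to the centers they may see
CREATE TABLE dbo.CenterUser (
    CenterID  int          NOT NULL,
    LoginName sysname      NOT NULL
);

-- A view that filters rows to the caller's center(s)
CREATE VIEW dbo.MyInvoices
AS
SELECT i.InvoiceID, i.CenterID, i.Amount
FROM dbo.Invoice AS i
JOIN dbo.CenterUser AS cu
  ON cu.CenterID = i.CenterID
WHERE cu.LoginName = SUSER_SNAME();  -- current login

-- Users get the view, not the base table
GRANT SELECT ON dbo.MyInvoices TO CenterUsersRole;
```

Company-wide reports (the other 5% of the workload) can then query the base tables directly, ideally against a replicated reporting server as suggested above.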

Best way to setup data structure

I have an online application that works and functions like an accounting,
payroll, scheduliing and billing system. In the past, each center that
signed up would be setup in thier own database on SQL 2000. As this grows
more and more multisite organizations are using this service. It will
possible to have one organization with 2,000 centers using this service. My
question is: Should I continue setting up individual databases for each
center or should I combine the 2,000 centers for the one company into one
database and have a center key to distinguse the data. The only issue with
the mulit sites are reporting across the 2,000 or so centers. 95% of the
application is center specific the only time the data needs to be combined is
for some reports that are company wide.
Thanks,
MarkHi
I would do one database, but it depends on how well normalised the Db is and
what a stress test would show up.
Managing 2000 DB's becomes a nightmare.
How big ar the DB's and how big do you expect them to grow?
Regards
Mike
"Mark" wrote:
> I have an online application that works and functions like an accounting,
> payroll, scheduliing and billing system. In the past, each center that
> signed up would be setup in thier own database on SQL 2000. As this grows
> more and more multisite organizations are using this service. It will
> possible to have one organization with 2,000 centers using this service. My
> question is: Should I continue setting up individual databases for each
> center or should I combine the 2,000 centers for the one company into one
> database and have a center key to distinguse the data. The only issue with
> the mulit sites are reporting across the 2,000 or so centers. 95% of the
> application is center specific the only time the data needs to be combined is
> for some reports that are company wide.
> Thanks,
> Mark|||The databse is normalized and where it is not, we are in the process of
fixing that now. The database size after 2 years of use is about 3 gigs - I
would imagine each database could grow to 10 gigs a piece at which time we
will get rid of some of the data. The size of the database is really
dependant on the activity of the centers.
Thanks,
Mark
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> I would do one database, but it depends on how well normalised the Db is and
> what a stress test would show up.
> Managing 2000 DB's becomes a nightmare.
> How big are the DBs and how big do you expect them to grow?
> Regards
> Mike
> "Mark" wrote:
> > I have an online application that works and functions like an accounting,
> > payroll, scheduling and billing system. In the past, each center that
> > signed up would be set up in their own database on SQL 2000. As this grows,
> > more and more multisite organizations are using this service. It will be
> > possible to have one organization with 2,000 centers using this service. My
> > question is: Should I continue setting up individual databases for each
> > center or should I combine the 2,000 centers for the one company into one
> > database and have a center key to distinguish the data. The only issue with
> > the multi-site setup is reporting across the 2,000 or so centers. 95% of the
> > application is center specific; the only time the data needs to be combined is
> > for some reports that are company wide.
> >
> > Thanks,
> > > Mark|||I would put everything in one DB. One of our companies has around 400
law firms and everything is in one database. The main thing that you have to
think about is security. In our case each law firm has a role and many tables
contain a field that specifies to which law firm the row belongs (row-level
security). This is basically the idea, but it gets more complex when you have
multiple groups within each law firm. This is based on the requirements and
something that you have to analyze. The
process might be painful but you have a lot to win as a DBA:
-less work for you and the developers ($$$)
-more simplified disaster recovery
-For reporting you can replicate your database to a different server and let
your clients hit your reporting box instead of your prod.
-You will not need as many connection strings!
-Avoiding an administrative nightmare
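A rough sketch of that row-level security idea (all table, column, and function choices below are illustrative assumptions, not the actual schema):

```sql
-- Hypothetical sketch: every row carries a CenterID ("center key"), and each
-- center's users only see their own rows through a filtered view.
CREATE TABLE dbo.Invoice (
    InvoiceID int NOT NULL PRIMARY KEY,
    CenterID  int NOT NULL,   -- which center owns this row
    Amount    money NOT NULL
)
GO

-- Assumed lookup mapping each login to its center.
CREATE TABLE dbo.CenterUser (
    LoginName sysname NOT NULL PRIMARY KEY,
    CenterID  int NOT NULL
)
GO

-- Center users query the view, never the base table.
CREATE VIEW dbo.InvoiceForCurrentUser
AS
SELECT i.InvoiceID, i.CenterID, i.Amount
FROM dbo.Invoice i
JOIN dbo.CenterUser u ON u.CenterID = i.CenterID
WHERE u.LoginName = SUSER_SNAME()
GO
```

Company-wide reports can then read the base tables directly under a separate reporting role.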
--
Sasan Saidi, MSc in CS
Senior DBA
Brascan Business Services
"I saw it work in a cartoon once so I am pretty sure I can do it."
"Mark" wrote:
> The database is normalized and where it is not, we are in the process of
> fixing that now. The database size after 2 years of use is about 3 gigs - I
> would imagine each database could grow to 10 gigs apiece, at which time we
> will get rid of some of the data. The size of the database is really
> dependent on the activity of the centers.
> Thanks,
> Mark
> "Mike Epprecht (SQL MVP)" wrote:
> > Hi
> >
> > I would do one database, but it depends on how well normalised the Db is and
> > what a stress test would show up.
> >
> > Managing 2000 DB's becomes a nightmare.
> >
> > How big are the DBs and how big do you expect them to grow?
> >
> > Regards
> > Mike
> >
> > "Mark" wrote:
> >
> > > I have an online application that works and functions like an accounting,
> > > payroll, scheduling and billing system. In the past, each center that
> > > signed up would be set up in their own database on SQL 2000. As this grows,
> > > more and more multisite organizations are using this service. It will be
> > > possible to have one organization with 2,000 centers using this service. My
> > > question is: Should I continue setting up individual databases for each
> > > center or should I combine the 2,000 centers for the one company into one
> > > database and have a center key to distinguish the data. The only issue with
> > > the multi-site setup is reporting across the 2,000 or so centers. 95% of the
> > > application is center specific; the only time the data needs to be combined is
> > > for some reports that are company wide.
> > >
> > > Thanks,
> > > Mark


Best way to send email from a C# stored procedure?

I thought I could just copy over some asp.net code like:

System.Web.Mail.MailMessage mailMessage = new System.Web.Mail.MailMessage();

But VS2005 doesn't seem to want me touching System.Web.Mail.

Any ideas?

Thanks,

Allen

OK, I figured out that I need to use System.Net.Mail, but here is the next problem - when I create an instance of the SmtpClient object like this:

SmtpClient client = new SmtpClient("localhost", 25);

I get this exception:

System.Security.SecurityException: Request for the permission of type 'System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
   at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
   at System.Security.CodeAccessPermission.Demand()
   at System.Net.Mail.SmtpClient.Initialize()
   at System.Net.Mail.SmtpClient..ctor(String host, Int32 port)
|||

has anyone managed to solve this one?

I am getting the same error...

|||

I was able to get around that error (and a few others) by doing the following:

ALTER DATABASE [PUBS] SET TRUSTWORTHY ON
GO
ALTER ASSEMBLY [TaskScheduler] WITH PERMISSION_SET = UNSAFE

I hope this helps.

-Allen Cryer

|||Mate, you're a champ. I've looked for the last 24 hours all over. Thank you very much. It works like a charm now.

Best way to prevent multiple notifications for an event

I am setting up a notification system using the SQL Event Provider, and I have a question about preventing multiple notifications for a single event. The events are provided from a SQL select, and the notification emails are sent once a day. How does NS decide if it's already handled an event? If I change the email frequency, do I need to add a check in my SQL select, or does NS handle this?

Thanks
Erick

Chronicle tables can be used to prevent duplicate notifications. Check out ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/sqlntsv9/html/17e2debd-485f-4962-b974-6be6597983f9.htm for more information.
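The chronicle idea can be sketched roughly like this (table and column names are made up for illustration; the real rule syntax lives in your Notification Services application definition):

```sql
-- Hypothetical sketch: remember which events have already produced a
-- notification, and only act on events not yet seen in the chronicle.
CREATE TABLE dbo.EventChronicle (
    EventID    int NOT NULL PRIMARY KEY,
    NotifiedOn datetime NOT NULL DEFAULT (GETDATE())
)
GO

-- Inside the notification-generation rule: pick up only new events...
INSERT INTO dbo.EventChronicle (EventID)
SELECT e.EventID
FROM dbo.Events e
WHERE NOT EXISTS (SELECT 1 FROM dbo.EventChronicle c
                  WHERE c.EventID = e.EventID)
```

Because the chronicle persists across rule firings, changing the e-mail frequency does not re-deliver events that are already recorded there.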

HTH...

Thursday, February 16, 2012

Best way to move SQL 6.5 to new hardware

Hi,
Can anyone tell me the best way to move an MS SQL 6.5 system to a new
hardware box?
I have already set up a new SQL Server using a different computer name; I just want
to copy all data and security info to the new one.
Many thanks in advance.
Regards,
Chris Lee
Chris,
there's a huge amount of things to take into account, but when I did it I
found this article very helpful:
http://www.microsoft.com/technet/pro...y/sqlugrd.mspx
HTH,
Paul Ibison
|||Dear Paul,
Thanks for your reply, but does the article apply to a SQL 6.5 to SQL 6.5 upgrade?
Regards,
Chris Lee
|||Hi,
Did you install the new SQL Server with the same directory structure as the actual
(old) SQL Server? If yes, then:
1. Apply the same service pack in the new server (Probably sp5a + post
update 5a)
2. Stop the SQL server in old server and new server
3. Copy all the DAT files (including system databases) from (drive:\mssql\data)
on the old server to the new server
4. Start SQL server in new server
5. Using ISQLW, log in to SQL Server and execute the script below:
sp_dropserver 'old_server_name'
go
sp_addserver 'new_server_name', 'local'
go
6. Stop and Start the SQL Server service
Thanks
Hari
MCDBA
"Chris Lee" <c_h_r_i_s_l_e_e_@.h_o_t_m_a_i_l_._c_o_m> wrote in message
news:ca93g0$2pf8@.imsp212.netvigator.com...
> Hi,
> Can anyone tell me the best way to move an MS SQL 6.5 system to a new
> hardware box?
> I have already set up a new SQL Server using a different computer name; I just want
> to copy all data and security info to the new one.
> Many thanks in advance.
> Regards,
> Chris Lee
|||Apologies - just assumed you were upgrading. Please refer
to Hari's post then.
Regards,
Paul Ibison
|||Thanks Hari,
It works!!! Many thanks!
Regards,
Chris Lee


Tuesday, February 14, 2012

Best way to implement external Log file

Hi,

I am developing an application for our client.
Some of the requirements cover system failure scenarios.
SQL Server is generating numbers for items inserted in some tables (something like "ordinal number / year"; the logic for this is specified by my client and is somewhat complicated). The most important requirement is to have an external log file (on a USB stick in the server, for example) with the latest numbers generated, so that in the case of system failure it can be read and work can be continued on paper (with continuing numbers) until system recovery.
Ideally, the write should be done inside the number-generating and record-inserting transaction.
There is an average of 1 number generated per minute, but there is also a batch insert option for inserting thousands of records which must have numbers generated.

What is the best way to implement this:
-a new data file on a USB stick (external hard disk?) and a single table in that data file? What about the speed of the USB stick? I suppose that a USB stick with caching on can be fast enough (we are talking about a few rows only).
-maybe configure the SQL Server log to be written to that USB stick?
-some of the offices may have Internet access and some may not, so sending numbers via e-mail is not an option
-maybe some new MS SQL Server 2005 feature?
What is the best practice in this case?

PS.
I know this sounds like an MCP exam, so, come on, MCSDs and MCDBAs ;)

Hi,

if this is business critical I would suggest you harden the availability of your SQL Server instead :-). The idea of the USB stick sounds fancy but wouldn't be professional at all. So use a job which periodically reads the numbers from that table and persists them in a log (I assume that your numbers don't have to be machine-readable, so the option with the logfile of the job is sufficient for you). Otherwise, if it should be machine-readable, you could use a job which uses SQLCMD or OSQL (depending on your version) to export some data.

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

|||

I understand what you are saying, but I need to conform to the specs I received. They have some old-school requests that they think are robust. I think it is because they were assuming it would be done in an older and less reliable system than SQL Server.

A job is not suitable because data will be lost if SQL Express goes down. It needs to be done within the insert transaction.

|||

ok, then you would need to implement something which writes out a log using OSQL.

Use a trigger (or your stored procedure, depending on how you insert the data into the table) which runs something like the following code:

OSQL -S Servername -Q "SELECT YourNumber FROM YourTable" -o "C:\yourlogfile"

Keep in mind that triggers fire synchronously, so the transaction will block the rest of the system until the process comes back with either an error message or the successful execution of the command.
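A very rough sketch of that trigger approach (table, column, and file names are assumptions; note that xp_cmdshell requires elevated rights, and because the trigger runs inside the inserting transaction, a hung command will stall the insert):

```sql
-- Hypothetical sketch: after each insert, dump the latest generated number
-- to an external file via osql, from inside the insert transaction.
CREATE TRIGGER trg_LogLatestNumber ON dbo.NumberedItems
AFTER INSERT
AS
BEGIN
    DECLARE @cmd varchar(500)
    SET @cmd = 'osql -E -S (local) '
             + '-Q "SELECT MAX(GeneratedNumber) FROM dbo.NumberedItems" '
             + '-o "E:\latest_number.log"'
    EXEC master..xp_cmdshell @cmd
END
```

Writing the file to a removable drive (E: here) is what gives you something readable after the server itself dies, which is the requirement above.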

HTH, Jens Suessmeyer.

http://www.sqlserver2005.de

Friday, February 10, 2012

best way to compile thousands of TSQL stored procedures?

I have a custom application that on occasion requires thousands of TSQL
files (on the file system) to be compiled to the database.

What is the quickest way to accomplish this?

We currently have a small vbs script that gets a list of all the files,
then loops around a call to "osql". Each call to osql opens/closes a
connection to the destination database (currently across the network).|||<murray_shane56@.hotmail.com> wrote in message
news:1110557746.585156.86170@.f14g2000cwb.googlegro ups.com...
>I have a custom application that on occasion requires thousands of TSQL
> files (on the file system) to be compiled to the database.
> What is the quickest way to accomplish this?
> We currently have a small vbs script that gets a list of all the files,
> then loops around a call to "osql". Each call to osql opens/closes a
> connection to the destination database (currently across the network).

Since text files compress well, you could zip them up, FTP or copy them to
the server, then unzip them and run your vbs script on the server side
(using xp_cmdshell, a scheduled job, DTS etc.).

Also, are you able to reduce the number of files you run? Do you change
thousands of procedures at a time, or are you able to use your source
control system to identify only the objects which have been modified?

Simon|||(murray_shane56@.hotmail.com) writes:
> I have a custom application that on occasion requires thousands of TSQL
> files (on the file system) to be compiled to the database.
> What is the quickest way to accomplish this?
> We currently have a small vbs script that gets a list of all the files,
> then loops around a call to "osql". Each call to osql opens/closes a
> connection to the destination database (currently across the network).

VBS is not my best game, but I would expect it to be possible to use
ADO from VB Script. Thus, you could open a connection, and a command
object, and then run .Execute with the option adExecuteNoRecords.

This will not only save you from opening and closing the connection,
but also from spawning an OSQL process for each procedure.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||"Simon Hayes" <sql@.hayes.ch> wrote in message
news:4231d822$1_2@.news.bluewin.ch...
> <murray_shane56@.hotmail.com> wrote in message
> news:1110557746.585156.86170@.f14g2000cwb.googlegro ups.com...
>>I have a custom application that on occasion requires thousands of TSQL
>> files (on the file system) to be compiled to the database.
>>
>> What is the quickest way to accomplish this?
>>
>> We currently have a small vbs script that gets a list of all the files,
>> then loops around a call to "osql". Each call to osql opens/closes a
>> connection to the destination database (currently across the network).
>>
> Since text files compress well, you could zip them up, FTP or copy them to
> the server, then unzip them and run your vbs script on the server side
> (using xp_cmdshell, a scheduled job, DTS etc.).
> Also, are you able to reduce the number of files you run? Do you change
> thousands of procedures at a time, or are you able to use your source
> control system to identify only the objects which have been modified?
> Simon

Ditto on moving the operation to the server and
seeing if you can trim the number of objects down.

If there are no object dependencies on order of execution,
I would look into multi-threading this operation as well.

I would write out a series of CMD file scripts calling OSQL with your
existing vb script.
And then call a master CMD script that uses START to run each of the
sub-CMD scripts in its own process.*

Also, make sure that you are using integrated security with OSQL as I recall
it runs faster than SQL security.

* Don't use the START before each call to OSQL, or you will end up like
Mickey did in that movie with all those brooms.|||(murray_shane56@.hotmail.com) writes:
> I have a custom application that on occasion requires thousands of TSQL
> files (on the file system) to be compiled to the database.
> What is the quickest way to accomplish this?
> We currently have a small vbs script that gets a list of all the files,
> then loops around a call to "osql". Each call to osql opens/closes a
> connection to the destination database (currently across the network).

One more thing, if you continue to use OSQL, be sure to specify the
-I option to have SET QUOTED_IDENTIFIER ON. This matters if you use
indexed views or indexed computed columns.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi

You may want to look at:
http://tinyurl.com/5299q

Another alternative is to concatenate the files before running them.

John

<murray_shane56@.hotmail.com> wrote in message
news:1110557746.585156.86170@.f14g2000cwb.googlegro ups.com...
>I have a custom application that on occasion requires thousands of TSQL
> files (on the file system) to be compiled to the database.
> What is the quickest way to accomplish this?
> We currently have a small vbs script that gets a list of all the files,
> then loops around a call to "osql". Each call to osql opens/closes a
> connection to the destination database (currently across the network).

best way to clean up temp files

I have a custom Data Flow task that creates temp files to the system temp directory during processing. A lot of times, we'll use SSIS to do one data transformation, running and tweaking the package along the way... we do this in the designer ... if we notice something that's incorrect in the data view, we just hit the stop button and fix it. However, when we do this, the Cleanup() function isn't called, and my temp files are left in the temp directory, when they really ought to be disposed of.

Is there a method that gets called every time when the DtsDebugHost quits, whether it finished, didn't finish properly, or was stopped in the middle? What would be a good way (other than having some service that monitors what temp files are used by what processes) to clean up temp files after we don't need them?

~Steve

There is no way to do this in SSIS since stopping a package using the designer is just like stopping a process while using a debugger (no cleanup happens, generally speaking). However, there is an open mode called delete-on-close (in native code; I don't know if this maps to managed code too) and you could use that setting to have the OS perform the cleanup.

Thanks,

Matt

Best way for the performance

I would like to know which way is better for the performance of my
system:
Creating a huge table with more than 5 000 000 000 rows or multiple
tables with the same structure containing 5 000 000 rows?
Here the context of our application:
Our application offers the possibility to create up to 3000 projects.
Each project contains a list of tags and a list of value-timestamp for
each tag.
So, a project can contain up to 500 tags with a maximum of 5000 values
for each tag. In a hypothetical scenario where everything is at
maximum, only one table containing every value-timestamp would be huge
with more than 7 500 000 000 rows.
My first thought was to create a table containing the values for each
project.
Project
===========
Project_ID - PK
ValDataName_ID
... etc...

ValDataName
===========
ValDataName_ID - PK
TableName (represents the ValDataXXX)
Project_ID

ValDataXXX where XXX is a unique id
===========
ValDataXXX_ID - PK
Tag_ID
TagVal
TagTime
So, I would have a table (ValDataName) containing the names of each
ValData table linked to the project and a table containing my projects.
Then, I would have up to 5000 tables of type ValData.
Since my ValData tables are completely independent, any action
affecting one table does not affect the other.
So, which way would be the better approach?
Thanks.|||Francis B. wrote:
> I would like to know which way is better for the performance of my
> system:
> Creating a huge table with more than 5 000 000 000 rows or multiple
> tables with the same structure containing 5 000 000 rows?
> Here the context of our application:
> Our application offers the possibility to create up to 3000 projects.
> Each project contains a list of tags and a list of value-timestamp for
> each tag.
> So, a project can contain up to 500 tags with a maximum of 5000 values
> for each tag. In a hypothetical scenario where everything is at
> maximum, only one table containing every value-timestamp would be huge
> with more than 7 500 000 000 rows.
> My first thought was to create a table containing the values for each
> project.
> Project
> ===========
> Project_ID - PK
> ValDataName_ID
> ... etc...
> ValDataName
> ===========
> ValDataName_ID - PK
> TableName (represents the ValDataXXX)
> Project_ID
> ValDataXXX where XXX is a unique id
> ===========
> ValDataXXX_ID - PK
> Tag_ID
> TagVal
> TagTime
> So, I would have a table (ValDataName) containing the names of each
> ValData table linked to the project and a table containing my projects.
> Then, I would have up to 5000 tables of type ValData.
> Since my ValData tables are completely independent, any action
> affecting one table does not affect the other.
> So, which way would be the better approach?
> Thanks.
You didn't mention one very important piece of information. What
version of SQL Server are you using? SQL Server 2005 has a lot of new
scalability features, especially table partitioning. For a major new
application I assume SQL Server 2005 will be the obvious choice for
you.
Achieving a scalable solution will depend very much on the design of
your indexing, queries, your application and your hardware
implementation. You've told us nothing of those. I can only suggest
that creating new tables in a bid to improve performance should
generally be a long way down your list of options. Usually there are
much more effective ways to optimise performance. More likely you might
choose to implement horizontal partitioning in order to make certain
admin tasks easier. How you do that depends a lot on the version and
edition you are using.
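For illustration, horizontal partitioning in SQL Server 2005 might look roughly like this (boundary values and all names are assumptions for the sketch, not a recommendation for this particular schema):

```sql
-- Hypothetical sketch: one partitioned table keyed by project,
-- instead of thousands of per-project ValData tables.
CREATE PARTITION FUNCTION pfProject (int)
AS RANGE RIGHT FOR VALUES (1000, 2000)
GO

-- Map every partition to the same filegroup for simplicity.
CREATE PARTITION SCHEME psProject
AS PARTITION pfProject ALL TO ([PRIMARY])
GO

CREATE TABLE dbo.TagValue (
    Project_ID int      NOT NULL,
    Tag_ID     int      NOT NULL,
    TagVal     float    NOT NULL,
    TagTime    datetime NOT NULL
) ON psProject (Project_ID)
GO
```

The queries and the application see a single table, while the storage engine can eliminate partitions by Project_ID and you can switch partitions in and out for maintenance.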
My other suggestion would be that you rethink very, very carefully
about logical design. When I see tables with names like "ValDataXXX"
and columns with names like "Tag_ID" and "TagVal" I just get an a awful
sinking feeling that this is some naive data-model-on-the-fly kind of
solution - what is sometimes euphemistically called an "EAV" model. If
I'm right then you can probably say goodbye to scalability, performance
and data integrity. Your database will have none of them.
--
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||On 19 Sep 2006 12:59:25 -0700, "Francis B."
<francisbeliveau@.gmail.com> wrote:
>I would like to know which way is better for the performance of my
>system:
>Creating a huge table with more than 5 000 000 000 rows or multiple
>tables with the same structure containing 5 000 000 rows?
>Here the context of our application:
>Our application offers the possibility to create up to 3000 projects.
>Each project contains a list of tags and a list of value-timestamp for
>each tag.
No way should a project management system have 10^9 rows.
Beyond that, I second what David has already said.
J.