Try this for starters...
http://www.ftponline.com/vsm/2002_07/online/hottips/esposito/
Sunday, March 25, 2012
BinaryWrite
Thursday, March 22, 2012
Binary files in DB : set name on retrieve
Hello,
I have a table that stores binary files. When I serve them up to the user, I call the following page (serveDocument.aspx?DocumentID=xxx) which holds only the following code and pass the document ID:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Dim DocumentID As Integer = Convert.ToInt32(Request.QueryString("DocumentID"))
    'Connect to the database and bring back the document contents & MIME type for the specified document
    Using myConnection As New SqlConnection(ConfigurationManager.ConnectionStrings("kelly_lmConnectionString1").ConnectionString)
        Const SQL As String = "SELECT [Content_Type], [Document_Data] FROM [lm_Service_Detail_Documents] WHERE [Document_id] = @Document_id"
        Dim myCommand As New SqlCommand(SQL, myConnection)
        myCommand.Parameters.AddWithValue("@Document_id", DocumentID)
        myConnection.Open()
        Dim myReader As SqlDataReader = myCommand.ExecuteReader()
        If myReader.Read() Then
            Response.ContentType = myReader("Content_Type").ToString()
            Response.BinaryWrite(CType(myReader("Document_Data"), Byte()))
        End If
        myReader.Close()
        myConnection.Close()
    End Using
End Sub
It works perfectly. But here's the thing: I want users to be able to right-click the link and choose 'Save Target As'. In IE 7, this works fine. In Firefox it saves the document as 'serveDocument.aspx'. Is there any way to inject the filename that I want the document saved as?
Thank you.
Add this header also:
context.Response.ContentType = "application/octet-stream";
context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + filename+".txt" + "\"");
|||What does 'context' refer to?
And, are you saying I shouldn't set the ContentType to the actual type of file?
Remove the 'context' part; that is used when you are writing an HTTP handler. In a page, use Response directly.
|||Well, that partially works. When a user clicks the link and then saves the file, the correct name is present. But my issue isn't that... I want the user to be able to right-click the link and 'Save Link As...' with the correct filename. When I do that, it still says 'documentServe.aspx'.
Hi,
But here's the thing: I want users to be able to right-click the link and choose 'Save Target As'. In IE 7, this works fine. In Firefox it saves the document as 'serveDocument.aspx'.
From your description, it seems that when you right-click the link and choose 'Save Target As' in IE7, you get the whole URL with parameters, but not in Firefox, right?
If so, I'm afraid the behavior is browser-specific, and to my knowledge there isn't a workaround, since IE and Firefox simply don't handle this the same way.
Thanks.
Monday, March 19, 2012
big log file
I have a db with 2 data files and 2 log files, but one log file is about
9 GB. I would first like to reduce the db to only one pair of files, one data and
one log, and second reduce the log size to about 1 GB or 2. How can I do that?
Best regards,
Owen.
Firstly, to remove one of the log files, use
DBCC SHRINKFILE with the EMPTYFILE option.
Then ALTER DATABASE to remove it.
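Together, the steps look roughly like this. A sketch only: it assumes the database is named MyDB and the logical file names are MyDB_Log (to keep) and MyDB_Log2 (to drop); check sysfiles for the names on your system.

```sql
-- Sketch: empty and remove the second log file, then shrink the survivor.
USE MyDB
DBCC SHRINKFILE (MyDB_Log2, EMPTYFILE)    -- move active log records off the file
ALTER DATABASE MyDB REMOVE FILE MyDB_Log2 -- drop the now-empty file
DBCC SHRINKFILE (MyDB_Log, 2048)          -- target the remaining log at ~2 GB
```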
As for shrinking the other log, there are numerous articles on this; here are a
few...
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
Server 7.0 Tran Log
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
Server 2000 with DBCC SHRINKFILE
HTH. Ryan
"Owen" <anibal@.prensa-latina.cu> wrote in message
news:O0nvZdRHGHA.1424@.TK2MSFTNGP12.phx.gbl...
> Hello:
> I have a db with 2 data files and 2 log file, but one log files is about
> 9GB, I would like first reduce the db to only one pair of file, one data
> and
> one log and second reduce the size of log about 1gb or 2. how can I do
> that?
> Best regards,
> Owen.
>
Thursday, March 8, 2012
BI Accelerator SetCurrentDay Problem
I'm having a problem with the SetCurrentDay DTS package
of the BI Accelerator.
I'm loading a set of data using text files, therefore I'm using
Master_Import.dts and Master_Update.dts unchanged. Then I run the VBScripts that are also generated by Accelerator (VAMPanorama1_Load_VAMPanorama.vbs).
This loads the data from text files into the Staging Database and SM Database, and also runs the Set Current Day package with a fixed date of "2001-12-31".
The problem is that data for the year of the Current Date is multiplied by a factor of six. Now, when I run the SetCurrentDay package alone, to try to set a new date (say "2003-07-01"), either opening the package in Enterprise Manager and changing the global variable, or using the DTSRUN utility, I get the same problem. The data is multiplied by a factor of 6 for the year 2003, and data for 2002 and 2001 shows up correctly at the year, qtr and month level.
I tried this on the Retail Analysis sample project that comes with Accelerator and had exactly the same problem.
Has anyone seen something like this? Am I doing something wrong here?
Any help is appreciated.
I assume you've done something else by now, but I had the same problem. After verifying that the proper number of facts were loaded, and that a join wasn't creating additional records, I stumbled on the solution: there is no problem with the cube.
The problem is that the auto-generated base time dimensions each have an 'All' level and they should not. Setting all of them (for me they are Time.Standard and Week.Standard) to 'Current' will fix the problem and show the proper number of transactions. The reason they are multiplied is that 'All' aggregates across all time periods including the actual periods and the calculated ones like Current, Last Year, Change from Last, etc.
Originally posted by rsada
I'm having a problem with the SetCurrentDay DTS package
of the BI Accelerator.
I'm loading a set of data using text files therefore I'm using
Master_Import.dts and Master_Update.dts unchanged. Then I run the VBScripts that are also generated by Accelerator (VAMPanorama1_Load_VAMPanorama.vbs).
This loads the data from text files into the Staging Database and SM Database, and also runs the Set Current Day package with a fixed date of "2001-12-31".
The problem is that data for the year of the Current Date is multiplied by a factor of six. Now, when I run the SetCurrentDay package alone, to try to set a new date (say "2003-07-01"), either opening the package in Enterprise Manager and changing the global variable, or using the DTSRUN utility, I get the same problem. The data is multiplied by a factor of 6 for the year 2003, and data for 2002 and 2001 shows up correctly at the year, qtr and month level.
I tried this on the Retail Analysis sample project that comes with Accelerator and had exactly the same problem.
Has anyone seen something like this? Am I doing something wrong here?
Any help is appreciated,|||It's been some time since I last saw this thing, but could you refresh my memory?
What do you mean by "Setting all of them (for me they are Time.Standard and Week.Standard) to 'Current'"? Is this changing the name of the All member, or removing the All member?
thanks|||It's actually constraining the dates to 'Current' but I found the real source of the problem.
The auto-generated date dimensions have custom members built into the dimension table, and one of them is called 'Current'. My original recommendation was that you could not leave the date dimensions set to 'All' (meaning that they are unconstrained) because that caused the totals to be much higher than they should be. Setting them to 'Current' or to some other specific value (a particular year, month, etc.) would fix the totals.
Here's the real problem: the unary operator property is not set by the spreadsheet generator. If you go into Analysis Manager and edit the Time.Standard dimension, you can fix it. For each level, look at the advanced properties and click the ellipsis button beside the value for Unary Operators. A dialog comes up that lets you check "Enable Unary Operators" and tell it there is an existing field of <level name>_Oper (e.g. Year_Oper). This one change (setting it for all levels) fixes all of the auto-generated date dimensions.
What's happening is that without the unary operator, calculated dates like Current, Previous, Change, %Change, etc. are treated just like any other year value (like 2004), so the cube adds together all of the values, causing data to be added both for the actual date on which it occurs and for the calculated dates (Current, etc.). The unary operator lets the value in <level name>_Oper determine whether the values will aggregate or not, and thankfully, the attribute has the right values.|||Thanks a lot Barclay, I got it working fine now!! :)
Saturday, February 25, 2012
better disk config for staging & tempdb files...
I want to have your feedback on how to configure my drives to ensure good
performance during the loading process & transformation against a staging
database.
Imagine you have 4 drives available for the staging database & tempdb
database...
Losing these disks is not important, so no redundancy is required.
I read from an external database into the staging, I do some updates and
copies within the staging itself, and finally I read the staging to load the
data warehouse.
I execute more than 70 DTS packages; the sequence of these packages is
optimized, so I could write a fact table in the staging while I read another
to fill the data warehouse at the same time.
I have 4 "big" fact tables (from 1 million rows to 20 million).
There is "only" 4 GB to 5 GB used by the staging database during the loading
process.
How should I configure these disks?
Is it better to set up all disks in RAID 0? (the system does everything)
Is it better to keep the 4 disks separate, and create multiple files &
filegroups, dedicating 1 disk per fact table?
Or create 1 filegroup with 4 files in it (1 on each disk)? (SQL
Server manages the shared usage of the disks)
Where should I put the log files?
Thanks for your feedback.
Jerome.
Jerome,
What's the cost to the business if your DW is down for several days? Having
nonredundant disk for tempdb is generally unacceptable. Having nonredundant
disk for staging is a bit unusual. Push back to get more disk.
If I were placed in this situation and could not get more disk, I'd mirror
and stripe with the controller and say that's all the performance she's got.
Make sure management and the business owners are ready to sign off on the
potential of several days of downtime. The amount of downtime depends on outside
hardware availability and support staff availability. Put your logs for
tempdb and staging on one disk. Use the disk controller to stripe the other
three and place one file per processor (create all the same size) within the
filegroup. Normally I would break it out more, but four disks isn't enough
to do much with.
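That layout could be sketched like this. A sketch only: the drive letters, sizes and logical names are hypothetical, and it assumes a 4-processor box with the three data disks striped by the controller as S: and the dedicated log disk as L:.

```sql
-- Sketch: one data file per processor on the stripe, logs on their own disk.
CREATE DATABASE Staging
ON PRIMARY
    (NAME = Staging1, FILENAME = 'S:\SQLData\Staging1.mdf', SIZE = 1250MB),
    (NAME = Staging2, FILENAME = 'S:\SQLData\Staging2.ndf', SIZE = 1250MB),
    (NAME = Staging3, FILENAME = 'S:\SQLData\Staging3.ndf', SIZE = 1250MB),
    (NAME = Staging4, FILENAME = 'S:\SQLData\Staging4.ndf', SIZE = 1250MB)
LOG ON
    (NAME = Staging_Log, FILENAME = 'L:\SQLLogs\Staging_Log.ldf', SIZE = 2048MB)
```

SQL Server spreads allocations proportionally across the files in a filegroup, which is why keeping them all the same size matters.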
"Jj" <willgart@.BBBhotmailAAA.com> wrote in message
news:eT0XnuEfFHA.3692@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I want to have your feedback on how to configure my drives to insure a
> good performance during loading process & transformation against a staging
> database.
> Image you have 4 drives available for the staging database & tempdb
> database...
> loosing these disk is not important, so no redundancy required.
> I read from an external database into the staging, I do some updates and
> copy into the staging himself, and finally I read the staging to load the
> datawarehouse.
> I execute more then 70 DTS packages, the sequence of these packages is
> optimized. So I could write a fact table in the staging while I read
> another to fill the datawarehouse at the same time.
> I have 4 "big" fact tables (from 1 millions of rows to 20 millions)
> There is "only" 4Gb to 5Gb used by the staging database during the loading
> process.
> how to configure these disks?
> Does it better to setup all disks in raid 0? (the system do everything)
> Does it better to keep the 4 disks separatly, and create multiple files &
> file groups and dedicating 1 disk by fact table?
> Creating 1 filegroup and 4 files in this file group (1 on each disk)? (SQL
> Server manage the sharing usage of the disks)
> Where to put log files?
> thanks for your feedback.
> Jerome.
>
|||The cost is 0 if we can't load data.
A downtime of 4 hours to change a disk, if we don't have one in the office, is
not an issue.
I don't understand why having the tempdb database on a non-fault-tolerant
disk is not a good idea from your point of view!!!
tempdb is the only database recreated when we restart SQL Server, so the
downtime in case of a crash is ... 5 minutes?
Thanks for the guidance about 1 file per processor.
I'll plan this config.
"Danny" <someone@.nowhere.com> wrote in message
news:1oRwe.14848$Xr6.6230@.trnddc07...
> Jerome,
> What's the cost to the business if your DW is down for several days?
> Having nonredundant disk for tempdb is generally unacceptable. Having
> nonredundant disk for staging is a bit unusual. Push back to get more
> disk.
> If I were placed in this situation and could not get more disk, I'd mirror
> and strip with the controller and say that's all the performance she's
> got.
> If management and the business owners are ready to signoff on the
> potential of several days down time. The amount of down time depends on
> outside hardware availability and support staff availability. Put your
> logs for tempdb and staging on one disk. Use the disk controller to strip
> the other three and place one file per processor (create all the same
> size) within the file group. Normally I would break it out more but four
> disks isn't enough to do much with.
> "Jj" <willgart@.BBBhotmailAAA.com> wrote in message
> news:eT0XnuEfFHA.3692@.TK2MSFTNGP09.phx.gbl...
>
Sunday, February 19, 2012
Best way to read a paradox file?
We used to use DTS to read Paradox files. This is something we do regularly. DTS has a native Paradox driver. This seems to have disappeared in SSIS.
Can anyone recommend the best way to read Paradox Files with SSIS?
Click New OLE DB connection. Choose "Native OLE DB\Microsoft Jet 4.0 OLE DB Provider". On the "All" tab find "Extended properties" and type "Paradox 5.x". For the "Data source" type the directory where your tables are.|||Great, that helps. Is there a way to add such a connection to the SSIS Wizard? (DTSWizard.exe)
|||thanks a million Bog !
Thursday, February 16, 2012
Best way to move database and tlog files from default location
I need to figure out the best way to move files with no data loss from the
default installed location, in my case MS CRM database files. When you
install MS CRM it does not give you the option to pick an install location for the
files (disk placement wise).
Thanks so much,
Most applications don't care where the underlying DB files are physically
located. It probably created them in the default location. Relocating the
files will require some downtime. Do it during a time when there is no user
activity, or alter your database and set it to DBO use only (use ROLLBACK
IMMEDIATE if necessary). Detach the database. Copy the files. Attach the
database back. Most of this can be done with Enterprise Manager by changing
the properties of the database and running tasks.
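In T-SQL, that sequence looks roughly like this. A sketch only: the database name and paths are hypothetical (SQL Server 2000 style), and RESTRICTED_USER is used here to approximate "DBO use only".

```sql
-- Sketch: detach, move the files in the OS, reattach at the new location.
USE master
ALTER DATABASE MSCRM SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
EXEC sp_detach_db 'MSCRM'
-- ...copy/move MSCRM.mdf and MSCRM_log.ldf to the new disks in the OS...
EXEC sp_attach_db 'MSCRM',
    'E:\SQLData\MSCRM.mdf',
    'F:\SQLLogs\MSCRM_log.ldf'
```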
"Seth at Sherwood" <SethatSherwood@.discussions.microsoft.com> wrote in
message news:6A7AA4A6-8ECF-4CCF-837B-AB9F45709904@.microsoft.com...
> Thanks in advance for all help guys;
> I need to figure out the best way to move files with no data loss from the
> default installed location, in my case ms crm database files. When you
> install MS CRM it does not give you the option if pick an install location
> of
> files (disk placement wise).
> Thanks so much,
|||In SQL Server 2005 you can also use 'alter database set offline' and 'alter
database modify file'. See BOL 'Moving Database Files', 'Planned Relocation
Procedure' section.
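The 2005-style sequence, sketched with hypothetical logical file names and paths:

```sql
-- Sketch: take the database offline, move the files, repoint, bring it back.
ALTER DATABASE MSCRM SET OFFLINE
-- ...move the physical files in the OS...
ALTER DATABASE MSCRM MODIFY FILE (NAME = MSCRM_Data, FILENAME = 'E:\SQLData\MSCRM.mdf')
ALTER DATABASE MSCRM MODIFY FILE (NAME = MSCRM_Log, FILENAME = 'F:\SQLLogs\MSCRM_log.ldf')
ALTER DATABASE MSCRM SET ONLINE
```

This avoids the detach/attach round trip and keeps the database's metadata (such as permissions on the database itself) intact.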
Ben Nevarez
"Danny" wrote:
> Most applications don't care where the underlying DB files are physically
> located. It probably created them on the default location. Relocating the
> files will require some down time. During a time when there is no user
> activity or Alter your database and set it to DBO only (use rollback
> immediate if necessary). Detach the database. Copy the files. Attach the
> database back. Most of this can be done with Enterprise manager by changing
> the properties of the database and running tasks.
>
> "Seth at Sherwood" <SethatSherwood@.discussions.microsoft.com> wrote in
> message news:6A7AA4A6-8ECF-4CCF-837B-AB9F45709904@.microsoft.com...
>
>
Best way to load data from text files
I have problem I'm hoping someone can give me some pointers with.
I need to load data from several text files into one table. The format of the files is simple: each line is comma-separated, with double quotes around each element, e.g.
"parameter 1","value 1","parameter 2","value 2"...
"parameter 12","value 12","parameter 13","value 13"...
However, the files themselves will have different numbers of columns, e.g. file 1 may have 8 columns, file 2 may have 16 columns.
I'm going to load the data into a table that has at least as many columns as the longest file. The table columns are all varchar, and are named simply as [Col001] [Col002] [Col003] etc...
The first two columns of this table must be left empty during the load (I use these later on), so the data entry will start at [Col003].
My question is what is the best way to do this? I thought perhaps using a BULK INSERT in a stored procedure might do the trick, but I haven't used it before and haven't got very far. I gather another approach might be to use the bcp utility. Someone has also suggested a DTS package, but the filenames will be suffixed with a current date/time stamp, so I don't think that will work.
My preferred approach would be BULK INSERT, but I'm open to any pointers.
Many Thanks
Greg
You could use BCP or BULK INSERT, but you would need to know, a priori, how many columns are in the current input file and then use a different format file that matches.|||I would bring them into a staging table as the first step. Then I would have a control proc count the number of commas in a row and then call a proc set up to handle that number of columns. The called proc would then put the columns into the table designed to hold the largest number of columns.
That way, if someone comes along later and adds another file with n more columns, you merely create another proc to handle n more columns, alter your table to hold n more columns, and modify your control proc to count up to n more columns and call the appropriate sproc.
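Combining the two suggestions, the BULK INSERT side might look roughly like this. A sketch only: the table, file and format-file names are made up, and you would keep one format file per column count.

```sql
-- Sketch: StagingWide has Col001..Col018 (all varchar). The format file maps
-- file field 1 to Col003, field 2 to Col004, and so on, with terminators
-- defined to swallow the surrounding double quotes, so Col001 and Col002
-- are left empty for later use.
BULK INSERT dbo.StagingWide
FROM 'C:\loads\data_20040101_1200.txt'
WITH (FORMATFILE = 'C:\loads\cols08.fmt')
```

Because BULK INSERT takes the file path as a string, a stored procedure can build it dynamically, which sidesteps the date/time-stamped filename problem mentioned above.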
Tuesday, February 14, 2012
Best way to install data files?
Warning: beginner here.
I posted this in the setup newsgroup but haven't received any replies, so
perhaps someone here can help me.
I am developing a small ASP.NET web application that will use an SQL Server
2000 database. The database will start with 1,000 records in a main table
and about 100,000 records in a related table. How much this database will
grow is unknown.
I have one desktop machine that I am using for testing. It has one hard
drive, and nothing special about it.
Another test machine will pretend to be a production server. It is a real
server. It has the operating system (Win Server 2003) on a RAID 1 (2 drives)
setup. It also has 6 other drives set up as a RAID 5.
Should SQL Server be installed on the RAID 1, and then install the data
files only to the RAID 5? OR Should SQL Server and the data files be
installed on the RAID 5 drive?
The reason I ask is... From prior experience with Access I learned that
although attached data files can be worked with, they often require much
more code.
So... What is the best way to do this? And... why?
TIA.
With the disk setup that server has, install the SQL Server executables on
the RAID 1, and the user database on the RAID 5. You can create the master,
model and msdb system databases on the RAID 1 as well, and the tempdb on
RAID 1 as well, if there is enough space.
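For example, tempdb can be pointed at the RAID 1 volume with MODIFY FILE. The paths here are hypothetical (tempdev and templog are the default logical names), and the change takes effect only after SQL Server restarts:

```sql
-- Sketch: relocate tempdb's files; the new paths are used at the next restart.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'C:\SQLData\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'C:\SQLData\templog.ldf')
```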
Jacco Schalkwijk
Best way to install data files?
I posted this in the setup newsgroup but haven't received any replies, so
perhaps someone here can help me.
I am developing a small ASP.NET web application that will use an SQL Server
2000 database. The database will start with 1,000 records in a main table
and about 100,000 records in a related table. How much this database will
grow is unknown.
I have one desktop machine that I am using for testing. It has one hard
drive, and nothing special about it.
Another test machine will pretend to be a production server. It is a real
server. It has the operating system (Win Server 2003) on a RAID 1 setup (2
drives). It also has 6 other drives set up as a RAID 5.
Should SQL Server be installed on the RAID 1, with the data files installed
only on the RAID 5? Or should SQL Server and the data files both be
installed on the RAID 5?
The reason I ask is... from prior experience with Access I learned that
although attached data files can be worked with, they often require much
more code.
So... What is the best way to do this? And... why?
TIA.|||With the disk setup that server has, install the SQL Server executables on
the RAID 1, and the user database on the RAID 5. You can create the master,
model and msdb system databases on the RAID 1 as well, and the tempdb on
RAID 1 as well, if there is enough space.
Jacco Schalkwijk
SQL Server MVP
"kvr901" <kvr901@.discussions.microsoft.com> wrote in message
news:E45F9F10-8B51-4644-A239-9A4F674BBFFD@.microsoft.com...
> Warning beginner here.
> I posted this in the setup news group but haven't received and replies, so
> perphaps someone here can help me.
> I am developing a small ASP.NET web application that will use an SQL
> Server
> 2000 database. The database will start with 1,000 records in a main table
> and about 100,000 records in a related table. How much this database will
> grow is unknown.
> I have one desktop machine that I am using for testing. It has one hard
> drive, and nothing special about it.
> Another test machine will pretend to be a production server. It is a real
> server. It has the operating (Win Server 2003) system on a RAID 1 (2
> drives)
> setup. It also has 6 other drives set up as a RAID 5.
> Should SQL Server be installed on the RAID 1, and then install the data
> files only to the RAID 5? OR Should SQL Server and the data files be
> installed on the RAID 5 drive?
> The reason I ask is... From prior experience with Access I learned that
> although attached data files can be worked with, they often required much
> more code.
> So... What is the best way to do this? And... why?
> TIA.
>|||Thank you for your reply.
Another question:
**IF** some time in the future I decide to distribute the application
including the distributable version of SQL Server (I believe it is called the
"MSDE") would the installation procedure be the same?
In earlier versions of Access there was a considerable learning curve
(coding) to use attached Access secured "backend" databases, and I am
wondering if I should expect similar hurdles with the distributable version
of SQL Server.
Thank you.
"Jacco Schalkwijk" wrote:
> With the disk setup that server has, install the SQL Server executables on
> the RAID 1, and the user database on the RAID 5. You can create the master,
> model and msdb system databases on the RAID 1 as well, and the tempdb on
> RAID 1 as well, if there is enough space.
> --
> Jacco Schalkwijk
> SQL Server MVP
>
> "kvr901" <kvr901@.discussions.microsoft.com> wrote in message
> news:E45F9F10-8B51-4644-A239-9A4F674BBFFD@.microsoft.com...
>
|||kvr901 wrote:
> Thank you for your reply.
> Another question:
> **IF** some time in the future I decide to distribute the application
> including the distributable version of SQL Server (I believe it is called the
> "MSDE") would the installation procedure be the same?
> In earlier versions of Access there was a considerable learning curve
> (coding) to use attached Access secured "backend" databases, and I am
> wondering if I should expect similar hurdles with the distributable version
> of SQL Server.
> Thank you.
>
> "Jacco Schalkwijk" wrote:
>
I am not sure I understand your concern about attached files. Once a
database is mounted by the server, accessing it is no different from
accessing any other database on the server, regardless of the location of
the actual physical data file.
As to installing your database on a customer's site: the issues around
RAID are more about data integrity and performance. Whatever configuration
they use will depend on the number of users and their pocketbooks. But the
system will work on any platform that supports SQL Server.
Sunday, February 12, 2012
Best way to export data.
I have some questions about my available options.
I have to export some tables to csv files to enable another department
to process them. What I need is a way to do this in MS SQL
through a stored proc, with quoted identifiers and column names as
headers. I cannot figure out how to do this.
Can anybody suggest the best options?
I am using MS SQL 2000.
Thank you for your time.|||On Apr 9, 7:27 am, "Designing Solutions WD"
<michael.grass...@.gmail.com> wrote:
Quote:
Originally Posted by
Hello,
>
I have some questions on my options available.
>
I have to export some tables to csv files to enable another department
to process the files. What I need is a way to do this in ms sql
though a stored proc with quoted identifiers and column names as
heads. I cannot figure out how to do this.
>
Can anybody give me some options that would be the best options.
>
I am using ms sql 2000.
>
Thank you for your time.
The easiest solution that comes to mind is to execute a DTS package from
the command shell. In a DTS package you can define whatever format you want.
Create it. Debug it. Play with it. Then just add xp_cmdshell
'dtsrun.exe -S<server> -N<dts-package> -E -M<dts-password>' to your
procedure.
- Roman|||On Apr 9, 7:27 am, "Designing Solutions WD"
<michael.grass...@.gmail.com> wrote:
A straightforward solution is to UNION the field names with the data and use
BCP:
1. Create a SELECT statement that includes the field names:
DECLARE @names varchar(8000), @delimiter varchar(10)
SET @delimiter = ','
SELECT @names = COALESCE(@names + @delimiter, '') + '"' + name + '"'
FROM syscolumns WHERE id = (SELECT id FROM sysobjects WHERE
name = 'TABLE_TO_EXPORT')
SELECT 'select ' + @names
2. Concatenate it with UNION SELECT CAST(FIELD1 AS char), CAST(FIELD2
AS char), ... FROM TABLE_TO_EXPORT (which is ugly, but it has to be
done to create the union)
3. Then, using that UNION, create a VIEW which can be used with BCP to
export the data
4. Use BCP from the command shell: xp_cmdshell 'BCP "select * from
VIEW_TO_EXPORT" queryout c:\results.csv -c -t, -T -S<servername>'
- Roman
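For readers outside SQL Server, the shape of the output Roman's UNION/BCP trick produces (a quoted header row followed by quoted, text-cast data rows) can be sketched in a few lines of Python. The function name and sample columns here are illustrative, not part of the original thread:

```python
import csv
import io

def export_quoted_csv(columns, rows):
    """Build CSV text with a quoted header row and quoted values,
    the same shape the UNION/BCP trick above produces."""
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
    writer.writerow(columns)                    # header row, every name quoted
    for row in rows:
        # cast each value to text, like the CAST(... AS char) in step 2
        writer.writerow([str(v) for v in row])
    return buf.getvalue()

print(export_quoted_csv(["id", "name"], [(1, "widget"), (2, "gadget")]))
```

In SQL Server 2000 itself the view/BCP route is still the practical answer, since T-SQL alone cannot easily write files; the sketch just makes the target format concrete.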
Friday, February 10, 2012
best way to clean up temp files
I have a custom Data Flow task that creates temp files in the system temp directory during processing. A lot of times we'll use SSIS to do a single data transformation, running and tweaking the package along the way in the designer; if we notice something incorrect in the data view, we just hit the stop button and fix it. However, when we do this, the Cleanup() function isn't called, and my temp files are left in the temp directory when they really ought to be disposed of.
Is there a method that gets called every time when the DtsDebugHost quits, whether it finished, didn't finish properly, or was stopped in the middle? What would be a good way (other than having some service that monitors what temp files are used by what processes) to clean up temp files after we don't need them?
~Steve
There is no way to do this in SSIS, since stopping a package from the designer is just like stopping a process while using a debugger (no cleanup happens, generally speaking). However, there is an open mode called delete-on-close (in native code; I don't know if this maps to managed code too), and you could use that setting to have the OS perform the cleanup.
Thanks,
Matt
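Matt's delete-on-close suggestion does have a managed counterpart (FileOptions.DeleteOnClose in .NET's System.IO). As a language-neutral sketch of the idea, Python's tempfile module exposes the same OS behavior; this is an illustration of the technique, not SSIS code:

```python
import os
import tempfile

# delete=True asks for delete-on-close semantics: the OS/runtime removes the
# file when the handle is closed, so an abrupt stop that skips our own
# Cleanup() still cannot leak the file past the handle's lifetime.
tmp = tempfile.NamedTemporaryFile(mode="w+", suffix=".tmp", delete=True)
tmp.write("intermediate buffer contents")
tmp.flush()
path = tmp.name
print(os.path.exists(path))   # file exists while the handle is open

tmp.close()                   # handle closed: the OS removes the file
print(os.path.exists(path))   # gone, with no explicit cleanup code
```

The design point is that cleanup responsibility moves from your code (which the debugger may never let run) to the operating system's handle bookkeeping.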
BEST UDF to handle char or varchar to smalldatetime conversion problem
I am importing data from text files; the data is about 800GB, yeah, that's right,
in different files and in different varchar formats, like 01/01/2004, jan 17,
200, mm/dd/yy. It has bad data and good data, but there are millions and
millions of records, and it has every kind of problem anybody can think of.
Is there any site or way to get a function which handles these problems,
or if any of you ever had the chance to see this kind of function, please pass it to me.
Thanking you in advance for your help..
-Permood|||You can use ISDATE to determine whether the specified string can be converted
to a SQL Server datetime datatype. Examples:
SELECT ISDATE('1/01/2004')
SELECT ISDATE('jan 17, 200')
SELECT ISDATE('mm/dd/yy')
SELECT ISDATE('01/01/04')
You might consider performing data type validation and conversion as part of
your ETL process before importing data into SQL Server. This will likely
perform better than using an intermediate SQL Server staging table when you
have a lot of bad data and large volumes.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"-Permood" <Permood@.discussions.microsoft.com> wrote in message
news:FD3FF986-932F-4FA0-9AD1-E027327AA100@.microsoft.com...
> Hi Experts,
> I am importing data from text files, data is about 800GB, yeah thats
> right.
> in different file and in different varchar format. like 01/01/2004 jan 17,
> 200, mm/dd/yy. it had bad data and good data, but there are millions and
> millions records, and it has all kind of problems any body can think of.
> is there any site or way to get any function which handle these problems,
>
> or if any of you ever had chance to see this kinda function and pass to
> me.
>
> thanking you in advace for your help..
> -Permood
>|||Thanks Dan,
here is the problem: I am very new in my company, and some 'DEVELOPERS' don't
want to change their way of doing things, so they decided to use SQL staging
and transformation. I don't have much experience with SQL Server either. So I
wanted to see if something was already out there, I mean any function which
can handle most date problems, so that I can then enhance it according to my
needs. But thank you for your time and suggestions.
"Dan Guzman" wrote:
> You can use ISDATE to determine if the specified string can be converted to
> a SQL Server datetime datatype: Examples:
> SELECT ISDATE('1/01/2004')
> SELECT ISDATE('jan 17, 200')
> SELECT ISDATE('mm/dd/yy')
> SELECT ISDATE('01/01/04')
> You might consider performing data type validation and conversion as part of
> your ETL process before importing data into SQL Server. This will likely
> perform better than using an intermediate SQL Server staging table when you
> have a lot of bad data and large volumes.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
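If the validation does move out into the ETL layer as Dan suggests, an ISDATE-style check is easy to sketch. This Python version, with an illustrative (not exhaustive) format list, mirrors ISDATE's accept/reject behavior for the shapes mentioned in the question:

```python
from datetime import datetime

# Formats seen in the thread plus common variants; extend this list as new
# shapes turn up in the files.
KNOWN_FORMATS = ["%m/%d/%Y", "%m/%d/%y", "%b %d, %Y", "%Y-%m-%d"]

def is_date(text):
    """Rough analogue of T-SQL ISDATE(): True if the string parses
    under any of the known formats, False otherwise."""
    for fmt in KNOWN_FORMATS:
        try:
            datetime.strptime(text.strip(), fmt)
            return True
        except ValueError:
            continue
    return False

print(is_date("01/01/2004"))   # True: matches %m/%d/%Y
print(is_date("mm/dd/yy"))     # False: a literal placeholder, not a date
print(is_date("13/45/2004"))   # False: month 13 / day 45 never parse
```

Note that the real ISDATE() depends on the server's language and DATEFORMAT settings, so results won't match this sketch exactly; the point is rejecting bad rows before they reach SQL Server.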