
Tuesday, March 27, 2012

Binding a SqlDataSource to a TextBox or label

I have a SQL database with one column in it; it is a do-not-call registry for my office. I want to make a web application so our leads can go on the web and request to be put on the do-not-call list. The problem is, if their number already exists, it gets a primary key violation...

Normal handling for this would be to do a SELECT query based on user input; if the number exists in the database, tell them that, else do the INSERT query...

The problem is, I can't seem to figure out how to bind the result of the SELECT query to a TextBox or Label so I can compare the results and do that logic. Any ideas? This is in ASP.NET 2.0. Thanks!

-Dan

You can't directly bind to a Label or a TextBox, but you certainly can set the result of your DataSource to the Text property of your control. Here's an example:

ASPX

ProductName = <asp:Label ID="lblProductName" runat="server" />
<asp:ObjectDataSource ID="odsProducts" runat="server"
    SelectMethod="GetProductByProductID"
    TypeName="NorthwindTableAdapters.ProductsTableAdapter">
    <SelectParameters>
        <asp:Parameter DefaultValue="1" Name="ProductID" Type="Int32" />
    </SelectParameters>
</asp:ObjectDataSource>

CODE-BEHIND

protected void Page_Load(object sender, EventArgs e)
{
    if (!this.IsPostBack)
    {
        // Run the data source's SELECT and put the result in the label.
        DataView dv = (DataView)odsProducts.Select();
        lblProductName.Text = dv[0]["ProductName"].ToString();
    }
}

|||

Actually... I figured out a way... check this out:

Dim dv As Data.DataView = SqlDataSource1.Select(DataSourceSelectArguments.Empty)

TextBox1.Text = dv.ToTable.Rows(0)("PHONE")

This works, but now I have another problem. If there is no row 0 (meaning the number doesn't exist), the page fails. How can I check whether the record exists, and do something else if it doesn't?

|||
Just check to see if the DataView.Count property is greater than 0.
|||
Thanks, that worked great!
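For completeness, the existence check and the insert can also be combined into a single round trip on the SQL side, which avoids the race where two requests insert the same number at once. A sketch in T-SQL; the table and column names here are assumptions, since the thread never shows the actual schema:

```sql
-- Assumed schema: a one-column do-not-call table keyed on the phone number.
DECLARE @Phone varchar(16);
SET @Phone = '555-0100';

IF NOT EXISTS (SELECT 1 FROM dbo.DoNotCall WHERE PHONE = @Phone)
    INSERT INTO dbo.DoNotCall (PHONE) VALUES (@Phone);
ELSE
    PRINT 'That number is already on the do-not-call list.';
```

Run from the page's code-behind, this removes the need to bind the SELECT result to a control at all.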

Binding a DataSource Table column to a form object (RadioButtons)

Hi,

I have a little question. I have an application which interfaces with a SQL Express database. On a form, I want to bind a control made up of several radio buttons to a table column which is in fact a varchar(1). This is the picture:

Table column: OptVisualRpt varchar(1)

Screen control: 2 radio buttons

rb_VisRPTbImp_O for "yes"

rb_VisRPTbImp_N for "no"

I'm really scratching my head as to how I can bind a single table column to my radio button controls. I think I just can't do it this way, and instead have to use an intermediate variable or control.

Solution 1?

I thought of using a local variable that I would set to "y" or "n" following "CheckedChanged" events fired from the radio buttons, but I don't know how to bind this variable to my table column.

Solution 2?

I thought of placing a hidden text control on my form, which would be bound to my table column, and which I would also set to "y" or "n" following "CheckedChanged" events fired from the radio buttons.

Would either of these solutions be feasible, or is there a neater way to do it?

Many thanks in advance,

Stéphane

Hi again,

It seems that last night I finally solved my problem by myself. Let me explain.

1- I first got rid of the "BindingSource". I removed the following lines:

=> private: System::Windows::Forms::BindingSource ^ InfoBaseBindingSource;

=> this->InfoBaseBindingSource = (gcnew System::Windows::Forms::BindingSource(this->components));

=> (cli::safe_cast<System::ComponentModel::ISupportInitialize^ >(this->InfoBaseBindingSource))->BeginInit();

=> //
// InfoBaseBindingSource
//

this->InfoBaseBindingSource->DataMember = L"InfoBase";
this->InfoBaseBindingSource->DataSource = this->Test_ADODataSet;

=> (cli::safe_cast<System::ComponentModel::ISupportInitialize^ >(this->InfoBaseBindingSource))->EndInit();

2- At the same time, for every field control that was fed by it, I removed the following line from the code (here is an example):

this->ID_Compagnie->DataBindings->Add((gcnew System::Windows::Forms::Binding(L"Text", this->InfoBaseBindingSource, L"Identification", true)));

3- I then changed the Form1_Load routine to the following:

this->InfoBaseTableAdapter->Fill(this->Test_ADODataSet->InfoBase);

DataTable ^ Dt_Infobase = this->Test_ADODataSet->Tables["InfoBase"];
if(Dt_Infobase != nullptr)
{
array<DataRow^> ^ Rw_InfoBase = Dt_Infobase->Select();
if(Rw_InfoBase->Length != 0)
{
this->ID_Compagnie->Text = Rw_InfoBase[0]["Identification"]->ToString();
this->Adr_Compagnie->Text = Rw_InfoBase[0]["Adresse"]->ToString();
...

==> Example of loading a masked text box

// Affichage et rectification du format du Code Postal/ZipCode

if(String::Compare(this->Pays_Compagnie->Text,"Canada") == 0)
{
this->CPZip_Compagnie->Mask = ">L0>L 0>L0";
this->Pnl_VSZCode->Hide();
}
else
{
this->Pnl_VSZCode->Show();
if(this->CPZip_Compagnie->Text->Length > 5)
{
this->VScrl_ZCode->Value = 1;
this->CPZip_Compagnie->Mask = "99999";
}
else
{
this->VScrl_ZCode->Value = 0;
this->CPZip_Compagnie->Mask = "99999-9999";
}
}

this->CPZip_Compagnie->Text = Rw_InfoBase[0]["CodePostal_Zip"]->ToString();

...

==> Example of "loading" info vs a set of radio buttons

String ^ Logo_ModeAff = Rw_InfoBase[0]["LogoRpt_ModeAff"]->ToString();
if(Logo_ModeAff == "C")
{
this->rb_ModeAffLCoord->Checked = true;
this->rb_ModeAffLOnly->Checked = false;
}
else
{
this->rb_ModeAffLCoord->Checked = false;
this->rb_ModeAffLOnly->Checked = true;
}
}

4- I then changed the B_Accept_Click routine to the following:

I removed the following statements:

this->InfoBaseBindingSource->EndEdit();
this->InfoBaseTableAdapter->Update(this->Test_ADODataSet->InfoBase);
this->Test_ADODataSet->AcceptChanges();

And replaced them with the following:

DataTable ^ Dt_Infobase = this->Test_ADODataSet->Tables["InfoBase"];
if(Dt_Infobase != nullptr)
{
array<DataRow^> ^ Rw_InfoBase = Dt_Infobase->Select();
if(Rw_InfoBase->Length == 0)
{
DataRow ^ NRw_InfoBase = Dt_Infobase->NewRow();

NRw_InfoBase["Identification"] = this->ID_Compagnie->Text;
...
==> Example of a "saving" info vs a set of radio buttons

if(this->rb_ModeAffLCoord->Checked == true)
NRw_InfoBase["LogoRpt_ModeAff"] = "C";
else
NRw_InfoBase["LogoRpt_ModeAff"] = "L";
...

Dt_Infobase->Rows->Add(NRw_InfoBase);
}
else
{
Rw_InfoBase[0]["Identification"] = this->ID_Compagnie->Text;
...
==> Example of a "replacing" info vs a set of radio buttons

if(this->rb_ModeAffLCoord->Checked == true)
Rw_InfoBase[0]["LogoRpt_ModeAff"] = "C";
else
Rw_InfoBase[0]["LogoRpt_ModeAff"] = "L";
...
}

this->InfoBaseTableAdapter->Update(this->Test_ADODataSet);
this->Test_ADODataSet->AcceptChanges();
}

5- I finally changed the B_Cancel_Click routine as follows:

I removed the following statement:

this->InfoBaseBindingSource->EndEdit();

This code is maybe not the best way to do it. If someone has a more proper way, I would certainly appreciate learning it. My conclusion is that doing this binding process manually gives better results than relying on a BindingSource object.

Stéphane

Bind SQL reporting services (Reports) to ASP.NET (Urgent)

Hi all,

Currently I have a project that requires me to use Reporting Services (reports) together with an ASP.NET web application.

However, I have the following problem.

I have an ASP.NET web application. In it, I have a textbox where the user can type a specific date. The specific date should then be passed to SQL Reporting Services, and the report that is generated will contain data for that specific date.

How can I do that?

I tried searching the internet, but to no avail. Can anyone help out?

Thank you so much.
|||
Read a book on Reporting Services ... you can pass everything to Reporting Services through the web services it exposes ...
Microsoft has some good books about Reporting Services ... Search for William Vaughn ... he's very good.

Sunday, March 25, 2012

Bind failed on TCP port 1433

I installed SP3a and now I am getting this error message in the application event log. Also, I can only connect to my server using named pipes, but TCP/IP won't connect, even using 127.0.0.1 or the IP of my machine. The event log says that SQL is listening on 127.0.0.1:1433.
Can someone please help? I have not seen the answer and I have been searching for 2 days now.
Log:
SQL server listening on 127.0.0.1: 1433.
SuperSocket Info: Bind failed on TCP port 1433.
SuperSocket info: (SpnRegister) : Error 1355.
Thanks in advance!
|||
Dolph,
There are two scenarios that I can think of that will cause this situation.
1) You have bounced the SQL Server service programmatically (probably in C++) and the service has been stopped and restarted too quickly. SQL Server does not release the port quickly enough before it is restarted and thinks that another service is using port 1433. The bind then fails.
2) Another application is using port 1433.
You can check your ports using this on the commandline:
netstat -a
Look at the local address column for <machinename>:ms-sql-s or
<machinename>:1433
On my Inspiron laptop I get these results:
Active Connections
Proto Local Address Foreign Address State
TCP inspiron4100:ftp inspiron4100:0 LISTENING
TCP inspiron4100:http inspiron4100:0 LISTENING
TCP inspiron4100:epmap inspiron4100:0 LISTENING
TCP inspiron4100:https inspiron4100:0 LISTENING
TCP inspiron4100:microsoft-ds inspiron4100:0 LISTENING
TCP inspiron4100:1025 inspiron4100:0 LISTENING
TCP inspiron4100:ms-sql-s inspiron4100:0 LISTENING
TCP inspiron4100:2869 inspiron4100:0 LISTENING
TCP inspiron4100:3389 inspiron4100:0 LISTENING
TCP inspiron4100:1028 inspiron4100:0 LISTENING
TCP inspiron4100:1061 inspiron4100:0 LISTENING
TCP inspiron4100:1134 localhost:1135 ESTABLISHED
TCP inspiron4100:1135 localhost:1134 ESTABLISHED
TCP inspiron4100:netbios-ssn inspiron4100:0 LISTENING
TCP inspiron4100:1056 baym-cs91.msgr.hotmail.com:1863 ESTABLISHED
TCP inspiron4100:1130 82.152.34.166:50002 ESTABLISHED
TCP inspiron4100:1173 msnews.microsoft.com:nntp ESTABLISHED
TCP inspiron4100:1205 64.233.161.107:http ESTABLISHED
UDP inspiron4100:microsoft-ds *:*
UDP inspiron4100:isakmp *:*
UDP inspiron4100:1026 *:*
UDP inspiron4100:1069 *:*
UDP inspiron4100:1070 *:*
UDP inspiron4100:1131 *:*
UDP inspiron4100:ms-sql-m *:*
UDP inspiron4100:3456 *:*
UDP inspiron4100:4500 *:*
UDP inspiron4100:ntp *:*
UDP inspiron4100:1050 *:*
UDP inspiron4100:1057 *:*
UDP inspiron4100:1156 *:*
UDP inspiron4100:1900 *:*
UDP inspiron4100:ntp *:*
UDP inspiron4100:netbios-ns *:*
UDP inspiron4100:netbios-dgm *:*
UDP inspiron4100:1900 *:*
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Dolph Larson wrote:
> [quoted text snipped]
|||
I didn't finish. :-/
If you still cannot find the cause of the problem, I suggest that you
reboot the server.
In the case of 1) the solution is to put a small pause in your code
before starting the MSSQLServer service again.
In the case of 2) you will need to ensure that your application uses
another port - perhaps there is a setting somewhere (maybe the registry)
where you can tweak this.
--
Mark Allison, SQL Server MVP
http://www.markallison.co.uk
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602m.html
Mark Allison wrote:
> [quoted text snipped]
|||
Thanks for trying to help. Sadly, it cannot be either case. I am not doing anything with C++, and I have not downloaded any application lately that is built on it, especially one working with SQL Server. I looked at the ports with netstat -an and I see the port being listened on by SQL Server. The server just won't bind to that port. If I can't find a solution soon, I will be left to do a re-install. :(
Dolph
"Mark Allison" wrote:
> [quoted text snipped]
|||
Dolph Larson wrote:
> Can someone please help. I have not seen the answer and I have been
> searching for 2 days now.
For the benefit of any other poor ***s trying to work this one out,
here's what I just found...
We got the "SuperSocket Info: Bind failed on TCP port 1433" error, yet
netstat says the socket _is_ listening. TCPView (from the excellent
sysinternals site) confirms that it's listening, and that it's SQL
server that doing it.
Stumped despite much googling, I suddenly realised we'd recently
installed a new remote access _client_ on this server, Windows Secure
Application Manager from netscreen. This seems to work by hooking the
TCP stack with an LSP. Needless to say, it was this that was messing
up SQL server - using the diagnostics tool to remove the LSP sorted it
out straight away.
Thinking back, I've seen problems with an LSP before: some weird
problem with Norton Antivirus left a machine in a state where ipconfig
said it was dialled up OK, it could ping, but it couldn't do email or
http... Zapping the LSP sorted it (not that I can remember how I
removed it!)
Cheers,
Martin.

Thursday, March 22, 2012

Binary File

Can someone direct me to instructions on how to download a binary file in an ASP.NET application?

I have a SQL 2000 DB that has a field that contains files, and my users need access to them via my web app.
|||
At the end of this article it tells you how to retrieve binary data from the database, and then it shows you code to actually start the download:

http://www.developer.com/net/asp/article.php/3097661
|||
That works great, thanks.

Tuesday, March 20, 2012

bigint field vs int

I am creating an application where the number of records will be beyond 100,000, so I need to set the primary keys to bigint instead of int.

I need to know about the performance and the difference.

Will it affect the performance much, and will I have to change any code like DELETE and UPDATE statements?

A standard int ranges from -2,147,483,648 through 2,147,483,647, so you can use it to store values much larger than 100,000.

If you do use bigint, you will need to use Int64 in your code.

|||

Hi,

There will be some performance difference between using int and bigint. If you use bigint, performance is slower on a 32-bit processor; on a 64-bit processor the performance should not be a problem.

|||

Int will give you up to 2,147,483,647, while bigint gives up to 9,223,372,036,854,775,807. The only difference in your stored procedures is that you will need to change the parameter declarations from INT to BIGINT.

> i need to know about the performance and difference

Obviously more working memory will be required for BIGINT than for INT (8 bytes instead of 4 for each value); however, SQL Server will still work very efficiently provided there is enough RAM. If you only have enough RAM for INT, changing to BIGINT will put you at a disadvantage, so make sure you have enough RAM.
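If a table does need to move from int to bigint later, the change is mechanical but touches every constraint on the column. A rough T-SQL sketch; the table, column, and constraint names are made up for illustration:

```sql
-- Hypothetical table dbo.Orders with an int primary key column OrderId.
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;          -- the PK must go first
ALTER TABLE dbo.Orders ALTER COLUMN OrderId bigint NOT NULL;
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId);
```

Any foreign keys referencing the column would need the same treatment, which is one more reason to stay with int while the key comfortably fits.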


big update problem!

I have a mini .NET application that I have created to make some pretty complex transformations on the database.
All the code, .NET DataTables, logic, etc. takes 3 minutes to run on the entire database. However, when I put in the UPDATE statements, the same process takes 3 1/2 hours!
So I know it's not because I am using DataTables or that my SELECT statements are poor, etc. It's 100% the UPDATE statements.
Any clues? Is this normal? Are UPDATE statements the most expensive in time?
|||
ADDED:
I am also able to create and populate a fairly large table in a matter of minutes; however, updating this one table is costing a lot of time.
Now granted, the table in question is much larger than most tables in the DB, and larger than the tables I create, but not 10 times larger!
Also, this table gets a lot of activity. Could our problem be the table itself?
"Sean" wrote:
> [quoted text snipped]
|||
Yes, and it could be a hardware problem or the phase of the moon. With no
information about what the table looks like, what you are trying to do, and what else is running, there's no way to tell. This is like calling your mechanic and saying "it takes too long for me to get to work, what should I do?" The main difference between UPDATE statements and SELECT statements is that they take update locks and write to disk. I would look first at blocking (waiting to lock something that's in use), but that's just idle speculation.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm
"Sean" <Sean@.discussions.microsoft.com> wrote in message
news:6E7394B4-A544-4111-A35B-4661DF0C459C@.microsoft.com...
> [quoted text snipped]
|||
Well, there are currently only three people accessing the server, and only one (me) accessing this particular database.
By my estimation it is taking about 1-3 seconds to update a row, and in this context each row has to be updated separately.
Having said all that, I think the functionality we are gaining might not be worth the dev time, which is a subject I will raise at our meeting.
"Roger Wolter[MSFT]" wrote:
> [quoted text snipped]
|||
Sean wrote:
> [quoted text snipped]
1-3 seconds to update a record? How many indexes are on this table? Is
there a clustered index? Is the column you're updating part of the
clustered index key? Are there update triggers on the table?
|||
Basically none of the above, and it's safe to assume no optimization strategies have been applied to this database.
We have a primary key on this table, but that is it, really.
I have studied clustering a little because I wanted to go from MCAD to MCSD, but I dropped that database study, so in short I am database-stupid outside of basic T-SQL statements.
If you guys think clustering etc. will help a lot, I will look into it, because this table is getting big.
Thanks
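Before reaching for clustering or partitioning, it is worth checking that the UPDATE's WHERE clause is backed by an index; with only a primary key on the table, any update that locates rows by another column scans the whole table per statement. A hypothetical sketch, since the thread never shows the actual statement (table and column names are invented):

```sql
-- If the per-row updates look something like:
--   UPDATE dbo.BigTable SET SomeCol = @value WHERE AccountNo = @acct;
-- then an index on the lookup column lets each update seek instead of scan:
CREATE INDEX IX_BigTable_AccountNo ON dbo.BigTable (AccountNo);
```

Turning each full scan into an index seek is usually the cheapest first experiment in a case like this.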
"Tracy McKibben" wrote:
> [quoted text snipped]
|||
And a correction: it's only 0.4 seconds per record for updates.
It's just that at the point of production we will have 157,000 records, which, if this was all we were doing, would be fine. But this one item is taking up more than 50% of the time for our data conversion.
"Sean" wrote:
> [quoted text snipped]

Big table(?) or split between tables?

Hi Guys

I have an application that runs on several sites and has a table with 36 columns, mostly ints and small varchars.

I currently have only one table that stores the data, plus five indexes, and since the table in one location (and others soon) has about 18 million rows, I have been trying to come up with a better solution (but only if needed; I don't think I have to tell you that I am a programmer and not a DBA).
The DB file size with all the indexes is more than 10 GB, which in itself is not a problem, but is it a bad solution to have it that way?

The questions are:

Are there any big benefits if I split it into several smaller tables, or even smaller databases, and make the SPs that get the data aware that, say, 2006's data is in table A and so on?
It's quite important that the SELECTs are fast; that need is far more important than decreasing the size of the database file.

How many rows is it okay to have in one table (with 25 columns) before it's too big?

Thanks in advance.

Best regards
Johan, Sweden.

CREATE TABLE [dbo].[Cdr](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Abandon] [varchar](7) NULL,
[Bcap] [varchar](2) NULL,
[BlId] [varchar](16) NULL,
[CallChg] [varchar](6) NULL,
[CallIdentifier] [uniqueidentifier] NULL,
[ChgInfo] [varchar](5) NULL,
[ClId] [varchar](16) NULL,
[CustNo] [smallint] NULL,
[Digits] [varchar](32) NULL,
[DigitType] [varchar](1) NULL,
[Dnis1] [varchar](6) NULL,
[Dnis2] [varchar](6) NULL,
[Duration] [int] NULL,
[FgDani] [varchar](13) NULL,
[HoundredHourDuration] [varchar](3) NULL,
[Name] [varchar](40) NULL,
[NameId] [int] NOT NULL,
[Npi] [varchar](2) NULL,
[OrigAuxId] [varchar](11) NULL,
[OrigId] [varchar](7) NULL,
[OrigMin] [varchar](16) NULL,
[Origten0] [varchar](3) NULL,
[RecNo] [int] NULL,
[RecType] [varchar](1) NOT NULL,
[Redir] [varchar](1) NULL,
[TerId] [varchar](7) NOT NULL,
[TermAuxId] [varchar](11) NULL,
[TermMin] [varchar](16) NULL,
[Termten0] [varchar](3) NULL,
[Timestamp] [datetime] NOT NULL,
[Ton] [varchar](1) NULL,
[Tta] [int] NULL,
[Twt] [int] NULL,
[DateValue] [int] NULL,
[TimeValue] [int] NULL,
[Level] [varchar](50) NOT NULL CONSTRAINT [DF_Cdr_Level] DEFAULT ('x:'),
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]
|||
http://databases.about.com/od/specificproducts/a/firstnormalform.htm

http://en.wikipedia.org/wiki/First_normal_form
|||
Thanks for the quick reply.
Okay, I've read the articles, but the data in this table is records from a machine, and they are not duplicates. The name column is duplicated, but I figured it was overkill to move one column out; all the other fields differ from time to time. And every column always has only one value or NULL, so there is no need for a parent-child relation to another table, or is there?

The actual question was not whether I should split the columns into multiple tables, but whether I should split the rows into multiple tables.
The acctual question was not if I would split the columns in multiple tables but if I should split the rows in multiple tables.|||What's the justification for splitting the table data into multiple tables of the exact structure?
EDIT: That should probably read - "What are your thoughts behind doing this?"|||it's called horizontal table partitioning and it can be used to create partitioned views and this works best to increase performance if you are doing it over multiple file groups on multiple disks.|||The question is does that increase performance in a single disk/filegroup scenario.

If I ask it like this: are there any problems related to a table with 18 million rows?
If not, I am quite happy with the current solution. But if I were to split the data into several tables, maybe one for the last month's data in an "active" table (since most queries are on the most recent data) and all the rest in an "archive" table.

And if a split would be a good choice, what is the best way to look in the two tables when querying several months? Should an SP handle all this, or should my data layer handle the access? E.g. two SPs, one for active and one for historic data, combined with two calls if needed?|||If I ask it like this, is it any problems related with a table with 18 million rows?
No - the number of rows is irrelevant. You need to see how things perform and act accordingly based on that. And although partitioning is an option, there are hundreds of things you'd want to consider first. Just to give you perspective, I work with non-partitioned tables with 1/2 billion rows and I expect a lot of the other posters here do also.

Also Don is right, your table would benefit from normalisation, and not just first normal form. Remember that it is the number of pages read/written from disk, not the number of rows, that counts. So if you want a performance justification (rather than a logical justification) for normalising: a normalised database will distribute your data across multiple tables, reducing the row size and leading to more rows per page. YMMV of course depending on your queries.

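To make the pages-per-row point concrete, one first step might be to move the repeated Name values out into a lookup table. This is only an illustrative sketch based on the DDL posted above; Cdr already carries a NameId that could key the lookup:

```sql
-- Hypothetical sketch: factor the repeated Name values out of Cdr.
CREATE TABLE [dbo].[CdrName]
(
    [NameId] INT NOT NULL,
    [Name]   VARCHAR(40) NULL,
    CONSTRAINT [PK_CdrName] PRIMARY KEY CLUSTERED ([NameId])
);

-- Populate it from the existing data...
INSERT INTO [dbo].[CdrName] ([NameId], [Name])
SELECT DISTINCT [NameId], [Name]
FROM [dbo].[Cdr];

-- ...after which Cdr.[Name] (varchar(40)) could be dropped and reports
-- would join back via NameId, shrinking every Cdr row:
SELECT C.[Id], N.[Name]
FROM [dbo].[Cdr] AS C
JOIN [dbo].[CdrName] AS N ON N.[NameId] = C.[NameId];
```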
What sort of queries are running on it? Single row lookups or reports pulling in lots of data or a combination?|||Woah - this is probably terrible:
CONSTRAINT [PK_Cdr] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 10) ON [PRIMARY]
) ON [PRIMARY]
Unless your rows get a lot of updates making the data in the columns larger after the initial insert, the fillfactor on a monotonically increasing clustered index should be approaching 100. As it stands, all your clustered index leaf pages are 90% empty (depending on data modifications).
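For an insert-only table like this, the fix is a one-off rebuild of the clustered primary key with a sensible fill factor, roughly:

```sql
-- Rebuild the clustered PK packing leaf pages full (FILLFACTOR = 100).
-- Suits an append-only, monotonically increasing key; the current
-- setting of 10 leaves every leaf page 90% empty.
ALTER INDEX [PK_Cdr] ON [dbo].[Cdr]
REBUILD WITH (FILLFACTOR = 100);
```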

This is the sort of thing I mean by partitioning being one of your last options - lots of things to consider first :)|||Fancy scripting out the other indexes?|||Thanks guys, seems like I really need to sit down and study some database architecture.

I currently only have one SP that handles the queries that are always used in reporting, so once a row is in the db it "never" changes. But since this table stores telephone call data records that are collected at runtime, it gets a lot of small inserts quite often - say 10-200 records every five minutes. The insert interval is different at different sites. But as I said, once it's in there it's "never" going to change.
So the queries get everything from a couple of million rows to about 100, based on how long a period your report is covering.

How would you suggest I should normalize this table then? Or can you point me to some good resource where I can find out?

I'm sorry if these are maybe newbie questions, but that's really what I am :)
Really appreciate your help.

Here are the rest of the indexes:

/****** Object: Index [IX_DateValue] Script Date: 11/21/2007 15:22:20 ******/
CREATE NONCLUSTERED INDEX [IX_DateValue] ON [dbo].[Cdr]
(
[DateValue] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_Level] Script Date: 11/21/2007 15:22:38 ******/
CREATE NONCLUSTERED INDEX [IX_Level] ON [dbo].[Cdr]
(
[Level] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_OrigId] Script Date: 11/21/2007 15:22:48 ******/
CREATE NONCLUSTERED INDEX [IX_OrigId] ON [dbo].[Cdr]
(
[OrigId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_TerId] Script Date: 11/21/2007 15:22:56 ******/
CREATE NONCLUSTERED INDEX [IX_TerId] ON [dbo].[Cdr]
(
[TerId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]

/****** Object: Index [IX_TimeValue] Script Date: 11/21/2007 15:23:08 ******/
CREATE NONCLUSTERED INDEX [IX_TimeValue] ON [dbo].[Cdr]
(
[TimeValue] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]|||Bump fillfactor for the clustered index to 100.

This is my fave db design article at the mo:
http://www.tonymarston.net/php-mysql/database-design.html

Please post your proc. If that is literally all you do with this table then we can tune it rather well. I suspect we will want to change your clustered index....|||I have some reading to do:)

This is the quite simple SP; the table function SplitDelimitedVarChar takes a varchar and a split char and creates a table that I use to get the correct dates and [Level] values (called exchanges in the SP).

And the way I have come up with this code is by testing different solutions, and this has been the fastest. As you can see it's a quite simple select query. The columns [DateValue] and [TimeValue] are ints that are created at insert, based on the DATETIME column [Timestamp]; they are just the int representations of the date and time, and are there because it's easier.

CREATE PROCEDURE [dbo].[spStudio_Get_Cdr]
	@beginDate DATETIME,
	@endDate DATETIME,
	@beginTime INT,
	@endTime INT,
	@subscribers VARCHAR(MAX),
	@exchanges VARCHAR(MAX) = '1:'
AS
BEGIN
	SET NOCOUNT ON;

	DECLARE @exch TABLE (Item VARCHAR(50))
	INSERT INTO @exch
	SELECT Item FROM [SplitDelimitedVarChar] (@exchanges, '|') ORDER BY Item

	DECLARE @subs TABLE (Item VARCHAR(19))
	INSERT INTO @subs
	SELECT Item FROM [SplitDelimitedVarChar] (@subscribers, '|') ORDER BY Item

	SELECT [Id]
		,[Abandon]
		,[Bcap]
		,[BlId]
		,[CallChg]
		,[CallIdentifier]
		,[ChgInfo]
		,[ClId]
		,[CustNo]
		,[Digits]
		,[DigitType]
		,[Dnis1]
		,[Dnis2]
		,[Duration]
		,[FgDani]
		,[HoundredHourDuration]
		,[Name]
		,[NameId]
		,[Npi]
		,[OrigAuxId]
		,[OrigId]
		,[OrigMin]
		,[Origten0]
		,[RecNo]
		,[RecType]
		,[Redir]
		,[TerId]
		,[TermAuxId]
		,[TermMin]
		,[Termten0]
		,[Timestamp]
		,[Ton]
		,[Tta]
		,[Twt]
		,[Level]
	FROM [dbo].[Cdr] AS C
	INNER JOIN @exch AS E
		ON C.[Level] = E.[Item]
	WHERE (C.[DateValue] BETWEEN FLOOR(CAST(@beginDate AS FLOAT)) AND FLOOR(CAST(@endDate AS FLOAT)))
	AND (C.[TimeValue] BETWEEN @beginTime AND @endTime)
	AND (EXISTS (SELECT * FROM @subs WHERE [Item] = C.[OrigId])
		OR EXISTS (SELECT * FROM @subs WHERE [Item] = C.[TerId]))
END

thanks in advance|||Changing from a datetime to 2 different int values is NOT easier than doing a DATEDIFF or a DATEADD.|||Agreed.

Jeff has some pertinent comments here - at least two or three articles apply. http://weblogs.sqlteam.com/jeffs/category/283.aspx
More reading ;)|||The idea of your sproc then is that you don't search for a range of "from 2:00pm 1st of Jan 2007 to 4:00pm 8th of April" but "1st of Jan 2007 to 8th of April, between the times of 2:00pm to 4:00pm only". Correct?|||Yes, that's correct; the report can be, say, the whole year of 2006 but only between 07:00 and 16:00, not 20060101 07:00 to 20061231 16:00.
Exactly as you described.
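Following the DATEDIFF/DATEADD suggestion above, that date-range-plus-time-window filter could be expressed directly against the [Timestamp] column, without the helper int columns. A sketch only, not a drop-in replacement for the proc:

```sql
-- Whole-year date range plus a 07:00-16:00 time-of-day window,
-- computed from the single [Timestamp] datetime column.
SELECT [Id], [Timestamp]
FROM [dbo].[Cdr]
WHERE [Timestamp] >= '20060101'
  AND [Timestamp] <  '20070101'
  -- seconds elapsed since that row's midnight, kept in the window
  AND DATEDIFF(second,
               DATEADD(day, DATEDIFF(day, 0, [Timestamp]), 0),
               [Timestamp]) BETWEEN 7 * 3600 AND 16 * 3600;
```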

Is there a better way to get this result? I'm of course going to try it.|||You can use Display Estimated Execution Plan to analyze your query performance.

cheers
iful|||if you do decide to partition (which probably is not necessary) and you are on 2005, have a look at partition functions and partition schemes. you don't need to use partitioned views anymore.

This approach to horizontal partitioning is more convenient IMO because you only have one set of indexes, etc, to manage. It behaves as a single table spread among multiple filegroups.
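A minimal sketch of the 2005-style approach; the boundary values and column are invented for illustration, and in practice each range would map to its own filegroup:

```sql
-- Partition by month-boundary values on the datetime column.
CREATE PARTITION FUNCTION pfCdrByMonth (DATETIME)
AS RANGE RIGHT FOR VALUES ('2007-10-01', '2007-11-01', '2007-12-01');

-- Map the four resulting ranges to filegroups (all to PRIMARY here;
-- spreading them over separate filegroups/disks is the usual goal).
CREATE PARTITION SCHEME psCdrByMonth
AS PARTITION pfCdrByMonth ALL TO ([PRIMARY]);

-- A table created ON the scheme behaves as one table with one set of
-- indexes, e.g.:
-- CREATE TABLE dbo.CdrPartitioned (..., [Timestamp] DATETIME NOT NULL)
-- ON psCdrByMonth ([Timestamp]);
```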

Big problem connecting SQLEXPRESS within a vb net application

Hello.
I've an application that has to do a bunch of "SELECT" and "INSERT" statements against a
couple of tables. On my machine everything works fine, but if I try to
deploy the app on a VM for a test I have many problems.
The most common is "(Named Pipes Provider, error: 40 - Could not open a
connection to SQL Server)", but using (local)\SQLEXPRESS I get "Request for the permission of
type 'System.Data.SqlClient.SqlClientPermission, System.Data, version=2.0.0.0,
Culture=neutral, PublicKeyToken=b77a5c561934e89'" too... I think it's a
connection string problem. I used "Data Source=(local)\SQLEXPRESS;Initial
Catalog=Clienti;Integrated Security=True", where "Clienti" is the DB, but I tried the strings I found on
http://www.connectionstrings.com/?carrier=sqlserver2005 too.
It's absurd... please give me some help... thanks...|||Ah, did you enable TCP/IP on the target server? Why are you using named
pipes? If you do, you must enable additional ports in the firewall (if there
is one).
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
INETA Speaker
www.betav.com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
----
---
"THE JOK3R" <a@.a.a> wrote in message
news:%23GL$eDNjHHA.4520@.TK2MSFTNGP02.phx.gbl...
> Hello.
> I've an application that has to do a bunch of "SELECT" and INSERT into a
> couple of tables. On my machine everything works fine, but if I try to
> deploy the app on a VM for a test i have many probs.
> The most common is "(Named Pipes Provider, error: 40 - Could not open a
> connection to SQL
> Server)", but using (local)\SQLEXPRESS i've "Request for the permission of
> type
> 'System.Data.SqlClient.SqlClientPermission, System.Data, version=2.0.0.0,
> Culture=neutral, PublicKeyToken=b77a5c561934e89'" too...I think it's a
> connectionstring problem. I used "Data Source=(local)\SQLEXPRESS;Initial
> Catalog=Clienti;Integrated
> Security=True", where "Clienti" is the DB, but i tried the string I found
> on http://www.connectionstrings.com/?carrier=sqlserver2005 too.
> It's being absurd...please give me an help...thanks...|||"William (Bill) Vaughn" <billvaRemoveThis@.betav.com> wrote in message
news:OpGW39NjHHA.1624@.TK2MSFTNGP06.phx.gbl...
> Ah, did you enable TCP/IP on the target server? Why are you using named
> pipes? If you do, you must enable additional ports in the firewall (if
> there is one).
I did all of that... enabled TCP/IP and remote connections, opened port
1431 default port...|||I suggest you walk through the list of issues posted in my blog--there is a
whitepaper there that discusses these
issues.http://betav.com/blog/billva/2006/0...com/blog/billva
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
Visit www.hitchhikerguides.net to get more information on my latest book:
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
and Hitchhiker's Guide to SQL Server 2005 Compact Edition (EBook)
----
---
"THE JOK3R" <a@.a.a> wrote in message
news:O8v9ZKOjHHA.4936@.TK2MSFTNGP03.phx.gbl...
>
> "William (Bill) Vaughn" <billvaRemoveThis@.betav.com> wrote in message
> news:OpGW39NjHHA.1624@.TK2MSFTNGP06.phx.gbl...
> I both did everything...enabled TCP/IP and Remote Connection, opened port
> 1431 default port...


Sunday, March 11, 2012

Bidirectional Synchronization for SQL Server Mobile?

I've created a distributed database for a mobile application. I replicate a table that is distributed to the mobile devices. I followed the instructions on how to create the distributor, publication, replication, web synchronization, and subscriber database. Synchronization from the mobile database to the desktop database (in this case SQL Server 2005 Standard Edition) works fine. But the problem is how to set up the publication so it is bidirectional: not only from the mobile database to the desktop database, but also from the desktop database to the mobile database. That way the mobile database can have the same data as the desktop database even if the mobile database has lost some old data.

It's like a data exchange between both engines: desktop and mobile have the same data. For filtering, I can put a filter on the desktop server for the replicated table, so don't worry about how I split the data.

Thanks a lot.

Look into Merge replication, or transactional replication with updatable subscriptions.

Martin
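As a rough outline, a merge publication (which is inherently bidirectional) is set up on the desktop server with the replication stored procedures; the database, publication, and table names below are placeholders, and the real setup is usually done through the New Publication wizard:

```sql
-- Enable the database for merge publishing (run at the Publisher).
EXEC sp_replicationdboption
    @dbname  = N'DesktopDb',       -- placeholder database name
    @optname = N'merge publish',
    @value   = N'true';

-- Create the merge publication; changes then flow both to and
-- from the mobile subscribers when they synchronize.
EXEC sp_addmergepublication
    @publication = N'MobileSync';  -- placeholder publication name

-- Add the replicated table as an article; row filters per device
-- can be layered on top of this.
EXEC sp_addmergearticle
    @publication   = N'MobileSync',
    @article       = N'MyTable',   -- placeholder table
    @source_object = N'MyTable';
```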

Thursday, March 8, 2012

BI Dev Studio - Calculations - Error in Application

Hi!

New Windows Server 2003 R2 64-bit EN installation with a new SQL Server 2005 DEV SP1 DE installation. When we open BI Development Studio and go to the cube's Calculations tab, we do not get the Calculations designer - instead we get an "Error in the application" message. In the Source XML we can view the calculations, but cannot use the designer.

Is this a known issue?

Does someone know a solution or workaround for this issue?

Best regards

HANNES MAYER

Hello

You can probably find the answer in this post: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=756982&SiteID=1, I had the same problem, I think, and it's solved in that thread.

BI Application & SQL Server Express Edition

I was wondering whether anyone could enlighten me:
We currently build DB apps with a Jet engine DB and a C# Windows Forms front end.
We wish to develop client Windows Forms apps that can be installed on client PCs, with a SQL Server back end and a Windows Forms front end. However, the apps we want to build are BI applications using cubes we generate (in Analysis Services).
Now, am I right in saying that we cannot develop an application that can be installed at a customer site that would include SQL Server Express AND have the BI functionality? We wish to build a tool for customers that allows them to slice and dice data we provide.

It is a prerequisite that we must not assume they have access to the internet, and we cannot assume they have a SQL Server onto which we could automatically install our product (to allow for the BI functionality).
So far all the tools and end-to-end applications we have seen tend to refer to a pre-existing installation of SQL Server (with Analysis Services), e.g. Programming SQL Server 2005 - Microsoft Press.
We need to create custom apps that can "stand alone". Is there any solution?

First: the Analysis Services component of the Microsoft SQL Server 2005 product is not part of the Express Edition. It is part of every other edition of SQL Server (except, perhaps, the Mobile Edition).

This doesn't mean you cannot install the Express and Standard editions of SQL Server side by side.

Now, there is a wide range of functionality supported by Analysis Services. You have to figure out what you would like to provide your users with.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Saturday, February 25, 2012

Better to back up all server databases or back up system databases separately from user databases?

Gurus,
Running SQL Server 2005 SP2 and using named instances. I am having lots of
important application databases on my server these days. My question is
this. Is it better to back up all server databases in one maintenance plan
or back up system databases separately from user databases?
Spin
I don't think it matters one way or another. But then it depends on what
issues you may be struggling with.
If you have scheduling issues running a single maintenance plan, break it up
into multiple plans. Personally, I prefer dealing with system databases
separately. In particular, when you are dealing with relatively large user
databases, you may not want to use the MS maintenance plans and may choose to
schedule their backups in your own scheduler for better flexibility.
Linchi
Linchi
"Spin" wrote:

> Gurus,
> Running SQL Server 2005 SP2 and using named instances. I am having lots of
> important application databases on my server these days. My question is
> this. Is it better to back up all server databases in one maintenance plan
> or backup system databases separately from databases?
> --
> Spin
>
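For example, two separate jobs might each run something like the following (paths and schedules are purely illustrative):

```sql
-- Job 1: system databases, e.g. nightly.
BACKUP DATABASE master TO DISK = N'D:\Backup\master.bak' WITH INIT;
BACKUP DATABASE model  TO DISK = N'D:\Backup\model.bak'  WITH INIT;
BACKUP DATABASE msdb   TO DISK = N'D:\Backup\msdb.bak'   WITH INIT;

-- Job 2: user databases on their own schedule, plus log backups
-- if they use the full recovery model.
BACKUP DATABASE AppDb TO DISK = N'D:\Backup\AppDb.bak' WITH INIT;
BACKUP LOG AppDb TO DISK = N'D:\Backup\AppDb_log.trn';
```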


Better query?

I have a table from a 3rd party application that stores the time that
events occurred as the number of seconds since Jan 1 1970. I want to
perform an aggregate function on the values stored in each event, grouped
by date and present the sorted results. Here's what I have:
SELECT
DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') AS PointDate,
Avg(Value)
FROM Data
GROUP BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM')
ORDER BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') ASC
Is there an easier way to write this without duplicating the DateAdd
calculation three times? I can't use Stored Procedures (I'm not allowed
to modify the database in any way.)
-mdb>> have a table from a 3rd party application that stores the time that even
ts occurred as the number of seconds since 1970-01-01 <<
Oh yeah, that makes sense :) !! I always hated the term "legacy data"
when what we meant was "family curse from Hell!" instead.
Build a table like this:
CREATE TABLE FoobarDates
(foobar_date CHAR (10) NOT NULL,
start_second INTEGER NOT NULL,
finish_second INTEGER NOT NULL,
PRIMARY KEY(start_second, finish_second));
1) Avoid math in SQL-- it is designed to do JOINs. not algebra.
2) An ORDER BY on a SELECT is **meaningless** -- read any basic SQL
book. Indexes matter.
3) Use a spreadsheet to do the computations for the table. Easy and
faster than code.|||you can use a derived table Michael...
SELECT PointDate, Avg_Value = Avg( Value )
FROM (
SELECT PointDate, DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
Value
FROM Data ) AS dt
GROUP BY PointDate
ORDER BY PointDate ASC
Tony
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Michael Bray" <mbray@.makeDIntoDot_ctiusaDcom> wrote in message
news:Xns9739BF2454556mbrayctiusacom@.207.46.248.16...
>I have a table from a 3rd party application that stores the time that
> events occurred as the number of seconds since Jan 1 1970. I want to
> perform an aggregate function on the values stored in each event, grouped
> by date and present the sorted results. Here's what I have:
> SELECT
> DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') AS PointDate,
> Avg(Value)
> FROM Data
> GROUP BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM')
> ORDER BY DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM') ASC
> Is there an easier way to write this without duplicating the DateAdd
> calculation three times' I can't use Stored Procedures (I'm not allowed
> to modify the database in any way.)
> -mdb|||Sorry, that should be...
SELECT PointDate, Avg_Value = Avg( Value )
FROM (
SELECT PointDate = DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
Value
FROM Data ) AS dt
GROUP BY PointDate
ORDER BY PointDate ASC
Tony.
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Tony Rogerson" <tonyrogerson@.sqlserverfaq.com> wrote in message
news:eaoeBd4CGHA.532@.TK2MSFTNGP15.phx.gbl...
> you can use a derived table Michael...
> SELECT PointDate, Avg_Value = Avg( Value )
> FROM (
> SELECT PointDate, DateAdd(dd, Time/86400, '1/1/1970 12:00:00 AM'),
> Value
> FROM Data ) AS dt
> GROUP BY PointDate
> ORDER BY PointDate ASC
> Tony
> --
> Tony Rogerson
> SQL Server MVP
> http://sqlserverfaq.com - free video tutorials
>
> "Michael Bray" <mbray@.makeDIntoDot_ctiusaDcom> wrote in message
> news:Xns9739BF2454556mbrayctiusacom@.207.46.248.16...
>|||"--CELKO--" <jcelko212@.earthlink.net> wrote in
news:1135732983.997588.94360@.o13g2000cwo.googlegroups.com:
> Oh yeah, that makes sense :) !! I always hated the term "legacy data"
> when what we meant was "family curse from Hell!" instead.
As a matter of fact, this is NOT legacy Data. In fact, it is an
application that would not have existed 10 years ago, because the
technology that is represented by the data in the database didn't exist.
(I'm not going to mention the vendor, but it is an IP Telephony monitoring
system.)
The rest of your post sounds like a troll, so I'm not going to comment,
except for this comment.
-mdb|||"Tony Rogerson" <tonyrogerson@.sqlserverfaq.com> wrote in
news:eaoeBd4CGHA.532@.TK2MSFTNGP15.phx.gbl:

> you can use a derived table Michael...
>
Wow... you know I had always tried to do something similar, but I never
could get it to work because I didn't know that I had to include the 'AS
<alias>'. Thanks! Now I get to go back and re-write a bunch of stored
procedures in other databases I've worked on that I used memory tables in!
:)
-mdb
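For completeness, the calendar-table idea suggested earlier in the thread would turn the grouping into a plain join instead of repeated DateAdd arithmetic; reusing the hypothetical FoobarDates table from that post:

```sql
-- Each FoobarDates row covers one day's span of epoch seconds,
-- so grouping by day is just a join plus GROUP BY.
SELECT F.foobar_date, AVG(D.Value) AS AvgValue
FROM Data AS D
JOIN FoobarDates AS F
  ON D.Time BETWEEN F.start_second AND F.finish_second
GROUP BY F.foobar_date;
```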

Friday, February 24, 2012

Best way to track who is accessing a record

I have an application that has a SQL back end, and I want to be able to track who is accessing a record so that no one else can access it at the same time. I was going to do this with a table or application state, but how can I avoid keeping records locked when someone abandons a session? Any ideas?|||There are third-party tools to do it, but try the links below for code to track updates and links to the third-party tools. Hope this helps.
http://www.aspfaq.com/show.asp?id=2448
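One common pattern for the abandoned-session problem (a sketch; the table, column names, and 20-minute timeout are all made up) is a lock table stamped with an acquisition time, so stale locks simply expire:

```sql
-- Hypothetical pessimistic-lock table.
CREATE TABLE dbo.RecordLocks
(
    RecordId   INT NOT NULL PRIMARY KEY,
    LockedBy   NVARCHAR(128) NOT NULL,
    AcquiredAt DATETIME NOT NULL DEFAULT (GETUTCDATE())
);
GO
-- Try to take the lock, treating locks older than 20 minutes
-- as abandoned sessions.
DELETE FROM dbo.RecordLocks
WHERE RecordId = @RecordId
  AND AcquiredAt < DATEADD(minute, -20, GETUTCDATE());

INSERT INTO dbo.RecordLocks (RecordId, LockedBy)
SELECT @RecordId, @UserName
WHERE NOT EXISTS
    (SELECT 1 FROM dbo.RecordLocks WHERE RecordId = @RecordId);

-- If @@ROWCOUNT = 1 the caller holds the lock;
-- delete the row when the user saves or cancels.
```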

Sunday, February 19, 2012

Best way to setup data structure

I have an online application that works and functions like an accounting,
payroll, scheduling and billing system. In the past, each center that
signed up would be set up in its own database on SQL 2000. As this grows,
more and more multi-site organizations are using this service. It will be
possible to have one organization with 2,000 centers using this service. My
question is: should I continue setting up individual databases for each
center, or should I combine the 2,000 centers for the one company into one
database and have a center key to distinguish the data? The only issue with
the multi-site organizations is reporting across the 2,000 or so centers. 95% of the
application is center-specific; the only time the data needs to be combined is
for some reports that are company-wide.
Thanks,
Mark
Hi
I would do one database, but it depends on how well normalised the Db is and
what a stress test would show up.
Managing 2,000 DBs becomes a nightmare.
How big are the DBs and how big do you expect them to grow?
Regards
Mike
"Mark" wrote:

> I have an online application that works and functions like an accounting,
> payroll, scheduliing and billing system. In the past, each center that
> signed up would be setup in thier own database on SQL 2000. As this grows
> more and more multisite organizations are using this service. It will
> possible to have one organization with 2,000 centers using this service. My
> question is: Should I continue setting up individual databases for each
> center or should I combine the 2,000 centers for the one company into one
> database and have a center key to distinguse the data. The only issue with
> the mulit sites are reporting across the 2,000 or so centers. 95% of the
> application is center specific the only time the data needs to be combined is
> for some reports that are company wide.
> Thanks,
> Mark
The database is normalized, and where it is not, we are in the process of
fixing that now. The database size after 2 years of use is about 3 GB - I
would imagine each database could grow to 10 GB apiece, at which time we
will get rid of some of the data. The size of the database is really
dependent on the activity of the centers.
Thanks,
Mark
"Mike Epprecht (SQL MVP)" wrote:
[vbcol=seagreen]
> Hi
> I would do one database, but it depends on how well normalised the Db is and
> what a stress test would show up.
> Managing 2000 DB's becomes a nightmare.
> How big ar the DB's and how big do you expect them to grow?
> Regards
> Mike
> "Mark" wrote:
I would put everything in one DB. One of our companies has around 400
law firms and everything is in one database. The main thing that you have to
think about is security. In our case each law firm has a role, and many tables
contain a field that specifies to which law firm the row belongs (row-level
security). This is basically the idea, but it gets more complex when you have
multiple groups within each law firm. This is based on the requirements and
is something that you have to analyze. The
process might be painful, but you have a lot to win as a DBA:
-amount of work for you and the developers ($$$)
-more simplified disaster recovery
-For reporting you can replicate your database to a different server and let
your clients hit your reporting box instead of your production server.
-You will not need as many connection strings!
-Avoiding an administrative nightmare
Sasan Saidi, MSc in CS
Senior DBA
Brascan Business Services
"I saw it work in a cartoon once so I am pretty sure I can do it."
"Mark" wrote:
[vbcol=seagreen]
> The databse is normalized and where it is not, we are in the process of
> fixing that now. The database size after 2 years of use is about 3 gigs - I
> would imagine each database could grow to 10 gigs a piece at which time we
> will get rid of some of the data. The size of the database is really
> dependant on the activity of the centers.
> Thanks,
> Mark
> "Mike Epprecht (SQL MVP)" wrote:
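The row-level approach described above might look roughly like this; a hypothetical sketch only, since the real scheme depends on how logins map to centers:

```sql
-- Every multi-tenant table carries the owning center's key.
CREATE TABLE dbo.Invoice
(
    InvoiceId INT IDENTITY PRIMARY KEY,
    CenterId  INT NOT NULL,    -- tenant key
    Amount    MONEY NOT NULL
);

-- A mapping of logins to centers drives a filtering view;
-- applications query the view, never the base table.
CREATE TABLE dbo.CenterUser
(
    LoginName SYSNAME NOT NULL,
    CenterId  INT NOT NULL,
    PRIMARY KEY (LoginName, CenterId)
);
GO
CREATE VIEW dbo.vInvoice
AS
SELECT I.InvoiceId, I.CenterId, I.Amount
FROM dbo.Invoice AS I
JOIN dbo.CenterUser AS U
  ON U.CenterId  = I.CenterId
 AND U.LoginName = SUSER_SNAME();  -- current login's centers only
```

Company-wide reports would simply query the base tables (or a reporting replica) without the filter.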

Best way to setup data structure

I have an online application that works and functions like an accounting,
payroll, scheduliing and billing system. In the past, each center that
signed up would be setup in thier own database on SQL 2000. As this grows
more and more multisite organizations are using this service. It will
possible to have one organization with 2,000 centers using this service. My
question is: Should I continue setting up individual databases for each
center or should I combine the 2,000 centers for the one company into one
database and have a center key to distinguse the data. The only issue with
the mulit sites are reporting across the 2,000 or so centers. 95% of the
application is center specific the only time the data needs to be combined is
for some reports that are company wide.
Thanks,
MarkHi
I would do one database, but it depends on how well normalised the Db is and
what a stress test would show up.
Managing 2000 DB's becomes a nightmare.
How big ar the DB's and how big do you expect them to grow?
Regards
Mike
"Mark" wrote:
> I have an online application that works and functions like an accounting,
> payroll, scheduliing and billing system. In the past, each center that
> signed up would be setup in thier own database on SQL 2000. As this grows
> more and more multisite organizations are using this service. It will
> possible to have one organization with 2,000 centers using this service. My
> question is: Should I continue setting up individual databases for each
> center or should I combine the 2,000 centers for the one company into one
> database and have a center key to distinguse the data. The only issue with
> the mulit sites are reporting across the 2,000 or so centers. 95% of the
> application is center specific the only time the data needs to be combined is
> for some reports that are company wide.
> Thanks,
> Mark|||The databse is normalized and where it is not, we are in the process of
fixing that now. The database size after 2 years of use is about 3 gigs - I
would imagine each database could grow to 10 gigs a piece at which time we
will get rid of some of the data. The size of the database is really
dependant on the activity of the centers.
Thanks,
Mark
"Mike Epprecht (SQL MVP)" wrote:
> Hi
> I would do one database, but it depends on how well normalised the DB is and
> what a stress test would show up.
> Managing 2,000 DBs becomes a nightmare.
> How big are the DBs and how big do you expect them to grow?
> Regards
> Mike
> "Mark" wrote:
> > I have an online application that works and functions like an accounting,
> > payroll, scheduling and billing system. In the past, each center that
> > signed up would be set up in their own database on SQL 2000. As this grows,
> > more and more multisite organizations are using this service. It will be
> > possible to have one organization with 2,000 centers using this service. My
> > question is: should I continue setting up individual databases for each
> > center, or should I combine the 2,000 centers for the one company into one
> > database and use a center key to distinguish the data? The only issue with
> > the multiple sites is reporting across the 2,000 or so centers. 95% of the
> > application is center-specific; the only time the data needs to be combined is
> > for some reports that are company-wide.
> >
> > Thanks,
> > Mark|||I would put everything in one DB. One of our companies has around 400
law firms and everything is in one database. The main thing that you have to
think about is security. In our case each law firm has a role, and many tables
contain a field that specifies to which law firm the row belongs (row-level
security). This is basically the idea, but it gets more complex when you have
multiple groups within each law firm. This is based on the requirements and is
something that you have to analyze. The
process might be painful but you have a lot to gain as a DBA:
- Less work for you and the developers ($$$)
- Simpler disaster recovery
- For reporting you can replicate your database to a different server and let
your clients hit the reporting box instead of your prod
- You will not need as many connection strings!
- Avoiding an administrative nightmare
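The row-level idea described above can be sketched as one base table carrying the tenant key plus a filtered view per firm. This illustration uses SQLite for brevity, and the names (`Matter`, `FirmID`, `Firm1_Matter`) are hypothetical; on SQL Server you would grant each firm's role SELECT on its own view only, never on the base table:

```python
import sqlite3

# Row-level security sketch: each row records which law firm owns it,
# and a per-firm view exposes only that firm's rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Matter (
        FirmID   INTEGER NOT NULL,  -- which law firm owns the row
        MatterID INTEGER NOT NULL,
        Title    TEXT    NOT NULL,
        PRIMARY KEY (FirmID, MatterID)
    );
    INSERT INTO Matter VALUES (1, 1, 'Smith v. Jones'), (2, 1, 'Estate of Doe');

    -- One view per firm; the firm's role is granted SELECT on this only.
    CREATE VIEW Firm1_Matter AS
        SELECT MatterID, Title FROM Matter WHERE FirmID = 1;
""")

# Firm 1 queries its view and never sees the other firm's rows.
visible = conn.execute("SELECT Title FROM Firm1_Matter").fetchall()
print(visible)  # [('Smith v. Jones',)]
```

The same permission boundary gets more involved once there are multiple groups within a firm, as noted above, but the view-plus-role mechanism is the core of it.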
--
Sasan Saidi, MSc in CS
Senior DBA
Brascan Business Services
"I saw it work in a cartoon once so I am pretty sure I can do it."
"Mark" wrote:
> The database is normalized, and where it is not we are in the process of
> fixing that now. The database size after 2 years of use is about 3 gigs; I
> would imagine each database could grow to 10 gigs apiece, at which time we
> will get rid of some of the data. The size of the database is really
> dependent on the activity of the centers.
> Thanks,
> Mark
> "Mike Epprecht (SQL MVP)" wrote:
> > Hi
> >
> > I would do one database, but it depends on how well normalised the DB is and
> > what a stress test would show up.
> >
> > Managing 2,000 DBs becomes a nightmare.
> >
> > How big are the DBs and how big do you expect them to grow?
> >
> > Regards
> > Mike
> >
> > "Mark" wrote:
> >
> > > I have an online application that works and functions like an accounting,
> > > payroll, scheduling and billing system. In the past, each center that
> > > signed up would be set up in their own database on SQL 2000. As this grows,
> > > more and more multisite organizations are using this service. It will be
> > > possible to have one organization with 2,000 centers using this service. My
> > > question is: should I continue setting up individual databases for each
> > > center, or should I combine the 2,000 centers for the one company into one
> > > database and use a center key to distinguish the data? The only issue with
> > > the multiple sites is reporting across the 2,000 or so centers. 95% of the
> > > application is center-specific; the only time the data needs to be combined is
> > > for some reports that are company-wide.
> > >
> > > Thanks,
> > > Mark

Best way to setup data structure
